Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude-and-sequence errors.
Snell, Kym I E; Hua, Harry; Debray, Thomas P A; Ensor, Joie; Look, Maxime P; Moons, Karel G M; Riley, Richard D
2016-01-01
Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation. We suggest multivariate meta-analysis for jointly synthesizing calibration and discrimination performance, while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of "good" performance in new populations. This allows different implementation strategies (e.g., recalibration) to be compared. Application is made to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality. In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of "good" performance (defined by C statistic ≥0.7 and calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without recalibration. For the DVT model, even with recalibration, there was only a 0.03 probability of "good" performance. Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
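The final step described above, turning a predictive distribution for (C statistic, calibration slope) into a probability of "good" performance, can be sketched with a small Monte Carlo routine. This is a hedged illustration, not the authors' code: it assumes a bivariate normal predictive distribution, and the means, standard deviations, and correlation passed in are illustrative placeholders rather than values from the paper.

```python
import random, math

def prob_good_performance(mean_c, mean_slope, sd_c, sd_slope, rho,
                          n_draws=20_000, seed=0):
    """Monte Carlo estimate of P(C >= 0.7 and 0.9 <= slope <= 1.1)
    under an assumed bivariate normal predictive distribution."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_draws):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rng.gauss(0.0, 1.0)
        # 2x2 Cholesky factorization induces correlation rho between draws
        c = mean_c + sd_c * z1
        slope = mean_slope + sd_slope * (rho * z1 + math.sqrt(1 - rho**2) * z2)
        if c >= 0.7 and 0.9 <= slope <= 1.1:
            hits += 1
    return hits / n_draws
```

Tightening the between-population standard deviations (less heterogeneity, as with recalibration) raises the probability, which mirrors the paper's qualitative finding.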
NASA Astrophysics Data System (ADS)
Yasa, I. B. A.; Parnata, I. K.; Susilawati, N. L. N. A. S.
2018-01-01
This study aims to apply an analytical review model to analyze the influence of GCG, accounting conservatism, financial distress models, and company size on the good and poor financial performance of LPDs in Bangli Regency. Ordinal regression analysis is used to perform the analytical review, yielding the influence of, and the relationships among, the variables to be considered for further audit. The respondents were the LPDs in Bangli Regency, which number 159; of these, 100 LPDs were selected as a random sample. The tests found that GCG and company size have a significant effect on both good and poor financial performance, while accounting conservatism and the financial distress model have no significant effect. The four variables together explain 58.8% of overall financial performance, with the remaining 41.2% influenced by other variables. Size, the financial distress model, and accounting conservatism are the variables recommended for further audit.
Peer assessment of aviation performance: inconsistent for good reasons.
Roth, Wolff-Michael; Mavin, Timothy J
2015-03-01
Research into expertise is relatively common in cognitive science, with expertise studied across many domains. Much less research, however, has examined how experts within the same domain assess the performance of their peer experts. We report the results of a modified think-aloud study conducted with 18 pilots (6 first officers, 6 captains, and 6 flight examiners). Pairs of same-ranked pilots were asked to rate the performance of a captain flying in a critical pre-recorded simulator scenario. Findings reveal (a) considerable variance within performance categories, (b) differences in the processes used as evidence in support of a performance rating, (c) different numbers and types of facts (cues) identified, and (d) differences in how specific performance events affect the choice of performance category and the gravity of the performance assessment. Such variance is consistent with low inter-rater reliability. Because raters exhibited good, albeit imprecise, reasons and facts, a fuzzy mathematical model of performance rating was developed. The model provides good agreement with the observed variations. Copyright © 2014 Cognitive Science Society, Inc.
Goodness of fit of probability distributions for sightings as species approach extinction.
Vogel, Richard M; Hosking, Jonathan R M; Elphick, Chris S; Roberts, David L; Reed, J Michael
2009-04-01
Estimating the probability that a species is extinct and the timing of extinctions is useful in biological fields ranging from paleoecology to conservation biology. Various statistical methods have been introduced to infer the time of extinction and extinction probability from a series of individual sightings. There is little evidence, however, as to which of these models provide adequate fit to actual sighting records. We use L-moment diagrams and probability plot correlation coefficient (PPCC) hypothesis tests to evaluate the goodness of fit of various probabilistic models to sighting data collected for a set of North American and Hawaiian bird populations that have either gone extinct, or are suspected of having gone extinct, during the past 150 years. For our data, the uniform, truncated exponential, and generalized Pareto models performed moderately well, but the Weibull model performed poorly. Of the acceptable models, the uniform distribution performed best based on PPCC goodness of fit comparisons and sequential Bonferroni-type tests. Further analyses using field significance tests suggest that although the uniform distribution is the best of those considered, additional work remains to evaluate the truncated exponential model more fully. The methods we present here provide a framework for evaluating subsequent models.
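The PPCC statistic used above is simple to compute; a minimal sketch for the uniform case follows. The plotting positions i/(n+1) are one common convention, assumed here rather than taken from the paper, and because Pearson correlation is location- and scale-invariant, the uniform distribution's endpoints drop out.

```python
def ppcc_uniform(data):
    """Probability plot correlation coefficient against a uniform
    distribution: Pearson r between the ordered sample and uniform
    plotting positions i/(n+1)."""
    x = sorted(data)
    n = len(x)
    q = [(i + 1) / (n + 1) for i in range(n)]
    mx, mq = sum(x) / n, sum(q) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sqq = sum((qi - mq) ** 2 for qi in q)
    sxq = sum((xi - mx) * (qi - mq) for xi, qi in zip(x, q))
    return sxq / (sxx * sqq) ** 0.5
```

Evenly spaced sightings score near 1; sightings that cluster early (as expected when a population declines toward extinction) score lower, which is what the hypothesis test exploits.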
Bouc-Wen hysteresis model identification using Modified Firefly Algorithm
NASA Astrophysics Data System (ADS)
Zaman, Mohammad Asif; Sikder, Urmita
2015-12-01
The parameters of Bouc-Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that results in the least amount of error between a set of given data points and points obtained from the Bouc-Wen model. The performance of the algorithm is compared with the performance of conventional Firefly Algorithm, Genetic Algorithm and Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have good convergence rate with high degree of accuracy in identifying Bouc-Wen model parameters. Finally, the proposed method is used to find the Bouc-Wen model parameters from experimental data. The obtained model is found to be in good agreement with measured data.
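The identification problem reduces to minimizing the squared error between the model's response and the data over the parameter space. The sketch below is a toy stand-in, not the paper's method: it uses plain random search instead of the Modified Firefly Algorithm, a simplified Bouc-Wen form (n = 1, hysteretic variable only, forward-Euler integration), and illustrative parameter values and bounds.

```python
import math, random

def bouc_wen_z(params, x, dt=0.01):
    """Evolve the hysteretic variable z of a simplified Bouc-Wen model
    (n = 1) over a displacement history x via forward-Euler steps."""
    A, beta, gamma = params
    z, zs = 0.0, []
    for i in range(1, len(x)):
        dx = (x[i] - x[i - 1]) / dt
        dz = A * dx - beta * abs(dx) * z - gamma * dx * abs(z)
        z += dz * dt
        zs.append(z)
    return zs

def identify(x, target, bounds, iters=2000, seed=1):
    """Random-search identification (a toy stand-in for the Modified
    Firefly Algorithm): keep the parameter set with the least squared
    error against the target response."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(iters):
        p = tuple(rng.uniform(lo, hi) for lo, hi in bounds)
        zs = bouc_wen_z(p, x)
        err = sum((a - b) ** 2 for a, b in zip(zs, target))
        if err < best_err:
            best, best_err = p, err
    return best, best_err

# Illustrative target generated from known parameters:
true_params = (1.2, 0.3, 0.6)
x = [math.sin(0.2 * i) for i in range(200)]
target = bouc_wen_z(true_params, x)
bounds = [(0.5, 1.5), (0.0, 1.0), (0.0, 1.0)]
best, best_err = identify(x, target, bounds)
```

A metaheuristic such as the Firefly Algorithm replaces the blind sampling loop with attraction-guided moves, but the objective function is the same.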
ERIC Educational Resources Information Center
Beheshti, Behzad; Desmarais, Michel C.
2015-01-01
This study investigates the goodness of fit of different skills assessment models using both synthetic and real data. Synthetic data are generated from the different skills assessment models. The results show wide differences in performance between the skills assessment models over the synthetic data sets. The set of relative performances…
Prognostic models for complete recovery in ischemic stroke: a systematic review and meta-analysis.
Jampathong, Nampet; Laopaiboon, Malinee; Rattanakanokchai, Siwanon; Pattanittum, Porjai
2018-03-09
Prognostic models have been increasingly developed to predict complete recovery in ischemic stroke. However, questions arise about the performance characteristics of these models. The aim of this study was to systematically review and synthesize the performance of existing prognostic models for complete recovery in ischemic stroke. We searched journal publications indexed in PUBMED, SCOPUS, CENTRAL, ISI Web of Science and OVID MEDLINE from inception until 4 December, 2017, for studies designed to develop and/or validate prognostic models for predicting complete recovery in ischemic stroke patients. Two reviewers independently examined titles and abstracts, assessed whether each study met the pre-defined inclusion criteria, and independently extracted information about model development and performance. We evaluated the validation of the models using the median area under the receiver operating characteristic curve (AUC), or c-statistic, and calibration performance. We used a random-effects meta-analysis to pool AUC values. We included 10 studies with 23 models developed from elderly patients with moderately severe ischemic stroke, mainly in three high income countries. Sample sizes for each study ranged from 75 to 4441. Logistic regression was the only analytical strategy used to develop the models. The number of predictors varied from one to 11. Internal validation was performed in 12 models with a median AUC of 0.80 (95% CI 0.73 to 0.84). One model reported good calibration. Nine models reported external validation with a median AUC of 0.80 (95% CI 0.76 to 0.82). Four models showed good discrimination and calibration on external validation. The pooled AUC of the two validation models of the same developed model was 0.78 (95% CI 0.71 to 0.85). The performance of the 23 models found in the systematic review varied from fair to good in terms of internal and external validation.
Further models should be developed with internal and external validation in low- and middle-income countries.
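The review pools AUC values with a random-effects meta-analysis. One common estimator, not necessarily the one used here, is the DerSimonian-Laird method; a minimal sketch:

```python
def pool_random_effects(estimates, variances):
    """DerSimonian-Laird random-effects pooling of study estimates
    (e.g. AUC values with their within-study variances)."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sw
    # Cochran's Q and the method-of-moments between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    # Re-weight with tau^2 added to each within-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, estimates)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    return pooled, se, tau2
```

When the studies agree closely, tau² collapses to zero and the result reduces to fixed-effect inverse-variance pooling.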
Fuzzy State Transition and Kalman Filter Applied in Short-Term Traffic Flow Forecasting
Deng, Ming-jun; Qu, Shi-ru
2015-01-01
Traffic flow is widely recognized as an important parameter for road traffic state forecasting. The fuzzy state transform and the Kalman filter (KF) have been applied in this field separately. Studies show that the former performs well in forecasting the trend of traffic-state variation but often introduces numerical errors, whereas the latter is good at numerical forecasting but handles time hysteresis poorly. This paper proposes an approach that combines the fuzzy state transform and the KF forecasting model. To exploit the advantages of both models, a weighted combination model is proposed, with the combination weight optimized dynamically by minimizing the sum of squared forecasting errors. Real detection data are used to test the efficiency. Results indicate that the method performs well for short-term traffic forecasting.
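For a two-model weighted combination of this kind, the weight minimizing the sum of squared forecasting errors has a closed form. The sketch below uses a single static weight, unlike the paper's dynamic optimization, and the forecast series are illustrative:

```python
def optimal_weight(f1, f2, y):
    """Weight w minimizing sum((w*f1 + (1-w)*f2 - y)^2): the
    closed-form least-squares solution w = sum(d*r) / sum(d^2)
    with d = f1 - f2 and r = y - f2."""
    d = [a - b for a, b in zip(f1, f2)]
    r = [yi - b for yi, b in zip(y, f2)]
    denom = sum(di * di for di in d)
    return sum(di * ri for di, ri in zip(d, r)) / denom if denom else 0.5

def combine(f1, f2, w):
    """Pointwise weighted combination of two forecast series."""
    return [w * a + (1 - w) * b for a, b in zip(f1, f2)]
```

Because w = 0 and w = 1 are both feasible, the combined forecast's squared error can never exceed that of the better individual model on the fitting data, which is the rationale for combining the two forecasters.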
Testing algorithms for a passenger train braking performance model.
DOT National Transportation Integrated Search
2011-09-01
"The Federal Railroad Administration's Office of Research and Development funded a project to establish a performance model to develop, analyze, and test positive train control (PTC) braking algorithms for passenger train operations. With a good brak...
Cheng, Jieyao; Hou, Jinlin; Ding, Huiguo; Chen, Guofeng; Xie, Qing; Wang, Yuming; Zeng, Minde; Ou, Xiaojuan; Ma, Hong; Jia, Jidong
2015-01-01
Background and Aims: Noninvasive models have been developed for fibrosis assessment in patients with chronic hepatitis B (CHB). However, the sensitivity, specificity, and diagnostic accuracy of these methods in evaluating liver fibrosis have not been validated and compared in the same group of patients. The aim of this study was to verify the diagnostic performance and reproducibility of ten reported noninvasive models in a large cohort of Asian CHB patients. Methods: The diagnostic performance of ten noninvasive models (HALF index, FibroScan, S index, Zeng model, Youyi model, Hui model, APAG, APRI, FIB-4 and FibroTest) was assessed against liver histology by ROC curve analysis in CHB patients. The reproducibility of the ten models was evaluated by recalculating the diagnostic values at the cut-off values defined by the original studies. Results: Six models (HALF index, FibroScan, Zeng model, Youyi model, S index and FibroTest) had AUROCs higher than 0.70 in predicting any fibrosis stage, and two of them had the best diagnostic performance, with AUROCs for predicting F≥2, F≥3 and F4 of 0.83, 0.89 and 0.89 for the HALF index and 0.82, 0.87 and 0.87 for FibroScan, respectively. Four models (HALF index, FibroScan, Zeng model and Youyi model) showed good diagnostic values at the given cut-offs. Conclusions: HALF index, FibroScan, Zeng model, Youyi model, S index and FibroTest show good diagnostic performance, and all of them, except the S index and FibroTest, have good reproducibility for evaluating liver fibrosis in CHB patients. Registration Number: ChiCTR-DCS-07000039.
The Five Key Questions of Human Performance Modeling.
Wu, Changxu
2018-01-01
By building computational (typically mathematical and computer-simulation) models, human performance modeling (HPM) quantifies, predicts, and maximizes human performance and human-machine system productivity and safety. This paper describes and summarizes the five key questions of human performance modeling: 1) Why do we build models of human performance? 2) What are the expectations of a good human performance model? 3) What are the procedures and requirements for building and verifying a human performance model? 4) How do we integrate a human performance model with system design? and 5) What are the possible future directions of human performance modeling research? Recent and classic HPM findings are addressed under the five questions to provide new thinking on HPM's motivations, expectations, procedures, system integration, and future directions.
Performance analysis of Supply Chain Management with Supply Chain Operation reference model
NASA Astrophysics Data System (ADS)
Hasibuan, Abdurrozzaq; Arfah, Mahrani; Parinduri, Luthfi; Hernawati, Tri; Suliawati; Harahap, Bonar; Rahmah Sibuea, Siti; Krianto Sulaiman, Oris; purwadi, Adi
2018-04-01
This research was conducted at PT. Shamrock Manufacturing Corpora, a company required to think creatively and implement a competitive strategy by producing goods and services of higher quality at lower cost. It is therefore necessary to measure Supply Chain Management performance in order to improve competitiveness, and the company must optimize its production output to meet export quality standards. This research begins with the creation of initial dimensions based on the Supply Chain Management processes, i.e., Plan, Source, Make, Delivery, and Return, with a hierarchy based on the Supply Chain Operation Reference attributes of Reliability, Responsiveness, Agility, Cost, and Asset. Key Performance Indicator identification provides the benchmarks for performance measurement, while Snorm De Boer normalization serves to place Key Performance Indicator values on a common scale. The Analytical Hierarchy Process is used to help determine priority criteria. Measurement of Supply Chain Management performance at PT. Shamrock Manufacturing Corpora shows that Responsiveness (0.649) has a higher weight (priority) than the other alternatives. The performance analysis using the Supply Chain Operation Reference model indicates that Supply Chain Management performance at PT. Shamrock Manufacturing Corpora is good, since its score falls in the 50-100 range that the monitoring system classifies as good.
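The normalization step mentioned above maps each KPI onto a common 0-100 scale before weighting, so that indicators with different units can be aggregated. A minimal sketch of the usual Snorm De Boer formula (the exact form is assumed, not taken from this paper):

```python
def snorm(actual, s_min, s_max, larger_is_better=True):
    """Snorm De Boer normalization: map a KPI value onto a 0-100
    scale given its worst/best bounds, so heterogeneous indicators
    can be compared and aggregated."""
    if s_max == s_min:
        return 100.0
    if larger_is_better:
        return (actual - s_min) / (s_max - s_min) * 100.0
    # For cost-type KPIs, lower raw values score higher
    return (s_max - actual) / (s_max - s_min) * 100.0
```

The normalized scores are then combined with the AHP-derived weights, and the aggregate is read against the monitoring bands (e.g. 50-100 classified as good).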
Pertegal, Miguel-Ángel; Oliva, Alfredo
2017-10-10
The aim of this study was to examine a model of the contribution of school assets to the development of adolescents' well-being and school success. The sample comprised 1944 adolescents (893 girls and 1051 boys) aged between 12 and 17 years (M = 14.4; SD = 1.13) from secondary schools in Western Andalusia, who completed a set of self-report questionnaires. The results of structural equation modeling showed the goodness of fit of the initial theoretical model. This model confirmed the importance of school connectedness as a key factor in the relationships between other school assets (social climate; clarity of rules and values; and positive opportunities and empowerment) and commitment to learning, academic performance, and life satisfaction. However, the re-specification of the initial model added two complementary paths with theoretical sense: first, a direct influence of clarity of rules and values on commitment to learning, and second, of academic performance on life satisfaction. This model obtained better goodness-of-fit indices than the first one: χ2 = 16.32; df = 8; p = .038; χ2/df = 2.04; SRMR = .018; RMSEA = .023 (95% C.I. = .005, .040); NNFI = .98; CFI = .99. Our study points to the need to invest in initiatives that promote adolescents' links with their school as a key goal for fostering both good academic performance and better life satisfaction.
Performance Prediction of Constrained Waveform Design for Adaptive Radar
2016-11-01
Kullback-Leibler divergence. χ2 Goodness-of-Fit Test: We compute the estimated CDF for both models with 10000 MC trials. For Model 1 we observed a p-value of ...was clearly similar in its physical attributes, but the measures used (Kullback-Leibler, Chi-Square Test, and the trace of the covariance) showed...models' goodness-of-fit we look at three measures: (1) χ2 Test, (2) Trace of the inverse
Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel
2016-10-01
We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.
Robust Temperature Control of a Thermoelectric Cooler via μ -Synthesis
NASA Astrophysics Data System (ADS)
Kürkçü, Burak; Kasnakoğlu, Coşku
2018-02-01
In this work robust temperature control of a thermoelectric cooler (TEC) via μ -synthesis is studied. An uncertain dynamical model for the TEC that is suitable for robust control methods is derived. The model captures variations in operating point due to current, load and temperature changes. A temperature controller is designed utilizing μ -synthesis, a powerful method guaranteeing robust stability and performance. For comparison two well-known control methods, namely proportional-integral-derivative (PID) and internal model control (IMC), are also realized to benchmark the proposed approach. It is observed that the stability and performance on the nominal model are satisfactory for all cases. On the other hand, under perturbations the responses of PID and IMC deteriorate and even become unstable. In contrast, the μ -synthesis controller succeeds in keeping system stability and achieving good performance under all perturbations within the operating range, while at the same time providing good disturbance rejection.
Cho, Hyun; Kwon, Min; Choi, Ji-Hye; Lee, Sang-Kyu; Choi, Jung Seok; Choi, Sam-Wook; Kim, Dai-Jin
2014-09-01
This study was conducted to develop and validate a standardized self-diagnostic Internet addiction (IA) scale based on the diagnosis criteria for Internet Gaming Disorder (IGD) in the Diagnostic and Statistical Manual of Mental Disorder, 5th edition (DSM-5). Items based on the IGD diagnosis criteria were developed using items of the previous Internet addiction scales. Data were collected from a community sample. The data were divided into two sets, and confirmatory factor analysis (CFA) was performed repeatedly. The model was modified after discussion with professionals based on the first CFA results, after which the second CFA was performed. The internal consistency reliability was generally good. The items that showed significantly low correlation values based on the item-total correlation of each factor were excluded. After the first CFA was performed, some factors and items were excluded. Seven factors and 26 items were prepared for the final model. The second CFA results showed good general factor loading, Squared Multiple Correlation (SMC) and model fit. The model fit of the final model was good, but some factors were very highly correlated. It is recommended that some of the factors be refined through further studies. Copyright © 2014. Published by Elsevier Ltd.
Simulation and performance of brushless dc motor actuators
NASA Astrophysics Data System (ADS)
Gerba, A., Jr.
1985-12-01
The simulation model for a brushless DC motor and the associated commutation power conditioner transistor model are presented. The necessary conditions for maximum power output while operating at steady-state speed with sinusoidally distributed air-gap flux are developed. Comparisons of the simulated model with the measured performance of a typical motor are made both on time-response waveforms and on average performance characteristics. These preliminary results indicate good agreement. Plans for model improvement and for testing a motor-driven positioning device for model evaluation are outlined.
The Effect of Sensor Performance on Safe Minefield Transit
2002-12-01
The results of the simpler model are not good approximations of the results obtained with the more complex model, suggesting that even greater complexity in maneuver modeling may be desirable for some purposes.
Virtual Team Governance: Addressing the Governance Mechanisms and Virtual Team Performance
NASA Astrophysics Data System (ADS)
Zhan, Yihong; Bai, Yu; Liu, Ziheng
As technology has improved and collaborative software has been developed, virtual teams with geographically dispersed members have become increasingly prominent. Supported by advancing communication technologies, virtual teams can largely transcend time and space. They have changed the corporate landscape, and they are more complex and dynamic than traditional teams because their members are spread across diverse geographical locations and hold different roles. How to achieve good governance of a virtual team, and thereby good virtual team performance, is therefore becoming critical and challenging: good governance is essential for a high-performance virtual team. This paper explores the performance and governance mechanisms of virtual teams and establishes a model to explain the relationship between performance and governance mechanisms. Focusing on managing virtual teams, it aims to identify strategies that help business organizations improve the performance of their virtual teams and meet the objectives of good virtual team management.
ERIC Educational Resources Information Center
Sueiro, Manuel J.; Abad, Francisco J.
2011-01-01
The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…
NASA Astrophysics Data System (ADS)
Rounds, S. A.; Sullivan, A. B.
2004-12-01
Assessing a model's ability to reproduce field data is a critical step in the modeling process. For any model, some method of determining goodness-of-fit to measured data is needed to aid in calibration and to evaluate model performance. Visualizations and graphical comparisons of model output are an excellent way to begin that assessment. At some point, however, model performance must be quantified. Goodness-of-fit statistics, including the mean error (ME), mean absolute error (MAE), root mean square error, and coefficient of determination, typically are used to measure model accuracy. Statistical tools such as the sign test or Wilcoxon test can be used to test for model bias. The runs test can detect phase errors in simulated time series. Each statistic is useful, but each has its limitations. None provides a complete quantification of model accuracy. In this study, a suite of goodness-of-fit statistics was applied to a model of Henry Hagg Lake in northwest Oregon. Hagg Lake is a man-made reservoir on Scoggins Creek, a tributary to the Tualatin River. Located on the west side of the Portland metropolitan area, the Tualatin Basin is home to more than 450,000 people. Stored water in Hagg Lake helps to meet the agricultural and municipal water needs of that population. Future water demands have caused water managers to plan for a potential expansion of Hagg Lake, doubling its storage to roughly 115,000 acre-feet. A model of the lake was constructed to evaluate the lake's water quality and estimate how that quality might change after raising the dam. The laterally averaged, two-dimensional, U.S. Army Corps of Engineers model CE-QUAL-W2 was used to construct the Hagg Lake model. Calibrated for the years 2000 and 2001 and confirmed with data from 2002 and 2003, modeled parameters included water temperature, ammonia, nitrate, phosphorus, algae, zooplankton, and dissolved oxygen. Several goodness-of-fit statistics were used to quantify model accuracy and bias. 
Model performance was judged to be excellent for water temperature (annual ME: -0.22 to 0.05 °C; annual MAE: 0.62 to 0.68 °C) and dissolved oxygen (annual ME: -0.28 to 0.18 mg/L; annual MAE: 0.43 to 0.92 mg/L), showing that the model is sufficiently accurate for future water resources planning and management.
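The basic goodness-of-fit statistics named in this record are straightforward to compute; a minimal sketch (the sign convention, modeled minus observed, is an assumption):

```python
import math

def fit_stats(observed, modeled):
    """Goodness-of-fit statistics for judging model accuracy:
    mean error (ME, a bias measure), mean absolute error (MAE),
    and root mean square error (RMSE)."""
    errs = [m - o for o, m in zip(observed, modeled)]
    n = len(errs)
    me = sum(errs) / n                            # signed: reveals bias
    mae = sum(abs(e) for e in errs) / n           # magnitude of typical error
    rmse = math.sqrt(sum(e * e for e in errs) / n)  # penalizes large misses
    return {"ME": me, "MAE": mae, "RMSE": rmse}
```

As the record notes, no single statistic is complete: ME can be near zero for a badly scattered model (errors cancel), which is why it is paired with MAE/RMSE and with bias and phase tests such as the sign, Wilcoxon, and runs tests.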
Na, Hyuntae; Song, Guang
2015-07-01
In a recent work we developed a method for deriving accurate simplified models that capture the essentials of conventional all-atom NMA and identified two best simplified models: ssNMA and eANM, both of which have a significantly higher correlation with NMA in mean square fluctuation calculations than existing elastic network models such as ANM and ANMr2, a variant of ANM that uses the inverse of the squared separation distances as spring constants. Here, we examine closely how the performance of these elastic network models depends on various factors, namely, the presence of hydrogen atoms in the model, the quality of input structures, and the effect of crystal packing. The study reveals the strengths and limitations of these models. Our results indicate that ssNMA and eANM are the best fine-grained elastic network models but their performance is sensitive to the quality of input structures. When the quality of input structures is poor, ANMr2 is a good alternative for computing mean-square fluctuations while ANM model is a good alternative for obtaining normal modes. © 2015 Wiley Periodicals, Inc.
Unresolved Galaxy Classifier for ESA/Gaia mission: Support Vector Machines approach
NASA Astrophysics Data System (ADS)
Bellas-Velidis, Ioannis; Kontizas, Mary; Dapergolas, Anastasios; Livanou, Evdokia; Kontizas, Evangelos; Karampelas, Antonios
A software package, Unresolved Galaxy Classifier (UGC), is being developed for the ground-based pipeline of ESA's Gaia mission. It aims to provide automated taxonomic classification and estimation of specific parameters by analyzing low-dispersion spectra of unresolved galaxies from the Gaia BP/RP instrument. The UGC algorithm is based on a supervised learning technique, Support Vector Machines (SVM). The software is implemented in Java as two separate modules. An offline learning module provides functions for training SVM models. Once trained, the set of models can be applied repeatedly to unknown galaxy spectra by the pipeline's application module. A library of synthetic spectra of galaxy models, simulated for the BP/RP instrument, is used to train and test the modules. Science tests show very good classification performance for UGC and relatively good regression performance, except for some of the parameters. Possible approaches to improving the performance are discussed.
How motivation affects academic performance: a structural equation modelling analysis.
Kusurkar, R A; Ten Cate, Th J; Vos, C M P; Westers, P; Croiset, G
2013-03-01
Few studies in medical education have examined the effect of quality of motivation on performance. Self-Determination Theory, based on quality of motivation, differentiates between Autonomous Motivation (AM), which originates within an individual, and Controlled Motivation (CM), which originates from external sources. The aims were to determine whether Relative Autonomous Motivation (RAM, a measure of the balance between AM and CM) affects academic performance through good study strategy and higher study effort, and to compare this model between subgroups: males and females, and students selected via two different systems, namely qualitative selection and weighted lottery selection. Data on motivation, study strategy, and effort were collected from 383 medical students of VU University Medical Center Amsterdam, and their academic performance results were obtained from the student administration. Structural Equation Modelling was used to test a hypothesized model in which high RAM would positively affect Good Study Strategy (GSS) and study effort, which in turn would positively affect academic performance in the form of grade point averages. This model fit the data well: Chi square = 1.095, df = 3, p = 0.778, RMSEA = 0.000. The model also fitted well for all tested subgroups of students, with differences, as expected, in the strength of the relationships between the variables for the different subgroups. In conclusion, RAM is positively related to academic performance through a deep study strategy and higher study effort. This model appears valid in medical education across subgroups such as males, females, and students selected by qualitative and weighted lottery selection.
The use of neural network technology to model swimming performance.
Silva, António José; Costa, Aldo Manuel; Oliveira, Paulo Moura; Reis, Victor Machado; Saavedra, José; Perl, Jurgen; Rouboa, Abel; Marinho, Daniel Almeida
2007-01-01
The aims of the present study were: to identify the factors which are able to explain the performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers, to model the performance in those events using non-linear mathematical methods through artificial neural networks (multi-layer perceptrons) and to assess the neural network models' precision in predicting the performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamic, hydrostatic and bioenergetic characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between preponderant variables for each gender and swim performance in the 200 meters medley and 400 meters front crawl events were developed. For this purpose a feed-forward neural network (multilayer perceptron) with three neurons in a single hidden layer was used. The prognosis precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach to the resolution of complex problems such as performance modelling and talent identification in swimming and, possibly, in a wide variety of sports.
Key points: The non-linear analysis resulting from the use of a feed-forward neural network allowed us to develop four performance models. The mean difference between the true and estimated results produced by each of the four neural network models was low. The neural network tool can be a good approach to performance modelling, as an alternative to standard statistical models that presume well-defined distributions and independence among all inputs. The use of neural networks in sports science allowed us to create very realistic models for swimming performance prediction, based on previously selected criteria related to the dependent variable (performance).
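The record above describes a feed-forward network (multilayer perceptron) with three neurons in a single hidden layer. As an illustration only, a minimal Python sketch of such a network, trained by plain gradient descent, might look as follows; the study's swimmers, predictor variables and training settings are not reproduced, so every array and constant below is invented.

```python
import numpy as np

# Sketch of a multilayer perceptron with one hidden layer of three
# tanh neurons, as in the abstract; data and all constants are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(138, 5))                 # e.g. 138 swimmers, 5 predictors (invented)
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=138)  # synthetic "performance"

W1 = rng.normal(scale=0.5, size=(5, 3))       # input -> 3 hidden neurons
b1 = np.zeros(3)
W2 = rng.normal(scale=0.5, size=3)            # hidden -> output
b2 = 0.0
lr = 0.01

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    err = h @ W2 + b2 - y                     # prediction error
    gW2 = h.T @ err / len(y)                  # gradients of mean squared error
    gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)     # backprop through tanh
    gW1 = X.T @ gh / len(y)
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)
```

In practice a library implementation (e.g. scikit-learn's MLPRegressor) would be preferred; the explicit loop is only meant to make the backpropagation through the single tanh hidden layer visible.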
NASA Astrophysics Data System (ADS)
Li, Yang; Yao, Zhao; Zhang, Chun-Wei; Fu, Xiao-Qian; Li, Zhi-Ming; Li, Nian-Qiang; Wang, Cong
2017-05-01
In order to provide excellent performance and show the development of a complicated structure in a module and system, this paper presents a double air-bridge-structured symmetrical differential inductor based on integrated passive device technology. Corresponding to the proposed complicated structure, a new manufacturing process fabricated on a high-resistivity GaAs substrate is described in detail. Frequency-independent physical models are presented with lump elements and the results of skin effect-based measurements. Finally, some key features of the inductor are compared; good agreement between the measurements and modeled circuit fully verifies the validity of the proposed modeling approach. Meanwhile, we also present a comparison of different coil turns for inductor performance. The proposed work can provide a good solution for the design, fabrication, modeling, and practical application of radio-frequency modules and systems.
ERIC Educational Resources Information Center
Swain, Jon
2000-01-01
Explores the effects of football (soccer) in the social construction of hegemonic masculine practices among a group of Year 6 English junior school boys. Argues that football (soccer) acts as a model for the boys in which they utilize the game as a means of constructing, negotiating, and performing their masculinity. (CMK)
Lower-limb kinematics of single-leg squat performance in young adults.
Horan, Sean A; Watson, Steven L; Carty, Christopher P; Sartori, Massimo; Weeks, Benjamin K
2014-01-01
To determine the kinematic parameters that characterize good and poor single-leg squat (SLS) performance. A total of 22 healthy young adults free from musculoskeletal impairment were recruited for testing. For each SLS, both two-dimensional video and three-dimensional motion analysis data were collected. Pelvis, hip, and knee angles were calculated using a reliable and validated lower-limb (LL) biomechanical model. Two-dimensional video clips of SLSs were blindly assessed in random order by eight musculoskeletal physiotherapists using a 10-point ordinal scale. To facilitate between-group comparisons, SLS performances were stratified by tertiles corresponding to poor, intermediate, and good SLS performance. Mean ratings of SLS performance assessed by physiotherapists were 8.3 (SD 0.5), 6.8 (SD 0.7), and 4.0 (SD 0.8) for good, intermediate, and poor squats, respectively. Three-dimensional analysis revealed that people whose SLS performance was assessed as poor exhibited increased hip adduction, reduced knee flexion, and increased medio-lateral displacement of the knee joint centre compared to those whose SLS performance was assessed as good (p≤0.05). Overall, poor SLS performance is characterized by inadequate knee flexion and excessive frontal plane motion of the knee and hip. Future investigations of SLS performance should consider standardizing knee flexion angle to illuminate other influential kinematic parameters.
NASA Astrophysics Data System (ADS)
Zakiyatussariroh, W. H. Wan; Said, Z. Mohammad; Norazan, M. R.
2014-12-01
This study investigated the performance of the Lee-Carter (LC) method and its variants in modeling and forecasting Malaysian mortality. These include the original LC, the Lee-Miller (LM) variant and the Booth-Maindonald-Smith (BMS) variant. The methods were evaluated using Malaysian mortality data, measured as age-specific death rates (ASDR), for 1971 to 2009 for the overall population, while data for 1980-2009 were used in separate models for the male and female populations. The performance of the variants was examined in terms of the goodness of fit of the models and forecasting accuracy. Comparison was made based on several criteria, namely mean square error (MSE), root mean square error (RMSE), mean absolute deviation (MAD) and mean absolute percentage error (MAPE). The results indicate that the BMS method outperformed the others in in-sample fitting, both for the overall population and when the models were fitted separately for the male and female populations. However, in terms of out-of-sample forecast accuracy, the BMS method was best only when the data were fitted to the overall population; when the data were fitted separately, the original LC method performed better for the male population and the LM method for the female population.
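The four comparison criteria named in this record (MSE, RMSE, MAD and MAPE) have standard definitions. A short Python sketch, with placeholder arrays since the Malaysian ASDR data are not reproduced here:

```python
import numpy as np

# Standard definitions of the four criteria named in the abstract;
# the arrays are placeholders, not the Malaysian mortality data.
def mse(obs, fit):
    return np.mean((np.asarray(obs) - np.asarray(fit)) ** 2)

def rmse(obs, fit):
    return np.sqrt(mse(obs, fit))

def mad(obs, fit):
    return np.mean(np.abs(np.asarray(obs) - np.asarray(fit)))

def mape(obs, fit):
    obs, fit = np.asarray(obs), np.asarray(fit)
    return 100.0 * np.mean(np.abs((obs - fit) / obs))

obs = np.array([0.010, 0.012, 0.015, 0.020])  # e.g. observed ASDR (invented)
fit = np.array([0.011, 0.012, 0.014, 0.022])  # fitted ASDR (invented)
```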
NASA Technical Reports Server (NTRS)
Arya, Vinod K.; Halford, Gary R.
1993-01-01
The feasibility of a viscoplastic model incorporating two back stresses and a drag strength is investigated for performing nonlinear finite element analyses of structural engineering problems. To demonstrate suitability for nonlinear structural analyses, the model is implemented into a finite element program and analyses for several uniaxial and multiaxial problems are performed. Good agreement is shown between the results obtained using the finite element implementation and those obtained experimentally. The advantages of using advanced viscoplastic models for performing nonlinear finite element analyses of structural components are indicated.
Development of self-control in children aged 3 to 9 years: Perspective from a dual-systems model
Tao, Ting; Wang, Ligang; Fan, Chunlei; Gao, Wenbin
2014-01-01
The current study tested a set of interrelated theoretical propositions based on a dual-systems model of self-control. Data were collected from 2135 children aged 3 to 9 years. The results suggest that (a) there was positive growth in good self-control, whereas poor control remained relatively stable; and (b) girls performed better than boys on tests of good self-control. The results are discussed in terms of their implications for a dual-systems model of self-control theory and future empirical work. PMID:25501669
Surface tension modelling of liquid Cd-Sn-Zn alloys
NASA Astrophysics Data System (ADS)
Fima, Przemyslaw; Novakovic, Rada
2018-06-01
The thermodynamic model in conjunction with the Butler equation, as well as the geometric models, were used for the surface tension calculation of liquid Cd-Sn-Zn alloys. Good agreement was found between the experimental data for the limiting binaries and model calculations performed with the Butler model. For ternary alloys, the surface tension variation with Cd content is better reproduced for alloys lying on vertical sections defined by a high Sn-to-Zn molar fraction ratio. The calculated surface tension is in relatively good agreement with the available experimental data. In addition, the surface segregation of the liquid ternary Cd-Sn-Zn alloy and its constituent binaries has also been calculated.
Moving target detection method based on improved Gaussian mixture model
NASA Astrophysics Data System (ADS)
Ma, J. Y.; Jie, F. R.; Hu, Y. J.
2017-07-01
Gaussian mixture models are often employed to build the background model in background-difference methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian mixture model. According to the gray-level convergence of each pixel, the number of Gaussian distributions used to learn and update the background model is chosen adaptively. A morphological reconstruction method is adopted to eliminate shadows. Experiments show that the proposed method not only has good robustness and detection performance but also good adaptability; even in special cases, such as large grayscale changes, it still performs well.
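The paper's improved mixture model is not specified in full in this record. As a simplified Python illustration of the underlying background-difference idea, the sketch below maintains a single running Gaussian (mean and variance) per pixel, flags pixels far from the background model as foreground, and updates only background pixels; all thresholds and data are invented.

```python
import numpy as np

# Single-Gaussian running background model per pixel (a simplification
# of the mixture model in the abstract); thresholds and data are invented.
def update_background(frame, mean, var, alpha=0.05, k=2.5):
    d = frame - mean
    fg = np.abs(d) > k * np.sqrt(var)                       # far from background -> foreground
    mean = np.where(fg, mean, mean + alpha * d)             # update background pixels only
    var = np.where(fg, var, (1 - alpha) * var + alpha * d ** 2)
    return fg, mean, np.maximum(var, 1e-4)

rng = np.random.default_rng(1)
mean = np.full((4, 4), 100.0)                      # learned background intensities
var = np.full((4, 4), 4.0)
frame = mean + rng.normal(scale=1.0, size=(4, 4))  # new frame: background + noise
frame[0, 0] = 200.0                                # one "moving object" pixel
fg, mean, var = update_background(frame, mean, var)
```

A full mixture model would keep several Gaussians per pixel and, per the abstract, adapt their number to each pixel's gray-level convergence.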
Research on quality metrics of wireless adaptive video streaming
NASA Astrophysics Data System (ADS)
Li, Xuefei
2018-04-01
With the development of wireless networks and intelligent terminals, video traffic has increased dramatically, and adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, good QoS (Quality of Service) of the wireless network does not always guarantee that all customers have a good experience, so new quality metrics have been widely studied recently. Taking this into account, the objective of this paper is to investigate quality metrics for wireless adaptive video streaming. A wireless video streaming simulation platform with a DASH mechanism and a multi-rate video generator is established. Based on this platform, a PSNR model, an SSIM model and a Quality Level model are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling and switching frequency, while the PSNR and SSIM models mainly consider the quality of the video itself. To evaluate the performance of these QoE models, three performance metrics (SROCC, PLCC and RMSE), which compare subjective and predicted MOS (Mean Opinion Score), are calculated. From these performance metrics, the monotonicity, linearity and accuracy of the quality metrics can be observed.
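The three performance metrics named above (SROCC, PLCC and RMSE) can be sketched in Python as follows; SROCC is computed here as the Pearson correlation of ranks, and the MOS values are invented placeholders, not data from the paper.

```python
import numpy as np

# SROCC, PLCC and RMSE between subjective and predicted MOS;
# the MOS values are invented placeholders.
def plcc(a, b):
    a = np.asarray(a, float); b = np.asarray(b, float)
    a = a - a.mean(); b = b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

def srocc(a, b):
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(np.asarray(a)), rank(np.asarray(b)))

def rmse(a, b):
    a = np.asarray(a, float); b = np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

subjective = [1.2, 2.5, 3.1, 4.0, 4.8]   # subjective MOS (placeholder)
predicted = [1.0, 2.7, 3.0, 4.2, 4.6]    # model-predicted MOS (placeholder)
```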
A New Metric for Quantifying Performance Impairment on the Psychomotor Vigilance Test
2012-01-01
...used the coefficient of determination (R2) and the P-values based on Bartels' test of randomness of the residual error to quantify the goodness-of-fit ... we used the goodness-of-fit between each metric and the corresponding individualized two-process model output (Rajaraman et al., 2008, 2009) to assess ... individualized two-process model fits for each of the 12 subjects using the five metrics. The P-values are for Bartels'
Inter-comparison of time series models of lake levels predicted by several modeling strategies
NASA Astrophysics Data System (ADS)
Khatibi, R.; Ghorbani, M. A.; Naghipour, L.; Jothiprakash, V.; Fathima, T. A.; Fazelifard, M. H.
2014-04-01
Five modeling strategies are employed to analyze water level time series of six lakes with different physical characteristics such as shape, size, altitude and range of variation. The models comprise chaos theory, Auto-Regressive Integrated Moving Average (ARIMA), treated for seasonality and hence SARIMA, Artificial Neural Networks (ANN), Gene Expression Programming (GEP) and Multiple Linear Regression (MLR). Each is formulated on a different premise, with different underlying assumptions. Chaos theory is elaborated in greater detail, as it is customary to identify the existence of chaotic signals by a number of techniques (e.g. average mutual information and false nearest neighbors), and future values are predicted using the Nonlinear Local Prediction (NLP) technique. This paper takes a critical view of past inter-comparison studies seeking a superior performance, against which it is reported that (i) the performances of all five modeling strategies vary from good to poor, hampering the recommendation of a clear-cut predictive model; (ii) the performances on the datasets of two cases are consistently better with all five modeling strategies; (iii) in the other cases, the performances are poor but the results can still be fit-for-purpose; (iv) the simultaneously good performances of NLP and SARIMA pull their underlying assumptions to different ends, which cannot be reconciled. A number of arguments are presented, including the culture of pluralism, according to which the various modeling strategies facilitate an insight into the data from different vantage points.
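The Nonlinear Local Prediction technique mentioned in this record can be sketched as follows: delay-embed the series, find the nearest past states to the current one, and average their successors. The embedding dimension, neighbour count and synthetic "lake level" series below are illustrative choices, not the study's settings.

```python
import numpy as np

# Nonlinear Local Prediction sketch: delay-embed the series, find the
# nearest past states to the current one, and average their successors.
# Embedding dimension and neighbour count are illustrative choices.
def nlp_forecast(series, dim=3, k=4):
    x = np.asarray(series, float)
    # all past delay vectors whose successor is known
    states = np.array([x[i:i + dim] for i in range(len(x) - dim)])
    current = x[-dim:]                          # the present state
    dists = np.linalg.norm(states - current, axis=1)
    nearest = np.argsort(dists)[:k]             # k most similar past states
    return float(x[nearest + dim].mean())       # average of their successors

t = np.arange(200)
lake = 10 + np.sin(0.3 * t)                     # synthetic "lake level" series
pred = nlp_forecast(lake[:-1])                  # predict the withheld last value
```

For a smooth, noiseless series like this sine, the averaged successors of near-neighbour states land close to the true next value.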
Performance of PRISM III and PELOD-2 scores in a pediatric intensive care unit.
Gonçalves, Jean-Pierre; Severo, Milton; Rocha, Carla; Jardim, Joana; Mota, Teresa; Ribeiro, Augusto
2015-10-01
The study aims were to compare two models, the Pediatric Risk of Mortality III (PRISM III) and the Pediatric Logistic Organ Dysfunction 2 (PELOD-2), for prediction of mortality in a pediatric intensive care unit (PICU), and to recalibrate PELOD-2 in a Portuguese population. To this end, a prospective cohort study was performed to evaluate score performance (standardized mortality ratio, discrimination, and calibration) for both models. A total of 556 patients consecutively admitted to our PICU between January 2011 and December 2012 were included in the analysis. The median age was 65 months, with an interquartile range of 1 month to 17 years. The male-to-female ratio was 1.5. The median length of PICU stay was 3 days. The overall predicted number of deaths was 30.8 patients using the PRISM III score and 22.1 patients using PELOD-2; the observed mortality was 29 patients. The areas under the receiver operating characteristic curve for the two models were 0.92 and 0.94, respectively. The Hosmer-Lemeshow goodness-of-fit test showed good calibration only for PRISM III (PRISM III: χ² = 3.820, p = 0.282; PELOD-2: χ² = 9.576, p = 0.022). Both scores had good discrimination; PELOD-2 needs recalibration to be a more reliable prediction tool. What is Known: • PRISM III (Pediatric Risk of Mortality III) and PELOD (Pediatric Logistic Organ Dysfunction) scores are frequently used to assess the performance of intensive care units and also for mortality prediction in the pediatric population. • PELOD-2 is the newer version of PELOD and has recently been validated with good discrimination and calibration. What is New: • In our population, both scores had good discrimination. • PELOD-2 needs recalibration to be a more reliable prediction tool.
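The Hosmer-Lemeshow test used above can be sketched in Python: group patients by increasing predicted risk and compare observed with expected deaths in each group. The risks and outcomes below are simulated, not the study's PICU cohort.

```python
import numpy as np

# Hosmer-Lemeshow statistic: compare observed and expected deaths across
# groups of increasing predicted risk. Simulated, well-calibrated data.
def hosmer_lemeshow(p, y, groups=10):
    order = np.argsort(p)
    p, y = np.asarray(p)[order], np.asarray(y)[order]
    chi2 = 0.0
    for gp, gy in zip(np.array_split(p, groups), np.array_split(y, groups)):
        n, expected, observed = len(gp), gp.sum(), gy.sum()
        denom = expected * (1 - expected / n)   # ~ n * pbar * (1 - pbar)
        if denom > 0:
            chi2 += (observed - expected) ** 2 / denom
    return float(chi2)

rng = np.random.default_rng(2)
p = rng.uniform(0.01, 0.5, size=500)            # predicted mortality risks (simulated)
y = (rng.uniform(size=500) < p).astype(int)     # outcomes drawn from those risks
chi2 = hosmer_lemeshow(p, y)                    # modest when risks are well calibrated
```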
Ebell, Mark H; Jang, Woncheol; Shen, Ye; Geocadin, Romergryko G
2013-11-11
Informing patients and providers of the likelihood of survival after in-hospital cardiac arrest (IHCA), neurologically intact or with minimal deficits, may be useful when discussing do-not-attempt-resuscitation orders. To develop a simple prearrest point score that can identify patients unlikely to survive IHCA, neurologically intact or with minimal deficits. The study included 51,240 inpatients experiencing an index episode of IHCA between January 1, 2007, and December 31, 2009, in 366 hospitals participating in the Get With the Guidelines-Resuscitation registry. Dividing data into training (44.4%), test (22.2%), and validation (33.4%) data sets, we used multivariate methods to select the best independent predictors of good neurologic outcome, created a series of candidate decision models, and used the test data set to select the model that best classified patients as having a very low (<1%), low (1%-3%), average (>3%-15%), or higher than average (>15%) likelihood of survival after in-hospital cardiopulmonary resuscitation for IHCA with good neurologic status. The final model was evaluated using the validation data set. Survival to discharge after in-hospital cardiopulmonary resuscitation for IHCA with good neurologic status (neurologically intact or with minimal deficits) based on a Cerebral Performance Category score of 1. The best performing model was a simple point score based on 13 prearrest variables. The C statistic was 0.78 when applied to the validation set. It identified the likelihood of a good outcome as very low in 9.4% of patients (good outcome in 0.9%), low in 18.9% (good outcome in 1.7%), average in 54.0% (good outcome in 9.4%), and above average in 17.7% (good outcome in 27.5%). Overall, the score can identify more than one-quarter of patients as having a low or very low likelihood of survival to discharge, neurologically intact or with minimal deficits after IHCA (good outcome in 1.4%). 
The Good Outcome Following Attempted Resuscitation (GO-FAR) scoring system identifies patients who are unlikely to benefit from a resuscitation attempt should they experience IHCA. This information can be used as part of a shared decision regarding do-not-attempt-resuscitation orders.
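A C statistic such as the 0.78 reported above is the area under the ROC curve: the probability that the model scores a randomly chosen positive case above a randomly chosen negative one. A Python sketch with invented scores and outcomes:

```python
import numpy as np

# C statistic (area under the ROC curve): the fraction of positive/negative
# pairs ranked concordantly, with ties counting one half. Data are invented.
def c_statistic(scores, outcomes):
    scores = np.asarray(scores, float)
    outcomes = np.asarray(outcomes)
    pos = scores[outcomes == 1]
    neg = scores[outcomes == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins / (len(pos) * len(neg)))

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]   # model scores (invented)
outcomes = [1, 1, 0, 1, 0, 0, 0]               # 1 = good outcome (invented)
auc = c_statistic(scores, outcomes)
```

Here eleven of the twelve positive/negative pairs are ranked concordantly, so the C statistic is 11/12.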
Kaumalapau Harbor, Hawaii, Breakwater Repair
2012-05-01
...agricultural economy to an economy based on tourism. Primary use of the harbor changed from the export of pineapple to the import of fuel and goods to ... unit. The pulse-velocity measurement apparatus consists of a transmitter and receiver connected to electronic circuitry that generates a pulse sent ... performance indices include a ME of -0.43 ft, RMSE of 0.66 ft and SI of 0.24. In other words, the Maui SWAN model will perform as good
Distributed multi-criteria model evaluation and spatial association analysis
NASA Astrophysics Data System (ADS)
Scherer, Laura; Pfister, Stephan
2015-04-01
Model performance, if evaluated, is often communicated by a single indicator and at an aggregated level; however, it does not embrace the trade-offs between different indicators and the inherent spatial heterogeneity of model efficiency. In this study, we simulated the water balance of the Mississippi watershed using the Soil and Water Assessment Tool (SWAT). The model was calibrated against monthly river discharge at 131 measurement stations. Its time series were bisected to allow for subsequent validation at the same gauges. Furthermore, the model was validated against evapotranspiration which was available as a continuous raster based on remote sensing. The model performance was evaluated for each of the 451 sub-watersheds using four different criteria: 1) Nash-Sutcliffe efficiency (NSE), 2) percent bias (PBIAS), 3) root mean square error (RMSE) normalized to standard deviation (RSR), as well as 4) a combined indicator of the squared correlation coefficient and the linear regression slope (bR2). Conditions that might lead to a poor model performance include aridity, a very flat and steep relief, snowfall and dams, as indicated by previous research. In an attempt to explain spatial differences in model efficiency, the goodness of the model was spatially compared to these four phenomena by means of a bivariate spatial association measure which combines Pearson's correlation coefficient and Moran's index for spatial autocorrelation. In order to assess the model performance of the Mississippi watershed as a whole, three different averages of the sub-watershed results were computed by 1) applying equal weights, 2) weighting by the mean observed river discharge, 3) weighting by the upstream catchment area and the square root of the time series length. Ratings of model performance differed significantly in space and according to efficiency criterion. 
The model performed much better in the humid Eastern region than in the arid Western region, which was confirmed by the high spatial association with the aridity index (the ratio of mean annual precipitation to mean annual potential evapotranspiration). This association remained significant when controlling for slope, which showed the second highest spatial association. In line with these findings, the overall model efficiency of the entire Mississippi watershed appeared better when weighted by mean observed river discharge. Furthermore, the model received the highest rating with regard to PBIAS and was judged worst when considering NSE as the most comprehensive indicator. No universal performance indicator exists that considers all aspects of a hydrograph; sound model evaluation must therefore take multiple criteria into account. Since model efficiency varies in space, which is masked by aggregated ratings, spatially explicit model goodness should be communicated as standard practice, at least as a measure of the spatial variability of the indicators. Furthermore, transparent documentation of the evaluation procedure, including the weighting used for aggregated model performance, is crucial but often lacking in published research. Finally, the high spatial association between model performance and aridity highlights the need to improve modelling schemes for arid conditions as a priority over other aspects that might weaken model goodness.
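Three of the four efficiency criteria used in the study (NSE, PBIAS and RSR) follow standard hydrological definitions and can be sketched in Python; the discharge values below are placeholders, not SWAT output. Note the identity RSR = sqrt(1 - NSE), which the sketch makes easy to verify.

```python
import numpy as np

# Standard hydrological definitions of NSE, PBIAS and RSR;
# discharge values are placeholders, not SWAT output.
def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

def pbias(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(100.0 * np.sum(obs - sim) / np.sum(obs))

def rsr(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.sum((obs - sim) ** 2)) / np.sqrt(np.sum((obs - obs.mean()) ** 2)))

obs = np.array([120.0, 95.0, 80.0, 150.0, 110.0])   # observed discharge (invented)
sim = np.array([115.0, 100.0, 85.0, 140.0, 112.0])  # simulated discharge (invented)
```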
Optics ellipticity performance of an unobscured off-axis space telescope.
Zeng, Fei; Zhang, Xin; Zhang, Jianping; Shi, Guangwei; Wu, Hongbo
2014-10-20
With the development of astronomy, more and more attention is paid to the survey of dark matter. Dark matter cannot be seen directly but can be detected by weak gravitational lensing measurement. Ellipticity is an important parameter used to define the shape of a galaxy. Galaxy ellipticity changes with weak gravitational lensing and nonideal optics. With our design of an unobscured off-axis telescope, we implement the simulation and calculation of optics ellipticity. With an accurate model of optics PSF, the characteristic of ellipticity is modeled and analyzed. It is shown that with good optical design, the full field ellipticity can be quite small. The spatial ellipticity change can be modeled by cubic interpolation with very high accuracy. We also modeled the ellipticity variance with time and analyzed the tolerance. It is shown that the unobscured off-axis telescope has good ellipticity performance and fulfills the requirement of dark matter survey.
An accurate behavioral model for single-photon avalanche diode statistical performance simulation
NASA Astrophysics Data System (ADS)
Xu, Yue; Zhao, Tingchen; Li, Ding
2018-01-01
An accurate behavioral model is presented to simulate important statistical performance characteristics of single-photon avalanche diodes (SPADs), such as dark count and after-pulsing noise. The derived simulation model takes into account all important generation mechanisms of the two kinds of noise. For the first time, thermal agitation, trap-assisted tunneling and band-to-band tunneling mechanisms are simultaneously incorporated in the simulation model to evaluate the dark count behavior of SPADs fabricated in deep sub-micron CMOS technology. Meanwhile, a complete carrier trapping and de-trapping process is considered in the after-pulsing model, and a simple analytical expression is derived to estimate the after-pulsing probability. In particular, the key model parameters of avalanche triggering probability and the electric field dependence of excess bias voltage are extracted from Geiger-mode TCAD simulation, and the behavioral simulation model does not include any empirical parameters. The developed SPAD model is implemented in the Verilog-A behavioral hardware description language and operates successfully on the commercial Cadence Spectre simulator, showing good universality and compatibility. The model simulation results are in good agreement with the test data, validating the high simulation accuracy.
Comparing spatial regression to random forests for large ...
Environmental data may be “large” due to the number of records, the number of covariates, or both. Random forests has a reputation for good predictive performance when using many covariates, whereas spatial regression, when using reduced rank methods, has a reputation for good predictive performance when using many records. In this study, we compare these two techniques using a data set containing the macroinvertebrate multimetric index (MMI) at 1859 stream sites with over 200 landscape covariates. Our primary goal is predicting MMI at over 1.1 million perennial stream reaches across the USA. For spatial regression modeling, we develop two new methods to accommodate large data: (1) a procedure that estimates optimal Box-Cox transformations to linearize covariate relationships; and (2) a computationally efficient covariate selection routine that takes spatial autocorrelation into account. We show that our new methods lead to cross-validated performance similar to random forests, but that there is an advantage for spatial regression when quantifying the uncertainty of the predictions. Simulations are used to clarify the advantages of each method. This research investigates different approaches for modeling and mapping national stream condition. We use MMI data from the EPA's National Rivers and Streams Assessment and predictors from StreamCat (Hill et al., 2015). Previous studies have focused on modeling the MMI condition classes (i.e., good, fair, and poor).
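The record mentions estimating optimal Box-Cox transformations. One common way to pick the exponent lambda (used here as an illustration; it is not necessarily the authors' exact procedure) is to maximize the Box-Cox profile log-likelihood over a grid:

```python
import numpy as np

# Pick the Box-Cox exponent lambda by maximizing the profile
# log-likelihood over a grid; data are synthetic (lognormal, so the
# optimum should sit near lambda = 0, i.e. a log transform).
def boxcox(x, lam):
    return np.log(x) if lam == 0 else (x ** lam - 1) / lam

def boxcox_loglik(x, lam):
    z = boxcox(x, lam)
    return -len(x) / 2 * np.log(z.var()) + (lam - 1) * np.log(x).sum()

def best_lambda(x, grid=np.linspace(-2, 2, 401)):
    return float(max(grid, key=lambda lam: boxcox_loglik(x, lam)))

rng = np.random.default_rng(3)
x = rng.lognormal(0.0, 1.0, 2000)   # positive, right-skewed covariate (synthetic)
lam = best_lambda(x)
```

For heavily right-skewed positive data like this, the selected lambda lands near zero, recovering the familiar log transform as a special case.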
Model Policies in Support of High Performance School Buildings for All Children
ERIC Educational Resources Information Center
21st Century School Fund, 2006
2006-01-01
Model Policies in Support of High Performance School Buildings for All Children is to begin to create a coherent and comprehensive set of state policies that will provide the governmental infrastructure for effective and creative practice in facility management. There are examples of good policy in many states, but no state has a coherent set of…
ERIC Educational Resources Information Center
Scheepers, Renée A.; Arah, Onyebuchi A.; Heineman, Maas Jan; Lombarts, Kiki M. J. M. H.
2015-01-01
During their development into competent medical specialists, residents benefit from their attending physicians' excellence in teaching and role modelling. Work engagement increases overall job performance, but it is unknown whether this also applies to attending physicians' teaching performance and role modelling. Attending physicians in clinical…
NASA Technical Reports Server (NTRS)
August, Richard; Kaza, Krishna Rao V.
1988-01-01
An investigation of the vibration, performance, flutter, and forced response of the large-scale propfan, SR7L, and its aeroelastic model, SR7A, has been performed by applying available structural and aeroelastic analytical codes and then correlating measured and calculated results. Finite element models of the blades were used to obtain modal frequencies, displacements, stresses and strains. These values were then used in conjunction with a 3-D, unsteady, lifting surface aerodynamic theory for the subsequent aeroelastic analyses of the blades. The agreement between measured and calculated frequencies and mode shapes for both models is very good. Calculated power coefficients correlate well with those measured for low advance ratios. Flutter results show that both propfans are stable at their respective design points. There is also good agreement between calculated and measured blade vibratory strains due to excitation resulting from yawed flow for the SR7A propfan. The similarity of structural and aeroelastic results show that the SR7A propfan simulates the SR7L characteristics.
A comprehensive mechanistic model for upward two-phase flow in wellbores
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sylvester, N.D.; Sarica, C.; Shoham, O.
1994-05-01
A comprehensive model is formulated to predict the flow behavior of upward two-phase flow. This model is composed of a model for flow-pattern prediction and a set of independent mechanistic models for predicting such flow characteristics as holdup and pressure drop in bubble, slug, and annular flow. The comprehensive model is evaluated using a well data bank made up of 1,712 well cases covering a wide variety of field data. Model performance is also compared with six commonly used empirical correlations and the Hasan-Kabir mechanistic model. Overall model performance is in good agreement with the data. In comparison with other methods, the comprehensive model performed the best.
A decision support model for investment on P2P lending platform.
Zeng, Xiangxiang; Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao
2017-01-01
Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges for making effective investment decisions. In a P2P lending platform, one lender can invest in N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Based on this bipartite graph model, we built an iterative computation model to evaluate unknown loans. To validate the proposed model, we performed extensive experiments on real-world data from the largest American P2P lending marketplace, Prosper. By comparing our experimental results with those obtained by Bayes and logistic regression classifiers, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. The experimental results also show that the logistic classification model is a good complement to our iterative computation model, which motivated us to integrate the two models. The experimental results of the hybrid classification model demonstrate that the logistic classification model and our iterative computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of the iterative computation model and the logistic classification model) is more efficient and stable than either model alone.
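The iterative computation over the lender-loan bipartite graph can be illustrated with a HITS-style update (our sketch, not the authors' exact formula): a loan's quality is the normalized sum of its investors' ratings, and a lender's rating is the normalized sum of the qualities of the loans they funded. The adjacency matrix below is a toy example.

```python
import numpy as np

# HITS-style iteration on the lender-loan bipartite graph (a sketch,
# not the authors' exact formula): loan quality and lender rating
# reinforce each other until the scores converge.
A = np.array([[1, 1, 0, 0],      # A[i, j] = 1 if lender i invested in loan j
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)

lender = np.ones(A.shape[0])     # initial lender ratings
for _ in range(50):
    loan = A.T @ lender          # loan quality: sum of its investors' ratings
    loan /= np.linalg.norm(loan)
    lender = A @ loan            # lender rating: sum of funded loans' qualities
    lender /= np.linalg.norm(lender)
```

In this toy graph, loans 1 and 2, each funded by two lenders, converge to higher quality scores than the singly funded loans 0 and 3.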
An experimental investigation of hingeless helicopter rotor-body stability in hover
NASA Technical Reports Server (NTRS)
Bousman, W. G.
1978-01-01
Model tests of a 1.62 m diameter rotor were performed to investigate the aeromechanical stability of coupled rotor-body systems in hover. Experimental measurements were made of modal frequencies and damping over a wide range of rotor speeds. Good data were obtained for the frequencies of the rotor lead-lag regressing mode. The quality of the damping measurements of the body modes was poor due to nonlinear damping in the gimbal ball bearings. Simulated vacuum testing was performed using substitute blades of tantalum that reduced the effective lock number to 0.2% of the model scale value while keeping the blade inertia constant. The experimental data were compared with theoretical predictions, and the correlation was in general very good.
Hybrid ABC Optimized MARS-Based Modeling of the Milling Tool Wear from Milling Run Experimental Data
García Nieto, Paulino José; García-Gonzalo, Esperanza; Ordóñez Galán, Celestino; Bernardo Sánchez, Antonio
2016-01-01
Milling cutters are important cutting tools used in milling machines to perform milling operations, and they are prone to wear and subsequent failure. In this paper, a practical new hybrid model to predict the milling tool wear in a regular cut, as well as entry cut and exit cut, of a milling tool is proposed. The model was based on the optimization tool termed artificial bee colony (ABC) in combination with the multivariate adaptive regression splines (MARS) technique. This optimization mechanism involved the parameter setting in the MARS training procedure, which significantly influences the regression accuracy. Therefore, an ABC–MARS-based model was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of the experiment, depth of cut, feed, type of material, etc. Regression with optimal hyperparameters was performed and a determination coefficient of 0.94 was obtained. The ABC–MARS-based model's goodness of fit to experimental data confirmed the good performance of this model. This new model also allowed us to ascertain the most influential parameters on the milling tool flank wear, with a view to proposing improvements to the milling machine. Finally, the conclusions of this study are presented. PMID:28787882
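The ABC side of the hybrid can be sketched as a minimal artificial bee colony loop for tuning continuous parameters (an illustrative, simplified version with a global-best memory; the study's actual implementation and its coupling to the MARS hyperparameters are not specified here):

```python
import random

def abc_minimize(f, bounds, n_bees=20, limit=10, iters=100, seed=0):
    """Minimize f over the box `bounds` with a simplified artificial bee colony."""
    rng = random.Random(seed)
    dim = len(bounds)

    def rand_point():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    food = [rand_point() for _ in range(n_bees)]   # candidate "food sources"
    fit = [f(p) for p in food]
    trials = [0] * n_bees                          # stagnation counters
    best_p, best_f = list(food[0]), fit[0]         # global-best memory
    for _ in range(iters):
        for i in range(n_bees):
            # perturb one coordinate toward/away from a random neighbor
            k = rng.randrange(n_bees)
            j = rng.randrange(dim)
            cand = list(food[i])
            cand[j] += rng.uniform(-1.0, 1.0) * (food[i][j] - food[k][j])
            lo, hi = bounds[j]
            cand[j] = min(max(cand[j], lo), hi)
            fc = f(cand)
            if fc < fit[i]:                        # greedy acceptance
                food[i], fit[i], trials[i] = cand, fc, 0
                if fc < best_f:
                    best_p, best_f = list(cand), fc
            else:
                trials[i] += 1
        for i in range(n_bees):                    # scout phase
            if trials[i] > limit:                  # abandon stagnant sources
                food[i] = rand_point()
                fit[i] = f(food[i])
                trials[i] = 0
    return best_p
```

In the paper's setting, `f` would wrap a MARS training-and-validation run and return the regression error for a given parameter setting; here any callable over a box works.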
NASA Astrophysics Data System (ADS)
Kim, Go-Un; Seo, Kyong-Hwan
2018-01-01
A key physical factor in regulating the performance of Madden-Julian oscillation (MJO) simulation is examined by using 26 climate model simulations from the World Meteorological Organization's Working Group for Numerical Experimentation/Global Energy and Water Cycle Experiment Atmospheric System Study (WGNE and MJO-Task Force/GASS) global model comparison project. For this, the intraseasonal moisture budget equation is analyzed and a simple, efficient physical quantity is developed. The result shows that MJO skill is most sensitive to vertically integrated intraseasonal zonal wind convergence (ZC). In particular, a specific threshold value of the strength of the ZC can be used to distinguish between good and poor models. An additional finding is that good models exhibit the correct simultaneous convection and large-scale circulation phase relationship. In poor models, however, the peak circulation response appears 3 days after peak rainfall, suggesting unfavorable coupling between convection and circulation. To improve simulation of the MJO in climate models, we propose that this delay of circulation in response to convection be corrected in the cumulus parameterization scheme.
Di, Qian; Rowland, Sebastian; Koutrakis, Petros; Schwartz, Joel
2017-01-01
Ground-level ozone is an important atmospheric oxidant, which exhibits considerable spatial and temporal variability in its concentration level. Existing modeling approaches for ground-level ozone include chemical transport models, land-use regression, Kriging, and data fusion of chemical transport models with monitoring data. Each of these methods has both strengths and weaknesses. Combining those complementary approaches could improve model performance. Meanwhile, satellite-based total column ozone, combined with the ozone vertical profile, is another potential input. We propose a hybrid model that integrates the above variables to achieve spatially and temporally resolved exposure assessments for ground-level ozone. We used a neural network for its capacity to model interactions and nonlinearity. Convolutional layers, which use convolution kernels to aggregate nearby information, were added to the neural network to account for spatial and temporal autocorrelation. We trained the model with AQS 8-hour daily maximum ozone in the continental United States from 2000 to 2012 and tested it with left-out monitoring sites. Cross-validated R2 on the left-out monitoring sites ranged from 0.74 to 0.80 (mean 0.76) for predictions on 1 km×1 km grid cells, which indicates good model performance. Model performance remains good even at low ozone concentrations. The prediction results facilitate epidemiological studies assessing both long-term and short-term health effects of ozone. PMID:27332675
Diagnosing Expertise: Human Capital, Decision Making, and Performance among Physicians
Currie, Janet; MacLeod, W. Bentley
2017-01-01
Expert performance is often evaluated assuming that good experts have good outcomes. We examine expertise in medicine and develop a model that allows for two dimensions of physician performance: decision making and procedural skill. Better procedural skill increases the use of intensive procedures for everyone, while better decision making results in a reallocation of procedures from fewer low-risk to high-risk cases. We show that poor diagnosticians can be identified using administrative data and that improving decision making improves birth outcomes by reducing C-section rates at the bottom of the risk distribution and increasing them at the top of the distribution. PMID:29276336
Lago-Peñas, Carlos; Sampaio, Jaime
2015-01-01
The aim of the current study was (i) to identify how important a good season start is to elite soccer teams' performance and (ii) to examine whether this impact is related to the clubs' financial budget. The match performances and annual budgets of all teams were collected from the English FA Premier League, French Ligue 1, Spanish La Liga, Italian Serie A and German Bundesliga for three consecutive seasons (2010-2011 to 2012-2013). A k-means cluster analysis classified the clubs according to their budget as High Range Budget Clubs, Upper-Mid Range Budget Clubs, Lower-Mid Range Budget Clubs and Low Range Budget Clubs. Data were examined through linear regression models. Overall, the results suggested that the better the team performance at the beginning of the season, the better the ranking at the end of the season. However, the magnitude of the effect depended on the clubs' annual budget, with lower budgets being associated with a greater importance of having a good season start (P < 0.01). Moreover, there were differences in trends across the different leagues. These variables can be used to develop accurate models to estimate final rankings. Conversely, Lower-Mid and Low Range Budget Clubs can benefit from fine-tuning preseason planning in order to accelerate the acquisition of optimal performance.
What makes a good voice for radio: perceptions of radio employers and educators.
Warhurst, Samantha; McCabe, Patricia; Madill, Catherine
2013-03-01
To inform vocal training and management of voice disorders of professional radio performers in Australia by determining radio employers' and educators' qualitative perceptions on (1) what makes a good voice for radio and (2) what communication characteristics are important when employing radio performers. Radio employers and educators (n=9) participated in semistructured interviews. Interview transcripts were coded line-by-line and analyzed for qualitative themes using principles of grounded theory. Radio performers sound easy-on-the-ear, natural, and have an ability to read and produce voices that suit the station. Many of these characteristics make them sound different to radio voices in the past. Content and personality are now also more significant than voice characteristics. A multidimensional model of these characteristics is presented. The model has implications for the training and management of voice disorders in radio performers and will guide future quantitative research on the vocal features of this population. Copyright © 2013 The Voice Foundation. Published by Mosby, Inc. All rights reserved.
Local dynamic subgrid-scale models in channel flow
NASA Technical Reports Server (NTRS)
Cabot, William H.
1994-01-01
The dynamic subgrid-scale (SGS) model has given good results in the large-eddy simulation (LES) of homogeneous isotropic or shear flow, and in the LES of channel flow, using averaging in two or three homogeneous directions (the DA model). In order to simulate flows in general, complex geometries (with few or no homogeneous directions), the dynamic SGS model needs to be applied at a local level in a numerically stable way. Channel flow, which is inhomogeneous and wall-bounded in only one direction, provides a good initial test for local SGS models. Tests of the dynamic localization model were performed previously in channel flow using a pseudospectral code and good results were obtained. Numerical instability due to persistently negative eddy viscosity was avoided either by constraining the eddy viscosity to be positive or by limiting the time that eddy viscosities could remain negative by co-evolving the SGS kinetic energy (the DLk model). The DLk model, however, was too expensive to run in the pseudospectral code due to a large near-wall term in the auxiliary SGS kinetic energy (k) equation. One objective was then to implement the DLk model in a second-order central finite difference channel code, in which the auxiliary k equation could be integrated implicitly in time at a great reduction in cost, and to assess its performance in comparison with the plane-averaged dynamic model or with no model at all, and with direct numerical simulation (DNS) and/or experimental data. Other local dynamic SGS models have been proposed recently, e.g., constrained dynamic models with random backscatter, and with eddy viscosity terms that are averaged in time over material path lines rather than in space. Another objective was to incorporate and test these models in channel flow.
Model fit evaluation in multilevel structural equation models
Ryu, Ehri
2014-01-01
Assessing goodness of model fit is one of the key questions in structural equation modeling (SEM). Goodness of fit is the extent to which the hypothesized model reproduces the multivariate structure underlying the set of variables. During the earlier development of multilevel structural equation models, the “standard” approach was to evaluate the goodness of fit for the entire model across all levels simultaneously. The model fit statistics produced by the standard approach have a potential problem in detecting lack of fit in the higher-level model for which the effective sample size is much smaller. Also when the standard approach results in poor model fit, it is not clear at which level the model does not fit well. This article reviews two alternative approaches that have been proposed to overcome the limitations of the standard approach. One is a two-step procedure which first produces estimates of saturated covariance matrices at each level and then performs single-level analysis at each level with the estimated covariance matrices as input (Yuan and Bentler, 2007). The other level-specific approach utilizes partially saturated models to obtain test statistics and fit indices for each level separately (Ryu and West, 2009). Simulation studies (e.g., Yuan and Bentler, 2007; Ryu and West, 2009) have consistently shown that both alternative approaches performed well in detecting lack of fit at any level, whereas the standard approach failed to detect lack of fit at the higher level. It is recommended that the alternative approaches are used to assess the model fit in multilevel structural equation model. Advantages and disadvantages of the two alternative approaches are discussed. The alternative approaches are demonstrated in an empirical example. PMID:24550882
Guarneri, Paolo; Rocca, Gianpiero; Gobbi, Massimiliano
2008-09-01
This paper deals with the simulation of the tire/suspension dynamics by using recurrent neural networks (RNNs). RNNs are derived from multilayer feedforward neural networks by adding feedback connections between output and input layers. The optimal network architecture derives from a parametric analysis based on the optimal tradeoff between network accuracy and size. The neural network can be trained with experimental data obtained in the laboratory from simulated road profiles (cleats). The results obtained from the neural network demonstrate good agreement with the experimental results over a wide range of operating conditions. The NN model can be effectively applied as part of a vehicle system model to accurately predict elastic bushings and tire dynamics behavior. Although the neural network model, as a black-box model, does not provide good insight into the physical behavior of the tire/suspension system, it is a useful tool for assessing vehicle ride and noise, vibration, harshness (NVH) performance due to its good computational efficiency and accuracy.
Automated optimization of water-water interaction parameters for a coarse-grained model.
Fogarty, Joseph C; Chiu, See-Wing; Kirby, Peter; Jakobsson, Eric; Pandit, Sagar A
2014-02-13
We have developed an automated parameter optimization software framework (ParOpt) that implements the Nelder-Mead simplex algorithm and applied it to a coarse-grained polarizable water model. The model employs a tabulated, modified Morse potential with decoupled short- and long-range interactions incorporating four water molecules per interaction site. Polarizability is introduced by the addition of a harmonic angle term defined among three charged points within each bead. The target function for parameter optimization was based on the experimental density, surface tension, electric field permittivity, and diffusion coefficient. The model was validated by comparison of statistical quantities with experimental observation. We found very good performance of the optimization procedure and good agreement of the model with experiment.
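The Nelder-Mead simplex algorithm that ParOpt implements can be sketched in pure Python (a minimal textbook variant with standard reflection, expansion, inside-contraction and shrink steps; the real framework wraps this around the tabulated-potential target function):

```python
def nelder_mead(f, x0, step=0.5, iters=300,
                alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Minimize f: list[float] -> float with a minimal Nelder-Mead simplex."""
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):                       # initial simplex around x0
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        # centroid of all vertices except the worst
        c = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        xr = [c[i] + alpha * (c[i] - worst[i]) for i in range(n)]  # reflect
        if f(xr) < f(best):
            xe = [c[i] + gamma * (xr[i] - c[i]) for i in range(n)]  # expand
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        else:
            xc = [c[i] + rho * (worst[i] - c[i]) for i in range(n)]  # contract
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:                             # shrink toward the best vertex
                simplex = [best] + [
                    [best[i] + sigma * (p[i] - best[i]) for i in range(n)]
                    for p in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]
```

In the paper's setting, the objective would combine deviations from the experimental density, surface tension, permittivity, and diffusion coefficient into one scalar; the sketch works for any such callable.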
Yeates, Peter; O'Neill, Paul; Mann, Karen; Eva, Kevin W
2012-12-05
Competency-based models of education require assessments to be based on individuals' capacity to perform, yet the nature of human judgment may fundamentally limit the extent to which such assessment is accurately possible. To determine whether recent observations of the Mini Clinical Evaluation Exercise (Mini-CEX) performance of postgraduate year 1 physicians influence raters' scores of subsequent performances, consistent with either anchoring bias (scores biased similar to previous experience) or contrast bias (scores biased away from previous experience). Internet-based randomized, blinded experiment using videos of Mini-CEX assessments of postgraduate year 1 trainees interviewing new internal medicine patients. Participants were 41 attending physicians from England and Wales experienced with the Mini-CEX, with 20 watching and scoring 3 good trainee performances and 21 watching and scoring 3 poor performances. All then watched and scored the same 3 borderline video performances. The study was completed between July and November 2011. The primary outcome was scores assigned to the borderline videos, using a 6-point Likert scale (anchors included: 1, well below expectations; 3, borderline; 6, well above expectations). Associations were tested in a multivariable analysis that included participants' sex, years of practice, and the stringency index (within-group z score of initial 3 ratings). The mean rating score assigned by physicians who viewed borderline video performances following exposure to good performances was 2.7 (95% CI, 2.4-3.0) vs 3.4 (95% CI, 3.1-3.7) following exposure to poor performances (difference of 0.67 [95% CI, 0.28-1.07]; P = .001). Borderline videos were categorized as consistent with failing scores in 33 of 60 assessments (55%) in those exposed to good performances and in 15 of 63 assessments (24%) in those exposed to poor performances (P < .001). They were categorized as consistent with passing scores in 5 of 60 assessments (8.3%) in those exposed to good performances compared with 25 of 63 assessments (39.5%) in those exposed to poor performances (P < .001). Sex and years of attending practice were not associated with scores. The priming condition (good vs poor performances) and the stringency index jointly accounted for 45% of the observed variation in raters' scores for the borderline videos (P < .001). In an experimental setting, attending physicians exposed to videos of good medical trainee performances rated subsequent borderline performances lower than those who had been exposed to poor performances, consistent with a contrast bias.
Environmental Flow for Sungai Johor Estuary
NASA Astrophysics Data System (ADS)
Adilah, A. Kadir; Zulkifli, Yusop; Zainura, Z. Noor; Bakhiah, Baharim N.
2018-03-01
Sungai Johor estuary is a vital water body in the south of Johor and greatly affects the water quality in the Johor Straits. In the development of the hydrodynamic and water quality models for Sungai Johor estuary, the Environmental Fluid Dynamics Code (EFDC) model was selected. In this application, the EFDC hydrodynamic model was configured to simulate time varying surface elevation, velocity, salinity, and water temperature. The EFDC water quality model was configured to simulate dissolved oxygen (DO), dissolved organic carbon (DOC), chemical oxygen demand (COD), ammoniacal nitrogen (NH3-N), nitrate nitrogen (NO3-N), phosphate (PO4), and Chlorophyll a. The hydrodynamic and water quality model calibration was performed utilizing a set of site specific data acquired in January 2008. The simulated water temperature, salinity and DO showed good and fairly good agreement with observations. The calculated correlation coefficients between computed and observed temperature and salinity were lower compared with the water level. Sensitivity analysis was performed on hydrodynamic and water quality models input parameters to quantify their impact on modeling results such as water surface elevation, salinity and dissolved oxygen concentration. It is anticipated and recommended that the development of this model be continued to synthesize additional field data into the modeling process.
Comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.
1992-01-01
A numerical study was conducted to analyze the performance of different turbulence models when applied to the hypersonic NASA P8 inlet. Computational results from the PARC2D code, which solves the full two-dimensional Reynolds-averaged Navier-Stokes equations, were compared with experimental data. The zero-equation models considered for the study were the Baldwin-Lomax model, the Thomas model, and a combination of the Baldwin-Lomax and Thomas models; the two-equation models considered were the Chien model, the Speziale model (both low Reynolds number), and the Launder and Spalding model (high Reynolds number). The Thomas model performed best among the zero-equation models, and predicted good pressure distributions. The Chien and Speziale models compared very well with the experimental data, and performed better than the Thomas model near the walls.
NASA Astrophysics Data System (ADS)
Bhardwaj, Manish; McCaughan, Leon; Olkhovets, Anatoli; Korotky, Steven K.
2006-12-01
We formulate an analytic framework for the restoration performance of path-based restoration schemes in planar mesh networks. We analyze various switch architectures and signaling schemes and model their total restoration interval. We also evaluate the network global expectation value of the time to restore a demand as a function of network parameters. We analyze a wide range of nominally capacity-optimal planar mesh networks and find our analytic model to be in good agreement with numerical simulation data.
Development of a reactive-dispersive plume model
NASA Astrophysics Data System (ADS)
Kim, Hyun S.; Kim, Yong H.; Song, Chul H.
2017-04-01
A reactive-dispersive plume model (RDPM) was developed in this study. The RDPM considers two main components of a large-scale point-source plume: i) turbulent dispersion and ii) photochemical reactions. To evaluate the simulation performance of the newly developed RDPM, comparisons between the model-predicted and observed mixing ratios were made using the TexAQS II 2006 (Texas Air Quality Study II 2006) power-plant experiment data. Statistical analyses show good correlation (0.61 ≤ R ≤ 0.92) and good agreement in terms of the Index of Agreement (0.70 ≤ IOA ≤ 0.95). The chemical NOx lifetimes for two power-plant plumes (Monticello and Welsh power plants) were also estimated.
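The Index of Agreement used in such evaluations is commonly computed with Willmott's formulation; a small sketch (assuming the standard definition, which may differ in detail from the paper's exact variant):

```python
def index_of_agreement(pred, obs):
    """Willmott's index of agreement d in [0, 1]; 1 means perfect agreement."""
    mean_obs = sum(obs) / len(obs)
    # numerator: sum of squared prediction errors
    num = sum((p - o) ** 2 for p, o in zip(pred, obs))
    # denominator: potential error relative to the observed mean
    den = sum((abs(p - mean_obs) + abs(o - mean_obs)) ** 2
              for p, o in zip(pred, obs))
    return 1.0 - num / den
```

Unlike the correlation coefficient, d penalizes additive and proportional bias, which is why model evaluations often report both.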
Goodness-of-fit tests for open capture-recapture models
Pollock, K.H.; Hines, J.E.; Nichols, J.D.
1985-01-01
General goodness-of-fit tests for the Jolly-Seber model are proposed. These tests are based on conditional arguments using minimal sufficient statistics. The tests are shown to be of simple hypergeometric form so that a series of independent contingency table chi-square tests can be performed. The relationship of these tests to other proposed tests is discussed. This is followed by a simulation study of the power of the tests to detect departures from the assumptions of the Jolly-Seber model. Some meadow vole capture-recapture data are used to illustrate the testing procedure which has been implemented in a computer program available from the authors.
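The series of independent contingency-table chi-square tests reduces in practice to Pearson chi-square computations; a generic sketch of that statistic (not the specific Jolly-Seber partitioning, which conditions on minimal sufficient statistics):

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = rows[i] * cols[j] / total  # under independence
            stat += (observed - expected) ** 2 / expected
    return stat  # compare against a chi-square with (r-1)(c-1) df
```

Each component test in the series contributes one such statistic, and because the tests are independent the statistics can be summed, with degrees of freedom summed accordingly.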
ERIC Educational Resources Information Center
Vermunt, Jeroen K.
2011-01-01
Steinley and Brusco (2011) presented the results of a huge simulation study aimed at evaluating cluster recovery of mixture model clustering (MMC) both for the situation where the number of clusters is known and is unknown. They derived rather strong conclusions on the basis of this study, especially with regard to the good performance of…
The “Dry-Run” Analysis: A Method for Evaluating Risk Scores for Confounding Control
Wyss, Richard; Hansen, Ben B.; Ellis, Alan R.; Gagne, Joshua J.; Desai, Rishi J.; Glynn, Robert J.; Stürmer, Til
2017-01-01
A propensity score (PS) model's ability to control confounding can be assessed by evaluating covariate balance across exposure groups after PS adjustment. The optimal strategy for evaluating a disease risk score (DRS) model's ability to control confounding is less clear. DRS models cannot be evaluated through balance checks within the full population, and they are usually assessed through prediction diagnostics and goodness-of-fit tests. A proposed alternative is the “dry-run” analysis, which divides the unexposed population into “pseudo-exposed” and “pseudo-unexposed” groups so that differences on observed covariates resemble differences between the actual exposed and unexposed populations. With no exposure effect separating the pseudo-exposed and pseudo-unexposed groups, a DRS model is evaluated by its ability to retrieve an unconfounded null estimate after adjustment in this pseudo-population. We used simulations and an empirical example to compare traditional DRS performance metrics with the dry-run validation. In simulations, the dry run often improved assessment of confounding control, compared with the C statistic and goodness-of-fit tests. In the empirical example, PS and DRS matching gave similar results and showed good performance in terms of covariate balance (PS matching) and controlling confounding in the dry-run analysis (DRS matching). The dry-run analysis may prove useful in evaluating confounding control through DRS models. PMID:28338910
Taborri, Juri; Scalona, Emilia; Palermo, Eduardo; Rossi, Stefano; Cappa, Paolo
2015-09-23
Gait-phase recognition is a necessary functionality to drive robotic rehabilitation devices for lower limbs. Hidden Markov Models (HMMs) represent a viable solution, but they need subject-specific training, making data processing very time-consuming. Here, we validated an inter-subject procedure to avoid the intra-subject one in two, four and six gait-phase models in pediatric subjects. The inter-subject procedure consists in the identification of a standardized parameter set to adapt the model to measurements. We tested the inter-subject procedure both on scalar and distributed classifiers. Ten healthy children and ten hemiplegic children, each equipped with two Inertial Measurement Units placed on shank and foot, were recruited. The sagittal component of angular velocity was recorded by gyroscopes while subjects performed four walking trials on a treadmill. The goodness of classifiers was evaluated with the Receiver Operating Characteristic. The results provided a goodness from good to optimum for all examined classifiers (0 < G < 0.6), with the best performance for the distributed classifier in two-phase recognition (G = 0.02). Differences were found among gait partitioning models, while no differences were found between training procedures with the exception of the shank classifier. Our results raise the possibility of avoiding subject-specific training in HMM for gait-phase recognition and its implementation to control exoskeletons for the pediatric population.
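HMM-based gait-phase recognition ultimately reduces to decoding the most likely phase sequence from discretized gyroscope readings; a minimal Viterbi decoder for a hypothetical two-phase (stance/swing) model might look like this (the probabilities below are illustrative, not the paper's trained parameters):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for a discrete-emission HMM."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # best predecessor for state s at this step
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Illustrative two-phase model: observations are the sign of the shank
# angular velocity, discretized to "neg"/"pos" (made-up parameters).
states = ["stance", "swing"]
start_p = {"stance": 0.5, "swing": 0.5}
trans_p = {"stance": {"stance": 0.8, "swing": 0.2},
           "swing":  {"stance": 0.2, "swing": 0.8}}
emit_p = {"stance": {"neg": 0.95, "pos": 0.05},
          "swing":  {"neg": 0.05, "pos": 0.95}}
decoded = viterbi(["neg", "neg", "pos", "pos", "neg"],
                  states, start_p, trans_p, emit_p)
# decoded == ["stance", "stance", "swing", "swing", "stance"]
```

The inter-subject procedure described above corresponds to fixing a standardized parameter set (the `trans_p`/`emit_p` tables) rather than re-estimating it per subject.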
On the use and the performance of software reliability growth models
NASA Technical Reports Server (NTRS)
Keiller, Peter A.; Miller, Douglas R.
1991-01-01
We address the problem of predicting future failures for a piece of software. The number of failures occurring during a finite future time interval is predicted from the number of failures observed during an initial period of usage by using software reliability growth models. Two different methods for using the models are considered: straightforward use of individual models, and dynamic selection among models based on goodness-of-fit and quality-of-prediction criteria. Performance is judged by the error of the predicted number of failures over future finite time intervals, relative to the number of failures eventually observed during those intervals. Six of the former models and eight of the latter are evaluated, based on their performance on twenty data sets. Many open questions remain regarding the use and the performance of software reliability growth models.
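The dynamic-selection idea, re-choosing among growth models based on how well each has fit the failure history so far, together with the relative-error performance measure, can be sketched as follows (a generic illustration; the model forms and selection criteria in the study itself are not reproduced here):

```python
def relative_error(predicted, observed):
    """Prediction error relative to the eventually observed failure count."""
    return (predicted - observed) / observed

def select_model(models, history):
    """Dynamic selection: pick the growth model (a callable t -> cumulative
    failures) that best fits the failure counts observed so far."""
    def lack_of_fit(model):
        return sum((model(t) - y) ** 2
                   for t, y in enumerate(history, start=1))
    return min(models, key=lack_of_fit)
```

The selected model is then used to extrapolate the failure count over the next interval, and its relative error against the eventually observed count is the performance score.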
Llorens, Esther; Saaltink, Maarten W; Poch, Manel; García, Joan
2011-01-01
The performance and reliability of the CWM1-RETRASO model for simulating processes in horizontal subsurface flow constructed wetlands (HSSF CWs), and the relative contribution of different microbial reactions to organic matter (COD) removal in a HSSF CW treating urban wastewater, were evaluated. Several approaches with different influent configurations were simulated. According to the simulations, anaerobic processes were more widespread in the simulated wetland and contributed to a higher COD removal rate [72-79%] than anoxic [0-1%] and aerobic reactions [20-27%] did. In all the cases tested, the reaction that most contributed to COD removal was methanogenesis [58-73%]. All results provided by the model were consistent with the literature and with experimental field observations, suggesting good performance and reliability of CWM1-RETRASO. Given the good simulation predictions, CWM1-RETRASO is the first mechanistic model able to successfully simulate the processes described by the CWM1 model in HSSF CWs. Copyright © 2010 Elsevier Ltd. All rights reserved.
Composing, Analyzing and Validating Software Models
NASA Astrophysics Data System (ADS)
Sheldon, Frederick T.
1998-10-01
This research has been conducted at the Computational Sciences Division of the Information Sciences Directorate at Ames Research Center (Automated Software Engineering Group). The principal work this summer has been to review and refine the agenda carried forward from last summer. Formal specifications provide good support for designing a functionally correct system; however, they are weak at incorporating non-functional performance requirements (like reliability). Techniques which utilize stochastic Petri nets (SPNs) are good for evaluating the performance and reliability of a system, but they may be too abstract and cumbersome from the standpoint of specifying and evaluating functional behavior. Therefore, one major objective of this research is to provide an integrated approach to assist the user in specifying both functionality (qualitative: mutual exclusion and synchronization) and performance requirements (quantitative: reliability and execution deadlines). In this way, the merits of a powerful modeling technique for performability analysis (using SPNs) can be combined with a well-defined formal specification language. In doing so, we can come closer to providing a formal approach to designing a functionally correct system that meets reliability and performance goals.
NASA Astrophysics Data System (ADS)
Mitra, Ashis; Majumdar, Prabal Kumar; Bannerjee, Debamalya
2013-03-01
This paper presents a comparative analysis of two modeling methodologies for the prediction of air permeability of plain woven handloom cotton fabrics. Four basic fabric constructional parameters, namely ends per inch, picks per inch, warp count and weft count, have been used as inputs for artificial neural network (ANN) and regression models. Out of the four regression models tried, the interaction model showed very good prediction performance with a meager mean absolute error of 2.017%. However, ANN models demonstrated superiority over the regression models both in terms of correlation coefficient and mean absolute error. The ANN model with 10 nodes in the single hidden layer showed very good correlation coefficients of 0.982 and 0.929 and mean absolute errors of only 0.923% and 2.043% for training and testing data respectively.
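The two comparison metrics, mean absolute error and correlation coefficient, can be sketched as follows. Treating the reported "mean absolute error" as a percentage error is an assumption here, and the permeability data are invented:

```python
def mean_absolute_error_pct(actual, predicted):
    """Mean absolute percentage error between measurements and predictions."""
    return 100 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def pearson_r(x, y):
    """Pearson correlation coefficient between predictions and measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical air-permeability measurements vs. model predictions
actual    = [100.0, 120.0, 90.0, 110.0]
predicted = [ 98.0, 123.0, 91.0, 108.0]
mae = mean_absolute_error_pct(actual, predicted)
r = pearson_r(actual, predicted)
```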
Grant, S.W.; Hickey, G.L.; Carlson, E.D.; McCollum, C.N.
2014-01-01
Objective/background A number of contemporary risk prediction models for mortality following elective abdominal aortic aneurysm (AAA) repair have been developed. Before a model is used either in clinical practice or to risk-adjust surgical outcome data, it is important that its performance is assessed in external validation studies. Methods The British Aneurysm Repair (BAR) score, Medicare, and Vascular Governance North West (VGNW) models were validated using an independent prospectively collected sample of multicentre clinical audit data. Consecutive data on 1,124 patients undergoing elective AAA repair at 17 hospitals in the north-west of England and Wales between April 2011 and March 2013 were analysed. The outcome measure was in-hospital mortality. Model calibration (observed to expected ratio with chi-square test, calibration plots, calibration intercept and slope) and discrimination (area under receiver operating characteristic curve [AUC]) were assessed in the overall cohort and procedural subgroups. Results The mean age of the population was 74.4 years (SD 7.7); 193 (17.2%) patients were women and the majority of patients (759, 67.5%) underwent endovascular aneurysm repair. All three models demonstrated good calibration in the overall cohort and procedural subgroups. Overall discrimination was excellent for the BAR score (AUC 0.83, 95% confidence interval [CI] 0.76–0.89), and acceptable for the Medicare and VGNW models, with AUCs of 0.78 (95% CI 0.70–0.86) and 0.75 (95% CI 0.65–0.84) respectively. Only the BAR score demonstrated good discrimination in procedural subgroups. Conclusion All three models demonstrated good calibration and discrimination for the prediction of in-hospital mortality following elective AAA repair and are potentially useful. The BAR score has a number of advantages, which include being developed on the most contemporaneous data, excellent overall discrimination, and good performance in procedural subgroups.
Regular model validations and recalibration will be essential. PMID:24837173
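The calibration and discrimination measures reported above have standard definitions: the observed-to-expected ratio compares the event count with the sum of predicted risks, and the C statistic (AUC) is the fraction of event/non-event pairs in which the event case received the higher predicted risk. A minimal sketch on hypothetical data (not the study's code):

```python
def observed_expected_ratio(outcomes, risks):
    """Calibration-in-the-large: observed events / sum of predicted risks."""
    return sum(outcomes) / sum(risks)

def c_statistic(outcomes, risks):
    """AUC via pairwise concordance, counting ties as half a concordant pair."""
    events = [r for o, r in zip(outcomes, risks) if o == 1]
    nonevents = [r for o, r in zip(outcomes, risks) if o == 0]
    wins = sum(1.0 if e > n else 0.5 if e == n else 0.0
               for e in events for n in nonevents)
    return wins / (len(events) * len(nonevents))

# Hypothetical outcomes (1 = died in hospital) and predicted risks
outcomes = [1, 1, 0, 0, 0]
risks = [0.9, 0.4, 0.3, 0.2, 0.5]
oe = observed_expected_ratio(outcomes, risks)
auc = c_statistic(outcomes, risks)
```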
Modelling of diesel engine fuelled with biodiesel using engine simulation software
NASA Astrophysics Data System (ADS)
Said, Mohd Farid Muhamad; Said, Mazlan; Aziz, Azhar Abdul
2012-06-01
This paper is about modelling of a diesel engine that operates on biodiesel fuels. The model is used to simulate and predict the performance and combustion of the engine, with simplified geometry of the engine components in the software. The model is produced using one-dimensional (1D) engine simulation software called GT-Power. The fuel properties library in the software is expanded to include palm-oil-based biodiesel fuels. Experimental work is performed to investigate the effect of biodiesel fuels on the heat release profiles and the engine performance curves. The model is validated with experimental data and good agreement is observed. The simulation results show that combustion characteristics and engine performances differ when biodiesel fuels are used instead of No. 2 diesel fuel.
NASA Astrophysics Data System (ADS)
Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji
2002-06-01
This paper is concerned with the design optimization of axial-flow hydraulic turbine runner blade geometry. In order to obtain a better design with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable, multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives, and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is found to be valid, and it has the feature of good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.
Girardat-Rotar, Laura; Braun, Julia; Puhan, Milo A; Abraham, Alison G; Serra, Andreas L
2017-07-17
Prediction models in autosomal dominant polycystic kidney disease (ADPKD) are useful in clinical settings to identify patients with greater risk of rapid disease progression, in whom a treatment may have more benefits than harms. Mayo Clinic investigators developed a risk prediction tool for ADPKD patients using a single kidney volume value. Our aim was to perform an independent geographical and temporal external validation, as well as to evaluate the potential for improving the predictive performance by including additional information on total kidney volume. We used data from the ongoing Swiss ADPKD study from 2006 to 2016. The main analysis included a sample size of 214 patients with typical ADPKD (Class 1). We evaluated the Mayo Clinic model's calibration and discrimination in our external sample and assessed whether predictive performance could be improved through the addition of subsequent kidney volume measurements beyond the baseline assessment. The calibration of both versions of the Mayo Clinic prediction model, using continuous height-adjusted total kidney volume (HtTKV) and using risk subclasses, was good, with R² of 78% and 70%, respectively. Accuracy was also good, with 91.5% and 88.7% of predictions within 30% of the observed values, respectively. Additional information regarding kidney volume did not substantially improve the model performance. The Mayo Clinic prediction models are generalizable to other clinical settings and provide an accurate tool based on available predictors to identify patients at high risk for rapid disease progression.
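The "within 30% of observed" accuracy measure above can be sketched as follows (illustrative only; the data are invented):

```python
def pct_within_tolerance(observed, predicted, tol=0.30):
    """Share (in %) of predictions falling within ±tol of the observed value."""
    hits = sum(1 for o, p in zip(observed, predicted) if abs(p - o) <= tol * o)
    return 100 * hits / len(observed)

# Hypothetical observed vs. predicted kidney-volume growth values
observed = [100.0, 200.0, 150.0, 120.0]
predicted = [110.0, 190.0, 210.0, 125.0]
accuracy = pct_within_tolerance(observed, predicted)
```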
Mixed H2/H Infinity Optimization with Multiple H Infinity Constraints
1994-06-01
given by … The 2-norm is the energy, and the ∞-norm is the maximum magnitude of the signal. A good measure of performance is… (the system 2-norm is not good for uncertainty management) is conservative, especially when the uncertainty model is highly structured. In this case, … Although the objective was to design a pure regulator, from Table 5-1 we see that the H2 controller provides good
The importance of understanding: Model space moderates goal specificity effects.
Kistner, Saskia; Burns, Bruce D; Vollmeyer, Regina; Kortenkamp, Ulrich
2016-01-01
The three-space theory of problem solving predicts that the quality of a learner's model and the goal specificity of a task interact on knowledge acquisition. In Experiment 1 participants used a computer simulation of a lever system to learn about torques. They either had to test hypotheses (nonspecific goal), or to produce given values for variables (specific goal). In the good- but not in the poor-model condition they saw torque depicted as an area. Results revealed the predicted interaction. A nonspecific goal only resulted in better learning when a good model of torques was provided. In Experiment 2 participants learned to manipulate the inputs of a system to control its outputs. A nonspecific goal to explore the system helped performance when compared to a specific goal to reach certain values when participants were given a good model, but not when given a poor model that suggested the wrong hypothesis space. Our findings support the three-space theory. They emphasize the importance of understanding for problem solving and stress the need to study underlying processes.
A Soil Temperature Model for Closed Canopied Forest Stands
James M. Vose; Wayne T. Swank
1991-01-01
A microcomputer-based soil temperature model was developed to predict temperature at the litter-soil interface and soil temperatures at three depths (0.10 m, 0.20 m, and 1.25 m) under closed forest canopies. Comparisons of predicted and measured soil temperatures indicated good model performance under most conditions. When generalized parameters describing soil...
Modelling Force Transfer Around Openings of Full-Scale Shear Walls
Tom Skaggs; Borjen Yeh; Frank Lam; Minghao Li; Doug Rammer; James Wacker
2011-01-01
Wood structural panel (WSP) sheathed shear walls and diaphragms are the primary lateral-load-resisting elements in wood-frame construction. The historical performance of light-frame structures in North America has been very good due, in part, to model building codes that are designed to preserve life safety. These model building codes have spawned continual improvement...
Investigation of a Nonparametric Procedure for Assessing Goodness-of-Fit in Item Response Theory
ERIC Educational Resources Information Center
Wells, Craig S.; Bolt, Daniel M.
2008-01-01
Tests of model misfit are often performed to validate the use of a particular model in item response theory. Douglas and Cohen (2001) introduced a general nonparametric approach for detecting misfit under the two-parameter logistic model. However, the statistical properties of their approach, and empirical comparisons to other methods, have not…
Institutional and matrix support and its relationship with primary healthcare
dos Santos, Alaneir de Fátima; Machado, Antônio Thomaz Gonzaga da Matta; dos Reis, Clarice Magalhães Rodrigues; Abreu, Daisy Maria Xavier; de Araújo, Lucas Henrique Lobato; Rodrigues, Simone Cristina; de Lima, Ângela Maria de Lourdes Dayrell; Jorge, Alzira de Oliveira; Fonseca, Délcio
2015-01-01
OBJECTIVE To analyze whether the level of institutional and matrix support is associated with better certification of primary healthcare teams. METHODS In this cross-sectional study, we evaluated two kinds of primary healthcare support – 14,489 teams received institutional support and 14,306 teams received matrix support. Logistic regression models were applied. In the institutional support model, the independent variable was "level of support" (as calculated by the sum of supporting activities for both modalities). In the matrix support model, in turn, the independent variables were the supporting activities. The multivariate analysis considered variables with p < 0.20. The model was adjusted by the Hosmer-Lemeshow test. RESULTS The teams had institutional and matrix supporting activities (84.0% and 85.0%, respectively), with 55.0% of them performing between six and eight activities. For institutional support, the odds of very good or good certification were 1.96 and 3.77 for teams with medium and high levels of support, respectively. For matrix support, the corresponding odds of very good or good certification were 1.79 and 3.29. Regarding the association between institutional support activities and certification, very good or good certification was positively associated with self-assessment (OR = 1.95), permanent education (OR = 1.43), shared evaluation (OR = 1.40), and supervision and evaluation of indicators (OR = 1.37). Regarding matrix support, very good or good certification was positively associated with permanent education (OR = 1.50), interventions in the territory (OR = 1.30), and discussion of the work processes (OR = 1.23). CONCLUSIONS In Brazil, supporting activities are being incorporated in primary healthcare, and there is an association between the level of support, both matrix and institutional, and the certification result. PMID:26274872
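The abstract reports odds ratios (OR) from logistic regression. As a reminder of what an OR measures, a sketch of the unadjusted odds ratio from a hypothetical 2×2 table (this is not the study's adjusted model):

```python
def odds_ratio(exposed_event, exposed_no_event, unexposed_event, unexposed_no_event):
    """Unadjusted odds ratio from a 2x2 table: (a*d) / (b*c)."""
    return (exposed_event * unexposed_no_event) / (exposed_no_event * unexposed_event)

# Hypothetical counts: teams with high support vs. low support,
# cross-tabulated against good vs. not-good certification.
or_high_support = odds_ratio(20, 10, 10, 20)
```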
ERIC Educational Resources Information Center
Haezendonck, Elvira; Willems, Kim; Hillemann, Jenny
2017-01-01
Universities, and higher education institutions in general, are ever more influenced by output-driven performance indicators and models that originally stem from the profit-organisational context. As a result, universities are increasingly considering management tools that support them in the (decision) process for attaining their strategic goals.…
Metal plasticity and ductile fracture modeling for cast aluminum alloy parts
Lee, Jinwoo; Kim, Se-Jong; Park, Hyeonil; ...
2018-01-06
In this study, plasticity and ductile fracture properties were characterized by performing various tension, shear, and compression tests. A series of 10 experiments were performed using notched round bars, flat-grooved plates, in-plane shear plates, and cylindrical bars. Two cast aluminum alloys used in automotive suspension systems were selected. Plasticity modelling was performed and the results were compared with experimental and corresponding simulation results; further, the relationships among the stress triaxiality, Lode angle parameter, and equivalent plastic strain at the onset of failure were determined to calibrate a ductile fracture model. Finally, the proposed ductile fracture model shows good agreement with experimental results.
Predictive Caching Using the TDAG Algorithm
NASA Technical Reports Server (NTRS)
Laird, Philip; Saul, Ronald
1992-01-01
We describe how the TDAG algorithm for learning to predict symbol sequences can be used to design a predictive cache store. A model of a two-level mass storage system is developed and used to calculate the performance of the cache under various conditions. Experimental simulations provide good confirmation of the model.
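TDAG itself grows a tree of observed symbol contexts; as a hedged stand-in, a first-order Markov predictor captures the core idea of prefetching the block most likely to follow the current access (a simplification, not the TDAG algorithm itself):

```python
from collections import defaultdict, Counter

class MarkovPrefetcher:
    """First-order stand-in for TDAG: after each access, predict the block
    that most often followed the current one in the history seen so far."""

    def __init__(self):
        self.follows = defaultdict(Counter)  # block -> Counter of successors
        self.prev = None

    def access(self, block):
        """Record an access, updating successor counts for the previous block."""
        if self.prev is not None:
            self.follows[self.prev][block] += 1
        self.prev = block

    def predict(self):
        """Return the most likely next block to prefetch, or None if unknown."""
        if self.prev is None or not self.follows[self.prev]:
            return None
        return self.follows[self.prev].most_common(1)[0][0]
```

A cache would call `predict()` after each access and stage the returned block in the faster store before it is requested.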
Analysis and modeling of the seasonal South China Sea temperature cycle using remote sensing
NASA Astrophysics Data System (ADS)
Twigt, Daniel J.; de Goede, Erik D.; Schrama, Ernst J. O.; Gerritsen, Herman
2007-10-01
The present paper describes the analysis and modeling of the South China Sea (SCS) temperature cycle on a seasonal scale. It investigates the possibility to model this cycle in a consistent way while not taking into account tidal forcing and associated tidal mixing and exchange. This is motivated by the possibility to significantly increase the model’s computational efficiency when neglecting tides. The goal is to develop a flexible and efficient tool for seasonal scenario analysis and to generate transport boundary forcing for local models. Given the significant spatial extent of the SCS basin and the focus on seasonal time scales, synoptic remote sensing is an ideal tool in this analysis. Remote sensing is used to assess the seasonal temperature cycle to identify the relevant driving forces and is a valuable source of input data for modeling. Model simulations are performed using a three-dimensional baroclinic-reduced depth model, driven by monthly mean sea surface anomaly boundary forcing, monthly mean lateral temperature, and salinity forcing obtained from the World Ocean Atlas 2001 climatology, six hourly meteorological forcing from the European Center for Medium range Weather Forecasting ERA-40 dataset, and remotely sensed sea surface temperature (SST) data. A sensitivity analysis of model forcing and coefficients is performed. The model results are quantitatively assessed against climatological temperature profiles using a goodness-of-fit norm. In the deep regions, the model results are in good agreement with this validation data. In the shallow regions, discrepancies are found. To improve the agreement there, we apply a SST nudging method at the free water surface. This considerably improves the model’s vertical temperature representation in the shallow regions. Based on the model validation against climatological in situ and SST data, we conclude that the seasonal temperature cycle for the deep SCS basin can be represented to a good degree. 
For shallow regions, the absence of tidal mixing and exchange has a clear impact on the model’s temperature representation. This effect on the large-scale temperature cycle can be compensated to a good degree by SST nudging for diagnostic applications.
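The SST nudging described above is usually implemented as Newtonian relaxation of the modelled surface temperature toward observations over a chosen timescale; the exact scheme in the paper is an assumption here. A one-step sketch:

```python
def nudge_sst(t_model: float, t_obs: float, dt: float, tau: float) -> float:
    """Relax modelled SST toward the observed value over timescale tau.

    dt and tau share the same time unit; smaller tau means stronger nudging.
    """
    return t_model + (dt / tau) * (t_obs - t_model)

# One time step: model says 20.0 degC, satellite SST says 22.0 degC,
# with dt = 1 day and a relaxation timescale of 2 days.
t_next = nudge_sst(20.0, 22.0, 1.0, 2.0)
```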
Simms, Alexander D; Reynolds, Stephanie; Pieper, Karen; Baxter, Paul D; Cattle, Brian A; Batin, Phillip D; Wilson, John I; Deanfield, John E; West, Robert M; Fox, Keith A A; Hall, Alistair S; Gale, Christopher P
2013-01-01
To evaluate the performance of the National Institute for Health and Clinical Excellence (NICE) mini-Global Registry of Acute Coronary Events (GRACE) (MG) and adjusted mini-GRACE (AMG) risk scores. Retrospective observational study. 215 acute hospitals in England and Wales. 137 084 patients discharged from hospital with a diagnosis of acute myocardial infarction (AMI) between 2003 and 2009, as recorded in the Myocardial Ischaemia National Audit Project (MINAP). Model performance indices of calibration accuracy, discriminative and explanatory performance, including net reclassification index (NRI) and integrated discrimination improvement. Of 495 263 index patients hospitalised with AMI, there were 53 196 ST elevation myocardial infarction and 83 888 non-ST elevation myocardial infarction (NSTEMI) (27.7%) cases with complete data for all AMG variables. For AMI, AMG calibration was better than MG calibration (Hosmer-Lemeshow goodness of fit test: p=0.33 vs p<0.05). MG and AMG predictive accuracy and discriminative ability were good (Brier score: 0.10 vs 0.09; C statistic: 0.82 and 0.84, respectively). The NRI of AMG over MG was 8.1% (p<0.05). Model performance was reduced in patients with NSTEMI, chronic heart failure, chronic renal failure and in patients aged ≥85 years. The AMG and MG risk scores, utilised by NICE, demonstrated good performance across a range of indices using MINAP data, but performed less well in higher risk subgroups. Although indices were better for AMG, its application may be constrained by missing predictors.
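Two of the performance indices used above have compact standard definitions: the Brier score (mean squared difference between predicted risk and outcome) and a category-free net reclassification index. A minimal sketch on hypothetical data, not the MINAP analysis code:

```python
def brier_score(outcomes, risks):
    """Mean squared difference between predicted risk and observed outcome;
    0 is perfect, 0.25 matches an uninformative 50% prediction."""
    return sum((r - o) ** 2 for o, r in zip(outcomes, risks)) / len(outcomes)

def continuous_nri(outcomes, old_risks, new_risks):
    """Category-free NRI: events should move up in risk, non-events down."""
    diffs = [(y, n - o) for y, o, n in zip(outcomes, old_risks, new_risks)]
    events = [d for y, d in diffs if y == 1]
    nonevents = [d for y, d in diffs if y == 0]
    p_up_e = sum(d > 0 for d in events) / len(events)
    p_dn_e = sum(d < 0 for d in events) / len(events)
    p_up_n = sum(d > 0 for d in nonevents) / len(nonevents)
    p_dn_n = sum(d < 0 for d in nonevents) / len(nonevents)
    return (p_up_e - p_dn_e) + (p_dn_n - p_up_n)
```

Note the NRI reported in the abstract (8.1%) is categorical, based on predefined risk strata; the continuous variant above is shown for simplicity.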
The Advanced Trauma Operative Management course--a two student to one faculty model.
Ali, Jameel; Sorvari, Anne; Henry, Sharon; Kortbeek, John; Tremblay, Lorraine
2013-09-01
The internationally recognized Advanced Trauma Operative Management (ATOM) course uses a 1:1 student-to-faculty teaching model. This study examines a two student to one faculty ATOM teaching model. We randomly assigned 16 residents to four experienced ATOM faculty members. Half started with the one-student model and the other half with the two-student model and then switched using the same faculty. Students and faculty completed forms on the educational value of the two models (1 = very poor; 2 = poor; 3 = average; 4 = good; and 5 = excellent) and identified educational preferences and recommendations. We assigned educational values for the 13 procedures as follows: All faculty rated the one-student model as excellent; six members rated the two-student model as excellent, and seven as good. Students rated 50%-75% as excellent and 12%-44% as good for the two-student model, and 56%-81% as excellent and 12%-44% as good for the one-student model. Given resource constraints, all faculty and 88% of students preferred the two-student model. With no resource constraints, 75% of students and 50% of faculty chose the two-student model. All faculty and students rated both models "acceptable." Overall, 81% of students and 50% of faculty rated the two-student model better. All faculty members recommended that the models be optional; 94% of students recommended that they be either optional (50%) or a two-student model (44%). Performing or assisting on each procedure twice was considered an advantage of the two-student model. The two-student teaching model was acceptable and generally preferred in this study. With appropriately trained faculty and students, the two-student model is feasible and should result in less animal usage and possibly wider promulgation. Copyright © 2013 Elsevier Inc. All rights reserved.
Modeling of video compression effects on target acquisition performance
NASA Astrophysics Data System (ADS)
Cha, Jae H.; Preece, Bradley; Espinola, Richard L.
2009-05-01
The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation on task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference of the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels to particular blur and noise input parameters for NVESD target acquisition performance model suite.
Channon, S B; Davis, R C; Goode, N T; May, S A
2017-03-01
Group work forms the foundation for much of student learning within higher education, and has many educational, social and professional benefits. This study aimed to explore the determinants of success or failure for undergraduate student teams and to define a 'good group' through considering three aspects of group success: the task, the individuals, and the team. We employed a mixed methodology, combining demographic data with qualitative observations and task and peer evaluation scores. We determined associations between group dynamic and behaviour, demographic composition, member personalities and attitudes towards one another, and task success. We also employed a cluster analysis to create a model outlining the attributes of a good small group learning team in veterinary education. This model highlights that student groups differ in measures of their effectiveness as teams, independent of their task performance. On the basis of this, we suggest that groups who achieve high marks in tasks cannot be assumed to have acquired team working skills, and therefore if these are important as a learning outcome, they must be assessed directly alongside the task output.
Wavefront control performance modeling with WFIRST shaped pupil coronagraph testbed
NASA Astrophysics Data System (ADS)
Zhou, Hanying; Nemati, Bijian; Krist, John; Cady, Eric; Kern, Brian; Poberezhskiy, Ilya
2017-09-01
NASA's WFIRST mission includes a coronagraph instrument (CGI) for direct imaging of exoplanets. Significant improvement in CGI model fidelity has been made recently, alongside a testbed high contrast demonstration in a simulated dynamic environment at JPL. We present our modeling method and results of comparisons to testbed's high order wavefront correction performance for the shaped pupil coronagraph. Agreement between model prediction and testbed result at better than a factor of 2 has been consistently achieved in raw contrast (contrast floor, chromaticity, and convergence), and with that comes good agreement in contrast sensitivity to wavefront perturbations and mask lateral shear.
The MSFC UNIVAC 1108 EXEC 8 simulation model
NASA Technical Reports Server (NTRS)
Williams, T. G.; Richards, F. M.; Weatherbee, J. E.; Paul, L. K.
1972-01-01
A model is presented which simulates the MSFC Univac 1108 multiprocessor system. The hardware/operating system is described to enable a good statistical measurement of the system behavior. The performance of the 1108 is evaluated by performing twenty-four different experiments designed to locate system bottlenecks and also to test the sensitivity of system throughput with respect to perturbation of the various Exec 8 scheduling algorithms. The model is implemented in the general purpose system simulation language and the techniques described can be used to assist in the design, development, and evaluation of multiprocessor systems.
NASA Astrophysics Data System (ADS)
Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George
2017-09-01
Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation took about 173 s for the prostate plan and 73 s for the breast plan at 1% statistical error.
Evaluation of Multiclass Model Observers in PET LROC Studies
NASA Astrophysics Data System (ADS)
Gifford, H. C.; Kinahan, P. E.; Lartizien, C.; King, M. A.
2007-02-01
A localization ROC (LROC) study was conducted to evaluate nonprewhitening matched-filter (NPW) and channelized NPW (CNPW) versions of a multiclass model observer as predictors of human tumor-detection performance with PET images. Target localization is explicitly performed by these model observers. Tumors were placed in the liver, lungs, and background soft tissue of a mathematical phantom, and the data simulation modeled a full-3D acquisition mode. Reconstructions were performed with the FORE+AWOSEM algorithm. The LROC study measured observer performance with 2D images consisting of either coronal, sagittal, or transverse views of the same set of cases. Versions of the CNPW observer based on two previously published difference-of-Gaussian channel models demonstrated good quantitative agreement with human observers. One interpretation of these results treats the CNPW observer as a channelized Hotelling observer with implicit internal noise.
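The NPW observer described above applies the expected signal profile as a matched filter at every candidate location and reports the maximum response as the tumor location. A small numpy sketch of that scan on synthetic data (the Gaussian template, image size, and noise level are illustrative, not the study's phantom or channel models):

```python
import numpy as np

def npw_scan(image, template):
    """NPW observer: matched-filter score at every candidate location."""
    th, tw = template.shape
    out_h, out_w = image.shape[0] - th + 1, image.shape[1] - tw + 1
    scores = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            scores[i, j] = np.sum(image[i:i + th, j:j + tw] * template)
    return scores

# Synthetic 32x32 "slice" with one Gaussian lesion (hypothetical data)
y, x = np.mgrid[-3:4, -3:4]
template = np.exp(-(x**2 + y**2) / 4.0)
rng = np.random.default_rng(0)
image = rng.normal(0.0, 0.2, (32, 32))
image[10:17, 20:27] += template          # lesion with top-left corner at (10, 20)

scores = npw_scan(image, template)
loc = np.unravel_index(np.argmax(scores), scores.shape)   # reported location
```

In an LROC study, `loc` would be scored as correct when it falls within a tolerance radius of the true lesion site.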
Yu, Rongjie; Abdel-Aty, Mohamed
2013-07-01
The Bayesian inference method has been frequently adopted to develop safety performance functions. One advantage of the Bayesian inference is that prior information for the independent variables can be included in the inference procedures. However, there are few studies that discussed how to formulate informative priors for the independent variables and evaluated the effects of incorporating informative priors in developing safety performance functions. This paper addresses this deficiency by introducing four approaches of developing informative priors for the independent variables based on historical data and expert experience. Merits of these informative priors have been tested along with two types of Bayesian hierarchical models (Poisson-gamma and Poisson-lognormal models). Deviance information criterion (DIC), R-square values, and coefficients of variance for the estimations were utilized as evaluation measures to select the best model(s). Comparison across the models indicated that the Poisson-gamma model is superior with a better model fit and it is much more robust with the informative priors. Moreover, the two-stage Bayesian updating informative priors provided the best goodness-of-fit and coefficient estimation accuracies. Furthermore, informative priors for the inverse dispersion parameter have also been introduced and tested. Different types of informative priors' effects on the model estimations and goodness-of-fit have been compared and concluded. Finally, based on the results, recommendations for future research topics and study applications have been made. Copyright © 2013 Elsevier Ltd. All rights reserved.
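For intuition about informative priors in the Poisson-gamma setting: a Gamma(α, β) prior on a site's Poisson crash rate is conjugate, updating to Gamma(α + Σy, β + n) after n observed counts, so a "two-stage" update simply applies this twice. A toy sketch (the counts are made up, and a real safety performance function would also include covariates, which this omits):

```python
def gamma_poisson_update(alpha, beta, counts):
    """Conjugate update: Gamma(alpha, beta) prior + Poisson counts -> Gamma posterior."""
    return alpha + sum(counts), beta + len(counts)

# Stage 1: historical data sharpens a vague Gamma(1, 1) prior into an informative one
a1, b1 = gamma_poisson_update(1.0, 1.0, [2, 3, 1, 4])
# Stage 2: the informative prior is updated again with current-period counts
a2, b2 = gamma_poisson_update(a1, b1, [5, 2])
posterior_mean = a2 / b2   # expected crash rate
```

The posterior mean is a precision-weighted compromise between the historical and current data, which is why such priors stabilize the estimates.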
Monte Carlo Simulation of Plumes Spectral Emission
2005-06-07
Comparison with ERIM experimental data for hot cell radiance has been performed. It has been shown that the NASA standard infrared optical model [3] provides good agreement. The influence of different optical models on the predicted hot cell radiance under the ERIM experimental conditions has also been studied. [Figure residue: model prediction of the hot cell radiance; NASA Standard Infrared Radiation model; averaged rotational line structure (JLBL=0).]
The Sheperd equation and chaos identification.
Gregson, Robert A M
2010-04-01
An equation created by Sheperd (1982) to model stability in exploited fish populations has been found to have a wider application, and it exhibits complicated internal dynamics, including phases of strict periodicity and of chaos. It may be potentially applicable to other psychophysiological contexts. The problems of determining goodness-of-fit, and the comparative performance of alternative models including the Sheperd model, are briefly addressed.
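The recruitment function in question is commonly written R = aS / (1 + (S/K)^b); iterating it as a map reproduces the periodic and chaotic phases the abstract mentions. A minimal sketch under that assumed form (parameter values chosen only to give a bounded, nontrivial orbit):

```python
def shepherd(s, a, k, b):
    """Sheperd (1982) recruitment, assumed form R = a*S / (1 + (S/K)**b)."""
    return a * s / (1.0 + (s / k) ** b)

def trajectory(s0, a, k, b, n_steps):
    """Iterate the map s -> shepherd(s) from s0."""
    xs = [s0]
    for _ in range(n_steps):
        xs.append(shepherd(xs[-1], a, k, b))
    return xs

# Steep density dependence (large b) produces oscillatory, chaos-like orbits
traj = trajectory(0.5, 3.0, 1.0, 6.0, 200)
```

Plotting `traj` for increasing `b` would show the transition from a stable fixed point through cycles toward chaos.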
Parasitic Parameters Extraction for InP DHBT Based on EM Method and Validation up to H-Band
NASA Astrophysics Data System (ADS)
Li, Oupeng; Zhang, Yong; Wang, Lei; Xu, Ruimin; Cheng, Wei; Wang, Yuan; Lu, Haiyan
2017-05-01
This paper presents a small-signal model for InGaAs/InP double heterojunction bipolar transistors (DHBTs). Parasitic parameters of the access vias and electrode fingers are extracted by 3-D electromagnetic (EM) simulation. By analyzing the equivalent circuits of seven special structures and using the EM simulation results, the parasitic parameters are extracted systematically. Compared with a multi-port s-parameter EM model, the equivalent circuit model has a clear physical interpretation and avoids complex internal port settings. The model is validated on a 0.5 × 7 μm2 InP DHBT up to 325 GHz and provides a good fit between measured and simulated multi-bias s-parameters over the full band. Finally, an H-band amplifier is designed and fabricated for further verification. The measured amplifier performance agrees well with the model prediction, which indicates that the model has good accuracy in the submillimeter-wave band.
ERIC Educational Resources Information Center
Gurtner, Andrea; Tschan, Franziska; Semmer, Norbert K.; Nagele, Christof
2007-01-01
This study examines the effect of guided reflection on team processes and performance, based on West's (1996, 2000) concept of reflexivity. Communicating via e-mail, 49 hierarchically structured teams (one commander and two specialists) performed seven 15 min shifts of a simulated team-based military air-surveillance task (TAST) in two meetings, a…
Prediction models for clustered data: comparison of a random intercept and standard regression model
2013-01-01
Background When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. Results The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. 
Conclusion The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters. PMID:23414436
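The discrimination gap the study reports (cluster effects used versus ignored) can be seen in a small simulation: generate clustered outcomes with a random intercept, then compute the c-index for a predictor that includes the cluster effect and one that does not. This sketch uses the true coefficients directly rather than fitting the two models, so it only illustrates the mechanism; the cluster count echoes the 19 anesthesiologists but all numbers are hypothetical:

```python
import numpy as np

def c_index(risk, y):
    """Concordance (c) statistic: P(risk_case > risk_control) over all pairs."""
    cases, controls = risk[y == 1], risk[y == 0]
    diff = cases[:, None] - controls[None, :]
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())

rng = np.random.default_rng(42)
n_clusters, per_cluster = 19, 50                 # e.g. 19 anesthesiologists
u = rng.normal(0.0, 1.0, n_clusters)             # cluster random intercepts
cluster = np.repeat(np.arange(n_clusters), per_cluster)
x = rng.normal(0.0, 1.0, cluster.size)           # patient-level predictor
lp = -0.5 + 0.8 * x + u[cluster]                 # true linear predictor
y = (rng.random(cluster.size) < 1.0 / (1.0 + np.exp(-lp))).astype(int)

c_with = c_index(0.8 * x + u[cluster], y)   # cluster effect used for prediction
c_without = c_index(0.8 * x, y)             # marginal, standard-model-like predictor
```

As in the paper, `c_with` exceeds `c_without` within the development setting; in truly new clusters the intercepts are unknown, and the advantage disappears.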
Bouwmeester, Walter; Twisk, Jos W R; Kappen, Teus H; van Klei, Wilton A; Moons, Karel G M; Vergouwe, Yvonne
2013-02-15
Modified Distribution-Free Goodness-of-Fit Test Statistic.
Chun, So Yeon; Browne, Michael W; Shapiro, Alexander
2018-03-01
Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.
A Hybrid Actuation System Demonstrating Significantly Enhanced Electromechanical Performance
NASA Technical Reports Server (NTRS)
Su, Ji; Xu, Tian-Bing; Zhang, Shujun; Shrout, Thomas R.; Zhang, Qiming
2004-01-01
A hybrid actuation system (HYBAS) utilizing advantages of a combination of electromechanical responses of an electroactive polymer (EAP), an electrostrictive copolymer, and an electroactive ceramic single crystal, PZN-PT, has been developed. The system employs the contributions of the actuation elements cooperatively and exhibits a significantly enhanced electromechanical performance compared to devices made of either constituent material, the electroactive polymer or the ceramic single crystal, individually. The theoretical modeling of the performance of the HYBAS is in good agreement with experimental observation. The consistency between the theoretical modeling and the experimental tests makes the design concept an effective route for the development of high-performance actuating devices for many applications. The theoretical modeling, fabrication of the HYBAS, and the initial experimental results are presented and discussed.
ERIC Educational Resources Information Center
Wind, Stefanie A.; Engelhard, George, Jr.; Wesolowski, Brian
2016-01-01
When good model-data fit is observed, the Many-Facet Rasch (MFR) model acts as a linking and equating model that can be used to estimate student achievement, item difficulties, and rater severity on the same linear continuum. Given sufficient connectivity among the facets, the MFR model provides estimates of student achievement that are equated to…
Brief Lags in Interrupted Sequential Performance: Evaluating a Model and Model Evaluation Method
2015-01-05
rehearsal mechanism in the model. To evaluate the model we developed a simple new goodness-of-fit test based on analysis of variance that offers an...repeated step). Sequential constraints are common in medicine, equipment maintenance, computer programming and technical support, data analysis, legal analysis, accounting, and many other home and workplace environments. Sequential constraints also play a role in such basic cognitive processes
Improving a DSM Obtained by Unmanned Aerial Vehicles for Flood Modelling
NASA Astrophysics Data System (ADS)
Mourato, Sandra; Fernandez, Paulo; Pereira, Luísa; Moreira, Madalena
2017-12-01
According to the EU Floods Directive, flood hazard maps must be used to assess flood risk. These maps can be developed with hydraulic modelling tools using a Digital Surface Runoff Model (DSRM). During the last decade, important advances in spatial data processing have been made which will certainly improve hydraulic model results. Currently, images acquired with a Red/Green/Blue (RGB) camera transported by an Unmanned Aerial Vehicle (UAV) are seen as a good alternative data source for representing the terrain surface with a high level of resolution and precision. The question is whether the digital surface model obtained from these data is adequate for a good representation of the hydraulic flood characteristics. For this purpose, the hydraulic model HEC-RAS was run with 4 different DSRMs for an 8.5 km reach of the Lis River in Portugal. The computational performance of the 4 modelling implementations was evaluated. Water level records from two hydrometric stations were used as boundary conditions of the hydraulic model, and records from a third hydrometric station were used to validate the optimal DSRM. The HEC-RAS results with the best performance during the validation step were those obtained with the DSRM integrating the two altimetry data sources.
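Validating simulated water levels against a hydrometric station, as done above, is typically scored with a skill metric such as the Nash-Sutcliffe efficiency (the abstract does not name its metric, so NSE is used here purely as an illustration):

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the obs. mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

obs = [1.2, 2.5, 3.1, 2.8, 1.9]   # hypothetical observed water levels (m)
sim = [1.0, 2.7, 3.0, 2.9, 2.0]   # hypothetical simulated levels
score = nse(obs, sim)
```

Running the same metric for each of the 4 DSRM variants would rank them exactly as the validation step in the study does.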
USEEIO: a New and Transparent United States ...
National-scope environmental life cycle models of goods and services may be used for many purposes, including quantifying impacts of production and consumption of nations, assessing organization-wide impacts, identifying purchasing hot spots, analyzing environmental impacts of policies, and performing streamlined life cycle assessment. USEEIO is a new environmentally extended input-output model of the United States fit for such purposes and other sustainable materials management applications. USEEIO melds data on economic transactions between 389 industry sectors with environmental data for these sectors covering land, water, energy and mineral usage and emissions of greenhouse gases, criteria air pollutants, nutrients and toxics, to build a life cycle model of 385 US goods and services. In comparison with existing US input-output models, USEEIO is more current with most data representing year 2013, more extensive in its coverage of resources and emissions, more deliberate and detailed in its interpretation and combination of data sources, and includes formal data quality evaluation and description. USEEIO was assembled with a new Python module called the IO Model Builder capable of assembling and calculating results of user-defined input-output models and exporting the models into LCA software. The model and data quality evaluation capabilities are demonstrated with an analysis of the environmental performance of an average hospital in the US. All USEEIO f
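The core calculation of an environmentally extended input-output model like USEEIO is the Leontief inverse: total output x = (I − A)⁻¹ y for final demand y, with environmental impacts B·x. A two-sector toy version (the matrices are invented, not USEEIO's 389-sector data):

```python
import numpy as np

A = np.array([[0.1, 0.2],     # hypothetical direct-requirements matrix:
              [0.3, 0.1]])    # inputs from sector i per unit output of sector j
y = np.array([100.0, 50.0])   # final demand by sector

x = np.linalg.solve(np.eye(2) - A, y)   # total output: x = (I - A)^-1 y

B = np.array([0.5, 1.2])      # hypothetical emissions per unit of sector output
impacts = B * x               # emissions attributable to meeting the demand
```

Total output exceeds final demand because each sector must also supply the intermediate inputs of the others.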
Arefi-Oskoui, Samira; Khataee, Alireza; Vatanpour, Vahid
2017-07-10
In this research, MgAl-CO₃²⁻ nanolayered double hydroxide (NLDH) was synthesized through a facile coprecipitation method, followed by a hydrothermal treatment. The prepared NLDHs were used as a hydrophilic nanofiller for improving the performance of the PVDF-based ultrafiltration membranes. The main objective of this research was to obtain the optimized formula of NLDH/PVDF nanocomposite membrane presenting the best performance using computational techniques as a cost-effective method. For this aim, an artificial neural network (ANN) model was developed for modeling and expressing the relationship between the performance of the nanocomposite membrane (pure water flux, protein flux and flux recovery ratio) and the affecting parameters including the NLDH, PVP 29000 and polymer concentrations. The effects of the mentioned parameters and the interaction between the parameters were investigated using the contour plot predicted with the developed model. Scanning electron microscopy (SEM), atomic force microscopy (AFM), and water contact angle techniques were applied to characterize the nanocomposite membranes and to interpret the predictions of the ANN model. The developed ANN model was introduced to a genetic algorithm (GA) as a bioinspired optimizer to determine the optimum values of input parameters leading to high pure water flux, protein flux, and flux recovery ratio. The optimum values for the NLDH, PVP 29000 and PVDF concentrations were determined to be 0.54, 1, and 18 wt %, respectively. The performance of the nanocomposite membrane prepared using the optimum values proposed by the GA was investigated experimentally, and the results were in good agreement with the values predicted by the ANN model, with errors lower than 6%. This good agreement confirmed that the nanocomposite membrane performance could be successfully modeled and optimized by the ANN-GA system.
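The ANN-GA pattern above (a trained surrogate scored by a genetic algorithm) can be sketched compactly. Here a simple quadratic stands in for the ANN surrogate, with its peak placed at the paper's reported optimum (0.54, 1, 18 wt %) purely for illustration; the GA itself (tournament-free elitist selection, averaging crossover, Gaussian mutation) is a generic minimal variant, not the authors' implementation:

```python
import random

def surrogate_fitness(p):
    """Stand-in for the ANN surrogate: peak performance at a known optimum."""
    nldh, pvp, pvdf = p
    return -((nldh - 0.54) ** 2 + (pvp - 1.0) ** 2 + ((pvdf - 18.0) / 10.0) ** 2)

def ga(bounds, pop_size=40, gens=60, seed=7):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=surrogate_fitness, reverse=True)
        elite = pop[: pop_size // 2]              # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(u + v) / 2.0 for u, v in zip(a, b)]   # averaging crossover
            j = rng.randrange(len(child))
            lo, hi = bounds[j]
            child[j] = min(hi, max(lo, child[j] + rng.gauss(0.0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return max(pop, key=surrogate_fitness)

best = ga([(0.0, 2.0), (0.0, 3.0), (10.0, 25.0)])   # NLDH, PVP, PVDF bounds (wt %)
```

With a real ANN, `surrogate_fitness` would call the trained network's prediction instead of the quadratic.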
Improvements on NYMTC Data Products
DOT National Transportation Integrated Search
2009-11-11
Just like any other scientific research field, the value of data quality is undisputed in the field of transportation. From policy planning to performance evaluation, from model development to impact studies, good quality data is essential to generat...
Good initialization model with constrained body structure for scene text recognition
NASA Astrophysics Data System (ADS)
Zhu, Anna; Wang, Guoyou; Dong, Yangbo
2016-09-01
Scene text recognition has gained significant attention in the computer vision community. Character detection and recognition are the premise of text recognition and affect the overall performance to a large extent. We propose a good initialization model for scene character recognition from cropped text regions. We use constrained character body structures with deformable part-based models to detect and recognize characters against various backgrounds. The character body structures are obtained by an unsupervised discriminative clustering approach followed by a statistical model and a self-built minimum spanning tree model. Our method utilizes part appearance and location information, and combines character detection and recognition in the cropped text region. The evaluation results on benchmark datasets demonstrate that our proposed scheme outperforms state-of-the-art methods on both scene character recognition and word recognition.
Analyzing Strategic Business Rules through Simulation Modeling
NASA Astrophysics Data System (ADS)
Orta, Elena; Ruiz, Mercedes; Toro, Miguel
Service Oriented Architecture (SOA) holds promise for business agility since it allows business processes to change to meet new customer demands or market needs without causing a cascade effect of changes in the underlying IT systems. Business rules are the instrument chosen to help business and IT collaborate. In this paper, we propose the use of simulation models to model and simulate strategic business rules that are then disaggregated at different levels of an SOA architecture. Our proposal is aimed at helping to find a good configuration for strategic business objectives and IT parameters. The paper includes a case study where a simulation model is built to support business decision-making in a context where finding a good configuration for different business parameters and performance is too complex to analyze by trial and error.
Radar cross section models for limited aspect angle windows
NASA Astrophysics Data System (ADS)
Robinson, Mark C.
1992-12-01
This thesis presents a method for building Radar Cross Section (RCS) models of aircraft based on static data taken from limited aspect angle windows. These models statistically characterize static RCS. This is done to show that a limited number of samples can be used to effectively characterize static aircraft RCS. The optimum models are determined by performing both a Kolmogorov and a Chi-Square goodness-of-fit test comparing the static RCS data with a variety of probability density functions (pdf) that are known to be effective at approximating the static RCS of aircraft. The optimum parameter estimator is also determined by the goodness-of-fit tests if there is a difference in pdf parameters obtained by the Maximum Likelihood Estimator (MLE) and the Method of Moments (MoM) estimators.
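The MLE-versus-MoM comparison scored by a goodness-of-fit test can be sketched with scipy. Here a lognormal pdf plays the role of the candidate RCS distribution (the thesis tests several candidates; this and all parameter values are illustrative), fitted both ways and scored with the Kolmogorov-Smirnov statistic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rcs = rng.lognormal(mean=1.0, sigma=0.6, size=400)   # synthetic "static RCS" samples

# Maximum likelihood fit of a lognormal pdf (location fixed at 0)
s_mle, _, scale_mle = stats.lognorm.fit(rcs, floc=0)

# Method-of-moments fit of the same pdf
m, v = rcs.mean(), rcs.var()
sigma2 = np.log(1.0 + v / m ** 2)
s_mom, scale_mom = np.sqrt(sigma2), np.exp(np.log(m) - sigma2 / 2.0)

# Kolmogorov-Smirnov goodness-of-fit score for each estimator
ks_mle = stats.kstest(rcs, 'lognorm', args=(s_mle, 0.0, scale_mle))
ks_mom = stats.kstest(rcs, 'lognorm', args=(s_mom, 0.0, scale_mom))
```

Whichever fit yields the smaller KS statistic would be the "optimum parameter estimator" in the thesis's sense for this pdf.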
Yilmaz Soylu, Meryem; Zeleny, Mary G.; Zhao, Ruomeng; Bruning, Roger H.; Dempsey, Michael S.; Kauffman, Douglas F.
2017-01-01
The two studies reported here explored the factor structure of the newly constructed Writing Achievement Goal Scale (WAGS), and examined relationships among secondary students' writing achievement goals, writing self-efficacy, affect for writing, and writing achievement. In the first study, 697 middle school students completed the WAGS. A confirmatory factor analysis revealed a good fit for this data with a three-factor model that corresponds with mastery, performance approach, and performance avoidance goals. The results of Study 1 were an indication for the researchers to move forward with Study 2, which included 563 high school students. The secondary students completed the WAGS, as well as the Self-efficacy for Writing Scale, and the Liking Writing Scale. Students also self-reported grades for writing and for language arts courses. Approximately 6 weeks later, students completed a statewide writing assessment. We tested a theoretical model representing relationships among Study 2 variables using structural equation modeling including students' responses to the study scales and students' scores on the statewide assessment. Results from Study 2 revealed a good fit between a model depicting proposed relationships among the constructs and the data. Findings are discussed relative to achievement goal theory and writing. PMID:28878707
Tsai, Jason Sheng-Hong; Du, Yan-Yi; Huang, Pei-Hsiang; Guo, Shu-Mei; Shieh, Leang-San; Chen, Yuhua
2011-07-01
In this paper, a digital redesign methodology of the iterative learning-based decentralized adaptive tracker is proposed to improve the dynamic performance of sampled-data linear large-scale control systems consisting of N interconnected multi-input multi-output subsystems, so that the system output will follow any trajectory which may not be presented by the analytic reference model initially. To overcome the interference of each sub-system and simplify the controller design, the proposed model reference decentralized adaptive control scheme constructs a decoupled well-designed reference model first. Then, according to the well-designed model, this paper develops a digital decentralized adaptive tracker based on the optimal analog control and prediction-based digital redesign technique for the sampled-data large-scale coupling system. In order to enhance the tracking performance of the digital tracker at specified sampling instants, we apply the iterative learning control (ILC) to train the control input via continual learning. As a result, the proposed iterative learning-based decentralized adaptive tracker not only has robust closed-loop decoupled property but also possesses good tracking performance at both transient and steady state. Besides, evolutionary programming is applied to search for a good learning gain to speed up the learning process of ILC. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
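The iterative learning idea above (refine the control input over repeated trials of the same trajectory) can be illustrated on a lifted single-input plant. This is a generic gradient-type ILC update, not the paper's decentralized adaptive scheme, and the plant parameters are invented:

```python
import numpy as np

a, b, N = 0.9, 0.5, 50
# Lifted sampled-data plant: y = G u, with G lower-triangular Toeplitz
# built from the impulse response b * a**(i - j)
G = np.array([[b * a ** (i - j) if i >= j else 0.0 for j in range(N)]
              for i in range(N)])
ref = np.sin(np.linspace(0.0, np.pi, N))   # desired output trajectory

u = np.zeros(N)
errs = []
for _ in range(3000):                      # learning trials
    e = ref - G @ u                        # tracking error of this trial
    errs.append(np.abs(e).max())
    u = u + 0.05 * (G.T @ e)               # gradient ILC update; small gain keeps
                                           # the iteration contractive
```

The per-trial error shrinks toward zero, which is the "good tracking performance at both transient and steady state" that ILC delivers for a repeated task.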
Developing a Suitable Model for Water Uptake for Biodegradable Polymers Using Small Training Sets.
Valenzuela, Loreto M; Knight, Doyle D; Kohn, Joachim
2016-01-01
Prediction of the dynamic properties of water uptake across polymer libraries can accelerate polymer selection for a specific application. We first built semiempirical models using Artificial Neural Networks and all water uptake data, as individual input. These models give very good correlations (R (2) > 0.78 for test set) but very low accuracy on cross-validation sets (less than 19% of experimental points within experimental error). Instead, using consolidated parameters like equilibrium water uptake a good model is obtained (R (2) = 0.78 for test set), with accurate predictions for 50% of tested polymers. The semiempirical model was applied to the 56-polymer library of L-tyrosine-derived polyarylates, identifying groups of polymers that are likely to satisfy design criteria for water uptake. This research demonstrates that a surrogate modeling effort can reduce the number of polymers that must be synthesized and characterized to identify an appropriate polymer that meets certain performance criteria.
Analysis of two-equation turbulence models for recirculating flows
NASA Technical Reports Server (NTRS)
Thangam, S.
1991-01-01
The two-equation kappa-epsilon model is used to analyze turbulent separated flow past a backward-facing step. It is shown that if the model constants are modified to be consistent with the accepted energy decay rate for isotropic turbulence, the dominant features of the flow field, namely the size of the separation bubble and the streamwise component of the mean velocity, can be accurately predicted. In addition, except in the vicinity of the step, very good predictions for the turbulent shear stress, the wall pressure, and the wall shear stress are obtained. The model is also shown to provide good predictions for the turbulence intensity in the region downstream of the reattachment point. Estimated long-time growth rates for the turbulent kinetic energy and dissipation rate of homogeneous shear flow are utilized to develop an optimal set of constants for the two-equation kappa-epsilon model. The physical implications of the model performance are also discussed.
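The link between the decay rate and the model constants is direct: in decaying isotropic turbulence the model reduces to dk/dt = -ε and dε/dt = -C_ε2 ε²/k, whose solution is k ∝ t^(-n) with n = 1/(C_ε2 - 1), so matching an accepted decay exponent fixes C_ε2. A one-line check (the value n = 1.2 is illustrative, within the commonly quoted experimental range):

```python
def c_eps2_from_decay(n):
    """C_eps2 implied by isotropic decay k ~ t**(-n), since n = 1/(C_eps2 - 1)."""
    return 1.0 + 1.0 / n

# The standard C_eps2 = 1.92 corresponds to n = 1/(1.92 - 1) ~ 1.087;
# an experimental decay exponent of n = 1.2 would instead imply:
c2 = c_eps2_from_decay(1.2)   # ~ 1.833
```

This is the kind of constant adjustment the abstract refers to when it ties the constants to the accepted energy decay rate.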
WHO expert committee on specifications for pharmaceutical preparations. Fortieth report.
2006-01-01
This report presents the recommendations of an international group of experts convened by the World Health Organization to consider matters concerning the quality assurance of pharmaceuticals and specifications for drug substances and dosage forms. The report is complemented by a number of annexes. These include: a list of available International Chemical Reference Substances and International Infrared Spectra; supplementary guidelines on good manufacturing practices for heating, ventilation and air-conditioning systems for non-sterile pharmaceutical dosage forms; updated supplementary guidelines on good manufacturing practices for the manufacture of herbal medicines; supplementary guidelines on good manufacturing practices for validation; good distribution practices for pharmaceutical products; a model quality assurance system for procurement agencies (recommendations for quality assurance systems focusing on prequalification of products and manufacturers, purchasing, storage and distribution of pharmaceutical products); multisource (generic) pharmaceutical products: guidelines on registration requirements to establish interchangeability; a proposal to waive in vivo bioequivalence requirements for WHO Model List of Essential Medicines immediate-release, solid oral dosage forms; and additional guidance for organizations performing in vivo bioequivalence studies.
Calculation of the Aerodynamic Behavior of the Tilt Rotor Aeroacoustic Model (TRAM) in the DNW
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2001-01-01
Comparisons of measured and calculated aerodynamic behavior of a tiltrotor model are presented. The test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single, 1/4-scale V-22 rotor in the German-Dutch Wind Tunnel (DNW) provides an extensive set of aeroacoustic, performance, and structural loads data. The calculations were performed using the rotorcraft comprehensive analysis CAMRAD II. Presented are comparisons of measured and calculated performance and airloads for helicopter mode operation, as well as calculated induced and profile power. An aerodynamic and wake model and calculation procedure that reflects the unique geometry and phenomena of tiltrotors has been developed. There are major differences between this model and the corresponding aerodynamic and wake model that has been established for helicopter rotors. In general, good correlation between measured and calculated performance and airloads behavior has been shown. Two aspects of the analysis that clearly need improvement are the stall delay model and the trailed vortex formation model.
Study of unsteady performance of a twin-entry mixed flow turbine
NASA Astrophysics Data System (ADS)
Bencherif, M. M.; Hamidou, M. K.; Hamel, M.; Abidat, M.
2016-03-01
The aim of this investigation is to study the performance of a twin-entry turbine under pulsed flow conditions. The ANSYS-CFX code is used to solve three-dimensional compressible turbulent flow equations. The computational results are compared with those of a one-dimensional model and experimental data, and good agreement is found.
Water surface modeling from a single viewpoint video.
Li, Chuan; Pickup, David; Saunders, Thomas; Cosker, Darren; Marshall, David; Hall, Peter; Willis, Philip
2013-07-01
We introduce a video-based approach for producing water surface models. Recent advances in this field output high-quality results but require dedicated capturing devices and only work in limited conditions. In contrast, our method achieves a good tradeoff between visual quality and production cost: it automatically produces a visually plausible animation using a single viewpoint video as the input. Our approach is based on two discoveries: first, shape from shading (SFS) is adequate to capture the appearance and dynamic behavior of the example water; second, a shallow water model can be used to estimate a velocity field that produces complex surface dynamics. We provide a qualitative evaluation of our method and demonstrate its good performance across a wide range of scenes.
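The shallow water model used for the velocity field can be sketched in its simplest linearized 1D form: velocity responds to the surface gradient, and height responds to the flux divergence. This is a generic textbook discretization on a periodic grid (all parameters hypothetical), not the paper's solver:

```python
import numpy as np

g, H, dx, dt, N = 9.81, 1.0, 0.1, 0.01, 100   # CFL number sqrt(g*H)*dt/dx ~ 0.31
xgrid = np.arange(N) * dx
h = np.exp(-4.0 * (xgrid - 5.0) ** 2)          # initial surface bump
u = np.zeros(N)
mass0 = h.sum()

for _ in range(200):
    # update velocity from the surface gradient, then height from the new flux
    dhdx = (np.roll(h, -1) - np.roll(h, 1)) / (2.0 * dx)
    u = u - dt * g * dhdx
    dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    h = h - dt * H * dudx
```

Updating `u` before `h` (semi-implicit ordering) keeps the centered scheme stable at this CFL number, and the periodic centered differences conserve total water mass exactly.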
Zhang, Yongsheng; Wei, Heng; Zheng, Kangning
2017-01-01
Since metro network expansion provides more alternative routes, it is attractive to integrate the impacts of the route set and the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated as three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance, respectively. Because the model involves multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in a hierarchical Bayes form, and a Markov chain Monte Carlo approach based on Metropolis-Hastings sampling is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model forecasts route choice probabilities well and performs well when applied to transfer flow volume prediction. PMID:28591188
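The Metropolis-Hastings-within-MCMC estimation strategy described above can be illustrated on a toy one-parameter problem; the standard-normal target and the proposal width below are hypothetical stand-ins for the paper's CMNP posterior, not its actual likelihood.

```python
import math
import random

random.seed(0)

def log_target(x):
    # Toy target: standard normal log-density, up to an additive constant.
    return -0.5 * x * x

def metropolis_hastings(n_samples, step=1.0, x0=0.0):
    """Random-walk Metropolis-Hastings sampler for a 1-D target."""
    x, samples = x0, []
    for _ in range(n_samples):
        prop = x + random.gauss(0.0, step)   # symmetric Gaussian proposal
        # Accept with probability min(1, target(prop) / target(x)).
        if math.log(random.random()) < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)
    return samples

draws = metropolis_hastings(20000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

With enough draws the sample mean and variance approach the target's 0 and 1; the CMNP case differs only in that the target is a multivariate hierarchical posterior.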
Parameter Extraction Method for the Electrical Model of a Silicon Photomultiplier
NASA Astrophysics Data System (ADS)
Licciulli, Francesco; Marzocca, Cristoforo
2016-10-01
The availability of an effective electrical model, able to accurately reproduce the signals generated by a Silicon Photomultiplier coupled to the front-end electronics, is mandatory when the performance of a detection system based on this kind of detector has to be evaluated by means of reliable simulations. We propose a complete extraction procedure able to provide the whole set of parameters involved in a well-known model of the detector, which includes the substrate ohmic resistance. The technique achieves a very good fit between simulation results provided by the model and experimental data, thanks to accurate discrimination between the quenching and substrate resistances, which results in a realistic set of extracted parameters. The extraction procedure has been applied to a commercial device over a wide range of conditions in terms of input resistance of the front-end electronics and interconnection parasitics. In all the considered situations, very good correspondence has been found between simulations and measurements, especially with regard to the leading edge of the current pulses generated by the detector, which strongly affects the timing performance of the detection system, thus confirming the effectiveness of the model and the associated parameter extraction technique.
An Investigation of Unified Memory Access Performance in CUDA
Landaverde, Raphael; Zhang, Tiansheng; Coskun, Ayse K.; Herbordt, Martin
2015-01-01
Managing memory between the CPU and GPU is a major challenge in GPU computing. A programming model, Unified Memory Access (UMA), has been recently introduced by Nvidia to simplify the complexities of memory management while claiming good overall performance. In this paper, we investigate this programming model and evaluate its performance and programming model simplifications based on our experimental results. We find that beyond on-demand data transfers to the CPU, the GPU is also able to request subsets of data it requires on demand. This feature allows UMA to outperform full data transfer methods for certain parallel applications and small data sizes. We also find, however, that for the majority of applications and memory access patterns, the performance overheads associated with UMA are significant, while the simplifications to the programming model restrict flexibility for adding future optimizations. PMID:26594668
Regression Models for Identifying Noise Sources in Magnetic Resonance Images
Zhu, Hongtu; Li, Yimei; Ibrahim, Joseph G.; Shi, Xiaoyan; An, Hongyu; Chen, Yashen; Gao, Wei; Lin, Weili; Rowe, Daniel B.; Peterson, Bradley S.
2009-01-01
Stochastic noise, susceptibility artifacts, magnetic field and radiofrequency inhomogeneities, and other noise components in magnetic resonance images (MRIs) can introduce serious bias into any measurements made with those images. We formally introduce three regression models including a Rician regression model and two associated normal models to characterize stochastic noise in various magnetic resonance imaging modalities, including diffusion-weighted imaging (DWI) and functional MRI (fMRI). Estimation algorithms are introduced to maximize the likelihood function of the three regression models. We also develop a diagnostic procedure for systematically exploring MR images to identify noise components other than simple stochastic noise, and to detect discrepancies between the fitted regression models and MRI data. The diagnostic procedure includes goodness-of-fit statistics, measures of influence, and tools for graphical display. The goodness-of-fit statistics can assess the key assumptions of the three regression models, whereas measures of influence can isolate outliers caused by certain noise components, including motion artifacts. The tools for graphical display permit graphical visualization of the values for the goodness-of-fit statistic and influence measures. Finally, we conduct simulation studies to evaluate performance of these methods, and we analyze a real dataset to illustrate how our diagnostic procedure localizes subtle image artifacts by detecting intravoxel variability that is not captured by the regression models. PMID:19890478
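The Rician model arises because an MR magnitude image is the modulus of a complex signal whose real and imaginary channels carry independent Gaussian noise. A minimal simulation (the signal level and noise scale are illustrative) shows the two limiting regimes:

```python
import math
import random

random.seed(1)

def rician_sample(signal, sigma):
    """Magnitude of a complex signal whose real and imaginary parts
    carry independent Gaussian noise -> a Rician-distributed value."""
    re = signal + random.gauss(0.0, sigma)
    im = random.gauss(0.0, sigma)
    return math.hypot(re, im)

# At high SNR the Rician mean approaches the true signal level;
# at zero signal it reduces to a Rayleigh distribution with
# mean sigma * sqrt(pi / 2).
high_snr = [rician_sample(100.0, 1.0) for _ in range(20000)]
rayleigh = [rician_sample(0.0, 1.0) for _ in range(20000)]
mean_high = sum(high_snr) / len(high_snr)
mean_ray = sum(rayleigh) / len(rayleigh)
```

The low-SNR bias visible in `mean_ray` is exactly why ordinary normal-error regression misfits background regions, motivating the Rician regression model.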
A new bio-inspired stimulator to suppress hyper-synchronized neural firing in a cortical network.
Amiri, Masoud; Amiri, Mahmood; Nazari, Soheila; Faez, Karim
2016-12-07
Hyper-synchronous neural oscillations are characteristic of several neurological diseases, such as epilepsy. On the other hand, glial cells, and particularly astrocytes, can influence neural synchronization. Therefore, based on recent research, a new bio-inspired stimulator is proposed, which is essentially a dynamical model of astrocyte biophysics. The performance of the new stimulator is investigated on a large-scale cortical network. Both excitatory and inhibitory synapses are considered in the simulated spiking neural network. The simulation results show that the new stimulator performs well and is able to reduce recurrent abnormal excitability, which in turn prevents hyper-synchronous neural firing in the spiking neural network. In this way, the proposed stimulator has a demand-controlled characteristic and is a good candidate for the deep brain stimulation (DBS) technique to successfully suppress neural hyper-synchronization. Copyright © 2016 Elsevier Ltd. All rights reserved.
[A new low-cost webcam-based laparoscopic training model].
Langeron, A; Mercier, G; Lima, S; Chauleur, C; Golfier, F; Seffert, P; Chêne, G
2012-01-01
To validate a new laparoscopy home training model (GYN Trainer®) in order to practise and learn basic laparoscopic surgery. Ten junior surgical residents and six experienced operators were timed and assessed during six laparoscopic exercises performed on the home training model. Acquisition of skill was 35%. All the novices significantly improved performance in surgical skills despite an 8% partial loss of acquisition between two training sessions. Qualitative evaluation of the system was good (3.8/5). This low-cost personal laparoscopic model seems to be a useful tool to assist surgical novices in learning basic laparoscopic skills. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
A control method for bilateral teleoperating systems
NASA Astrophysics Data System (ADS)
Strassberg, Yesayahu
1992-01-01
The thesis focuses on control of bilateral master-slave teleoperators. The bilateral control issue of teleoperators is studied and a new scheme that overcomes basic unsolved problems is proposed. A performance measure, based on the multiport modeling method, is introduced in order to evaluate and understand the limitations of earlier published bilateral control laws. Based on the study evaluating the different methods, the objective of the thesis is stated. The proposed control law is then introduced, its ideal performance is demonstrated, and conditions for stability and robustness are derived. It is shown that stability, desired performance, and robustness can be obtained under the assumption that the deviation of the model from the actual system satisfies certain norm inequalities and the measurement uncertainties are bounded. The proposed scheme is validated by numerical simulation. The simulated system is based on the configuration of the RAL (Robotics and Automation Laboratory) telerobot. From the simulation results it is shown that good tracking performance can be obtained. In order to verify the performance of the proposed scheme when applied to a real hardware system, an experimental setup of a three degree of freedom master-slave teleoperator (i.e. three degree of freedom master and three degree of freedom slave robot) was built. Three basic experiments were conducted to verify the performance of the proposed control scheme. The first experiment verified the master control law and its contribution to the robustness and performance of the entire system. The second experiment demonstrated the actual performance of the system while performing a free motion teleoperating task. From the experimental results, it is shown that the control law has good performance and is robust to uncertainties in the models of the master and slave.
Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.
Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong
2016-04-15
Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
Data mining of tree-based models to analyze freeway accident frequency.
Chang, Li-Yen; Chen, Wen-Chieh
2005-01-01
Statistical models, such as Poisson or negative binomial regression models, have been employed to analyze vehicle accident frequency for many years. However, these models have their own model assumptions and pre-defined underlying relationship between dependent and independent variables. If these assumptions are violated, the model could lead to erroneous estimation of accident likelihood. Classification and Regression Tree (CART), one of the most widely applied data mining techniques, has been commonly employed in business administration, industry, and engineering. CART does not require any pre-defined underlying relationship between target (dependent) variable and predictors (independent variables) and has been shown to be a powerful tool, particularly for dealing with prediction and classification problems. This study collected the 2001-2002 accident data of National Freeway 1 in Taiwan. A CART model and a negative binomial regression model were developed to establish the empirical relationship between traffic accidents and highway geometric variables, traffic characteristics, and environmental factors. The CART findings indicated that the average daily traffic volume and precipitation variables were the key determinants for freeway accident frequencies. By comparing the prediction performance between the CART and the negative binomial regression models, this study demonstrates that CART is a good alternative method for analyzing freeway accident frequencies.
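The heart of CART is choosing, at each node, the split with the largest impurity reduction; for a count or continuous target this is a variance-reduction scan over candidate thresholds. A minimal sketch, with invented traffic volumes and accident counts (not the Taiwanese freeway data):

```python
def variance(ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def best_split(xs, ys):
    """Return (threshold, gain): the split on xs that most reduces
    the variance of ys, weighted by child-node sizes."""
    parent = variance(ys)
    best = (None, 0.0)
    for t in sorted(set(xs))[1:]:
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        if not left or not right:
            continue
        child = (len(left) * variance(left)
                 + len(right) * variance(right)) / len(ys)
        if parent - child > best[1]:
            best = (t, parent - child)
    return best

# Hypothetical average daily traffic (in 1000s of vehicles) vs accident counts.
adt = [10, 12, 15, 40, 45, 50]
crashes = [1, 0, 1, 5, 6, 7]
threshold, gain = best_split(adt, crashes)
```

Growing a full tree just repeats this scan recursively on each child node; no distributional assumption about the counts is ever made, which is the contrast with the negative binomial model.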
A generic framework to simulate realistic lung, liver and renal pathologies in CT imaging
NASA Astrophysics Data System (ADS)
Solomon, Justin; Samei, Ehsan
2014-11-01
Realistic three-dimensional (3D) mathematical models of subtle lesions are essential for many computed tomography (CT) studies focused on performance evaluation and optimization. In this paper, we develop a generic mathematical framework that describes the 3D size, shape, contrast, and contrast-profile characteristics of a lesion, as well as a method to create lesion models based on CT data of real lesions. Further, we implemented a technique to insert the lesion models into CT images in order to create hybrid CT datasets. This framework was used to create a library of realistic lesion models and corresponding hybrid CT images. The goodness of fit of the models was assessed using the coefficient of determination (R2) and the visual appearance of the hybrid images was assessed with an observer study using images of both real and simulated lesions and receiver operator characteristic (ROC) analysis. The average R2 of the lesion models was 0.80, implying that the models provide a good fit to real lesion data. The area under the ROC curve was 0.55, implying that the observers could not readily distinguish between real and simulated lesions. Therefore, we conclude that the lesion-modeling framework presented in this paper can be used to create realistic lesion models and hybrid CT images. These models could be instrumental in performance evaluation and optimization of novel CT systems.
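Both figures of merit used here are compact computations: R2 compares residual to total variation, and the ROC area equals the probability that a randomly chosen positive case scores above a randomly chosen negative one (the Mann-Whitney form). A toy illustration with invented numbers, not the study's lesion data:

```python
def r_squared(obs, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def auc(pos_scores, neg_scores):
    """ROC area as P(pos > neg); ties count one half."""
    wins = sum((p > q) + 0.5 * (p == q)
               for p in pos_scores for q in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

obs = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 1.9, 3.2, 3.8]
fit = r_squared(obs, pred)

# Observer scores near AUC = 0.5 mean "real" and "simulated" lesions
# cannot be told apart, which is the desired outcome for hybrid images.
area = auc([0.6, 0.4, 0.7], [0.5, 0.6, 0.3])
```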
1979-12-01
faction, occupational preference, or the desirability of good performance. Proposition 2, as formulated by Vroom, predicts the force to act in a... Human Performance, 9: 482-503 (1973). Lewis, Logan M. "Expectancy Theory as a Predictive Model of Career Intent, Job Satisfaction, and Institution... Satisfaction, Effort, Performance, and Retention of Naval Aviation Officers," Organizational Behavior and Human Performance, 8: 1-20 (1972). 102 and Lee Roy
High-performance heat pipes for heat recovery applications
NASA Technical Reports Server (NTRS)
Saaski, E. W.; Hartl, J. H.
1980-01-01
Methods to improve the performance of reflux heat pipes for heat recovery applications were examined both analytically and experimentally. Various models for the estimation of reflux heat pipe transport capacity were surveyed in the literature and compared with experimental data. A high transport capacity reflux heat pipe was developed that provides up to a factor of 10 capacity improvement over conventional open tube designs; analytical models were developed for this device and incorporated into a computer program HPIPE. Good agreement of the model predictions with data for R-11 and benzene reflux heat pipes was obtained.
Composite panel development at JPL
NASA Technical Reports Server (NTRS)
Mcelroy, Paul; Helms, Rich
1988-01-01
Parametric computer studies can be used in a cost-effective manner to determine optimized composite mirror panel designs. An InterDisciplinary computer Model (IDM) was created to aid in the development of high precision reflector panels for LDR. The materials properties, thermal responses, structural geometries, and radio/optical precision are synergistically analyzed for specific panel designs. Promising panel designs are fabricated and tested so that comparison with panel test results can be used to verify performance prediction models and accommodate design refinement. The iterative approach of computer design and model refinement with performance testing and materials optimization has shown good results for LDR panels.
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2015-01-01
Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed for recognizing some specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and consequently tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM) based on the massive integration of 14 diverse complementary quality assessment methods that was successfully benchmarked in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of the Cα trace, local all-atom fitness, side chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods in identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. PMID:26369671
Speech reconstruction using a deep partially supervised neural network.
McLoughlin, Ian; Li, Jingjie; Song, Yan; Sharifzadeh, Hamid R
2017-08-01
Statistical speech reconstruction for larynx-related dysphonia has achieved good performance using Gaussian mixture models and, more recently, restricted Boltzmann machine arrays; however, deep neural network (DNN)-based systems have been hampered by the limited amount of training data available from individual voice-loss patients. The authors propose a novel DNN structure that allows a partially supervised training approach on spectral features from smaller data sets, yielding very good results compared with the current state-of-the-art.
Evaluation of Force Transfer Around Openings - Experimental and Analytical Studies
Borjen Yeh; Tom Skaggs; Frank Lam; Minghao Li; Douglas Rammer; James Wacker
2011-01-01
Wood structural panel (WSP) sheathed shear walls and diaphragms are the primary lateral-load-resisting elements in wood-frame construction. The historical performance of light-frame structures in North America is very good due, in part, to model building codes that are designed to safeguard life safety. These model building codes have spawned continual improvement and...
Model Plan of Merit Pay in Ferment
ERIC Educational Resources Information Center
Honawar, Vaishali
2008-01-01
Denver's performance-pay system for teachers has long been hailed as a model, in good part because it was jointly conceived and implemented by the school district and the local teachers' union. However, that collaborative spirit is now in jeopardy, with union and district leaders engaged in a protracted battle over proposed changes to the system.…
Validation of WRF forecasts for the Chajnantor region
NASA Astrophysics Data System (ADS)
Pozo, Diana; Marín, J. C.; Illanes, L.; Curé, M.; Rabanus, D.
2016-06-01
This study assesses the performance of the Weather Research and Forecasting (WRF) model in representing the near-surface weather conditions and the precipitable water vapour (PWV) in the Chajnantor plateau, in the north of Chile, from 2007 April to December. The WRF model shows a very good performance forecasting the near-surface temperature and zonal wind component, although it overestimates the 2 m water vapour mixing ratio and underestimates the 10 m meridional wind component. The model represents very well the seasonal, intraseasonal and diurnal variation of PWV. However, the PWV errors increase after the first 12 h of simulation. Errors in the simulations are larger than 1.5 mm only during 10 per cent of the study period, do not exceed 0.5 mm during 65 per cent of the time and are below 0.25 mm more than 45 per cent of the time, which emphasizes the good performance of the model in forecasting the PWV over the region. The misrepresentation of the near-surface humidity in the region by the WRF model may have a negative impact on the PWV forecasts. Thus, having accurate forecasts of humidity near the surface may result in more accurate PWV forecasts. Overall, results from this, as well as recent studies, support the use of the WRF model to provide accurate weather forecasts for the region, particularly for the PWV, which can be of great benefit to astronomers in planning their scientific operations and observing time.
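Threshold summaries like the PWV error percentages above are simple empirical-frequency computations over the error series; the forecast-minus-observation values in this sketch are invented, not the study's data:

```python
def fraction_within(errors, threshold):
    """Fraction of absolute errors at or below a threshold."""
    hits = sum(1 for e in errors if abs(e) <= threshold)
    return hits / len(errors)

# Hypothetical forecast-minus-observation PWV errors, in mm.
errors = [0.1, -0.2, 0.4, 1.6, -0.05, 0.3, -1.8, 0.2, 0.6, -0.1]
small = fraction_within(errors, 0.25)        # share of very accurate forecasts
moderate = fraction_within(errors, 0.5)      # share within 0.5 mm
large = 1.0 - fraction_within(errors, 1.5)   # share of large misses
```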
Assessing the Performance of a Computer-Based Policy Model of HIV and AIDS
Rydzak, Chara E.; Cotich, Kara L.; Sax, Paul E.; Hsu, Heather E.; Wang, Bingxia; Losina, Elena; Freedberg, Kenneth A.; Weinstein, Milton C.; Goldie, Sue J.
2010-01-01
Background: Model-based analyses, conducted within a decision analytic framework, provide a systematic way to combine information about the natural history of disease and effectiveness of clinical management strategies with demographic and epidemiological characteristics of the population. Challenges with disease-specific modeling include the need to identify influential assumptions and to assess the face validity and internal consistency of the model. Methods and Findings: We describe a series of exercises involved in adapting a computer-based simulation model of HIV disease to the Women's Interagency HIV Study (WIHS) cohort and assess model performance as we re-parameterized the model to address policy questions in the U.S. relevant to HIV-infected women using data from the WIHS. Empiric calibration targets included 24-month survival curves stratified by treatment status and CD4 cell count. The most influential assumptions in untreated women included chronic HIV-associated mortality following an opportunistic infection, and in treated women, the ‘clinical effectiveness’ of HAART and the ability of HAART to prevent HIV complications independent of virologic suppression. Good-fitting parameter sets required reductions in the clinical effectiveness of 1st and 2nd line HAART and improvements in 3rd and 4th line regimens. Projected rates of treatment regimen switching using the calibrated cohort-specific model closely approximated independent analyses published using data from the WIHS. Conclusions: The model demonstrated good internal consistency and face validity, and supported cohort heterogeneities that have been reported in the literature. Iterative assessment of model performance can provide information about the relative influence of uncertain assumptions and provide insight into heterogeneities within and between cohorts. Description of calibration exercises can enhance the transparency of disease-specific models. PMID:20844741
A critical evaluation of various turbulence models as applied to internal fluid flows
NASA Technical Reports Server (NTRS)
Nallasamy, M.
1985-01-01
Models employed in the computation of turbulent flows are described and their application to internal flows is evaluated by examining the predictions of various turbulence models in selected flow configurations. The main conclusions are: (1) the k-epsilon model is used in a majority of all the two-dimensional flow calculations reported in the literature; (2) modified forms of the k-epsilon model improve the performance for flows with streamline curvature and heat transfer; (3) for flows with swirl, the k-epsilon model performs rather poorly; the algebraic stress model performs better in this case; and (4) for flows with regions of secondary flow (noncircular duct flows), the algebraic stress model performs fairly well for fully developed flow; for developing flow, its performance is not good, and a Reynolds stress model should be used. False diffusion and inlet boundary conditions are discussed. Countergradient transport and its implications in turbulence modeling are mentioned. Two examples of recirculating flow predictions obtained using the PHOENICS code are discussed. The vortex method, large eddy simulation (modeling of subgrid scale Reynolds stresses), and direct simulation are considered. Some recommendations for improving the model performance are made. The need for detailed experimental data in flows with strong curvature is emphasized.
Model Performance Evaluation and Scenario Analysis ...
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude only, sequence only, and combined magnitude and sequence errors. The performance measures include error analysis, coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics, however, describe only the overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components and the reconstruction back to time series provide diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the too
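The Nash-Sutcliffe efficiency named among the MPESA metrics compares the model's squared error against a predict-the-mean baseline; a minimal sketch with invented observed and simulated series:

```python
def nash_sutcliffe(obs, sim):
    """NSE = 1 - SSE(model) / SSE(mean-only baseline).
    1 is a perfect fit; 0 means no better than the observed mean;
    negative values mean worse than the mean."""
    mean = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean) ** 2 for o in obs)
    return 1.0 - sse / sst

# Invented observed and simulated values (e.g., daily flows).
observed = [3.0, 5.0, 4.0, 6.0, 7.0]
simulated = [2.8, 5.1, 4.3, 5.7, 7.2]
nse = nash_sutcliffe(observed, simulated)
```

Like R2, NSE summarizes magnitude error only; MPESA's sequence-component diagnostics exist precisely because such a single number cannot say whether errors come from magnitude or ordering.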
NASA Astrophysics Data System (ADS)
Makungo, Rachel; Odiyo, John O.
2017-08-01
This study focused on testing the ability of a coupled linear and non-linear system identification model to estimate groundwater levels. System identification provides an alternative approach for estimating groundwater levels in areas that lack the data required by physically-based models. It also overcomes the limitations of physically-based models due to approximations, assumptions and simplifications. Daily groundwater levels for 4 boreholes, rainfall and evaporation data covering the period 2005-2014 were used in the study. Seventy and thirty percent of the data were used to calibrate and validate the model, respectively. The correlation coefficient (R), coefficient of determination (R2), root mean square error (RMSE), percent bias (PBIAS), Nash-Sutcliffe coefficient of efficiency (NSE) and graphical fits were used to evaluate the model performance. Values of R, R2, RMSE, PBIAS and NSE ranged from 0.8 to 0.99, 0.63 to 0.99, 0.01-2.06 m, -7.18 to 1.16 and 0.68 to 0.99, respectively. Comparisons of observed and simulated groundwater levels for calibration and validation runs showed close agreement. The model performance mostly ranged from satisfactory through good and very good to excellent. Thus, the model is able to estimate groundwater levels. The calibrated models reasonably capture the relationship between input and output variables and can thus be used to estimate long-term groundwater levels.
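The linear part of such a system-identification model is often an ARX-type regression on lagged inputs and outputs, estimable by ordinary least squares; the first-order synthetic system below is purely illustrative, not the study's rainfall-groundwater model.

```python
import random

random.seed(2)

# Synthetic system: y[t] = a*y[t-1] + b*u[t-1] + noise (a, b chosen here).
a_true, b_true = 0.8, 0.5
u = [random.uniform(-1, 1) for _ in range(500)]   # input series
y = [0.0]                                          # output series
for t in range(1, 500):
    y.append(a_true * y[t - 1] + b_true * u[t - 1] + random.gauss(0, 0.01))

# Least-squares fit of (a, b) from lagged data via 2x2 normal equations.
syy = sum(y[t - 1] ** 2 for t in range(1, 500))
syu = sum(y[t - 1] * u[t - 1] for t in range(1, 500))
suu = sum(u[t - 1] ** 2 for t in range(1, 500))
ry = sum(y[t] * y[t - 1] for t in range(1, 500))
ru = sum(y[t] * u[t - 1] for t in range(1, 500))
det = syy * suu - syu * syu
a_hat = (ry * suu - ru * syu) / det
b_hat = (ru * syy - ry * syu) / det
```

With the dynamics identified, the model extrapolates output (here, groundwater level) from inputs alone, which is the appeal in data-scarce catchments.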
NASA Technical Reports Server (NTRS)
Evans, Austin Lewis
1987-01-01
A computer code to model the steady-state performance of a monogroove heat pipe for the NASA Space Station is presented, including the effects on heat pipe performance of a screen in the evaporator section which deals with transient surges in the heat input. Errors in a previous code have been corrected, and the new code adds additional loss terms in order to model several different working fluids. Good agreement with existing performance curves is obtained. From a preliminary evaluation of several of the radiator design parameters it is found that an optimum fin width could be achieved but that structural considerations limit the thickness of the fin to a value above optimum.
Corrected goodness-of-fit test in covariance structure analysis.
Hayakawa, Kazuhiko
2018-05-17
Many previous studies report simulation evidence that the goodness-of-fit test in covariance structure analysis or structural equation modeling suffers from the overrejection problem when the number of manifest variables is large compared with the sample size. In this study, we demonstrate that one of the tests considered in Browne (1974) can address this long-standing problem. We also propose a simple modification of Satorra and Bentler's mean and variance adjusted test for non-normal data. A Monte Carlo simulation is carried out to investigate the performance of the corrected tests in the context of a confirmatory factor model, a panel autoregressive model, and a cross-lagged panel (panel vector autoregressive) model. The simulation results reveal that the corrected tests overcome the overrejection problem and outperform existing tests in most cases. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Hart, F X
1990-01-01
The current-density distribution produced inside irregularly shaped, homogeneous human and rat models by low-frequency electric fields is obtained by a two-stage finite-difference procedure. In the first stage the model is assumed to be equipotential. Laplace's equation is solved by iteration in the external region to obtain the capacitive-current densities at the model's surface elements. These values then provide the boundary conditions for the second-stage relaxation solution, which yields the internal current-density distribution. Calculations were performed with the Excel spreadsheet program on a Macintosh II microcomputer. A spreadsheet is a two-dimensional array of cells. Each cell of the sheet can represent a square element of space. Equations relating the values of the cells can represent the relationships between the potentials in the corresponding spatial elements. Extension to three dimensions is readily made. Good agreement was obtained with current densities measured on human models with both, one, or no legs grounded and on rat models in four different grounding configurations. The results also compared well with predictions of more sophisticated numerical analyses. Spreadsheets can provide an inexpensive and relatively simple means to perform good, approximate dosimetric calculations on irregularly shaped objects.
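The relaxation solution described above repeatedly replaces each interior cell with the average of its four neighbours, exactly as a spreadsheet cell formula would. A minimal sketch of that iteration, on a toy grid rather than the authors' body models, is:

```python
def relax_laplace(grid, iters=2000):
    """Jacobi relaxation of Laplace's equation: every interior cell is
    repeatedly replaced by the mean of its four neighbours; the boundary
    rows and columns act as fixed Dirichlet values (locked cells)."""
    rows, cols = len(grid), len(grid[0])
    for _ in range(iters):
        new = [row[:] for row in grid]
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                    + grid[i][j - 1] + grid[i][j + 1])
        grid = new
    return grid

# toy problem: 11x11 region, top edge held at 1 V, other edges at 0 V
n = 11
g = [[0.0] * n for _ in range(n)]
g[0] = [1.0] * n
sol = relax_laplace(g)
# by symmetry the centre potential converges to exactly 0.25 V
```

Each pass of the loop corresponds to one recalculation of the spreadsheet; iterating until the values stop changing gives the converged potential field.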
Modeling ready biodegradability of fragrance materials.
Ceriani, Lidia; Papa, Ester; Kovarich, Simona; Boethling, Robert; Gramatica, Paola
2015-06-01
In the present study, quantitative structure activity relationships were developed for predicting ready biodegradability of approximately 200 heterogeneous fragrance materials. Two classification methods, classification and regression tree (CART) and k-nearest neighbors (kNN), were applied to perform the modeling. The models were validated with multiple external prediction sets, and the structural applicability domain was verified by the leverage approach. The best models had good sensitivity (internal ≥80%; external ≥68%), specificity (internal ≥80%; external 73%), and overall accuracy (≥75%). Results from the comparison with BIOWIN global models, based on group contribution method, show that specific models developed in the present study perform better in prediction than BIOWIN6, in particular for the correct classification of not readily biodegradable fragrance materials. © 2015 SETAC.
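The kNN classifier mentioned above is straightforward to sketch; the two-descriptor points and class labels below are invented for illustration and are not the study's actual fragrance data:

```python
import math
from collections import Counter

def knn_predict(x, train, k=3):
    """Classify x by majority vote among the k nearest training points."""
    nearest = sorted(train, key=lambda item: math.dist(x, item[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# hypothetical 2-descriptor space: RB = readily biodegradable, NRB = not
train = [((0.2, 0.1), "RB"), ((0.3, 0.2), "RB"), ((0.1, 0.3), "RB"),
         ((0.9, 0.8), "NRB"), ((0.8, 0.9), "NRB"), ((1.0, 0.7), "NRB")]
```

In a real QSAR setting the coordinates would be molecular descriptors, and the leverage-based applicability domain check would flag query points far from the training cloud before any prediction is trusted.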
Multiconjugate adaptive optics for the Swedish ELT
NASA Astrophysics Data System (ADS)
Gontcharov, Alexander; Owner-Petersen, Mette
2000-08-01
The Swedish ELT is intended to be a 50 m telescope with multiconjugate adaptive optics integrated directly as a crucial part of the optical design. In this paper we discuss the effects of the distributed atmospheric turbulence with regard to the choice of optimal geometry for the telescope. Originally the basic system was foreseen to be a Gregorian with an adaptive secondary correcting adequately for nearby turbulence in both the infrared and visual regions, but if the performance degradation expected from changing the basic system to a Cassegrain while keeping the adaptive secondary could be accepted, the construction costs would be significantly reduced. In order to clarify this question, a simple analytical model describing the performance employing a single deformable mirror for adaptive correction has been developed and used for analysis. The quantitative results shown here relate to a wavelength of 2.2 micrometers and are based on the seven-layer atmospheric model for the Cerro Pachon site, which is believed to be representative of most good astronomical sites. As a consequence of the analysis, no performance degradation is expected from changing the core telescope to a Cassegrain (Ritchey-Chretien). The paper presents the layout and optical performance of the new design.
[Modeling and implementation method for the automatic biochemistry analyzer control system].
Wang, Dong; Ge, Wan-cheng; Song, Chun-lin; Wang, Yun-guang
2009-03-01
The automatic biochemistry analyzer is a necessary instrument for clinical diagnostics. In this paper, the system structure is analyzed first. The system problems are described and the fundamental principles for dispatch are put forward. The paper then puts emphasis on the modeling of the automatic biochemistry analyzer control system: the object model and the communications model are presented. Finally, the implementation method is designed. Results indicate that the system based on this model has good performance.
Syfert, Mindy M; Smith, Matthew J; Coomes, David A
2013-01-01
Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely-distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness of the sampling bias correction available within MaxEnt.
Neal, Andrew; Kwantes, Peter J
2009-04-01
The aim of this article is to develop a formal model of conflict detection performance. Our model assumes that participants iteratively sample evidence regarding the state of the world and accumulate it over time. A decision is made when the evidence reaches a threshold that changes over time in response to the increasing urgency of the task. Two experiments were conducted to examine the effects of conflict geometry and timing on response proportions and response time. The model is able to predict the observed pattern of response times, including a nonmonotonic relationship between distance at point of closest approach and response time, as well as effects of angle of approach and relative velocity. The results demonstrate that evidence accumulation models provide a good account of performance on a conflict detection task. Evidence accumulation models are a form of dynamic signal detection theory, allowing for the analysis of response times as well as response proportions, and can be used for simulating human performance on dynamic decision tasks.
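The evidence-accumulation account described above can be illustrated with a simple sequential-sampling simulation; the drift, noise, and collapsing-threshold parameters below are assumptions for illustration, not the fitted values from the experiments:

```python
import random

def detection_rt(drift, noise=1.0, b0=3.0, collapse=0.05, dt=0.01,
                 max_t=60.0, rng=None):
    """Accumulate noisy evidence toward a 'conflict' decision; the
    threshold collapses over time to model increasing task urgency."""
    rng = rng or random.Random(0)
    x, t = 0.0, 0.0
    while t < max_t:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
        if x >= max(b0 - collapse * t, 0.2):   # time-varying bound
            return t                            # response time
    return max_t                                # no response before deadline

rng = random.Random(42)
# strong conflict geometry -> high drift; weak conflict -> low drift
fast = [detection_rt(1.5, rng=rng) for _ in range(200)]
slow = [detection_rt(0.3, rng=rng) for _ in range(200)]
mean = lambda v: sum(v) / len(v)
```

Because the sampler produces a full response-time distribution, not just a hit rate, it can be fit to both response proportions and latencies, which is the advantage of the dynamic signal-detection framing.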
NASA Astrophysics Data System (ADS)
Cortés, J.-C.; Colmenar, J.-M.; Hidalgo, J.-I.; Sánchez-Sánchez, A.; Santonja, F.-J.; Villanueva, R.-J.
2016-01-01
Academic performance is a concern of paramount importance in Spain, where around 30% of students in the last two years of high school, before entering the labor market or the university, do not achieve the minimum knowledge required by the Spanish educational law in force. To analyze this problem, we propose a random network model to study the dynamics of academic performance in Spain. Our approach is based on the idea that both good and bad study habits are a mixture of personal decisions and the influence of classmates. Moreover, to account for uncertainty in the estimation of model parameters, we perform many simulations using the parameter values that best fit the data, as returned by the Differential Evolution algorithm. This technique permits forecasting model trends over the next few years using confidence intervals.
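The idea that study habits mix personal decisions with classmate influence can be sketched as a toy network update rule; the rule and every parameter below are illustrative assumptions, not the authors' actual model:

```python
import random

def academic_network_step(habits, neighbors, p_personal=0.1, w=0.5, rng=None):
    """One update of binary study habits (1 = good): each student either
    flips spontaneously (personal decision) or, with probability w, adopts
    the strict-majority habit of his or her classmates."""
    rng = rng or random.Random()
    new = habits[:]
    for i, h in enumerate(habits):
        if rng.random() < p_personal:        # spontaneous personal switch
            new[i] = 1 - h
        elif neighbors[i] and rng.random() < w:
            good = sum(habits[j] for j in neighbors[i])
            new[i] = 1 if 2 * good > len(neighbors[i]) else 0
    return new

# three classmates, all influencing each other; no personal noise, w = 1,
# so each student deterministically adopts the classmates' majority habit
updated = academic_network_step([0, 1, 1], {0: [1, 2], 1: [0, 2], 2: [0, 1]},
                                p_personal=0.0, w=1.0)
```

Parameter estimation in the abstract's spirit would then amount to searching (e.g., with Differential Evolution) for the p_personal and w values whose simulated trajectories best match observed pass rates.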
Abou-El-Enein, Mohamed; Römhild, Andy; Kaiser, Daniel; Beier, Carola; Bauer, Gerhard; Volk, Hans-Dieter; Reinke, Petra
2013-03-01
Advanced therapy medicinal products (ATMP) have gained considerable attention in academia due to their therapeutic potential. Good Manufacturing Practice (GMP) principles ensure the quality and sterility of manufacturing these products. We developed a model for estimating the manufacturing costs of cell therapy products and optimizing the performance of academic GMP-facilities. The "Clean-Room Technology Assessment Technique" (CTAT) was tested prospectively in the GMP facility of BCRT, Berlin, Germany, then retrospectively in the GMP facility of the University of California-Davis, California, USA. CTAT is a two-level model: level one identifies operational (core) processes and measures their fixed costs; level two identifies production (supporting) processes and measures their variable costs. The model comprises several tools to measure and optimize performance of these processes. Manufacturing costs were itemized using adjusted micro-costing system. CTAT identified GMP activities with strong correlation to the manufacturing process of cell-based products. Building best practice standards allowed for performance improvement and elimination of human errors. The model also demonstrated the unidirectional dependencies that may exist among the core GMP activities. When compared to traditional business models, the CTAT assessment resulted in a more accurate allocation of annual expenses. The estimated expenses were used to set a fee structure for both GMP facilities. A mathematical equation was also developed to provide the final product cost. CTAT can be a useful tool in estimating accurate costs for the ATMPs manufactured in an optimized GMP process. These estimates are useful when analyzing the cost-effectiveness of these novel interventions. Copyright © 2013 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.
Rodríguez-Sánchez, Alma M; Hakanen, Jari J; Perhoniemi, Riku; Salanova, Marisa
2013-10-01
In this study, we hypothesized that dentists' interpersonal resources (good cooperation with one's assistant) together with their personal resources (optimism) buffer the negative effects of emotional dissonance (a demand that occurs when there is a difference between felt and displayed emotions) on job performance (in-role and extra-role performance) over time. We carried out hierarchical regression modeling on a sample of 1954 Finnish dentists who participated in a two-wave, 4-year longitudinal study. Results showed that good cooperation with dental assistants buffered the negative effects of emotional dissonance on both in-role and extra-role performance among the dentists in the long term. However, unexpectedly, dentists' high optimism buffered neither their in-role nor their extra-role performance over time under conditions of high emotional dissonance. We conclude that interpersonal job resources such as good cooperation with one's colleagues may buffer the negative effect of emotional dissonance on dentists' job performance even in the long term, whereas personal resources (e.g., optimism) may be less important for maintaining high job performance under conditions of emotional dissonance. The study's novelties include the test of the negative effects of emotional dissonance on long-term performance in dentistry and the identification of job rather than personal resources as buffers against those effects. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
GUIDING PRINCIPLES FOR GOOD PRACTICES IN HOSPITAL-BASED HEALTH TECHNOLOGY ASSESSMENT UNITS.
Sampietro-Colom, Laura; Lach, Krzysztof; Pasternack, Iris; Wasserfallen, Jean-Blaise; Cicchetti, Americo; Marchetti, Marco; Kidholm, Kristian; Arentz-Hansen, Helene; Rosenmöller, Magdalene; Wild, Claudia; Kahveci, Rabia; Ulst, Margus
2015-01-01
Health technology assessment (HTA) carried out for policy decision making has well-established principles unlike hospital-based HTA (HB-HTA), which differs from the former in the context characteristics and ways of operation. This study proposes principles for good practices in HB-HTA units. A framework for good practice criteria was built inspired by the EFQM excellence business model and information from six literature reviews, 107 face-to-face interviews, forty case studies, large-scale survey, focus group, Delphi survey, as well as local and international validation. In total, 385 people from twenty countries have participated in defining the principles for good practices in HB-HTA units. Fifteen guiding principles for good practices in HB-HTA units are grouped in four dimensions. Dimension 1 deals with principles of the assessment process aimed at providing contextualized information for hospital decision makers. Dimension 2 describes leadership, strategy and partnerships of HB-HTA units which govern and facilitate the assessment process. Dimension 3 focuses on adequate resources that ensure the operation of HB-HTA units. Dimension 4 deals with measuring the short- and long-term impact of the overall performance of HB-HTA units. Finally, nine core guiding principles were selected as essential requirements for HB-HTA units based on the expertise of the HB-HTA units participating in the project. Guiding principles for good practices set up a benchmark for HB-HTA because they represent the ideal performance of HB-HTA units; nevertheless, when performing HTA at hospital level, context also matters; therefore, they should be adapted to ensure their applicability in the local context.
Walker, Kimberly; Jackson, Richard
2015-01-01
There is limited understanding of children's behavioral decisions for practicing good oral hygiene. The purpose of this study was to identify factors that may motivate children to practice good oral hygiene. Guided by the Health Belief Model (HBM), eight focus groups of 42 American children (second through fifth graders) were interviewed concerning their histories with caries, perceived confidence in brushing, self-perceived susceptibility and vulnerability for caries and/or poor oral health, and perceived benefits and barriers to practicing oral hygiene. Most children viewed good oral health as central to their overall health; however, some viewed poor oral health as occurring only in the elderly while others believed poor oral health could begin at any age. Children cited esthetic appearance of teeth and the desire to please others by brushing without reminders as motivators of good oral hygiene. The greatest barriers to performing oral hygiene were a perceived lack of time and limited access to toothbrushes and dentifrice when away. To motivate children in this age range, emphasis should be placed on the positive aspects of maintaining good oral hygiene for its contribution to appearance and its implication for an overall healthy body and self-image.
Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network
Yu, Ying; Wang, Yirui; Gao, Shangce; Tang, Zheng
2017-01-01
With the impact of global internationalization, the tourism economy has also developed rapidly. The growing interest in more advanced forecasting methods leads us to innovate. In this paper, the seasonal trend autoregressive integrated moving averages with dendritic neural network model (SA-D model) is proposed to perform tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving averages model (SARIMA model) to remove the long-term linear trend, and then train the residual data with the dendritic neural network model to make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. To demonstrate the effectiveness of the SA-D model, we also apply it to data that other authors used with other models and compare the results. The comparison confirms that the SA-D model achieves good predictive performance in terms of the normalized mean square error, absolute percentage error, and correlation coefficient. PMID:28246527
NASA Astrophysics Data System (ADS)
Min, Sa Hoon; Berkowitz, Max L.
2018-04-01
We performed molecular dynamics simulations to study how well some of the water models used in simulations describe shocked states. Water in our simulations was described using three different models. One was an often-used all-atom TIP4P/2005 model, while the other two were coarse-grained models used with the MARTINI force field: non-polarizable and polarizable MARTINI water. The all-atom model provided results in good agreement with Hugoniot curves (for data on pressure versus specific volume or, equivalently, on shock wave velocity versus "piston" velocity) describing shocked states in the whole range of pressures (up to 11 GPa) under study. If simulations of shocked states of water using coarse-grained models were performed for short time periods, we observed that data obtained for shocked states at low pressure were fairly accurate compared to experimental Hugoniot curves. Polarizable MARTINI water still provided a good description of Hugoniot curves for pressures up to 11 GPa, while the results for the non-polarizable MARTINI water substantially deviated from the Hugoniot curves. We also calculated the temperature of the Hugoniot states and observed that for TIP4P/2005 water, they were consistent with those from theoretical calculations, while both coarse-grained models predicted much higher temperatures. These high temperatures for MARTINI water can be explained by the loss of degrees of freedom due to the coarse-graining procedure.
Classification of Company Performance using Weighted Probabilistic Neural Network
NASA Astrophysics Data System (ADS)
Yasin, Hasbi; Waridi Basyiruddin Arifin, Adi; Warsito, Budi
2018-05-01
The performance of a company can be judged by looking at its financial status, whether it is in a good or bad state. Classification of company performance can be achieved by parametric or non-parametric approaches, and the neural network is one of the non-parametric methods. One Artificial Neural Network (ANN) model is the Probabilistic Neural Network (PNN). A PNN consists of four layers: input layer, pattern layer, addition layer, and output layer. The distance function used is the Euclidean distance, and each class shares the same weight values. In this study, a PNN modified in the weighting process between the pattern layer and the addition layer, involving calculation of the Mahalanobis distance, is used. This model is called the Weighted Probabilistic Neural Network (WPNN). The results show that modeling the company's performance with the WPNN model achieves very high accuracy, reaching 100%.
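The four-layer PNN described above can be sketched in a few lines; the version below uses a per-feature (diagonal) variance weighting as a simplified stand-in for the full Mahalanobis weighting of the WPNN, and the training data are invented:

```python
import math

def pnn_classify(x, train, sigma=0.5):
    """Minimal PNN sketch: pattern layer = one Gaussian kernel per training
    point, addition layer = per-class sum, output layer = argmax. A diagonal
    (per-feature variance) distance stands in for the full Mahalanobis
    weighting; this is a simplification of the WPNN idea."""
    scores = {}
    for label, points in train.items():
        n = len(points)
        means = [sum(p[d] for p in points) / n for d in range(len(x))]
        var = [max(sum((p[d] - means[d]) ** 2 for p in points) / n, 1e-6)
               for d in range(len(x))]
        s = 0.0
        for p in points:
            # variance-weighted squared distance (diagonal Mahalanobis)
            d2 = sum((x[d] - p[d]) ** 2 / var[d] for d in range(len(x)))
            s += math.exp(-d2 / (2.0 * sigma ** 2))
        scores[label] = s / n
    return max(scores, key=scores.get)

# hypothetical financial-ratio features for "good" vs "bad" companies
train = {"good": [(0.9, 0.8), (1.0, 1.1), (1.2, 0.9)],
         "bad":  [(-1.0, -0.9), (-0.8, -1.1), (-1.1, -1.0)]}
```

Weighting the distance by (co)variance means features with little spread within a class count more heavily, which is the motivation for replacing the plain Euclidean distance.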
NASA Astrophysics Data System (ADS)
Wong, Pak-kin; Vong, Chi-man; Wong, Hang-cheong; Li, Ke
2010-05-01
Modern automotive spark-ignition (SI) power performance usually refers to output power and torque, which are significantly affected by the setup of control parameters in the engine management system (EMS). EMS calibration is done empirically through tests on the dynamometer (dyno) because no exact mathematical engine model is yet available. With the emerging nonlinear function estimation technique of least squares support vector machines (LS-SVM), an approximate power performance model of an SI engine can be determined by training on sample data acquired from the dyno. A novel incremental algorithm based on the typical LS-SVM is also proposed in this paper, so that the power performance models built with the incremental LS-SVM can be updated whenever new training data arrive. By updating the models, their accuracy can be continuously improved. The predictions of the models estimated with the incremental LS-SVM are in good agreement with the actual test results, with almost the same average accuracy as retraining the models from scratch, but the incremental algorithm can significantly shorten the model construction time when new training data arrive.
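The incremental LS-SVM algorithm itself is not given in the abstract; recursive least squares, sketched below, illustrates the same general idea of updating a fitted model sample-by-sample instead of retraining from scratch (it is a stand-in, not the paper's method):

```python
def rls_update(theta, P, x, y, lam=1.0):
    """One recursive-least-squares step: revise the parameter vector theta
    and the inverse-covariance-like matrix P with a single new sample (x, y),
    without revisiting any earlier samples."""
    n = len(theta)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [Px[i] / denom for i in range(n)]             # gain vector
    err = y - sum(theta[i] * x[i] for i in range(n))  # prediction error
    theta = [theta[i] + k[i] * err for i in range(n)]
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    return theta, P

# learn y = 2*a + 3*b one sample at a time, never retraining from scratch
theta, P = [0.0, 0.0], [[1000.0, 0.0], [0.0, 1000.0]]
for x, y in [([1.0, 0.0], 2.0), ([0.0, 1.0], 3.0),
             ([1.0, 1.0], 5.0), ([2.0, 1.0], 7.0)]:
    theta, P = rls_update(theta, P, x, y)
```

Each update costs a fixed amount of work per new sample, which is the source of the time savings over batch refitting that the abstract reports for its incremental scheme.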
Establishing Good Practices for Exposure–Response Analysis of Clinical Endpoints in Drug Development
Overgaard, RV; Ingwersen, SH; Tornøe, CW
2015-01-01
This tutorial aims at promoting good practices for exposure–response (E-R) analyses of clinical endpoints in drug development. The focus is on practical aspects of E-R analyses to assist modeling scientists with a process of performing such analyses in a consistent manner across individuals and projects and tailored to typical clinical drug development decisions. This includes general considerations for planning, conducting, and visualizing E-R analyses, and how these are linked to key questions. PMID:26535157
NASA Astrophysics Data System (ADS)
Xu, Haoran; Chen, Bin; Zhang, Houcheng; Tan, Peng; Yang, Guangming; Irvine, John T. S.; Ni, Meng
2018-04-01
In this paper, 2D models for direct carbon solid oxide fuel cells (DC-SOFCs) with in situ catalytic steam-carbon gasification are developed. The simulation results are found to be in good agreement with experimental data. The performance of DC-SOFCs with and without catalyst is compared at different operating potentials, anode inlet gas flow rates, and operating temperatures. It is found that adding a suitable catalyst can significantly speed up the in situ steam-carbon gasification reaction and improve the performance of DC-SOFCs with H2O as the gasification agent. The potential for syngas and electricity co-generation from the fuel cell is also evaluated, where the composition of H2 and CO in the syngas can be adjusted by controlling the anode inlet gas flow rate. In addition, the performance of DC-SOFCs and the percentage of fuel in the outlet gas both increase with increasing operating temperature. At a reduced temperature (below 800 °C), good DC-SOFC performance can still be obtained with in situ catalytic carbon gasification by steam. The results of this study form a solid foundation for understanding the important effects of catalyst and operating conditions on H2O-assisted DC-SOFCs.
Saline-filled laparoscopic surgery: A basic study on partial hepatectomy in a rabbit model.
Shimada, Masanari; Kawaguchi, Masahiko; Ishikawa, Norihiko; Watanabe, Go
2015-01-01
There is still a poor understanding of the effects of pneumoperitoneum with insufflation of carbon dioxide gas (CO2) on malignant cells, and pneumoperitoneum has a negative impact on cardiopulmonary responses. A novel saline-filled laparoscopic surgery (SAFLS) is proposed, and the technical feasibility of performing saline-filled laparoscopic partial hepatectomy (LPH) was evaluated in a rabbit model. Twelve LPH were performed in rabbits, with six procedures performed using an ultrasonic device with CO2 pneumoperitoneum (CO2 group) and six procedures performed using a bipolar resectoscope (RS) in a saline-filled environment (saline group). Resection time, CO2 and saline consumption, vital signs, blood gas analysis, complications, interleukin-1 beta (IL-1β) and C-reactive protein (CRP) levels were measured. The effectiveness of the resections was evaluated by the pathological findings. LPH was successfully performed with clear observation by irrigation and good control of bleeding by coagulation with RS. There were no significant differences in all perioperative values, IL-1β and CRP levels between the two groups. All pathological specimens of the saline group showed that the resected lesions were coagulated and regenerated as well as in the CO2 group. SAFLS is feasible and provides a good surgical view with irrigation and identification of bleeding sites.
Wei, Lan; Qian, Quan; Wang, Zhi-Qiang; Glass, Gregory E.; Song, Shao-Xia; Zhang, Wen-Yi; Li, Xiu-Jun; Yang, Hong; Wang, Xian-Jun; Fang, Li-Qun; Cao, Wu-Chun
2011-01-01
Hemorrhagic fever with renal syndrome (HFRS) is an important public health problem in Shandong Province, China. In this study, we combined ecologic niche modeling with geographic information systems (GIS) and remote sensing techniques to identify the risk factors and affected areas of hantavirus infections in rodent hosts. Land cover and elevation were found to be closely associated with the presence of hantavirus-infected rodent hosts. The averaged area under the receiver operating characteristic curve was 0.864, implying good performance. The predicted risk maps based on the model were validated both by the hantavirus-infected rodents' distribution and HFRS human case localities with a good fit. These findings have applications for targeting control and prevention efforts. PMID:21363991
Feature extraction inspired by V1 in visual cortex
NASA Astrophysics Data System (ADS)
Lv, Chao; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Xin, Peng; Zhu, Mingning; Ma, Hongqiang
2018-04-01
Target feature extraction plays an important role in pattern recognition. It is the most complicated activity in the brain mechanism of biological vision. Inspired by the strong capability of the primary visual cortex (V1) in extracting dynamic and static features, a visual perception model is proposed. First, 28 spatial-temporal filters with different orientations, half-squaring operation, and divisive normalization are adopted to obtain the responses of V1 simple cells; then, an adjustable parameter is added to the output weight so that the response of complex cells is obtained. Experimental results indicate that the proposed V1 model can perceive motion information well and has good edge detection capability. The model inspired by V1 performs well in feature extraction and effectively combines brain-inspired intelligence with computer vision.
Latash, M; Gottlieb, G
1990-01-01
Problems of single-joint movement variability are analysed in the framework of the equilibrium-point hypothesis (the lambda-model). Control of the movements is described with three parameters related to movement amplitude, speed, and time. Three strategies emerge from this description. Only one of them is likely to lead to a Fitts-type speed-accuracy trade-off. Experiments were performed to test one of the predictions of the model. Subjects performed identical sets of single-joint fast movements with open or closed eyes and somewhat different instructions. Movements performed with closed eyes were characterized by higher peak speeds and unchanged variability, in seeming violation of Fitts' law and in good correspondence with the model.
Rolland, Y; Bézy-Wendling, J; Duvauferrier, R; Coatrieux, J L
1999-03-01
To demonstrate the usefulness of a model of parenchymal vascularization for evaluating texture analysis methods. Slices with thickness varying from 1 to 4 mm were reformatted from a 3D vascular model corresponding to either normal tissue perfusion or local hypervascularization. Parameters of statistical methods were measured on 16 regions of interest of 128x128 pixels, and mean values and standard deviations were calculated. For each parameter, the performance (discriminating power and stability) was evaluated. Among 11 calculated statistical parameters, three (homogeneity, entropy, mean of gradients) were found to have good discriminating power to differentiate normal perfusion from hypervascularization, but only the gradient mean was found to have good stability with respect to slice thickness. Five parameters (run percentage, run length distribution, long run emphasis, contrast, and gray level distribution) gave intermediate results. Of the remaining three, kurtosis and correlation were found to have little discriminating power, and skewness none. This 3D vascular model, which allows the generation of various examples of vascular textures, is a powerful tool for assessing the performance of texture analysis methods. It improves our knowledge of the methods and should contribute to their a priori choice when designing clinical studies.
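First-order versions of three of the texture parameters named above (entropy, homogeneity, mean of gradients) can be sketched as follows; exact definitions vary between texture-analysis implementations, so these particular formulas are illustrative assumptions:

```python
import math
from collections import Counter

def texture_stats(roi):
    """Entropy and homogeneity from the gray-level histogram, plus the mean
    of absolute gradients between 4-connected neighbouring pixels."""
    rows, cols = len(roi), len(roi[0])
    pix = [v for row in roi for v in row]
    probs = [c / len(pix) for c in Counter(pix).values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    homogeneity = sum(p * p for p in probs)      # a.k.a. energy / uniformity
    grads = ([abs(roi[i][j] - roi[i][j + 1])
              for i in range(rows) for j in range(cols - 1)]
             + [abs(roi[i][j] - roi[i + 1][j])
                for i in range(rows - 1) for j in range(cols)])
    return entropy, homogeneity, sum(grads) / len(grads)

flat = [[5] * 4 for _ in range(4)]                      # uniform "perfusion"
noisy = [[1, 9, 2, 8], [9, 1, 8, 2], [2, 8, 1, 9], [8, 2, 9, 1]]
e_flat, h_flat, g_flat = texture_stats(flat)
e_noisy, h_noisy, g_noisy = texture_stats(noisy)
```

A uniform region gives zero entropy, maximal homogeneity, and zero gradient mean, while a heterogeneous (hypervascular-like) region scores high on entropy and gradient mean, which is the kind of separation the abstract's discriminating-power analysis measures.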
NASA Astrophysics Data System (ADS)
GABA, C. O. U.; Alamou, E.; Afouda, A.; Diekkrüger, B.
2016-12-01
Assessing water resources is still an important challenge, especially in the context of climatic changes. Although numerous hydrological models exist, new approaches are still under investigation. In this context, we investigate a new modelling approach based on the physical principle of least action, which was first applied to the Bétérou catchment in Benin and gave very good results. The study presents new hypotheses to go further in the model development with a view to widening its application. The improved version of the model, MODHYPMA, was applied to sixteen (16) subcatchments in Benin, West Africa. Its performance was compared to two well-known lumped conceptual models, the GR4J and HBV models. The model was successfully calibrated and validated and showed good performance in most catchments. The analysis revealed that the three models have similar performance and timing errors, but, in contrast to the other models, MODHYPMA suffers less loss of performance from calibration to validation. In order to evaluate the usefulness of our model for the prediction of runoff in ungauged basins, model parameters were estimated from physical catchment characteristics. We relied on statistical methods applied to calibrated model parameters to deduce relationships between parameters and physical catchment characteristics. These relationships were further tested and validated on gauged basins that were considered ungauged. This regionalization was also performed for the GR4J model. We obtained NSE values greater than 0.7 for MODHYPMA, while the NSE values for GR4J were below 0.5. In the presented study, the effects of climate change on water resources in the Ouémé catchment at the outlet of Savè (about 23,500 km2) are quantified. The output of a regional climate model was used as input to the hydrological models. Computed within the GLOWA-IMPETUS project, the future climate projections (describing a rainfall reduction of up to 15%) are derived from the regional climate model REMO driven by the global ECHAM model. The results reveal a significant decrease in future water resources (of -66% to -53% for MODHYPMA and of -59% to -46% for GR4J) under the IPCC climate scenarios A1B and B1.
Robust tracking control of a magnetically suspended rigid body
NASA Technical Reports Server (NTRS)
Lim, Kyong B.; Cox, David E.
1994-01-01
This study is an application of H-infinity and mu-synthesis for designing robust tracking controllers for the Large Angle Magnetic Suspension Test Facility. The modeling, design, analysis, simulation, and testing of a control law that guarantees tracking performance under external disturbances and model uncertainties are investigated. The types of uncertainties considered and the tracking performance metric used are discussed. This study demonstrates the tradeoff between tracking performance at low frequencies and robustness at high frequencies. Two sets of controllers were designed and tested. The first set emphasized performance over robustness, while the second set traded off performance for robustness. Comparisons of simulation and test results are also included. Current simulation and experimental results indicate that reasonably good robust tracking performance can be attained for this system using a multivariable robust control approach.
NASA Astrophysics Data System (ADS)
van Daal-Rombouts, Petra; Sun, Siao; Langeveld, Jeroen; Bertrand-Krajewski, Jean-Luc; Clemens, François
2016-07-01
Optimisation or real time control (RTC) studies in wastewater systems increasingly require rapid simulations of sewer systems in extensive catchments. To reduce the simulation time calibrated simplified models are applied, with the performance generally based on the goodness of fit of the calibration. In this research the performance of three simplified and a full hydrodynamic (FH) model for two catchments are compared based on the correct determination of CSO event occurrences and of the total discharged volumes to the surface water. Simplified model M1 consists of a rainfall runoff outflow (RRO) model only. M2 combines the RRO model with a static reservoir model for the sewer behaviour. M3 comprises the RRO model and a dynamic reservoir model. The dynamic reservoir characteristics were derived from FH model simulations. It was found that M2 and M3 are able to describe the sewer behaviour of the catchments, contrary to M1. The preferred model structure depends on the quality of the information (geometrical database and monitoring data) available for the design and calibration of the model. Finally, calibrated simplified models are shown to be preferable to uncalibrated FH models when performing optimisation or RTC studies.
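The static reservoir model (M2) described above can be reduced to a few lines: runoff fills a storage of fixed capacity, a constant pump rate empties the store towards treatment, and any excess spills as CSO volume. The sketch below is schematic, with invented parameters, and is not the authors' calibrated model:

```python
def static_reservoir(inflow, capacity, pump_rate):
    """Single-bucket sewer model: storage fills with runoff, empties at
    pump_rate towards treatment, and spills above capacity (CSO).
    Returns (total spilled volume, number of CSO events)."""
    storage, spilled, events, spilling = 0.0, 0.0, 0, False
    for q in inflow:
        storage = max(0.0, storage + q - pump_rate)
        if storage > capacity:
            spilled += storage - capacity
            storage = capacity
            if not spilling:
                events += 1  # a new CSO event starts
            spilling = True
        else:
            spilling = False
    return spilled, events
```

The two returned quantities correspond exactly to the performance criteria the study uses: CSO event occurrences and total discharged volume to the surface water.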
Dynamics Modelling of Biolistic Gene Guns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, M.; Tao, W.; Pianetta, P.A.
2009-06-04
The gene transfer process using biolistic gene guns is a highly dynamic process. To achieve good performance, the process needs to be well understood and controlled. Unfortunately, no dynamic model is available in the open literature for analysing and controlling the process. This paper proposes such a model. Relationships of the penetration depth with the helium pressure, with the acceleration distance, and with the micro-carrier radius are presented. Simulations have also been conducted. The results agree well with experimental results in the open literature. The contribution of this paper is a dynamic model for improving and manipulating the performance of the biolistic gene gun.
Modeling the target acquisition performance of active imaging systems
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Jacobs, Eddie L.; Halford, Carl E.; Vollmerhausen, Richard; Tofsted, David H.
2007-04-01
Recent development of active imaging system technology in the defense and security community has driven the need for a theoretical understanding of its operation and performance in military applications such as target acquisition. In this paper, the modeling of active imaging systems, developed at the U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate, is presented with particular emphasis on the impact of coherent effects such as speckle and atmospheric scintillation. Experimental results from human perception tests are in good agreement with the model results, validating the modeling of coherent effects as additional noise sources. Example trade studies on the design of a conceptual active imaging system to mitigate deleterious coherent effects are shown.
Terakado, Shingo; Glass, Thomas R; Sasaki, Kazuhiro; Ohmura, Naoya
2014-01-01
A simple new model for estimating the screening performance (false positive and false negative rates) of a given test for a specific sample population is presented. The model is shown to give good results on a test population, and is used to estimate the performance on a sampled population. Using the model developed in conjunction with regulatory requirements and the relative costs of the confirmatory and screening tests allows evaluation of the screening test's utility in terms of cost savings. Testers can use the methods developed to estimate the utility of a screening program using available screening tests with their own sample populations.
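The cost-utility reasoning the abstract alludes to can be made concrete: given a screening test's false positive and false negative rates and the prevalence in the sampled population, compare the cost of screening plus confirmation of screen-positives against confirming every sample directly. All rates and costs below are made-up illustrations, not figures from the study:

```python
def screening_cost(n, prevalence, fpr, fnr, cost_screen, cost_confirm):
    """Expected cost when every screen-positive sample goes on to the
    confirmatory test and screen-negatives are accepted as negative."""
    positives = n * prevalence
    negatives = n - positives
    screen_positive = positives * (1.0 - fnr) + negatives * fpr
    return n * cost_screen + screen_positive * cost_confirm

n = 10_000
confirm_all = n * 50.0  # cost of confirming every sample directly
with_screen = screening_cost(n, prevalence=0.02, fpr=0.05, fnr=0.01,
                             cost_screen=2.0, cost_confirm=50.0)
savings = confirm_all - with_screen
```

Note this toy comparison ignores the cost of missed positives (false negatives), which a regulatory context would also have to price in.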
Evaluation of annual, global seismicity forecasts, including ensemble models
NASA Astrophysics Data System (ADS)
Taroni, Matteo; Zechar, Jeremy; Marzocchi, Warner
2013-04-01
In 2009, the Collaboratory for the Study of Earthquake Predictability (CSEP) initiated a prototype global earthquake forecast experiment. Three models participated in this experiment for 2009, 2010 and 2011: each model forecast the number of earthquakes above magnitude 6 in 1x1 degree cells that span the globe. Here we use likelihood-based metrics to evaluate the consistency of the forecasts with the observed seismicity. We compare model performance with statistical tests and a new method based on the peer-to-peer gambling score. The results of the comparisons are used to build ensemble models that are a weighted combination of the individual models. Notably, in these experiments the ensemble model always performs significantly better than the single best-performing model. Our results indicate the following: i) time-varying forecasts, if not updated after each major shock, may not provide significant advantages with respect to time-invariant models in 1-year forecast experiments; ii) the spatial distribution seems to be the most important feature characterizing the different forecasting performances of the models; iii) the interpretation of consistency tests may be misleading, because some good models may be rejected while trivial models pass; iv) proper ensemble modeling seems to be a valuable procedure for obtaining the best-performing model for practical purposes.
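One simple way to realize the ensemble idea above is to weight each model by its likelihood on past observations and average the rate forecasts cell by cell. The sketch below assumes independent Poisson forecasts per cell; it is illustrative only, not the CSEP scoring code:

```python
import math

def poisson_log_likelihood(rates, counts):
    """Joint log-likelihood of observed earthquake counts per cell
    under independent Poisson forecast rates."""
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1)
               for lam, k in zip(rates, counts))

def ensemble_forecast(model_rates, counts):
    """Weight each model by its (exponentiated) log-likelihood on past
    counts; return the weighted average rate per cell."""
    lls = [poisson_log_likelihood(r, counts) for r in model_rates]
    m = max(lls)  # subtract the max for numerical stability
    weights = [math.exp(ll - m) for ll in lls]
    total = sum(weights)
    return [sum(w / total * r[i] for w, r in zip(weights, model_rates))
            for i in range(len(counts))]
```

A model that matched past seismicity well dominates the weights, while poorly fitting models contribute little, which is the basic mechanism behind the ensemble outperforming any single model.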
NASA Astrophysics Data System (ADS)
Ege, Kerem; Roozen, N. B.; Leclère, Quentin; Rinaldi, Renaud G.
2018-07-01
In the context of aeronautics, automotive and construction applications, the design of light multilayer plates with optimized vibroacoustical damping and isolation performances remains a major industrial challenge and a hot topic of research. This paper focuses on the vibrational behavior of three-layered sandwich composite plates in a broad-band frequency range. Several aspects are studied through measurement techniques and analytical modelling of a steel/polymer/steel plate sandwich system. A contactless measurement of the velocity field of plates using a scanning laser vibrometer is performed, from which the equivalent single layer complex rigidity (apparent bending stiffness and apparent damping) in the mid/high frequency ranges is estimated. The results are combined with low/mid frequency estimations obtained with a high-resolution modal analysis method so that the frequency dependent equivalent Young's modulus and equivalent loss factor of the composite plate are identified for the whole [40 Hz-20 kHz] frequency band. The results are in very good agreement with an equivalent single layer analytical modelling based on wave propagation analysis (model of Guyader). The comparison with this model allows identifying the frequency dependent complex modulus of the polymer core layer through inverse resolution. Dynamical mechanical analysis measurements are also performed on the polymer layer alone and compared with the values obtained through the inverse method. Again, a good agreement between these two estimations over the broad-band frequency range demonstrates the validity of the approach.
Recent Progress Towards Predicting Aircraft Ground Handling Performance
NASA Technical Reports Server (NTRS)
Yager, T. J.; White, E. J.
1981-01-01
The significant progress which has been achieved in the development of aircraft ground handling simulation capability is reviewed, and additional improvements in software modeling are identified. The problem associated with providing the necessary simulator input data for adequate modeling of aircraft tire/runway friction behavior is discussed, and efforts to improve this complex model, and hence simulator fidelity, are described. Aircraft braking performance data obtained on several wet runway surfaces are compared to ground vehicle friction measurements and, by use of empirically derived methods, good agreement between actual aircraft braking friction and estimates from ground vehicle data is shown. A relatively new friction measuring device, the friction tester, showed great promise in providing data applicable to aircraft friction performance. Additional research efforts to improve methods of predicting tire friction performance are discussed, including use of an instrumented tire test vehicle to expand the tire friction data bank and a study of surface texture measurement techniques.
Airloads and Wake Geometry Calculations for an Isolated Tiltrotor Model in a Wind Tunnel
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2001-01-01
Comparisons of measured and calculated aerodynamic behavior of a tiltrotor model are presented. The test of the Tilt Rotor Aeroacoustic Model (TRAM) with a single, 0.25-scale V-22 rotor in the German-Dutch Wind Tunnel (DNW) provides an extensive set of aeroacoustic, performance, and structural loads data. The calculations were performed using the rotorcraft comprehensive analysis CAMRAD II. Presented are comparisons of measured and calculated performance for hover and helicopter mode operation, and airloads for helicopter mode. Calculated induced power, profile power, and wake geometry provide additional information about the aerodynamic behavior. An aerodynamic and wake model and calculation procedure that reflects the unique geometry and phenomena of tiltrotors has been developed. There are major differences between this model and the corresponding aerodynamic and wake model that has been established for helicopter rotors. In general, good correlation between measured and calculated performance and airloads behavior has been shown. Two aspects of the analysis that clearly need improvement are the stall delay model and the trailed vortex formation model.
Koenecke, Christian; Göhring, Gudrun; de Wreede, Liesbeth C.; van Biezen, Anja; Scheid, Christof; Volin, Liisa; Maertens, Johan; Finke, Jürgen; Schaap, Nicolaas; Robin, Marie; Passweg, Jakob; Cornelissen, Jan; Beelen, Dietrich; Heuser, Michael; de Witte, Theo; Kröger, Nicolaus
2015-01-01
The aim of this study was to determine the impact of the revised 5-group International Prognostic Scoring System cytogenetic classification on outcome after allogeneic stem cell transplantation in patients with myelodysplastic syndromes or secondary acute myeloid leukemia who were reported to the European Society for Blood and Marrow Transplantation database. A total of 903 patients had sufficient cytogenetic information available at stem cell transplantation to be classified according to the 5-group classification. Poor and very poor risk according to this classification was an independent predictor of shorter relapse-free survival (hazard ratio 1.40 and 2.14), overall survival (hazard ratio 1.38 and 2.14), and significantly higher cumulative incidence of relapse (hazard ratio 1.64 and 2.76), compared to patients with very good, good or intermediate risk. When comparing the predictive performance of a series of Cox models both for relapse-free survival and for overall survival, a model with simplified 5-group cytogenetics (merging very good, good and intermediate cytogenetics) performed best. Furthermore, monosomal karyotype is an additional negative predictor for outcome within patients of the poor, but not the very poor risk group of the 5-group classification. The revised International Prognostic Scoring System cytogenetic classification allows patients with myelodysplastic syndromes to be separated into three groups with clearly different outcomes after stem cell transplantation. Poor and very poor risk cytogenetics were strong predictors of poor patient outcome. The new cytogenetic classification added value to prediction of patient outcome compared to prediction models using only traditional risk factors or the 3-group International Prognostic Scoring System cytogenetic classification. PMID:25552702
Development of a multicomponent force and moment balance for water tunnel applications, volume 2
NASA Technical Reports Server (NTRS)
Suarez, Carlos J.; Malcolm, Gerald N.; Kramer, Brian R.; Smith, Brooke C.; Ayers, Bert F.
1994-01-01
The principal objective of this research effort was to develop a multicomponent strain gauge balance to measure forces and moments on models tested in flow visualization water tunnels. Static experiments (which are discussed in Volume 1 of this report) were conducted, and the results showed good agreement with wind tunnel data on similar configurations. Dynamic experiments, which are the main topic of this Volume, were also performed using the balance. Delta wing models and two F/A-18 models were utilized in a variety of dynamic tests. This investigation showed that, as expected, the values of the inertial tares are very small due to the low rotating rates required in a low-speed water tunnel and can, therefore, be ignored. Oscillations in pitch, yaw and roll showed hysteresis loops that compared favorably to data from dynamic wind tunnel experiments. Pitch-up and hold maneuvers revealed the long persistence, or time-lags, of some of the force components in response to the motion. Rotary-balance experiments were also successfully performed. The good results obtained in these dynamic experiments bring a whole new dimension to water tunnel testing and emphasize the importance of having the capability to perform simultaneous flow visualization and force/moment measurements during dynamic situations.
Aerodynamic performance investigation on waverider with variable blunt radius in hypersonic flows
NASA Astrophysics Data System (ADS)
Li, Shibin; Wang, Zhenguo; Huang, Wei; Xu, Shenren; Yan, Li
2017-08-01
Waverider is an important candidate for the design of hypersonic vehicles. However, the ideal waverider cannot be manufactured because of its sharp leading edge, so the leading edge should be blunted. In this paper, the HMB solver and a laminar flow model have been utilized to obtain the flow field properties around the blunt waverider with the freestream Mach number being 8.0, and several novel strategies have been suggested to improve the aerodynamic performance of the blunt waverider. The numerical method has been validated against experimental data, and the Stanton number (St) of the predicted result has been analyzed. The obtained results show good agreement with the experimental data. Stmax decreases by 58% and L/D decreases by 8.2% when the blunt radius increases from 0.0002 m to 0.001 m. The "variable blunt waverider" is a good compromise between aerodynamic performance and thermal insulation. The aero-heating characteristics are very sensitive to Rmax. The position of the smallest blunt radius has a great effect on the aerodynamic performance. In addition, the type of blunt leading edge has a great effect on the aero-heating characteristics when Rmax is fixed. Therefore, out of several designs, Type 4 is the best way to achieve good overall performance. The "variable blunt waverider" not only improves the aerodynamic performance, but also makes the aero-heating evenly distributed, leading to better aero-heating characteristics.
Goodness-Of-Fit Test for Nonparametric Regression Models: Smoothing Spline ANOVA Models as Example.
Teran Hidalgo, Sebastian J; Wu, Michael C; Engel, Stephanie M; Kosorok, Michael R
2018-06-01
Nonparametric regression models do not require specification of the functional form between the outcome and the covariates. Despite their popularity, the number of diagnostic statistics available for them, in comparison with their parametric counterparts, is small. We propose a goodness-of-fit test for nonparametric regression models with linear smoother form. In particular, we apply this testing framework to smoothing spline ANOVA models. The test can consider two sources of lack-of-fit: whether covariates that are not currently in the model need to be included, and whether the current model fits the data well. The proposed method derives estimated residuals from the model. Then, statistical dependence is assessed between the estimated residuals and the covariates using the Hilbert-Schmidt independence criterion (HSIC). If dependence exists, the model does not capture all the variability in the outcome associated with the covariates; otherwise, the model fits the data well. The bootstrap is used to obtain p-values. Application of the method is demonstrated with a neonatal mental development data analysis. We demonstrate correct type I error as well as power performance through simulations.
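The core of the test described above, dependence between residuals and covariates measured by HSIC, can be sketched directly. The biased HSIC estimator below uses Gaussian kernels on 1-D samples; it is a didactic sketch, not the authors' implementation (which additionally uses the bootstrap to obtain p-values):

```python
import math

def rbf_gram(xs, sigma):
    """Gaussian-kernel Gram matrix of a 1-D sample."""
    return [[math.exp(-(a - b) ** 2 / (2.0 * sigma ** 2)) for b in xs]
            for a in xs]

def hsic(xs, ys, sigma=1.0):
    """Biased HSIC estimate trace(K H L H) / n^2, where H is the
    centring matrix; near zero when xs and ys are independent."""
    n = len(xs)
    K, L = rbf_gram(xs, sigma), rbf_gram(ys, sigma)
    row = [sum(r) / n for r in K]
    col = [sum(K[i][j] for i in range(n)) / n for j in range(n)]
    tot = sum(row) / n
    # Kc = H K H, the doubly-centred kernel matrix
    Kc = [[K[i][j] - row[i] - col[j] + tot for j in range(n)]
          for i in range(n)]
    return sum(Kc[i][j] * L[j][i] for i in range(n) for j in range(n)) / n ** 2
```

In the test, xs would be the estimated residuals and ys a covariate; the observed statistic is then compared against its null distribution from resampling.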
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2016-09-01
Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed for recognizing some specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and consequently tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM) based on the massive integration of 14 diverse complementary quality assessment methods that was successfully benchmarked in the 11th Critical Assessment of Techniques of Protein Structure prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of the Cα trace, local all-atom fitness, side chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods in identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. Proteins 2016; 84(Suppl 1):247-259. © 2015 Wiley Periodicals, Inc.
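At its simplest, integrating many quality assessment methods as described above is a rank aggregation problem: put each method's scores on a common scale and combine them. A hedged sketch of one such combination (average z-score; the method names and scores are invented, and MULTICOM's actual integration is more sophisticated):

```python
def zscores(values):
    """Standardise one method's scores to zero mean, unit variance."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5 or 1.0
    return [(v - mean) / sd for v in values]

def consensus_rank(score_table):
    """score_table maps method name -> list of scores (one per model);
    models are ranked by their average z-score across all methods."""
    z = [zscores(scores) for scores in score_table.values()]
    n_models = len(z[0])
    combined = [sum(col[i] for col in z) / len(z) for i in range(n_models)]
    return sorted(range(n_models), key=lambda i: -combined[i])
```

Standardising first keeps any one method's score scale from dominating the consensus, which is the minimal requirement for combining heterogeneous QA methods.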
DeepQA: improving the estimation of single protein model quality with deep belief networks.
Cao, Renzhi; Bhattacharya, Debswapna; Hou, Jie; Cheng, Jianlin
2016-12-05
Protein quality assessment (QA), useful for ranking and selecting protein models, has long been viewed as one of the major challenges for protein tertiary structure prediction. In particular, estimating the quality of a single protein model, which is important for selecting a few good models out of a large model pool consisting of mostly low-quality models, is still a largely unsolved problem. We introduce a novel single-model quality assessment method, DeepQA, based on a deep belief network that utilizes a number of selected features describing the quality of a model from different perspectives, such as energy, physio-chemical characteristics, and structural information. The deep belief network is trained on several large datasets consisting of models from the Critical Assessment of Protein Structure Prediction (CASP) experiments, several publicly available datasets, and models generated by our in-house ab initio method. Our experiments demonstrate that the deep belief network has better performance compared to Support Vector Machines and Neural Networks on the protein model quality assessment problem, and our method DeepQA achieves state-of-the-art performance on the CASP11 dataset. It also outperformed two well-established methods in selecting good outlier models from a large set of models of mostly low quality generated by ab initio modeling methods. DeepQA is a useful deep learning tool for protein single-model quality assessment and protein structure prediction. The source code, executable, documentation and training/test datasets of DeepQA for Linux are freely available to non-commercial users at http://cactus.rnet.missouri.edu/DeepQA/.
Advanced analytical modeling of double-gate Tunnel-FETs - A performance evaluation
NASA Astrophysics Data System (ADS)
Graef, Michael; Hosenfeld, Fabian; Horst, Fabian; Farokhnejad, Atieh; Hain, Franziska; Iñíguez, Benjamín; Kloes, Alexander
2018-03-01
The Tunnel-FET is one of the most promising devices to succeed the standard MOSFET due to its alternative current transport mechanism, which allows a subthreshold slope smaller than the MOSFET's physical limit of 60 mV/dec. Recently fabricated devices already show smaller slopes, though mostly not over multiple decades of the current transfer characteristic. In this paper the performance-limiting effects occurring during the fabrication process of the device, such as doping profiles and midgap traps, are analyzed by physics-based analytical models, and their performance-limiting abilities are determined. Additionally, performance-enhancing possibilities, such as hetero-structures and ambipolarity improvements, are introduced and discussed. An extensive double-gate n-Tunnel-FET model is presented, which meets the versatile device requirements and shows a good fit with TCAD simulations and measurement data.
Pulsed CO2 characterization for lidar use
NASA Technical Reports Server (NTRS)
Jaenisch, Holger M.
1992-01-01
An account is given of a scaled functional testbed laser for space-qualified coherent-detection lidar applications which employs a CO2 laser. This laser has undergone modification and characterization for inherent performance capabilities as a model of coherent detection. While characterization results show good overall performance that is in agreement with theoretical predictions, frequency-stability and pulse-length limitations severely limit the laser's use in coherent detection.
Three-dimensional charge transport in organic semiconductor single crystals.
He, Tao; Zhang, Xiying; Jia, Jiong; Li, Yexin; Tao, Xutang
2012-04-24
Three-dimensional charge transport anisotropy in organic semiconductor single crystals - both plates and rods (above and below, respectively, in the figure) - is measured in well-performing organic field-effect transistors for the first time. The results provide an excellent model for molecular design and device preparation that leads to good performance. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
2007-12-05
[Fragmentary record] … yield record-setting carrier lifetime values and very low concentrations of point defects. Epiwafers were delivered for fabrication of RF static induction … Work on the 100 mm PVT process focused on modeling the process for longer (50 mm) boules and on improved furnace uniformity (adding rotation, etc.). Pareto analysis was performed on wafer yield loss at the start of every quarter.
Approaches to modelling uranium (VI) adsorption on natural mineral assemblages
Waite, T.D.; Davis, J.A.; Fenton, B.R.; Payne, T.E.
2000-01-01
Component additivity (CA) and generalised composite (GC) approaches to deriving a suitable surface complexation model for describing U(VI) adsorption on natural mineral assemblages are pursued in this paper with good success. A single, ferrihydrite-like component is found to reasonably describe uranyl uptake on a number of kaolinitic, iron-rich natural substrates at pH > 4 in the CA approach, with previously published information on the nature of surface complexes, the acid-base properties of surface sites, and electrostatic effects used in the model. The GC approach, in which little pre-knowledge about generic surface sites is assumed, gives even better fits and appears to be a method of particular strength for application in areas such as performance assessment, provided the model is developed in a careful, stepwise manner with simplicity and goodness of fit as the major criteria for acceptance.
Jordan recurrent neural network versus IHACRES in modelling daily streamflows
NASA Astrophysics Data System (ADS)
Carcano, Elena Carla; Bartolini, Paolo; Muselli, Marco; Piroddi, Luigi
2008-12-01
A study of possible scenarios for modelling streamflow data from daily time series, using artificial neural networks (ANNs), is presented. Particular emphasis is devoted to the reconstruction of drought periods, where water resource management and control are most critical. This paper considers two connectionist models: a feedforward multilayer perceptron (MLP) and a Jordan recurrent neural network (JNN), comparing network performance on real-world data from two small catchments (192 and 69 km2 in size) with irregular and torrential regimes. Several network configurations are tested to ensure a good combination of input features (rainfall and previous streamflow data) that capture the variability of the physical processes at work. Tapped delay line (TDL) and memory effect techniques are introduced to recognize and reproduce temporal dependence. Results show poor agreement when using TDL only, but a remarkable improvement can be obtained with JNN and its memory effect procedures, which reproduce the system memory over a catchment more effectively. Furthermore, the IHACRES conceptual model, which relies on both rainfall and temperature input data, is introduced for comparison. The results suggest that when good input data are unavailable, metric models perform better than conceptual ones and, in general, it is difficult to justify substantial conceptualization of complex processes.
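The tapped delay line (TDL) inputs mentioned above are simply lagged copies of the series presented to the network as features. A sketch of building such a feature matrix (the function name and data are illustrative, not the authors' code):

```python
def tapped_delay_line(series, n_lags):
    """Build (input, target) pairs: each input holds the n_lags most
    recent values, the target is the next value in the series."""
    inputs, targets = [], []
    for t in range(n_lags, len(series)):
        inputs.append(series[t - n_lags:t])
        targets.append(series[t])
    return inputs, targets

flows = [3.0, 2.8, 2.5, 2.4, 2.2, 2.1]  # a receding (drought) limb
X, y = tapped_delay_line(flows, n_lags=2)
```

The study's point is that such fixed lag windows alone capture temporal dependence poorly; the Jordan network's recurrent state supplements them with a longer, learned memory.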
Sfakiotakis, Stelios; Vamvuka, Despina
2015-12-01
The pyrolysis of six waste biomass samples was studied and the fuels were kinetically evaluated. A modified independent parallel reactions scheme (IPR) and a distributed activation energy model (DAEM) were developed, and their validity was assessed and compared by checking their accuracy in fitting the experimental results, as well as their prediction capability under different experimental conditions. The pyrolysis experiments were carried out in a thermogravimetric analyzer, and a fitting procedure based on least squares minimization was performed simultaneously at different experimental conditions. A modification of the IPR model, considering dependence of the pre-exponential factor on heating rate, proved to give better fit results for the same number of tuned kinetic parameters than the standard IPR model, and very good prediction results for stepwise experiments. The fit of calculated to experimental data using the developed DAEM model also proved to be very good. Copyright © 2015 Elsevier Ltd. All rights reserved.
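The building block of the IPR scheme above is a single first-order reaction under a linear heating ramp, with parameters tuned by least squares against thermogravimetric data. The sketch below uses a crude Euler integration and a toy grid search purely to illustrate the fitting idea; it is not the authors' optimization procedure, and all parameter values are invented:

```python
import math

def conversion(time, temp0, beta, A, E, R=8.314):
    """Euler-integrate da/dt = A*exp(-E/(R*T))*(1-a) for a linear
    heating ramp T = temp0 + beta*t (a = degree of conversion)."""
    a = 0.0
    dt = time[1] - time[0]
    out = []
    for t in time:
        T = temp0 + beta * t
        out.append(a)
        a += A * math.exp(-E / (R * T)) * (1.0 - a) * dt
    return out

def sse(model, data):
    """Least-squares objective between modelled and measured curves."""
    return sum((m - d) ** 2 for m, d in zip(model, data))

# Synthetic "measurement" from known parameters; a toy grid search
# over the activation energy E then recovers the true value.
time = [float(i) for i in range(400)]
data = conversion(time, temp0=300.0, beta=0.5, A=1.0e6, E=1.0e5)
best_E = min([8.0e4, 1.0e5, 1.2e5],
             key=lambda E: sse(conversion(time, 300.0, 0.5, 1.0e6, E), data))
```

The real IPR and DAEM fits minimise the same kind of objective, but over several parallel reactions (or a distribution of activation energies) and simultaneously across heating programs.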
Markgraf, Rainer; Deutschinoff, Gerd; Pientka, Ludger; Scholten, Theo; Lorenz, Cristoph
2001-01-01
Background: Mortality predictions calculated using scoring scales are often not accurate in populations other than those in which the scales were developed because of differences in case-mix. The present study investigates the effect of first-level customization, using a logistic regression technique, on discrimination and calibration of the Acute Physiology and Chronic Health Evaluation (APACHE) II and III scales. Method: Probabilities of hospital death for patients were estimated by applying APACHE II and III and comparing these with observed outcomes. Using the split sample technique, a customized model to predict outcome was developed by logistic regression. The overall goodness-of-fit of the original and the customized models was assessed. Results: Of 3383 consecutive intensive care unit (ICU) admissions over 3 years, 2795 patients could be analyzed, and were split randomly into development and validation samples. The discriminative powers of APACHE II and III were unchanged by customization (areas under the receiver operating characteristic [ROC] curve 0.82 and 0.85, respectively). Hosmer-Lemeshow goodness-of-fit tests showed good calibration for APACHE II, but insufficient calibration for APACHE III. Customization improved calibration for both models, with a good fit for APACHE III as well. However, fit was different for various subgroups. Conclusions: The overall goodness-of-fit of APACHE III mortality prediction was improved significantly by customization, but uniformity of fit in different subgroups was not achieved. Therefore, application of the customized model provides no advantage, because differences in case-mix still limit comparisons of quality of care. PMID:11178223
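First-level customization as used above refits the relationship between the original model's predicted probability and the observed outcomes, typically a logistic model on the logit of the prediction. A minimal gradient-descent sketch (not the study's actual software), where the fitted intercept a and slope b are the customization parameters:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def recalibrated_prob(p, a, b):
    """Customized probability: sigmoid(a + b * logit(p_original))."""
    return 1.0 / (1.0 + math.exp(-(a + b * logit(p))))

def recalibrate(pred_probs, outcomes, lr=0.1, epochs=2000):
    """Fit intercept a and slope b of the customization model by
    plain gradient descent on the logistic log-loss."""
    a, b = 0.0, 1.0
    xs = [logit(p) for p in pred_probs]
    n = len(xs)
    for _ in range(epochs):
        ga = gb = 0.0
        for x, y in zip(xs, outcomes):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += (p - y) / n
            gb += (p - y) * x / n
        a -= lr * ga
        b -= lr * gb
    return a, b
```

For example, if the original score predicts 0.8 for a group of whom only half die, recalibration pulls the customized prediction towards 0.5, which is exactly the calibration improvement the study reports.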
Determining team cognition from delay analysis using cross recurrence plot.
Hajari, Nasim; Cheng, Irene; Bin Zheng; Basu, Anup
2016-08-01
Team cognition is an important factor in evaluating and determining team performance. Forming a team with good shared cognition is even more crucial for laparoscopic surgery applications. In this study, we analyzed the eye tracking data of two surgeons during a simulated laparoscopic operation, then performed Cross Recurrence Analysis (CRA) on the recorded data to study the delay behaviour of good-performer and poor-performer teams. Dual eye tracking data for twenty-two dyad teams were recorded during a laparoscopic task, and the teams were divided into good and poor performers based on task times. We then studied the delay between the two members of good- and poor-performer teams. The results indicated that the good-performer teams show a smaller delay than the poor-performer teams. This finding is consistent with gaze overlap analysis between team members and is therefore good evidence of shared cognition between team members.
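In a cross recurrence plot, a systematic delay between two gaze series shows up as the diagonal carrying the most recurrence points. A minimal sketch of estimating that delay for 1-D signals (the radius threshold and data are invented; real gaze data is 2-D and would use a vector distance):

```python
def cross_recurrence_delay(x, y, radius, max_lag):
    """Return the lag of y relative to x whose diagonal of the cross
    recurrence plot holds the most recurrence points, i.e. the lag
    maximising the count of |x[t] - y[t + lag]| <= radius."""
    best_lag, best_count = 0, -1
    for lag in range(-max_lag, max_lag + 1):
        count = 0
        for t in range(len(x)):
            u = t + lag
            if 0 <= u < len(y) and abs(x[t] - y[u]) <= radius:
                count += 1
        if count > best_count:
            best_lag, best_count = lag, count
    return best_lag
```

A small best lag means the two team members' gaze follows each other closely, which is the delay signature the study associates with good shared cognition.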
NASA Astrophysics Data System (ADS)
Kong, Changduk; Lim, Semyeong; Kim, Keunwoo
2013-03-01
Neural networks are widely used in engine fault diagnostic systems because of their good learning performance, but they suffer from limited accuracy and the long learning time required to build a learning database. This work inversely builds a base performance model of a turboprop engine for a high-altitude UAV from measured performance data, and proposes a fault diagnostic system that combines the base performance model with artificial intelligence methods such as fuzzy logic and neural networks. For each real engine, a base performance model that can simulate the performance of a new engine is constructed inversely from its performance test data, so condition monitoring of each engine can be carried out more precisely through comparison with measured performance data. The proposed diagnostic system first identifies the faulted components using fuzzy logic and then quantifies the faults of the identified components using neural networks trained on a fault learning database generated from the developed base performance model. Feed-forward back-propagation (FFBP) is used to learn the measured performance data of the faulted components. For ease of use, the diagnostic program is implemented as a MATLAB GUI.
Pyroelectric effect in triglycine sulphate single crystals - Differential measurement method
NASA Astrophysics Data System (ADS)
Trybus, M.
2018-06-01
A simple mathematical model of the pyroelectric phenomenon was used to explain the electric response of TGS (triglycine sulphate) samples in the linear heating process in the ferroelectric and paraelectric phases. Experimental verification of the mathematical model was carried out. TGS single crystals were grown and four electrode samples were fabricated. Differential measurements of the pyroelectric response of two different regions of the samples were performed and the results were compared with data obtained from the model. Experimental results are in good agreement with model calculations.
Khan, Nabeel; Patel, Dhruvan; Shah, Yash; Yang, Yu-Xiao
2017-05-01
Anemia and iron deficiency are common complications of ulcerative colitis (UC). We aimed to develop and internally validate a prediction model for the incidence of moderate to severe anemia and iron deficiency anemia (IDA) in newly diagnosed patients with UC. Multivariable logistic regression was performed among a nationwide cohort of patients who were newly diagnosed with UC in the VA health-care system. Model development was performed in a random two-thirds of the total cohort and then validated in the remaining one-third of the cohort. As candidate predictors, we examined routinely available data at the time of UC diagnosis including demographics, medications, laboratory results, and endoscopy findings. A total of 789 patients met the inclusion criteria. For the outcome of moderate to severe anemia, age, albumin level, and mild anemia at UC diagnosis were the predictors selected for the model. The AUC for this model was 0.69 (95% CI 0.64-0.74). For the outcome of moderate to severe anemia with evidence of iron deficiency, the predictors included African-American ethnicity, mild anemia, age, and albumin level at UC diagnosis. The AUC was 0.76 (95% CI 0.69-0.82). Calibration was consistently good in all models (Hosmer-Lemeshow goodness of fit p > 0.05). The models performed similarly in the internal validation cohort. We developed and internally validated a prognostic model for predicting the risk of moderate to severe anemia and IDA among newly diagnosed patients with UC. This will help identify patients at high risk of these complications, who could benefit from surveillance and preventive measures.
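The AUC (c statistic) reported for these models is the probability that a randomly chosen case received a higher predicted risk than a randomly chosen non-case. A self-contained pure-Python sketch of that definition (illustrative only, O(cases × controls) and meant for small data):

```python
def c_statistic(risks, outcomes):
    """Concordance statistic (AUC): fraction of case/non-case pairs in which
    the case received the higher predicted risk; ties count one half."""
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    controls = [r for r, y in zip(risks, outcomes) if y == 0]
    wins = sum(1.0 if c > d else 0.5 if c == d else 0.0
               for c in cases for d in controls)
    return wins / (len(cases) * len(controls))
```

A model with an AUC of 0.5 ranks cases no better than chance; 1.0 means perfect separation.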
A groundwater data assimilation application study in the Heihe mid-reach
NASA Astrophysics Data System (ADS)
Ragettli, S.; Marti, B. S.; Wolfgang, K.; Li, N.
2017-12-01
The present work focuses on modelling of the groundwater flow in the mid-reach of the endorheic river Heihe in the Zhangye oasis (Gansu province) in arid north-west China. In order to optimise the water resources management in the oasis, reliable forecasts of groundwater level development under different management options and environmental boundary conditions have to be produced. To this end, groundwater flow is modelled with Modflow and coupled to an Ensemble Kalman Filter programmed in Matlab. The model is updated with monthly time steps, featuring perturbed boundary conditions to account for uncertainty in model forcing. Constant biases between model and observations were corrected prior to updating and compared to model runs without bias correction. Different options for data assimilation (states and/or parameters), updating frequency, and measures against filter inbreeding (damping factor, covariance inflation, spatial localization) were tested against each other. Results show a high dependency of the Ensemble Kalman Filter performance on the selection of observations for data assimilation. For the present regional model, bias correction is necessary for good filter performance. A combination of spatial localization and covariance inflation is further advisable to reduce filter inbreeding problems. Best performance is achieved if parameter updates are not large, an indication of good prior model calibration. For this groundwater system, whose parameter values are constant or change only slowly, asynchronous updating of parameter values once every five years (with data of the past five years) combined with synchronous updating of the groundwater levels is better suited than synchronous updating of both groundwater levels and parameters at every time step with a damping factor. The filter is not able to correct time lags of signals.
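The analysis step of a stochastic Ensemble Kalman Filter with multiplicative covariance inflation (one of the inbreeding countermeasures mentioned above) can be sketched as follows. This is a generic illustration under assumed conditions (a linear observation operator, uncorrelated observation errors, perturbed observations), not the study's Modflow/Matlab implementation:

```python
import numpy as np

def enkf_analysis(ensemble, obs, H, obs_var, inflation=1.05, seed=0):
    """One stochastic EnKF update.
    ensemble: (n_state, n_ens) forecast ensemble
    obs:      (n_obs,) observation vector
    H:        (n_obs, n_state) linear observation operator
    obs_var:  observation error variance (scalar, uncorrelated errors)"""
    rng = np.random.default_rng(seed)
    n_obs, _ = H.shape
    n_ens = ensemble.shape[1]
    # Multiplicative inflation: widen the spread about the ensemble mean.
    mean = ensemble.mean(axis=1, keepdims=True)
    ens = mean + inflation * (ensemble - mean)
    Hx = H @ ens
    A = ens - ens.mean(axis=1, keepdims=True)
    HA = Hx - Hx.mean(axis=1, keepdims=True)
    Pxy = A @ HA.T / (n_ens - 1)                     # state-obs covariance
    Pyy = HA @ HA.T / (n_ens - 1) + obs_var * np.eye(n_obs)
    K = Pxy @ np.linalg.inv(Pyy)                     # Kalman gain
    # Perturbed observations keep the analysis spread statistically consistent.
    obs_pert = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), (n_obs, n_ens))
    return ens + K @ (obs_pert - Hx)

# Demo: 2-state system (e.g. two groundwater heads), observing only state 0.
rng = np.random.default_rng(1)
prior = rng.normal(0.0, 1.0, (2, 200))
H = np.array([[1.0, 0.0]])
post = enkf_analysis(prior, np.array([2.0]), H, obs_var=0.01)
```

Spatial localization would additionally taper `Pxy` with a distance-dependent mask before computing the gain.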
Tree-based flood damage modeling of companies: Damage processes and model performance
NASA Astrophysics Data System (ADS)
Sieg, Tobias; Vogel, Kristin; Merz, Bruno; Kreibich, Heidi
2017-07-01
Reliable flood risk analyses, including the estimation of damage, are an important prerequisite for efficient risk management. However, not much is known about flood damage processes affecting companies. Thus, we conduct a flood damage assessment of companies in Germany with regard to two aspects. First, we identify relevant damage-influencing variables. Second, we assess the prediction performance of the developed damage models with respect to the gain by using an increasing amount of training data and a sector-specific evaluation of the data. Random forests are trained with data from two postevent surveys after flood events occurring in the years 2002 and 2013. For a sector-specific consideration, the data set is split into four subsets corresponding to the manufacturing, commercial, financial, and service sectors. Further, separate models are derived for three different company assets: buildings, equipment, and goods and stock. Calculated variable importance values reveal different variable sets relevant for the damage estimation, indicating significant differences in the damage process for various company sectors and assets. With an increasing number of data used to build the models, prediction errors decrease. Yet the effect is rather small and seems to saturate for a data set size of several hundred observations. In contrast, the prediction improvement achieved by a sector-specific consideration is more distinct, especially for damage to equipment and goods and stock. Consequently, sector-specific data acquisition and a consideration of sector-specific company characteristics in future flood damage assessments is expected to improve the model performance more than a mere increase in data.
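Variable importance of the kind computed for these random forests is often the permutation importance: the increase in loss when one predictor column is shuffled. A model-agnostic pure-Python sketch (function names and the synthetic demo are illustrative, not the study's setup):

```python
import random

def permutation_importance(predict, X, y, loss, n_repeats=5, seed=0):
    """Mean increase in loss when column j of X is shuffled, per column.
    predict: callable row -> prediction; X: list of rows; loss(y, yhat)."""
    rng = random.Random(seed)
    base = loss(y, [predict(row) for row in X])
    importances = []
    for j in range(len(X[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [row[:j] + [v] + row[j + 1:]
                      for row, v in zip(X, col)]
            deltas.append(loss(y, [predict(row) for row in X_perm]) - base)
        importances.append(sum(deltas) / n_repeats)
    return importances

# Demo on synthetic data: only column 0 drives the outcome.
random.seed(42)
X = [[random.random(), random.random()] for _ in range(200)]
y = [row[0] for row in X]
mse = lambda yy, pp: sum((a - b) ** 2 for a, b in zip(yy, pp)) / len(yy)
imp = permutation_importance(lambda row: row[0], X, y, mse)
```

An informative predictor gets a positive importance; shuffling an ignored column leaves the loss unchanged.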
Sathe, Prachee M; Bapat, Sharda N
2014-01-01
To assess the performance and utility of two mortality prediction models viz. Acute Physiology and Chronic Health Evaluation II (APACHE II) and Simplified Acute Physiology Score II (SAPS II) in a single Indian mixed tertiary intensive care unit (ICU). Secondary objectives were bench-marking and setting a baseline for research. In this observational cohort, data needed for calculation of both scores were prospectively collected for all consecutive admissions to a 28-bed ICU in the year 2011. After excluding readmissions, discharges within 24 h and age <18 years, the records of 1543 patients were analyzed using appropriate statistical methods. Both models overpredicted mortality in this cohort [standardized mortality ratio (SMR) 0.88 ± 0.05 and 0.95 ± 0.06 using APACHE II and SAPS II, respectively]. Patterns of predicted mortality had strong association with true mortality (R² = 0.98 for APACHE II and R² = 0.99 for SAPS II). Both models performed poorly in formal Hosmer-Lemeshow goodness-of-fit testing [Chi-square = 12.8 (P = 0.03) for APACHE II, Chi-square = 26.6 (P = 0.001) for SAPS II] but showed good discrimination [area under receiver operating characteristic curve 0.86 ± 0.013 SE (P < 0.001) and 0.83 ± 0.013 SE (P < 0.001) for APACHE II and SAPS II, respectively]. There were wide variations in SMRs calculated for subgroups based on the International Classification of Diseases, 10th edition (standard deviation ± 0.27 for APACHE II and 0.30 for SAPS II). Lack of fit of the data to the models and the wide variation in SMRs across subgroups limit the utility of these models as tools for assessing quality of care and comparing performances of different units without customization. Considering their comparable performance and simplicity of use, efforts should be made to adapt SAPS II.
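The two headline statistics here, the standardized mortality ratio and the Hosmer-Lemeshow chi-square, are simple to compute once predicted probabilities are in hand. A hedged sketch with decile grouping by predicted risk (details of the published tests, e.g. group boundaries, may differ):

```python
def smr(outcomes, predicted):
    """Standardized mortality ratio: observed deaths / expected deaths."""
    return sum(outcomes) / sum(predicted)

def hosmer_lemeshow_chi2(predicted, outcomes, n_groups=10):
    """HL statistic: sort patients by predicted risk, split into groups, and
    accumulate (observed - expected)^2 / variance per group."""
    pairs = sorted(zip(predicted, outcomes))
    n = len(pairs)
    chi2 = 0.0
    for g in range(n_groups):
        chunk = pairs[g * n // n_groups:(g + 1) * n // n_groups]
        if not chunk:
            continue
        m = len(chunk)
        expected = sum(p for p, _ in chunk)
        observed = sum(y for _, y in chunk)
        p_bar = expected / m
        if 0.0 < p_bar < 1.0:
            chi2 += (observed - expected) ** 2 / (m * p_bar * (1.0 - p_bar))
    return chi2

# Demo: a well-calibrated model gives SMR near 1 and a small HL statistic.
import random
random.seed(3)
pred = [random.uniform(0.05, 0.6) for _ in range(2000)]
obs = [1 if random.random() < p else 0 for p in pred]
```

An SMR below 1, as in this cohort, means fewer deaths were observed than the score predicted.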
Mathematical Modeling of Loop Heat Pipes
NASA Technical Reports Server (NTRS)
Kaya, Tarik; Ku, Jentung; Hoang, Triem T.; Cheung, Mark L.
1998-01-01
The primary focus of this study is to model steady-state performance of a Loop Heat Pipe (LHP). The mathematical model is based on the steady-state energy balance equations at each component of the LHP. The heat exchange between each LHP component and the surrounding is taken into account. Both convection and radiation environments are modeled. The loop operating temperature is calculated as a function of the applied power at a given loop condition. Experimental validation of the model is attempted by using two different LHP designs. The mathematical model is tested at different sink temperatures and at different elevations of the loop. The comparison of the calculations and experimental results showed very good agreement (within 3%). This method proved to be a useful tool in studying steady-state LHP performance characteristics.
NASA Astrophysics Data System (ADS)
Ojo, A. O.; Xie, Jun; Olorunfemi, M. O.
2018-01-01
To reduce ambiguity related to nonlinearities in the resistivity model-data relationships, an efficient direct-search scheme employing the Neighbourhood Algorithm (NA) was implemented to solve the 1-D resistivity problem. In addition to finding a range of best-fit models which are more likely to be global minima, this method investigates the entire multi-dimensional model space and provides additional information about the posterior model covariance matrix, marginal probability density function and an ensemble of acceptable models. This provides new insights into how well the model parameters are constrained and makes it possible to assess trade-offs between them, thus avoiding some common interpretation pitfalls. The efficacy of the newly developed program is tested by inverting both synthetic (noisy and noise-free) data and field data from other authors employing different inversion methods so as to provide a good base for comparative performance. In all cases, the inverted model parameters were in good agreement with the true and recovered model parameters from other methods and correlate remarkably well with the available borehole litho-log and known geology for the field dataset. The NA method has proven to be useful when a good starting model is not available, and the reduced number of unknowns in the 1-D resistivity inverse problem makes it an attractive alternative to the linearized methods. Hence, it is concluded that the newly developed program offers an excellent complementary tool for the global inversion of the layered resistivity structure.
Faulhammer, E; Llusa, M; Wahl, P R; Paudel, A; Lawrence, S; Biserni, S; Calzolari, V; Khinast, J G
2016-01-01
The objectives of this study were to develop a predictive statistical model for low-fill-weight capsule filling of inhalation products with dosator nozzles via the quality by design (QbD) approach and, based on that, to create refined models that include quadratic terms for significant parameters. Various controllable process parameters and uncontrolled material attributes of 12 powders were initially screened using a linear model with partial least square (PLS) regression to determine their effect on the critical quality attributes (CQA; fill weight and weight variability). After identifying critical material attributes (CMAs) and critical process parameters (CPPs) that influenced the CQA, model refinement was performed to study if interactions or quadratic terms influence the model. Based on the assessment of the effects of the CPPs and CMAs on fill weight and weight variability for low-fill-weight inhalation products, we developed an excellent linear predictive model for fill weight (R² = 0.96, Q² = 0.96 for powders with good flow properties and R² = 0.94, Q² = 0.93 for cohesive powders) and a model that provides a good approximation of the fill weight variability for each powder group. We validated the model, established a design space for the performance of different types of inhalation grade lactose on low-fill-weight capsule filling and successfully used the CMAs and CPPs to predict fill weight of powders that were not included in the development set.
Turbulence modeling of free shear layers for high-performance aircraft
NASA Technical Reports Server (NTRS)
Sondak, Douglas L.
1993-01-01
The High Performance Aircraft (HPA) Grand Challenge of the High Performance Computing and Communications (HPCC) program involves the computation of the flow over a high performance aircraft. A variety of free shear layers, including mixing layers over cavities, impinging jets, blown flaps, and exhaust plumes, may be encountered in such flowfields. Since these free shear layers are usually turbulent, appropriate turbulence models must be utilized in computations in order to accurately simulate these flow features. The HPCC program is relying heavily on parallel computers. A Navier-Stokes solver (POVERFLOW) utilizing the Baldwin-Lomax algebraic turbulence model was developed and tested on a 128-node Intel iPSC/860. Algebraic turbulence models run very fast and give good results for many flowfields. For complex flowfields such as those mentioned above, however, they are often inadequate. It was therefore deemed that a two-equation turbulence model would be required for the HPA computations. The k-epsilon two-equation turbulence model was implemented on the Intel iPSC/860. Both the Chien low-Reynolds-number model and a generalized wall-function formulation were included.
Model for the separate collection of packaging waste in Portuguese low-performing recycling regions.
Oliveira, V; Sousa, V; Vaz, J M; Dias-Ferreira, C
2018-06-15
Separate collection of packaging waste (glass; plastic/metals; paper/cardboard) is currently a widespread practice throughout Europe. It enables the recovery of good quality recyclable materials. However, separate collection performance is quite heterogeneous, with some countries reaching higher levels than others. In the present work, separate collection of packaging waste has been evaluated in a low-performing recycling region in Portugal in order to investigate which factors most affect performance in the bring-bank collection system. The variability of separate collection yields (kg per inhabitant per year) among 42 municipalities was scrutinized for the year 2015 against possible explanatory factors. A total of 14 possible explanatory factors were analysed, falling into two groups: socio-economic/demographic and waste collection service related. Regression models were built in an attempt to evaluate the individual effect of each factor on separate collection yields and to predict changes in the collection yields by acting on those factors. The best model obtained can explain 73% of the variation found in the separate collection yields. The model includes the following statistically significant indicators affecting the success of separate collection: i) inhabitants per bring-bank; ii) relative accessibility to bring-banks; iii) degree of urbanization; iv) number of school years attended; and v) area. The model presented in this work was developed specifically for the bring-bank system, has good explanatory power, and quantifies the impact of each factor on separate collection yields. It can therefore be used as a support tool by local and regional waste management authorities in the definition of future strategies to increase collection of good-quality recyclables and to achieve national and regional targets. Copyright © 2017 Elsevier Ltd. All rights reserved.
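The 73%-of-variance figure is the coefficient of determination (R²) of a multiple linear regression. A minimal NumPy sketch of fitting such a model and reading off R², on synthetic stand-ins for the explanatory factors (variable names and coefficients are illustrative, not the study's fitted model):

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary least squares with an intercept column; returns the
    coefficient vector [b0, b1, ...] and the R-squared of the fit."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return coef, 1.0 - ss_res / ss_tot

# Demo: recover a known linear relationship from noisy synthetic data
# (x1, x2 play the role of factors such as inhabitants per bring-bank).
rng = np.random.default_rng(0)
x1 = rng.uniform(0.0, 10.0, 300)
x2 = rng.uniform(0.0, 1.0, 300)
y = 5.0 + 2.0 * x1 - 3.0 * x2 + rng.normal(0.0, 0.5, 300)
coef, r2 = ols_fit(np.column_stack([x1, x2]), y)
```

The fitted coefficients quantify the per-unit effect of each factor on collection yield, which is exactly how such a model supports "what if we add more bring-banks" questions.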
Tsugawa, Yusuke; Ohbu, Sadayoshi; Cruess, Richard; Cruess, Sylvia; Okubo, Tomoya; Takahashi, Osamu; Tokuda, Yasuharu; Heist, Brian S; Bito, Seiji; Itoh, Toshiyuki; Aoki, Akiko; Chiba, Tsutomu; Fukui, Tsuguya
2011-08-01
Despite the growing importance of and interest in medical professionalism, there is no standardized tool for its measurement. The authors sought to verify the validity, reliability, and generalizability of the Professionalism Mini-Evaluation Exercise (P-MEX), a previously developed and tested tool, in the context of Japanese hospitals. A multicenter, cross-sectional evaluation study was performed to investigate the validity, reliability, and generalizability of the P-MEX in seven Japanese hospitals. In 2009-2010, 378 evaluators (attending physicians, nurses, peers, and junior residents) completed 360-degree assessments of 165 residents and fellows using the P-MEX. The content validity and criterion-related validity were examined, and the construct validity of the P-MEX was investigated by performing confirmatory factor analysis through a structural equation model. The reliability was tested using generalizability analysis. The contents of the P-MEX achieved good acceptance in a preliminary working group, and the poststudy survey revealed that 302 (79.9%) evaluators rated the P-MEX items as appropriate, indicating good content validity. The correlation coefficient between P-MEX scores and external criteria was 0.78 (P < .001), demonstrating good criterion-related validity. Confirmatory factor analysis verified high path coefficients (0.60-0.99) and adequate goodness of fit of the model. The generalizability analysis yielded a high dependability coefficient, suggesting good reliability, except when evaluators were peers or junior residents. Findings show evidence of adequate validity, reliability, and generalizability of the P-MEX in Japanese hospital settings. The P-MEX is the only evaluation tool for medical professionalism verified in both Western and East Asian cultural contexts.
Burns, Ryan D; Hannon, James C; Brusseau, Timothy A; Eisenman, Patricia A; Shultz, Barry B; Saint-Maurice, Pedro F; Welk, Gregory J; Mahar, Matthew T
2016-01-01
A popular algorithm to predict VO2Peak from the one-mile run/walk test (1MRW) includes body mass index (BMI), which manifests practical issues in school settings. The purpose of this study was to develop an aerobic capacity model from 1MRW in adolescents independent of BMI. Cardiorespiratory endurance data were collected on 90 adolescents aged 13-16 years. The 1MRW was administered on an outside track and a laboratory VO2Peak test was conducted using a maximal treadmill protocol. Multiple linear regression was employed to develop the prediction model. Results yielded the following algorithm: VO2Peak = 7.34 × (1MRW speed in m·s⁻¹) + 0.23 × (age × sex) + 17.75. The New Model displayed a multiple correlation and prediction error of R = 0.81, standard error of the estimate = 4.78 mL·kg⁻¹·min⁻¹, with measured VO2Peak and good criterion-referenced (CR) agreement into FITNESSGRAM's Healthy Fitness Zone (Kappa = 0.62; percentage agreement = 84.4%; Φ = 0.62). The New Model was validated using k-fold cross-validation and showed homoscedastic residuals across the range of predicted scores. The omission of BMI did not compromise accuracy of the model. In conclusion, the New Model displayed good predictive accuracy and good CR agreement with measured VO2Peak in adolescents aged 13-16 years.
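The New Model is a closed-form equation, so it drops straight into code. A sketch (the sex coding, presumably male = 1 and female = 0 as is common for such equations, is an assumption not spelled out in the abstract):

```python
def vo2peak_new_model(speed_ms, age_years, sex):
    """New Model: VO2Peak (mL·kg⁻¹·min⁻¹) predicted from one-mile run/walk
    speed (m·s⁻¹), age, and sex, without BMI.
    Sex coding (1 = male, 0 = female) is assumed, not stated in the abstract."""
    return 7.34 * speed_ms + 0.23 * (age_years * sex) + 17.75
```

For a 15-year-old boy averaging 3.0 m·s⁻¹, the model predicts 7.34 × 3.0 + 0.23 × 15 + 17.75 = 43.22 mL·kg⁻¹·min⁻¹.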
Peng, Yuyang; Choi, Jaeho
2014-01-01
Improving the energy efficiency in wireless sensor networks (WSNs) has attracted considerable attention. The multiple-input multiple-output (MIMO) technique has proven to be a good candidate for improving energy efficiency, but it may not be feasible in WSNs due to the size limitations of sensor nodes. As a solution, the cooperative multiple-input multiple-output (CMIMO) technique overcomes this constraint and shows dramatically better performance. In this paper, a new CMIMO scheme based on the spatial modulation (SM) technique, named CMIMO-SM, is proposed for energy-efficiency improvement. We first establish the system model of CMIMO-SM. Based on this model, the transmission approach is introduced graphically. In order to evaluate the performance of the proposed scheme, a detailed analysis of energy consumption per bit compared with conventional CMIMO is presented. We then extend the proposed CMIMO-SM to a multihop clustered WSN to achieve further energy efficiency by finding an optimal hop length; the traditional equidistant-hop scheme is used for comparison. Results from the simulations and numerical experiments indicate that significant savings in total energy consumption can be achieved by the proposed scheme. Combining the proposed scheme with monitoring sensor nodes will provide good performance in arbitrarily deployed WSNs such as forest fire detection systems.
On the Evaluation of Teaching and Learning in Higher Education: A Multicultural Inquiry
ERIC Educational Resources Information Center
White, Christopher J.
2011-01-01
The aim of the study was to develop a model of positive word-of-mouth (WoM) intentions in a higher education context. WoM was found to be directly influenced by satisfaction levels and indirectly by antecedents of satisfaction, namely positive and negative emotions and perceptions of performance. The model provided a good fit to the data and…
Improving parallel I/O autotuning with performance modeling
Behzad, Babak; Byna, Surendra; Wild, Stefan M.; ...
2014-01-01
Various layers of the parallel I/O subsystem offer tunable parameters for improving I/O performance on large-scale computers. However, searching through a large parameter space is challenging. We are working towards an autotuning framework for determining the parallel I/O parameters that can achieve good I/O performance for different data write patterns. In this paper, we characterize parallel I/O and discuss the development of predictive models for use in effectively reducing the parameter space. Furthermore, applying our technique on tuning an I/O kernel derived from a large-scale simulation code shows that the search time can be reduced from 12 hours to 2 hours, while achieving a 54X I/O performance speedup.
Simons, Jessica P; Goodney, Philip P; Flahive, Julie; Hoel, Andrew W; Hallett, John W; Kraiss, Larry W; Schanzer, Andres
2016-04-01
Providing patients and payers with publicly reported risk-adjusted quality metrics for the purpose of benchmarking physicians and institutions has become a national priority. Several prediction models have been developed to estimate outcomes after lower extremity revascularization for critical limb ischemia, but the optimal model to use in contemporary practice has not been defined. We sought to identify the highest-performing risk-adjustment model for amputation-free survival (AFS) at 1 year after lower extremity bypass (LEB). We used the national Society for Vascular Surgery Vascular Quality Initiative (VQI) database (2003-2012) to assess the performance of three previously validated risk-adjustment models for AFS. The Bypass versus Angioplasty in Severe Ischaemia of the Leg (BASIL), Finland National Vascular (FINNVASC) registry, and the modified Project of Ex-vivo vein graft Engineering via Transfection III (PREVENT III [mPIII]) risk scores were applied to the VQI cohort. A novel model for 1-year AFS was also derived using the VQI data set and externally validated using the PIII data set. The relative discrimination (Harrell c-index) and calibration (Hosmer-May goodness-of-fit test) of each model were compared. Among 7754 patients in the VQI who underwent LEB for critical limb ischemia, the AFS was 74% at 1 year. Each of the previously published models for AFS demonstrated similar discriminative performance: c-indices for BASIL, FINNVASC, mPIII were 0.66, 0.60, and 0.64, respectively. The novel VQI-derived model had improved discriminative ability with a c-index of 0.71 and appropriate generalizability on external validation with a c-index of 0.68. The model was well calibrated in both the VQI and PIII data sets (goodness of fit P = not significant). Currently available prediction models for AFS after LEB perform modestly when applied to national contemporary VQI data. Moreover, the performance of each model was inferior to that of the novel VQI-derived model. 
Because the importance of risk-adjusted outcome reporting continues to increase, national registries such as VQI should begin using this novel model for benchmarking quality of care. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Visser, V S; Hermes, W; Twisk, J; Franx, A; van Pampus, M G; Koopmans, C; Mol, B W J; de Groot, C J M
2017-10-01
The association between hypertensive pregnancy disorders and cardiovascular disease later in life is well described. In this study we aim to develop a prognostic model from patient characteristics known before, early in, during and after pregnancy to identify women at increased risk of cardiovascular disease, e.g. chronic hypertension, years after a pregnancy complicated by hypertension at term. We included women with a history of singleton pregnancy complicated by hypertension at term. Women using antihypertensive medication before pregnancy were excluded. We measured hypertension in these women more than 2 years postpartum. Various patient characteristics before, early in, during and after pregnancy were considered to develop a prognostic model of chronic hypertension at 2 years. These included, among others, maternal age, blood pressure at pregnancy intake and blood pressure six weeks postpartum. Univariable analyses followed by a multivariable logistic regression analysis were performed to determine which combination of predictors best predicted chronic hypertension. Model performance was assessed by calibration (graphical plot) and discrimination (area under the receiver operating characteristic curve (AUC)). Of the 305 women in whom blood pressure 2.5 years after pregnancy was assessed, 105 women (34%) had chronic hypertension. The following patient characteristics were significantly associated with chronic hypertension: higher maternal age, lower education, negative family history of hypertensive pregnancy disorders, higher BMI at booking, higher diastolic blood pressure at pregnancy intake, higher systolic blood pressure during pregnancy and higher diastolic blood pressure at six weeks postpartum. These characteristics were included in the prognostic model for chronic hypertension. Model performance was good, as indicated by good calibration and good discrimination (AUC 0.83, 95% CI 0.75-0.92). 
Chronic hypertension can be expected from patient characteristics before, early in, during and after pregnancy. These data underline the importance and awareness of detectable risk factors both for increased risk of complicated pregnancy as well as increased risk of cardiovascular disease later in life. Copyright © 2017 International Society for the Study of Hypertension in Pregnancy. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Arumugam, S.; Ramakrishna, P.; Sangavi, S.
2018-02-01
Improvements in heating technology with solar energy are gaining focus, especially solar parabolic collectors. Solar heating in conventional parabolic collectors is done with the help of radiation concentration on receiver tubes. Conventional receiver tubes are open to the atmosphere and lose heat to ambient air currents. In order to reduce the convection losses and also to improve the aperture area, we designed a tube with a cavity. This study compares the performance behaviour of the conventional tube and the cavity-model tube. The performance formulae were derived for the cavity model based on the conventional model. A reduction in the overall heat loss coefficient was observed for the cavity model, though the collector heat removal factor and collector efficiency were nearly the same for both models. An improvement in efficiency was also observed in the cavity model's performance. The approach of designing a cavity-model tube as the receiver tube in solar parabolic collectors gave improved results and proved to be a good design choice.
An alternative method for centrifugal compressor loading factor modelling
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.
2017-08-01
The loading factor at design point is calculated by one or other empirical formula in classical design methods. Performance modelling as a whole is out of consideration. Test data of compressor stages demonstrate that loading factor versus flow coefficient at the impeller exit has a linear character independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function - the loading factor at design point and at zero flow rate. The proper formulae include empirical coefficients. A good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova had proposed to define the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. Equations with universal empirical coefficients are proposed. The calculation error lies in the range of ±1.5%. The alternative approach to loading factor performance modelling is included in new versions of the Universal Modelling Method.
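The two-parameter description above, the loading factor at zero flow rate plus the inclination angle of the straight performance line, can be sketched as follows (the symbols and the sign convention are illustrative assumptions, not the Universal Modelling Method's actual notation):

```python
import math

def loading_factor(phi, psi_0, incline_deg):
    """Linear loading-factor performance: value psi_0 at zero flow rate,
    decreasing with flow coefficient phi at a slope set by the line's
    inclination angle (degrees) to the ordinate axis."""
    return psi_0 - math.tan(math.radians(incline_deg)) * phi
```

With both parameters fixed from test data or analogs, the whole loading-factor line, and hence the design-point value, follows from this single expression.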
Levey, Janet A
2017-08-01
Nurse educators might be unknowingly excluding learners secondary to teaching practices. Universal design for instruction (UDI) prepares and delivers accessible content and learning environments for diverse learners; however, it is not well known in nursing education. The aim of the study was to examine the psychometric properties of the Inclusive Teaching Strategies in Nursing Education (ITSinNE) 55-item instrument. Confirmatory factor analysis was performed on a sample of 311 educators in prelicensure programs. The ITSinNE scales had good to adequate estimates of reliability. The exogenous model fit the sample and model-implied covariance matrix; however, the endogenous model was not a good fit. Further instrument development is required. Measuring factors influencing nurse educators' willingness to adopt UDI will enable intervention research to enhance professional development fostering content and environmental access for all learners.
A local-circulation model for Darrieus vertical-axis wind turbines
NASA Astrophysics Data System (ADS)
Masse, B.
1986-04-01
A new computational model for the aerodynamics of the vertical-axis wind turbine is presented. Based on the local-circulation method generalized for curved blades, combined with a wake model for the vertical-axis wind turbine, it differs markedly from current models based on variations in the streamtube momentum and vortex models using the lifting-line theory. A computer code has been developed to calculate the loads and performance of the Darrieus vertical-axis wind turbine. The results show good agreement with experimental data and compare well with other methods.
Manufacturing stresses and strains in filament wound cylinders
NASA Technical Reports Server (NTRS)
Calius, E. P.; Kidron, M.; Lee, S. Y.; Springer, G. S.
1988-01-01
Tests were performed to verify a previously developed model for simulating the manufacturing process of filament wound cylinders. The axial and hoop strains were measured during cure inside a filament wound Fiberite T300/976 graphite-epoxy cylinder. The measured strains were compared to those computed by the model. Good agreement was found between the data and the model, indicating that the model is a useful representation of the process. For the conditions of the test, the manufacturing stresses inside the cylinder were also calculated using the model.
NASA Astrophysics Data System (ADS)
Obeidat, Abdalla; Abu-Ghazleh, Hind
2018-06-01
Two intermolecular potential models of methanol (TraPPE-UA and OPLS-AA) have been used in order to examine their validity in reproducing selected structural, dynamical, and thermodynamic properties in unary and binary systems. These two models are combined with two water models (SPC/E and TIP4P). The temperature dependence of the density, surface tension, diffusion, and structural properties for the unary system has been computed over a specific range of temperatures (200-300 K). The very good performance of the TraPPE-UA potential model in predicting the surface tension, diffusion, structure, and density of the unary system led us to examine its accuracy and performance in aqueous solution. In the binary system the same properties were examined, using different mole fractions of methanol. The TraPPE-UA model combined with TIP4P water shows very good agreement with the experimental results for the density and surface tension, whereas the OPLS-AA model combined with SPC/E water shows very good agreement with the experimental diffusion coefficients. Two different approaches have been used to calculate the diffusion coefficient in the mixture, namely the Einstein equation (EE) and the Green-Kubo (GK) method. Our results show the advantage of applying GK over EE in reproducing the experimental results and in saving computer time.
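The Einstein-equation route can be sketched in a few lines. The example below is a minimal illustration, assuming a synthetic Brownian trajectory as a stand-in for real MD output; it recovers the diffusion coefficient from the slope of the mean-squared displacement, MSD(t) ≈ 6Dt in three dimensions.

```python
import numpy as np

def einstein_diffusion(positions, dt):
    """Estimate D from the Einstein relation MSD(t) ~ 6*D*t (3-D).

    positions: array of shape (n_steps, n_particles, 3).
    """
    n_steps = positions.shape[0]
    lags = np.arange(1, n_steps // 2)
    msd = np.array([
        np.mean(np.sum((positions[lag:] - positions[:-lag]) ** 2, axis=-1))
        for lag in lags
    ])
    slope = np.polyfit(lags * dt, msd, 1)[0]  # linear fit over the sampled lags
    return slope / 6.0

# Synthetic Brownian trajectory with known D = sigma**2 / (2*dt) per dimension
# (an assumption standing in for actual simulation output).
rng = np.random.default_rng(0)
dt, sigma = 1.0, 0.1
steps = rng.normal(0.0, sigma, size=(1000, 50, 3))
positions = np.cumsum(steps, axis=0)
D_est = einstein_diffusion(positions, dt)
```

The Green-Kubo alternative would instead integrate the velocity autocorrelation function; for well-sampled trajectories the two estimates should agree.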
Louie, Arnold; Liu, Weiguo; VanGuilder, Michael; Neely, Michael N.; Schumitzky, Alan; Jelliffe, Roger; Fikes, Steven; Kurhanewicz, Stephanie; Robbins, Nichole; Brown, David; Baluya, Dodge; Drusano, George L.
2015-01-01
Background. Meropenem plus levofloxacin treatment was shown to be a promising combination in our in vitro hollow fiber infection model. We strove to validate this finding in a murine Pseudomonas pneumonia model. Methods. A dose-ranging study with meropenem and levofloxacin, alone and in combination, against Pseudomonas aeruginosa was performed in a granulocytopenic murine pneumonia model. Meropenem and levofloxacin were administered to partially humanize their pharmacokinetic profiles in mouse serum. Total and resistant bacterial populations were estimated after 24 hours of therapy. Pharmacokinetic profiling of both drugs was performed in plasma and epithelial lining fluid, using a population model. Results. Meropenem and levofloxacin penetrations into epithelial lining fluid were 39.3% and 64.3%, respectively. Both monotherapies demonstrated good exposure responses. An innovative combination-therapy analytic approach demonstrated that the combination was statistically significantly synergistic (α = 2.475), as was shown in the hollow fiber infection model. Bacteria resistant to levofloxacin and to meropenem were seen in the control arm. Levofloxacin monotherapy selected for resistance to itself. No resistant subpopulations were observed in any combination therapy arm. Conclusions. The combination of meropenem plus levofloxacin was synergistic, producing good bacterial kill and resistance suppression. Given the track record of safety of each agent, this combination may be worthy of a clinical trial. PMID:25362196
Influence of Lift Offset on Rotorcraft Performance
NASA Technical Reports Server (NTRS)
Johnson, Wayne
2009-01-01
The influence of lift offset on the performance of several rotorcraft configurations is explored. A lift-offset rotor, or advancing blade concept, is a hingeless rotor that can attain good efficiency at high speed by operating with more lift on the advancing side than on the retreating side of the rotor disk. The calculated performance capability of modern-technology coaxial rotors utilizing a lift offset is examined, including rotor performance optimized for hover and high-speed cruise. The ideal induced power loss of coaxial rotors in hover and twin rotors in forward flight is presented. The aerodynamic modeling requirements for performance calculations are evaluated, including wake and drag models for the high-speed flight condition. The influence of configuration on the performance of rotorcraft with lift-offset rotors is explored, considering tandem and side-by-side rotorcraft as well as wing-rotor lift share.
NASA Astrophysics Data System (ADS)
Jitsuhiro, Takatoshi; Toriyama, Tomoji; Kogure, Kiyoshi
We propose a noise suppression method based on multi-model composition and multi-pass search. In real environments, input speech for speech recognition includes many kinds of noise signals. To obtain good recognition candidates, it is important to suppress many kinds of noise signals at once and to find the target speech. Before noise suppression, to find speech and noise label sequences, we introduce multi-pass search with acoustic models that include many kinds of noise models and their compositions, their n-gram models, and their lexicon. Noise suppression is performed frame-synchronously using the multiple models selected by the recognized label sequences with time alignments. We evaluated this method on the E-Nightingale task, which contains voice memoranda spoken by nurses during actual work at hospitals. The proposed method obtained higher performance than the conventional method.
Predicting the names of the best teams after the knock-out phase of a cricket series.
Lemmer, Hermanus Hofmeyr
2014-01-01
Cricket players' performances can best be judged after a large number of matches have been played. For test or one-day international (ODI) players, career data are normally used to calculate performance measures. These are normally good indicators of future performances, although various factors influence the performance of a player in a specific match. It is often necessary to judge players' performances based on a small number of scores, e.g. to identify the best players after a short series of matches. The challenge then is to use the best available criteria in order to assess performances as accurately and fairly as possible. In the present study the results of the knock-out phase of an International Cricket Council (ICC) World Cup ODI series are used to predict the names of the best teams by means of a suitably formulated logistic regression model. Despite using very sparse data, the methods used are reasonably successful. It is also shown that if the same technique is applied to career ratings, very good results are obtained.
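As an illustration of the kind of model described (not the author's exact formulation), a one-predictor logistic regression can be fitted by plain gradient descent; the rating-difference predictor, the outcome coding, and the data below are all hypothetical.

```python
import numpy as np

def fit_logistic(x, y, lr=0.1, n_iter=5000):
    """One-predictor logistic regression fitted by gradient descent.

    x: rating difference (team A minus team B); y: 1.0 if team A won, else 0.0.
    Returns the fitted slope w and intercept b.
    """
    w, b = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # predicted win probability
        w -= lr * np.mean((p - y) * x)           # gradient of log-loss w.r.t. w
        b -= lr * np.mean(p - y)                 # gradient of log-loss w.r.t. b
    return w, b

# Hypothetical match data generated from a known logistic model (true slope 1.5)
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 500)
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-1.5 * x))).astype(float)
w, b = fit_logistic(x, y)
p_win = 1.0 / (1.0 + np.exp(-(w * 1.0 + b)))  # P(win | one-unit rating edge)
```

With enough matches the fitted slope recovers the generating value; the article's point is precisely that the knock-out phase offers far fewer observations than this.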
Comprehensive decision tree models in bioinformatics.
Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter
2012-01-01
Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models, where knowledge extraction and explanation of the reasoning behind the classification model are possible. This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach, where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase in accuracy for the less complex, visually tuned decision trees. In contrast to the classical machine learning benchmarking datasets, we observe higher accuracy gains in the bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility but also very good classification performance that does not differ from that of the usually more complex models built using the default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly redundant attributes, which are very common in bioinformatics.
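The central idea, constraining the model by its dimensions rather than by any classification performance measure, can be illustrated with a minimal CART-style learner written from scratch (a sketch, not the authors' environment), where `max_depth` plays the role of the predefined visual boundary:

```python
import numpy as np

def gini(y):
    """Gini impurity of binary labels."""
    p = y.mean() if len(y) else 0.0
    return 2.0 * p * (1.0 - p)

def build_tree(X, y, depth, max_depth):
    """Grow a binary CART-style tree; only the size bound stops growth,
    never a held-out accuracy measure."""
    if depth == max_depth or len(set(y)) == 1:
        return ("leaf", round(y.mean()))
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, j, t)
    if best is None:
        return ("leaf", round(y.mean()))
    _, j, t = best
    mask = X[:, j] <= t
    return ("node", j, t,
            build_tree(X[mask], y[mask], depth + 1, max_depth),
            build_tree(X[~mask], y[~mask], depth + 1, max_depth))

def predict(tree, x):
    while tree[0] == "node":
        _, j, t, left, right = tree
        tree = left if x[j] <= t else right
    return tree[1]

# Toy data: the label depends on a single threshold, so a shallow tree suffices.
X = np.array([[0.1], [0.4], [0.6], [0.9]])
y = np.array([0, 0, 1, 1])
tree = build_tree(X, y, 0, max_depth=2)
preds = [predict(tree, x) for x in X]
```

Shrinking `max_depth` trades training fit for interpretability, which mirrors the paper's finding that simple, size-bounded trees can still classify well.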
Goodness-of-Fit Tests and Nonparametric Adaptive Estimation for Spike Train Analysis
2014-01-01
When dealing with classical spike train analysis, the practitioner often performs goodness-of-fit tests to test whether the observed process is a Poisson process, for instance, or if it obeys another type of probabilistic model (Yana et al. in Biophys. J. 46(3):323–330, 1984; Brown et al. in Neural Comput. 14(2):325–346, 2002; Pouzat and Chaffiol in Technical report, http://arxiv.org/abs/arXiv:0909.2785, 2009). In doing so, there is a fundamental plug-in step, where the parameters of the supposed underlying model are estimated. The aim of this article is to show that plug-in has sometimes very undesirable effects. We propose a new method based on subsampling to deal with those plug-in issues in the case of the Kolmogorov–Smirnov test of uniformity. The method relies on the plug-in of good estimates of the underlying model that have to be consistent with a controlled rate of convergence. Some nonparametric estimates satisfying those constraints in the Poisson or in the Hawkes framework are highlighted. Moreover, they share adaptive properties that are useful from a practical point of view. We show the performance of those methods on simulated data. We also provide a complete analysis with these tools on single unit activity recorded on a monkey during a sensory-motor task. Electronic Supplementary Material The online version of this article (doi:10.1186/2190-8567-4-3) contains supplementary material. PMID:24742008
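A toy version of the plug-in step can be sketched as follows. The example assumes a homogeneous Poisson train, so the inter-spike intervals are exponential and the time-rescaling transform should yield uniform variates; the half-split used to decouple estimation from testing is only a crude stand-in for the subsampling scheme of the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
lam = 5.0
isi = rng.exponential(1.0 / lam, 2000)  # inter-spike intervals of a Poisson train

# Naive plug-in: estimate the rate on the SAME data used for the test.
lam_hat_same = 1.0 / isi.mean()
u_same = 1.0 - np.exp(-lam_hat_same * isi)       # time-rescaling transform
p_same = stats.kstest(u_same, "uniform").pvalue  # biased by the plug-in step

# Split alternative: estimate the rate on one half, test uniformity on the
# other, so the plug-in estimate is independent of the tested sample.
half = len(isi) // 2
lam_hat = 1.0 / isi[:half].mean()
u = 1.0 - np.exp(-lam_hat * isi[half:])
p_split = stats.kstest(u, "uniform").pvalue
```

With the same-data plug-in the Kolmogorov-Smirnov statistic is systematically too small (the Lilliefors effect), which is exactly the kind of undesirable behavior the article analyzes.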
Estimation and prediction under local volatility jump-diffusion model
NASA Astrophysics Data System (ADS)
Kim, Namhyoung; Lee, Younhee
2018-02-01
Volatility is an important factor in operating a company and managing risk. In the portfolio optimization and risk hedging using the option, the value of the option is evaluated using the volatility model. Various attempts have been made to predict option value. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately. However, these models have practical limitations. Combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model and apply it using both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, stochastic volatility model, and local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.
Benchmarking test of empirical root water uptake models
NASA Astrophysics Data System (ADS)
dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman
2017-01-01
Detailed physical models describing root water uptake (RWU) are an important tool for predicting RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions as predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation. For medium and high R, the Feddes RWU model cannot properly mimic the root uptake dynamics predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios; for high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model. The proposed models are better at predicting RWU patterns similar to the physical model, and the statistical indices point to them as the best alternatives for mimicking RWU predictions of the physical model.
Brown, Halley J; Andreason, Hope; Melling, Amy K; Imel, Zac E; Simon, Gregory E
2015-08-01
Retention, or its opposite, dropout, is a common metric of psychotherapy quality, but using it to assess provider performance can be problematic. Differences among providers in numbers of general dropouts, "good" dropouts (patients report positive treatment experiences and outcome), and "bad" dropouts (patients report negative treatment experiences and outcome) were evaluated. Patient records were paired with satisfaction surveys (N=3,054). Binomial mixed-effects models were used to examine differences among providers by dropout type. Thirty-four percent of treatment episodes resulted in dropout. Of these, 14% were bad dropouts and 27% were good dropouts. Providers accounted for approximately 17% of the variance in general dropout and 10% of the variance in both bad dropout and good dropout. The ranking of providers fluctuated by type of dropout. Provider assessments based on patient retention should offer a way to isolate dropout type, given that nonspecific metrics may lead to biased estimates of performance.
Computational Modelling of Patella Femoral Kinematics During Gait Cycle and Experimental Validation
NASA Astrophysics Data System (ADS)
Maiti, Raman
2016-06-01
The effect of loading and boundary conditions on patellar mechanics is significant due to the complications arising in patella femoral joints during total knee replacements. To understand patellar mechanics with respect to loading and motion, a computational model representing the patella femoral joint was developed and validated against experimental results. The computational model was created in IDEAS NX and simulated in MSC ADAMS/VIEW software. The results, in the form of internal-external rotations and anterior-posterior displacements for a new and an experimentally simulated specimen of the patella femoral joint under standard gait conditions, were compared with experimental measurements performed on the Leeds ProSim knee simulator. Good overall agreement between the computational prediction and the experimental data was obtained for patella femoral kinematics. Good agreement between the model and past studies was also observed when the ligament load was removed and the medial-lateral displacement was constrained. The model is sensitive to ±5% changes in the kinematic, friction, force, and stiffness coefficients and insensitive to the time step.
Eze, Valentine C; Phan, Anh N; Harvey, Adam P
2014-03-01
A more robust kinetic model of base-catalysed transesterification than the conventional reaction scheme has been developed. All the relevant reactions in the base-catalysed transesterification of rapeseed oil (RSO) to fatty acid methyl ester (FAME) were investigated experimentally and validated numerically using a model implemented in MATLAB. It was found that including the saponification of RSO and FAME side reactions and hydroxide-methoxide equilibrium data explained various effects that are not captured by simpler conventional models. Both the experiments and the modelling showed that the "biodiesel reaction" can reach the desired level of conversion (>95%) in less than 2 min. Given the right set of conditions, the transesterification can reach over 95% conversion before the saponification losses become significant. This means that the reaction must be performed in a reactor exhibiting good mixing and good control of residence time, and the reaction mixture must be quenched rapidly as it leaves the reactor. Copyright © 2014 Elsevier Ltd. All rights reserved.
Groenendijk, Piet; Heinen, Marius; Klammler, Gernot; Fank, Johann; Kupfersberger, Hans; Pisinaras, Vassilios; Gemitzi, Alexandra; Peña-Haro, Salvador; García-Prats, Alberto; Pulido-Velazquez, Manuel; Perego, Alessia; Acutis, Marco; Trevisan, Marco
2014-11-15
The agricultural sector faces the challenge of ensuring food security without an excessive burden on the environment. Simulation models provide excellent instruments for researchers to gain more insight into relevant processes and best agricultural practices, and provide planners with tools for decision-making support. The extent to which models are capable of reliable extrapolation and prediction is important for exploring new farming systems or assessing the impacts of future land and climate changes. A performance assessment was conducted by testing six detailed state-of-the-art models for simulating nitrate leaching (ARMOSA, COUPMODEL, DAISY, EPIC, SIMWASER/STOTRASIM, SWAP/ANIMO) against lysimeter data from the Wagna experimental field station in Eastern Austria, where the soil is highly vulnerable to nitrate leaching. Three consecutive phases were distinguished to gain insight into the predictive power of the models: 1) a blind test for 2005-2008, in which only soil hydraulic characteristics, meteorological data, and information about the agricultural management were accessible; 2) a calibration for the same period, in which essential information on field observations was additionally available to the modellers; and 3) a validation for 2009-2011, with the same type of data available as for the blind test. A set of statistical metrics (mean absolute error, root mean squared error, index of agreement, model efficiency, root relative squared error, Pearson's linear correlation coefficient) was applied for testing the results and comparing the models. None of the models performed well on all of the statistical metrics. Models designed for nitrate leaching in high-input farming systems had difficulty accurately predicting leaching in low-input farming systems, which are strongly influenced by the retention of nitrogen in catch crops and nitrogen fixation by legumes. An accurate calibration does not guarantee good predictive power. Nevertheless, all models were able to identify years and crops with high and low leaching rates. Copyright © 2014 Elsevier B.V. All rights reserved.
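The statistical metrics listed are straightforward to compute; the sketch below uses the common textbook definitions (Willmott's index of agreement for d and Nash-Sutcliffe efficiency for model efficiency; the exact variants used in the study are an assumption here).

```python
import numpy as np

def performance_metrics(obs, sim):
    """Goodness-of-fit metrics of the kind used in the model comparison:
    MAE, RMSE, index of agreement (Willmott d), model efficiency
    (Nash-Sutcliffe, NSE), and Pearson's r."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    d = 1.0 - np.sum(err ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    nse = 1.0 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    r = np.corrcoef(obs, sim)[0, 1]
    return {"MAE": mae, "RMSE": rmse, "d": d, "NSE": nse, "r": r}

# Hypothetical leaching observations vs. a perfect and a uniformly biased model
obs = [10.0, 20.0, 30.0, 40.0]
m_perfect = performance_metrics(obs, obs)
m_biased = performance_metrics(obs, [12.0, 22.0, 32.0, 42.0])
```

The biased case illustrates why several metrics are needed at once: correlation stays perfect (r = 1) while MAE and RMSE expose the constant offset.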
Optimal pre-scheduling of problem remappings
NASA Technical Reports Server (NTRS)
Nicol, David M.; Saltz, Joel H.
1987-01-01
A large class of scientific computational problems can be characterized as a sequence of steps where a significant amount of computation occurs at each step, but the work performed at each step is not necessarily identical. Two good examples of this type of computation are: (1) regridding methods, which change the problem discretization during the course of the computation, and (2) methods for solving sparse triangular systems of linear equations. Recent work has investigated a means of mapping such computations onto parallel processors; the method defines a family of static mappings with differing degrees of importance placed on the conflicting goals of good load balance and low communication/synchronization overhead. The performance tradeoffs are controllable by adjusting the parameters of the mapping method. To achieve good performance it may be necessary to dynamically change these parameters at run-time, but such changes can impose additional costs. If the computation's behavior can be determined prior to its execution, it may be possible to construct an optimal parameter schedule using a low-order-polynomial-time dynamic programming algorithm. Since the latter can be expensive, the effect of a linear-time scheduling heuristic is studied on one of the model problems, where it is shown to be effective and nearly optimal.
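The dynamic-programming construction of an optimal parameter schedule can be sketched abstractly: assume a known per-step cost for each parameter setting and a fixed remapping overhead whenever the setting changes between steps (both hypothetical stand-ins for the predicted computation behavior), and minimize the total.

```python
import numpy as np

def optimal_schedule(step_cost, switch_cost):
    """Dynamic program over per-step parameter settings.

    step_cost[t][k]: predicted cost of running step t with setting k.
    switch_cost: overhead of changing the setting between consecutive steps.
    Returns (minimal total cost, chosen setting per step).
    """
    step_cost = np.asarray(step_cost, float)
    n, m = step_cost.shape
    cost = step_cost[0].copy()          # cost[k]: best total ending step 0 in k
    back = np.zeros((n, m), dtype=int)  # back-pointers to recover the schedule
    for t in range(1, n):
        # trans[j, k]: best cost through step t-1 in j, plus a switch to k if j != k
        trans = cost[:, None] + switch_cost * (1.0 - np.eye(m))
        back[t] = trans.argmin(axis=0)
        cost = step_cost[t] + trans.min(axis=0)
    k = int(cost.argmin())
    schedule = [k]
    for t in range(n - 1, 0, -1):
        k = int(back[t, k])
        schedule.append(k)
    return float(cost.min()), schedule[::-1]

# Hypothetical 4-step computation with two mapping-parameter settings:
# setting 0 suits the first two steps, setting 1 the last two; one remap costs 5.
total, sched = optimal_schedule([[1, 4], [1, 4], [4, 1], [4, 1]], switch_cost=5.0)
```

The table runs in O(n·m²) time, which is the low-order-polynomial cost the abstract refers to; the linear-time heuristic studied in the paper trades this optimality for speed.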
Associations Between Fixed-Term Employment and Health and Behaviors: What are the Mechanisms?
Żołnierczyk-Zreda, Dorota; Bedyńska, Sylwia
2018-03-01
To analyze the associations between fixed-term employment and health (work ability and mental health) and behaviors (engagement and performance), psychological contract fulfilment (PCF) and breach (PCB) are investigated as potential mediators of these associations. Seven hundred workers employed on fixed-term contracts from a broad range of organizations participated in the study. Structural equation modeling was performed to analyze the data. Mediation analyses revealed that good physical and mental health and productivity are more likely to be achieved by those workers who perform non-manual work and (to some extent) accept their contracts, because they experience high levels of PCF and low levels of PCB. Apart from the absence of physical workload, psychological contract fulfilment was revealed as yet another significant mediator between a higher socioeconomic position and the good health and productivity of fixed-term workers.
Automatic Generation of Directive-Based Parallel Programs for Shared Memory Parallel Systems
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Yan, Jerry; Frumkin, Michael
2000-01-01
The shared-memory programming model is a very effective way to achieve parallelism on shared memory parallel computers. As great progress has been made in hardware and software technologies, the performance of parallel programs with compiler directives has improved substantially. The introduction of OpenMP directives, the industry standard for shared-memory programming, has minimized the issue of portability. Owing to its ease of programming and its good performance, the technique has become very popular. In this study, we have extended CAPTools, a computer-aided parallelization toolkit, to automatically generate directive-based OpenMP parallel programs. We outline techniques used in the implementation of the tool and present test results on the NAS Parallel Benchmarks and ARC3D, a CFD application. This work demonstrates the great potential of using computer-aided tools to quickly port parallel programs and also achieve good performance.
How to assess good candidate molecules for self-activated optical power limiting
NASA Astrophysics Data System (ADS)
Lundén, Hampus; Glimsdal, Eirik; Lindgren, Mikael; Lopes, Cesar
2018-03-01
Reverse saturable absorbers have shown great potential to attenuate laser radiation. Good candidate molecules and various particles have successfully been incorporated into different glass matrices, enabling the creation of self-activated filters against damaging laser radiation. Although the performance of such filters has been impressive, work is still ongoing to improve the performance in a wider range of wavelengths and pulse widths. The purpose of this tutorial is, from an optical engineering perspective, to give an understanding of the strengths and weaknesses of this class of smart materials, how relevant photophysical parameters are measured and influence system performance and comment on the pitfalls in experimental evaluation of materials. A numerical population model in combination with simple physical formulas is used to demonstrate system behavior from a performance standpoint. Geometrical reasoning shows the advantage of reverse saturable absorption over nonlinear scattering due to a fraction of scattered light being recollected by imaging system optics. The numerical population model illustrates the importance of the optical power limiting performance during the leading edge of a nanosecond pulse, which is most strongly influenced by changes in the two-photon absorption cross section and the triplet linear absorption cross section for a modeled Pt-acetylide. This tutorial not only targets optical engineers evaluating reverse saturable absorbing materials but also aims to assist researchers with a chemistry background working on optical power limiting materials. We also present photophysical data for a series of coumarins that can be useful for the determination of quantum yields and two-photon cross sections and show examples of characterization of molecules with excited triplet states.
Modeling study of air pollution due to the manufacture of export goods in China's Pearl River Delta.
Streets, David G; Yu, Carolyne; Bergin, Michael H; Wang, Xuemei; Carmichael, Gregory R
2006-04-01
The Pearl River Delta is a major manufacturing region on the south coast of China that produces more than $100 billion of goods annually for export to North America, Europe, and other parts of Asia. Considerable air pollution is caused by the manufacturing industries themselves and by the power plants, trucks, and ships that support them. We estimate that 10-40% of emissions of primary SO2, NOx, RSP, and VOC in the region are caused by export-related activities. Using the STEM-2K1 atmospheric transport model, we estimate that these emissions contribute 5-30% of the ambient concentrations of SO2, NOx, NOz, and VOC in the region. One reason the exported goods are cheap, and therefore attractive to consumers in developed countries, is that emission controls are lacking or of low performance. We estimate that state-of-the-art controls could be installed at an annualized cost of $0.3-3 billion, representing 0.3-3% of the value of the goods produced. We conclude that mitigation measures could be adopted without seriously affecting the prices of exported goods and would achieve considerable human health and other benefits in the form of reduced air pollutant concentrations in densely populated urban areas.
Optical performance of multifocal soft contact lenses via a single-pass method.
Bakaraju, Ravi C; Ehrmann, Klaus; Falk, Darrin; Ho, Arthur; Papas, Eric
2012-08-01
A physical model eye capable of carrying soft contact lenses (CLs) was used as a platform to evaluate optical performance of several commercial multifocals (MFCLs) with high- and low-add powers and a single-vision control. Optical performance was evaluated at three pupil sizes, six target vergences, and five CL-correcting positions using a spatially filtered monochromatic (632.8 nm) light source. The various target vergences were achieved by using negative trial lenses. A photosensor in the retinal plane recorded the image point-spread that enabled the computation of visual Strehl ratios. The centration of CLs was monitored by an additional integrated en face camera. Hydration of the correcting lens was maintained using a humidity chamber and repeated instillations of rewetting saline drops. All the MFCLs reduced performance for distance but considerably improved performance along the range of distance to near target vergences, relative to the single-vision CL. Performance was dependent on add power, design, pupil, and centration of the correcting CLs. Proclear (D) design produced good performance for intermediate vision, whereas Proclear (N) design performed well at near vision (p < 0.05). AirOptix design exhibited good performance for distance and intermediate vision. PureVision design showed improved performance across the test vergences, but only for pupils ≥4 mm in diameter. Performance of Acuvue bifocal was comparable with other MFCLs, but only for pupils >4 mm in diameter. Acuvue Oasys bifocal produced performance comparable with single-vision CL for most vergences. Direct measurement of single-pass images at the retinal plane of a physical model eye used in conjunction with various MFCLs is demonstrated. This method may have utility in evaluating the relative effectiveness of commercial and prototype designs.
Šiljić Tomić, Aleksandra N; Antanasijević, Davor Z; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A; Pocajt, Viktor V
2016-05-01
This paper describes the application of artificial neural network models for the prediction of biological oxygen demand (BOD) levels in the Danube River. Eighteen regularly monitored water quality parameters at 17 stations on the river stretch passing through Serbia were used as input variables. The optimization of the model was performed in three consecutive steps: firstly, the spatial influence of a monitoring station was examined; secondly, the monitoring period necessary to reach satisfactory performance was determined; and lastly, correlation analysis was applied to evaluate the relationship among water quality parameters. Root-mean-square error (RMSE) was used to evaluate model performance in the first two steps, whereas in the last step, multiple statistical indicators of performance were utilized. As a result, two optimized models were developed, a general regression neural network model (labeled GRNN-1) that covers the monitoring stations from the Danube inflow to the city of Novi Sad and a GRNN model (labeled GRNN-2) that covers the stations from the city of Novi Sad to the border with Romania. Both models demonstrated good agreement between the predicted and actually observed BOD values.
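A general regression neural network of the kind labeled GRNN-1/GRNN-2 above is compact enough to sketch: it is equivalent to Nadaraya-Watson kernel regression, where each prediction is a Gaussian-weighted average of training targets. The smoothing width and the toy data below are illustrative, not values from the study.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General regression neural network (equivalently, Nadaraya-Watson
    kernel regression): each prediction is a Gaussian-weighted average
    of the training targets."""
    X_train = np.asarray(X_train, float)
    y_train = np.asarray(y_train, float)
    preds = []
    for x in np.atleast_2d(np.asarray(X_query, float)):
        d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))          # kernel weights
        preds.append(np.dot(w, y_train) / np.sum(w))  # weighted average
    return np.array(preds)
```

On a linear toy relation, a query placed symmetrically between training points recovers the linear value exactly; the single hyperparameter `sigma` plays the role the spread parameter plays in a GRNN.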
Vehicle active steering control research based on two-DOF robust internal model control
NASA Astrophysics Data System (ADS)
Wu, Jian; Liu, Yahui; Wang, Fengbo; Bao, Chunjiang; Sun, Qun; Zhao, Youqun
2016-07-01
Because of a vehicle's external disturbances and model uncertainties, robust control algorithms have gained popularity in vehicle stability control. Robust control usually sacrifices performance in order to guarantee robustness, so an improved robust internal model control (IMC) algorithm blending model tracking and internal model control is put forward for the active steering system, in order to achieve high yaw-rate tracking performance with a guaranteed degree of robustness. The proposed algorithm inherits the good model-tracking ability of IMC and guarantees robustness to model uncertainties. In order to separate the model-tracking design from the robustness design, the improved two-degree-of-freedom (2-DOF) robust internal model controller structure is derived from the standard Youla parameterization. Simulations of double-lane-change maneuvers and of crosswind disturbances are conducted to evaluate the robust control algorithm, on the basis of a nonlinear vehicle simulation model with a magic-formula tyre model. Results show that the established 2-DOF robust IMC method has better model-tracking ability and a guaranteed level of robustness and robust performance, enhancing vehicle stability and handling regardless of variations in the vehicle model parameters and external crosswind interference. The contradiction between performance and robustness in active steering control is thus resolved, and higher control performance with a guaranteed degree of robustness to model uncertainties is obtained.
Determining and Communicating the Value of the Special Library.
ERIC Educational Resources Information Center
Matthews, Joseph R.
2003-01-01
Discusses performance measures for libraries that will indicate the goodness of the library and its services. Highlights include a general evaluation model that includes input, process, output, and outcome measures; balanced scorecard approach that includes financial perspectives; focusing on strategy; strategies for change; user criteria for…
Practical Formal Verification of Diagnosability of Large Models via Symbolic Model Checking
NASA Technical Reports Server (NTRS)
Cavada, Roberto; Pecheur, Charles
2003-01-01
This document reports on the activities carried out during a four-week visit of Roberto Cavada at the NASA Ames Research Center. The main goal was to test the practical applicability of the proposed framework, in which a diagnosability problem is reduced to a Symbolic Model Checking problem. Section 2 contains a brief explanation of the major techniques currently used in Symbolic Model Checking, and how these techniques can be tuned to obtain good performance when using Model Checking tools. Diagnosability analysis is performed on large, structured models of real plants. Section 3 describes how these plants are modeled, and how the models can be simplified to improve the performance of Symbolic Model Checkers. Section 4 reports scalability results. Three test cases are briefly presented, and several parameters and techniques were applied to those test cases in order to produce comparison tables. Furthermore, a comparison between several Model Checkers is reported. Section 5 summarizes the application of diagnosability verification to a real application. Several properties have been tested, and the results are highlighted. Finally, Section 6 draws some conclusions and outlines future lines of research.
Limakrisna, Nandan; Yoserizal, Syahril
2016-01-01
The Indonesian banking industry has experienced ups and downs, as can be seen after Pakto '88, when the number of new banks grew rapidly, whereas after the 1997-1998 financial crisis many banks were liquidated due to deteriorating financial conditions and violations of prudential principles by bank management. The purpose of this research is to determine and analyze the effects of good corporate governance, information technology, and HR competencies on competitive advantage and its implications for marketing performance. The method used in this research was a descriptive and explanatory survey with a sample size of 320 respondents, and the data were analyzed with structural equation modeling. The results show that good corporate governance, information technology, and HR competencies have a significant effect on competitive advantage and, through it, on marketing performance. Considered individually, however, competitive advantage has the dominant effect on marketing performance.
Novel Observer Scheme of Fuzzy-MRAS Sensorless Speed Control of Induction Motor Drive
NASA Astrophysics Data System (ADS)
Chekroun, S.; Zerikat, M.; Mechernene, A.; Benharir, N.
2017-01-01
This paper presents a novel Fuzzy-MRAS approach for robust, accurate tracking of an induction motor drive operating in a high-performance drive environment. Among the different methods for sensorless control of induction motor drives, the model reference adaptive system (MRAS) attracts a lot of attention due to its good performance. The analysis of the sensorless vector control system using MRAS is presented, and a speed observer using a new fuzzy self-tuning adaptive IP controller, accounting for resistance parameter variations, is proposed. Fuzzy logic is reminiscent of human thinking processes and natural language, enabling decisions to be made on the basis of vague information. The present approach helps to achieve a good dynamic response, disturbance rejection, and low sensitivity to plant parameter variations of the induction motor. In order to verify the performance of the proposed observer and control algorithms and to test the behaviour of the controlled system, numerical simulation is carried out. Simulation results are presented and discussed to show the validity and performance of the proposed observer.
NASA Astrophysics Data System (ADS)
Yan, Hui; Wang, K. G.; Jones, Jim E.
2016-06-01
A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the ultrahigh-volume-fraction regime is found. The parallel implementation is capable of harnessing the greater computing power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512³ grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability that improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual run times from numerical tests.
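The abstract does not specify the form of its runtime-prediction model; one common form such models take is Amdahl's law, in which a fixed serial fraction bounds the achievable speed-up. The sketch below fits that serial fraction to measured runtimes; the synthetic timings and function names are assumptions for illustration, not data from the paper.

```python
import numpy as np

def amdahl_runtime(p, t1, serial_frac):
    """Predicted runtime on p processors: serial part plus divided parallel part."""
    return t1 * (serial_frac + (1.0 - serial_frac) / p)

def fit_serial_fraction(procs, runtimes):
    """Closed-form least-squares fit of the serial fraction s in
    T(p) = T1*(s + (1-s)/p), taking T1 as the single-processor runtime.
    Rearranged: T(p)/T1 - 1/p = s*(1 - 1/p), linear in s."""
    procs = np.asarray(procs, float)
    runtimes = np.asarray(runtimes, float)
    t1 = runtimes[0]
    x = 1.0 - 1.0 / procs
    y = runtimes / t1 - 1.0 / procs
    return float(np.dot(x, y) / np.dot(x, x))

# Synthetic timings generated from a known 5% serial fraction
procs = np.array([1, 2, 4, 8, 16])
runtimes = amdahl_runtime(procs, 100.0, 0.05)
print(fit_serial_fraction(procs, runtimes))  # recovers 0.05
```

On real timing data the fit would not be exact, and the recovered fraction summarizes how scalability degrades (or, as the paper reports, improves with problem size when the parallel work grows faster than the serial work).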
Multi-Objective Control Optimization for Greenhouse Environment Using Evolutionary Algorithms
Hu, Haigen; Xu, Lihong; Wei, Ruihua; Zhu, Bingkun
2011-01-01
This paper investigates the tuning of Proportional-Integral-Derivative (PID) controller parameters for a greenhouse climate control system using an Evolutionary Algorithm (EA) based on multiple performance measures, such as good static and dynamic performance specifications and smooth control action. A model of the nonlinear thermodynamic laws between the numerous system variables affecting the greenhouse climate is formulated. The proposed tuning scheme is tested for greenhouse climate control by minimizing the integrated time square error (ITSE) and the control increment (or rate) in a simulation experiment. The results show that by tuning the gain parameters the controllers can achieve good control performance in step responses, such as small overshoot, fast settling time, short rise time, and small steady-state error. Moreover, the scheme can be applied to tuning systems with different properties, such as strong interactions among variables, nonlinearities, and conflicting performance criteria. The results indicate that multi-objective optimization algorithms offer an effective and promising tuning method for complex greenhouse production. PMID:22163927
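The tuning loop described above can be sketched in miniature: an elitist evolutionary search over (Kp, Ki, Kd) that minimizes the ITSE of a step response. The first-order plant, the starting gains, and the mutation settings below are illustrative stand-ins for the paper's greenhouse thermodynamic model and EA configuration.

```python
import numpy as np

def itse_cost(gains, dt=0.05, steps=200):
    """Integrated time-squared error (ITSE) of a PID loop around a toy
    first-order plant dy/dt = -y + u tracking a unit step (forward Euler)."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for k in range(steps):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u)                    # plant update
        if not np.isfinite(y) or abs(y) > 1e6:
            return 1e9                        # penalize unstable gain sets
        cost += (k * dt) * err ** 2 * dt      # time-weighted squared error
        prev_err = err
    return cost

def evolve_pid(pop_size=20, gens=30, seed=0):
    """Elitist evolutionary search: mutate the best (Kp, Ki, Kd) each
    generation and keep the lowest-cost candidate (best never discarded)."""
    rng = np.random.default_rng(seed)
    best = np.array([1.0, 0.5, 0.1])  # assumed-stable initial guess
    for _ in range(gens):
        pop = np.abs(best + rng.normal(0.0, 0.3, size=(pop_size, 3)))
        cands = np.vstack([pop, best])
        costs = [itse_cost(g) for g in cands]
        best = cands[int(np.argmin(costs))]
    return best, itse_cost(best)
```

Because the incumbent is always re-entered in each generation, the achieved ITSE is non-increasing, mirroring the monotone improvement an elitist EA guarantees.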
NASA Technical Reports Server (NTRS)
Shumka, A.; Sollock, S. G.
1981-01-01
This paper represents the first comprehensive survey of the Mount Laguna Photovoltaic Installation. The novel techniques used for performing the field tests have been effective in locating and characterizing defective modules. A comparative analysis on the two types of modules used in the array indicates that they have significantly different failure rates, different distributions in degradational space and very different failure modes. A life cycle model is presented to explain a multimodal distribution observed for one module type. A statistical model is constructed and it is shown to be in good agreement with the field data.
Marzilli Ericson, Keith M.; White, John Myles; Laibson, David; Cohen, Jonathan D.
2015-01-01
Heuristic models have been proposed for many domains of choice. We compare heuristic models of intertemporal choice, which can account for many of the known intertemporal choice anomalies, to discounting models. We conduct an out-of-sample, cross-validated comparison of intertemporal choice models. Heuristic models outperform traditional utility discounting models, including models of exponential and hyperbolic discounting. The best performing models predict choices by using a weighted average of absolute differences and relative (percentage) differences of the attributes of the goods in a choice set. We conclude that heuristic models explain time-money tradeoff choices in experiments better than utility discounting models. PMID:25911124
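A minimal sketch of the winning model class, under stated assumptions: a logistic choice rule whose score mixes the absolute and the relative (percentage) money differences against the delay difference. The functional form is a plausible reading of the description above, and the weights are hypothetical, not the paper's estimates.

```python
import math

def p_choose_later(x_soon, t_soon, x_late, t_late,
                   w_abs=0.01, w_rel=2.0, w_time=0.5, intercept=0.0):
    """Illustrative intertemporal-choice heuristic: combine the absolute and
    relative (percentage) money differences against the delay difference,
    then map the score to a choice probability with a logistic link."""
    abs_diff = x_late - x_soon                 # extra money from waiting
    rel_diff = (x_late - x_soon) / x_soon      # percentage improvement
    delay = t_late - t_soon                    # extra waiting time
    score = intercept + w_abs * abs_diff + w_rel * rel_diff - w_time * delay
    return 1.0 / (1.0 + math.exp(-score))
```

Holding the delay fixed, a larger percentage gain raises the probability of choosing the larger-later option, which is the qualitative pattern the attribute-comparison heuristics capture and pure discounting models miss.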
Evaluating the double Poisson generalized linear model.
Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique
2013-10-01
The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
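The normalizing-constant hurdle can be made concrete. Following Efron's (1986) double Poisson density, the sketch below computes the constant "exactly" by truncated summation and compares it with Efron's classic closed-form approximation; the paper's own higher-accuracy approximation method is not reproduced here.

```python
import math

def dp_unnorm_logpmf(y, mu, phi):
    """Log of the unnormalized double-Poisson pmf term (Efron 1986):
    phi^(1/2) * exp(-phi*mu) * (exp(-y) y^y / y!) * (e*mu/y)^(phi*y)."""
    base = 0.5 * math.log(phi) - phi * mu
    if y == 0:
        return base  # the y-dependent factors are all 1 at y = 0
    return (base - y + y * math.log(y) - math.lgamma(y + 1)
            + phi * y * (1.0 + math.log(mu) - math.log(y)))

def dp_norm_const(mu, phi, y_max=1000):
    """Normalizing constant by direct (truncated) summation over counts."""
    s = sum(math.exp(dp_unnorm_logpmf(y, mu, phi)) for y in range(y_max + 1))
    return 1.0 / s

def dp_norm_const_efron(mu, phi):
    """Efron's closed-form approximation of the normalizing constant."""
    return 1.0 / (1.0 + (1.0 - phi) / (12.0 * mu * phi)
                  * (1.0 + 1.0 / (mu * phi)))
```

At phi = 1 the double Poisson collapses to the ordinary Poisson and the constant is exactly 1, which is a convenient sanity check on both routines.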
Statistical alignment: computational properties, homology testing and goodness-of-fit.
Hein, J; Wiuf, C; Knudsen, B; Møller, M B; Wibling, G
2000-09-08
The model of insertions and deletions in biological sequences, first formulated by Thorne, Kishino, and Felsenstein in 1991 (the TKF91 model), provides a basis for performing alignment within a statistical framework. Here we investigate this model. Firstly, we show how to accelerate the statistical alignment algorithms several orders of magnitude. The main innovations are to confine likelihood calculations to a band close to the similarity based alignment, to get good initial guesses of the evolutionary parameters and to apply an efficient numerical optimisation algorithm for finding the maximum likelihood estimate. In addition, the recursions originally presented by Thorne, Kishino and Felsenstein can be simplified. Two proteins, about 1500 amino acids long, can be analysed with this method in less than five seconds on a fast desktop computer, which makes this method practical for actual data analysis. Secondly, we propose a new homology test based on this model, where homology means that an ancestor to a sequence pair can be found finitely far back in time. This test has statistical advantages relative to the traditional shuffle test for proteins. Finally, we describe a goodness-of-fit test, that allows testing the proposed insertion-deletion (indel) process inherent to this model and find that real sequences (here globins) probably experience indels longer than one, contrary to what is assumed by the model. Copyright 2000 Academic Press.
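The banding idea, confining dynamic programming to a corridor around the similarity-based alignment, is illustrated below on plain Levenshtein distance rather than on the TKF91 likelihood recursions, which are considerably more involved; the principle of skipping cells far from the diagonal is the same.

```python
def banded_edit_distance(a, b, band=3):
    """Levenshtein distance computed only inside the diagonal band
    |i - j| <= band, illustrating how confining the dynamic programming
    to a corridor cuts the cost from O(n*m) to O(n*band)."""
    n, m = len(a), len(b)
    if abs(n - m) > band:
        return None  # the optimal path cannot stay inside the band
    prev = {j: j for j in range(0, min(m, band) + 1)}  # DP row i = 0
    for i in range(1, n + 1):
        curr = {}
        for j in range(max(0, i - band), min(m, i + band) + 1):
            cands = []
            if j - 1 in curr:
                cands.append(curr[j - 1] + 1)                       # insertion
            if j in prev:
                cands.append(prev[j] + 1)                           # deletion
            if j > 0 and j - 1 in prev:
                cands.append(prev[j - 1] + (a[i - 1] != b[j - 1]))  # sub/match
            curr[j] = min(cands)
        prev = curr
    return prev.get(m)
```

As long as the optimal path stays inside the band, the banded result equals the full-matrix answer; in the statistical-alignment setting the analogous restriction keeps the likelihood computation tractable for 1500-residue proteins.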
NASA Astrophysics Data System (ADS)
Heesel, E.; Weigel, T.; Lochmatter, P.; Rugi Grond, E.
2017-11-01
For the BepiColombo mission, the extreme thermal environment around Mercury requires good heat shields for the instruments. The BepiColombo Laser altimeter (BELA) Receiver will be equipped with a specular reflective baffle in order to limit the solar power impact. The design uses a Stavroudis geometry with alternating elliptical and hyperbolic vanes to reflect radiation at angles >38° back into space. The thermal loads on the baffle lead to deformations, and the resulting changes in the optical performance can be modeled by ray-tracing. Conventional interfaces, such as Zernike surface fitting, fail to provide a proper import of the mechanical distortions into optical models. We have studied alternative models such as free form surface representations and compared them to a simple modeling approach with straight segments. The performance merit is presented in terms of the power rejection ratio and the absence of specular stray-light.
Study on dynamic performance of SOFC
NASA Astrophysics Data System (ADS)
Zhan, Haiyang; Liang, Qianchao; Wen, Qiang; Zhu, Runkai
2017-05-01
In order to solve the problem of real-time matching of load and fuel cell power, it is urgent to study the dynamic response of the SOFC under load steps. A mathematical model of the SOFC is constructed and its performance is simulated. The model considers influencing factors such as the polarization effect and ohmic loss, and also accounts for the diffusion effect, thermal effect, energy exchange, mass conservation, and momentum conservation. A one-dimensional dynamic mathematical model of the SOFC is constructed using a distributed lumped-parameter method. The simulation results show that the I-V characteristic curves are in good agreement with the experimental data, verifying the accuracy of the model. The voltage response, power response, and efficiency curves are obtained in this way. This lays a solid foundation for research on the dynamic performance and optimal control of high-power fuel cell stacks in power generation systems.
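The loss terms named above (polarization, ohmic, diffusion/concentration) combine into the familiar steady-state polarization curve; a zero-dimensional sketch is given below. This is not the paper's one-dimensional dynamic model, and every parameter value is illustrative.

```python
import math

def cell_voltage(i, e0=1.0, b_act=0.06, i0=0.01, r_ohm=0.15,
                 b_conc=0.05, i_limit=2.0):
    """Steady-state polarization sketch for a single SOFC: open-circuit
    voltage minus activation (Tafel), ohmic, and concentration losses,
    for current density i in A/cm^2. All parameter values are illustrative."""
    if not 0.0 <= i < i_limit:
        raise ValueError("current density outside modeled range")
    activation = b_act * math.log(i / i0) if i > i0 else 0.0
    ohmic = r_ohm * i
    concentration = -b_conc * math.log(1.0 - i / i_limit)
    return e0 - activation - ohmic - concentration
```

The resulting I-V curve falls monotonically with current density, with the concentration term dominating as the limiting current is approached; a dynamic model adds time constants on top of this static map.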
A Method to Test Model Calibration Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, Ron; Polly, Ben; Neymark, Joel
This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: (1) accuracy of the post-retrofit energy savings prediction, (2) closure on the 'true' input parameter values, and (3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
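The surrogate-data idea can be sketched with a deliberately simple "truth" model: monthly energy as baseload plus a heating slope times degree-days. Synthetic bills are generated, a toy calibration (least squares) is run, and the three figures of merit are computed. All numbers and names are illustrative, not drawn from BPI-2400, ASHRAE-14, or the paper.

```python
import numpy as np

# "Truth" model: monthly energy = baseload + heating_coeff * HDD
TRUE_BASE, TRUE_COEFF = 500.0, 2.0
hdd = np.array([600., 500., 300., 100., 20., 0., 0., 10., 80., 250., 450., 550.])
bills = TRUE_BASE + TRUE_COEFF * hdd          # surrogate utility-bill data

# "Calibration technique" under test: ordinary least squares on the bills
A = np.column_stack([np.ones_like(hdd), hdd])
cal_base, cal_coeff = np.linalg.lstsq(A, bills, rcond=None)[0]

# Figure of merit 1: post-retrofit savings (retrofit halves heating_coeff)
true_savings = 0.5 * TRUE_COEFF * hdd.sum()
pred_savings = 0.5 * cal_coeff * hdd.sum()

# Figure of merit 2: closure on the 'true' input parameter values
param_err = max(abs(cal_base - TRUE_BASE), abs(cal_coeff - TRUE_COEFF))

# Figure of merit 3: goodness of fit to the bills (CV-RMSE-style statistic)
resid = bills - (cal_base + cal_coeff * hdd)
cvrmse = float(np.sqrt(np.mean(resid ** 2)) / bills.mean())
```

With noiseless surrogate data all three figures of merit are essentially perfect; the method's interest lies in degrading the calibration technique or the data (noise, wrong model form) and watching which figure of merit fails first, exactly the decoupling a real building cannot offer.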
Cheng, F T; Yang, H C; Luo, T L; Feng, C; Jeng, M
2000-01-01
Equipment Managers (EMs) play a major role in a Manufacturing Execution System (MES). They serve as the communication bridge between the components of an MES and the equipment. The purpose of this paper is to propose a novel methodology for developing analytical and simulation models of the EM such that the validity and performance of the EM can be evaluated. Domain knowledge and requirements are collected from a real semiconductor packaging factory. By using IDEF0 and state diagrams, a static functional model and a dynamic state model of the EM are built. Next, these two models are translated into a Petri net model, which allows qualitative and quantitative analyses of the system. The EM net model is then expanded into the MES net model, so that the performance of an EM in the MES environment can be evaluated. These evaluation results are good references for design and decision making.
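The paper's EM net model is not reproduced here, but a minimal place/transition net shows the kind of object a state diagram is translated into: places hold tokens, and a transition fires when all its input places are marked. The two-state "equipment" cycle below is a hypothetical example, not taken from the paper.

```python
class PetriNet:
    """Minimal place/transition net with unit arc weights: a transition is
    enabled when every input place holds at least one token."""

    def __init__(self, marking):
        self.marking = dict(marking)  # place name -> token count
        self.transitions = {}         # transition name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (list(inputs), list(outputs))

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Toy equipment cycle: idle -> processing -> idle
net = PetriNet({"idle": 1, "processing": 0})
net.add_transition("start_job", ["idle"], ["processing"])
net.add_transition("finish_job", ["processing"], ["idle"])
net.fire("start_job")
```

Qualitative analysis (reachability, deadlock-freedom) and quantitative analysis (throughput under timed transitions) both operate on exactly this marking-and-firing structure.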
Polishing, coating and integration of SiC mirrors for space telescopes
NASA Astrophysics Data System (ADS)
Rodolfo, Jacques
2017-11-01
In recent years, the technology of SiC mirrors has taken an increasingly significant role in the field of space telescopes. Sagem is involved in the JWST program to manufacture and test the optical components of the NIRSpec instrument. The instrument is made of 3 TMAs and 4 plane mirrors made of SiC. Sagem is in charge of the CVD cladding, the polishing, and the coating of the mirrors, and of the integration and testing of the TMAs. The qualification of the process has been performed through the manufacturing and testing of the qualification model of the FOR TMA. This TMA has shown very good performance both at ambient temperature and during the cryo test. The polishing process has been improved for the manufacturing of the flight model. This improvement has been driven by the BRDF performance of the mirror. This parameter has been analysed in depth, and a model has been built to predict the performance of the mirrors. The existing Dittman model has been analysed and found to be optimistic.
NASA Technical Reports Server (NTRS)
Wang, Xiao-Yen; Fabanich, William A.; Schmitz, Paul C.
2012-01-01
This paper presents a three-dimensional Advanced Stirling Radioisotope Generator (ASRG) thermal power model that was built using the Thermal Desktop SINDA/FLUINT thermal analyzer. The model was correlated with ASRG engineering unit (EU) test data and ASRG flight unit predictions from Lockheed Martin's Ideas TMG thermal model. ASRG performance under (1) ASC hot-end temperatures, (2) ambient temperatures, and (3) years of mission for the general purpose heat source fuel decay was predicted using this model for the flight unit. The results were compared with those reported by Lockheed Martin and showed good agreement. In addition, the model was used to study the performance of the ASRG flight unit for operations on the ground and on the surface of Titan, and the concept of using gold film to reduce thermal loss through insulation was investigated.
A Model of Self-Monitoring Blood Glucose Measurement Error.
Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio
2017-07-01
A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing insulin therapies in silico. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error and zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology makes it possible to derive realistic models of the SMBG error PDF. These models can be used in several investigations of present interest to the scientific community, for example, to perform in silico clinical trials comparing SMBG-based with nonadjunctive CGM-based insulin treatments.
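A minimal sketch of the zone idea follows, with an illustrative threshold (the actual zone boundary is estimated from the data in the paper): below the threshold the absolute error is treated as having constant SD, above it the relative error is. The skew-normal fitting step is omitted here.

```python
import numpy as np

def zone_error_model(ref_bg, meter_bg, threshold=75.0):
    """Split SMBG errors into two zones per the constant-SD idea:
    zone 1 (reference below `threshold` mg/dL): absolute error, constant SD;
    zone 2 (above): relative error, constant SD.
    The 75 mg/dL threshold is illustrative, not the paper's estimate."""
    ref = np.asarray(ref_bg, float)
    err = np.asarray(meter_bg, float) - ref
    zone1 = ref < threshold
    sd_abs = float(np.std(err[zone1], ddof=1))               # mg/dL
    sd_rel = float(np.std(err[~zone1] / ref[~zone1], ddof=1))  # fraction
    return sd_abs, sd_rel
```

Applied to data simulated to obey the model, the two SDs are recovered, which is the kind of check that precedes fitting the zone-wise skew-normal PDFs by maximum likelihood.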
2012-12-01
correcting forces is the free market itself. Unfortunately, macroeconomic principles do not always prove useful at the microeconomic level...model for this discussion are not relevant, but the underlying principle of the model is—forces can be self-correcting. Any imbalance in one...Performance-based acquisition appears to be one of those principles that looks good on paper and has proved quite successful in private industry but has had
NASA Astrophysics Data System (ADS)
Pan, S.; Liu, L.; Xu, Y. P.
2017-12-01
Abstract: Physically based distributed hydrological models involve a large number of parameters representing the spatial heterogeneity of the watershed and the various processes in the hydrologic cycle. Because the Distributed Hydrology Soil Vegetation Model (DHSVM) lacks a calibration module, this study developed a multi-objective calibration module using the Epsilon-Dominance Non-Dominated Sorting Genetic Algorithm II (ɛ-NSGAII), based on parallel computing on a Linux cluster (ɛP-DHSVM). Two key hydrologic elements, runoff and evapotranspiration, are used as objectives in the multi-objective calibration of the model. MODIS evapotranspiration obtained with SEBAL is adopted to fill the gap left by the lack of evapotranspiration observations. The results show that good runoff simulation in single-objective calibration does not ensure good simulation of other key hydrologic elements. The self-developed ɛP-DHSVM model makes multi-objective calibration more efficient and effective, and its running speed can be increased by a factor of 20-30. In addition, runoff and evapotranspiration can be simulated very well simultaneously by ɛP-DHSVM, with good efficiency coefficients (NS of 0.74 for runoff and 0.79 for evapotranspiration, and PBIAS of -10.5% and -8.6% for runoff and evapotranspiration, respectively).
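The two efficiency statistics quoted above can be computed as below. The PBIAS sign convention used here (negative when the model underestimates) matches the negative values reported, though conventions vary between authors.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect match; 0 means the model
    is no better than predicting the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; with this convention, negative = model underestimates."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)
```

In a multi-objective calibration these two statistics (one per objective variable) span the trade-off front that ɛ-NSGAII searches.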
NASA Astrophysics Data System (ADS)
Langel, Christopher Michael
A computational investigation has been performed to better understand the impact of surface roughness on the flow over a contaminated surface. This thesis highlights the implementation and development of the roughness amplification model in the flow solver OVERFLOW-2. The model, originally proposed by Dassler, Kozulovic, and Fiala, introduces an additional scalar roughness-amplification field. This quantity is explicitly set at rough wall boundaries using surface roughness parameters and local flow quantities. The additional transport equation allows non-local effects of surface roughness to be accounted for downstream of rough sections. The roughness amplification variable is coupled with the Langtry-Menter model and used to modify the criteria for transition. Results from flat plate test cases show good agreement with experimental transition behavior for flows over varying sand-grain roughness heights. Additional validation studies were performed on a NACA 0012 airfoil with leading edge roughness. The computationally predicted boundary layer development demonstrates good agreement with experimental results. New tests using varying roughness configurations are being carried out at the Texas A&M Oran W. Nicks Low Speed Wind Tunnel to provide further calibration of the roughness amplification method. An overview and preliminary results of this concurrent experimental investigation are provided.
NASA Astrophysics Data System (ADS)
Ángel Prósper Fernández, Miguel; Casal, Carlos Otero; Canoura Fernández, Felipe; Miguez-Macho, Gonzalo
2017-04-01
Regional meteorological models are becoming a generalized tool for forecasting wind resource, due to their capacity to simulate the local flow dynamics impacting wind farm production. This study focuses on the production forecast and validation of a real onshore wind farm using high horizontal- and vertical-resolution WRF (Weather Research and Forecasting) model simulations. The wind farm is located in Galicia, in the northwest of Spain, in a complex-terrain region with high wind resource. Utilizing the Fitch scheme, specific to wind farms, a period of one year is simulated with a daily operational forecasting set-up. Power and wind predictions are obtained and compared with real data provided by the management company. Results show that WRF is able to yield good operational wind power predictions for this kind of wind farm, owing to a good representation of the planetary boundary layer behaviour of the region and the good performance of the Fitch scheme under these conditions.
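The conversion from forecast wind speed to power rests on an idealized power curve of the following shape. This is not the Fitch scheme itself (which parameterizes the turbines' drag and turbulence feedback inside WRF), only the wind-to-power step; all turbine parameters below are illustrative.

```python
import math

def turbine_power(v, rho=1.225, rotor_d=100.0, cp=0.45,
                  v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0, p_rated=3.0e6):
    """Idealized turbine power curve: P = 0.5*rho*A*Cp*v^3 between cut-in
    and rated wind speed, flat at rated power up to cut-out, zero outside.
    All parameter values are illustrative, not a real turbine's datasheet."""
    if v < v_cut_in or v > v_cut_out:
        return 0.0
    area = math.pi * (rotor_d / 2.0) ** 2   # swept rotor area, m^2
    p = 0.5 * rho * area * cp * v ** 3      # aerodynamic power, W
    return min(p, p_rated)                  # cap at rated power
```

Because power scales with the cube of wind speed below rated, small wind-speed forecast errors translate into large production errors, which is why an accurate boundary-layer representation matters so much for operational forecasts.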
NASA Astrophysics Data System (ADS)
Bessagnet, Bertrand; Pirovano, Guido; Mircea, Mihaela; Cuvelier, Cornelius; Aulinger, Armin; Calori, Giuseppe; Ciarelli, Giancarlo; Manders, Astrid; Stern, Rainer; Tsyro, Svetlana; García Vivanco, Marta; Thunis, Philippe; Pay, Maria-Teresa; Colette, Augustin; Couvidat, Florian; Meleux, Frédérik; Rouïl, Laurence; Ung, Anthony; Aksoyoglu, Sebnem; María Baldasano, José; Bieser, Johannes; Briganti, Gino; Cappelletti, Andrea; D'Isidoro, Massimo; Finardi, Sandro; Kranenburg, Richard; Silibello, Camillo; Carnevale, Claudio; Aas, Wenche; Dupont, Jean-Charles; Fagerli, Hilde; Gonzalez, Lucia; Menut, Laurent; Prévôt, André S. H.; Roberts, Pete; White, Les
2016-10-01
The EURODELTA III exercise has facilitated a comprehensive intercomparison and evaluation of chemistry transport model performances. Participating models performed calculations for four 1-month periods in different seasons in the years 2006 to 2009, allowing the influence of different meteorological conditions on model performances to be evaluated. The exercise was performed with strict requirements for the input data, with few exceptions. As a consequence, most of the differences in the outputs can be attributed to differences in the model formulations of chemical and physical processes. The models were evaluated mainly against background rural stations in Europe. The performance was assessed in terms of bias, root mean square error and correlation with respect to the concentrations of air pollutants (NO2, O3, SO2, PM10 and PM2.5), as well as key meteorological variables. Though most meteorological parameters were prescribed, some variables like the planetary boundary layer (PBL) height and the vertical diffusion coefficient were derived in the model preprocessors and can partly explain the spread in model results. In general, the daytime PBL height is underestimated by all models. The largest variability of predicted PBL height is observed over the ocean and seas. For ozone, this study shows the importance of proper boundary conditions for accurate model calculations, and hence for the regime of the gas and particle chemistry. The models show similar and quite good performance for nitrogen dioxide, whereas they struggle to accurately reproduce measured sulfur dioxide concentrations (for which the agreement with observations is the poorest). In general, the models provide a close-to-observations map of particulate matter (PM2.5 and PM10) concentrations over Europe, with correlations in the range 0.4-0.7 and a systematic underestimation reaching -10 µg m-3 for PM10. The highest concentrations are much more underestimated, particularly in wintertime.
Further evaluation of the mean diurnal cycles of PM reveals a general model tendency to overestimate the effect of the PBL height rise on PM levels in the morning, while the formation of secondary species by afternoon chemistry is underestimated. This results in larger modelled PM diurnal variations than observed for all seasons. The models tend to be too sensitive to the daily variation of the PBL. All in all, in most cases model performances are influenced more by the model setup than by the season. The good representation of the temporal evolution of wind speed is largely responsible for the models' skill in reproducing the daily variability of pollutant concentrations (e.g. the development of peak episodes), while the reconstruction of the PBL diurnal cycle seems to play a larger role in driving the corresponding pollutant diurnal cycle and hence determines the presence of systematic positive and negative biases detectable on a daily basis.
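The bias, root mean square error and correlation statistics used in this evaluation can be computed in a few lines. The PM10 series below are made-up illustrative numbers, not EURODELTA data; a systematic underestimation shows up as a negative bias.

```python
import numpy as np

def evaluate(model, obs):
    """Bias, RMSE and Pearson correlation between paired series."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = np.mean(model - obs)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    corr = np.corrcoef(model, obs)[0, 1]
    return bias, rmse, corr

# made-up daily PM10 values (µg/m³), not EURODELTA output
obs   = [22.0, 30.0, 41.0, 18.0, 27.0]
model = [15.0, 24.0, 30.0, 14.0, 20.0]
bias, rmse, corr = evaluate(model, obs)   # negative bias = underestimation
```

A model can correlate well with observations (here corr ≈ 0.98) while still carrying a large systematic bias, which is exactly the PM10 pattern reported above.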
Ericson, Keith M Marzilli; White, John Myles; Laibson, David; Cohen, Jonathan D
2015-06-01
Heuristic models have been proposed for many domains involving choice. We conducted an out-of-sample, cross-validated comparison of heuristic models of intertemporal choice (which can account for many of the known intertemporal choice anomalies) and discounting models. Heuristic models outperformed traditional utility-discounting models, including models of exponential and hyperbolic discounting. The best-performing models predicted choices by using a weighted average of absolute differences and relative percentage differences of the attributes of the goods in a choice set. We concluded that heuristic models explain time-money trade-off choices in experiments better than do utility-discounting models. © The Author(s) 2015.
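A minimal sketch of an attribute-comparison heuristic of the kind described above: the decision is driven by a weighted combination of the absolute and relative (percentage) amount differences, traded off against the delay difference. The weights and functional form here are made-up placeholders, not the paper's fitted models.

```python
def heuristic_choice(x1, t1, x2, t2, w_abs=0.01, w_rel=1.0, w_delay=0.05):
    """Choose between a smaller-sooner reward (x1 at delay t1) and a
    larger-later one (x2 at t2) by weighing absolute and relative
    amount differences against the delay difference.
    Placeholder weights; illustrative only."""
    score = (w_abs * (x2 - x1)            # absolute amount difference
             + w_rel * (x2 - x1) / x1     # relative (percentage) difference
             - w_delay * (t2 - t1))       # penalty for waiting longer
    return "later-larger" if score > 0 else "smaller-sooner"
```

With these weights, $120 in 30 days does not beat $100 now, but $200 in 30 days does: `heuristic_choice(100, 0, 120, 30)` returns `"smaller-sooner"`, while `heuristic_choice(100, 0, 200, 30)` returns `"later-larger"`.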
Automatic reactor model synthesis with genetic programming.
Dürrenmatt, David J; Gujer, Willi
2012-01-01
Successful modeling of wastewater treatment plant (WWTP) processes requires an accurate description of the plant hydraulics. Common methods such as tracer experiments are difficult and costly and thus have limited applicability in practice; engineers are often forced to rely on their experience only. An implementation of grammar-based genetic programming with an encoding to represent hydraulic reactor models as program trees should fill this gap: The encoding enables the algorithm to construct arbitrary reactor models compatible with common software used for WWTP modeling by linking building blocks, such as continuous stirred-tank reactors. Discharge measurements and influent and effluent concentrations are the only required inputs. As shown in a synthetic example, the technique can be used to identify a set of reactor models that perform equally well. Instead of being guided by experience, the most suitable model can now be chosen by the engineer from the set. In a second example, temperature measurements at the influent and effluent of a primary clarifier are used to generate a reactor model. A virtual tracer experiment performed on the reactor model has good agreement with a tracer experiment performed on-site.
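The reactor building blocks mentioned above can be illustrated with a virtual tracer experiment on a hand-built model: a unit tracer pulse routed through n identical continuous stirred-tank reactors in series. This is a sketch of one candidate hydraulic structure, not the paper's genetic programming algorithm.

```python
import numpy as np

def virtual_tracer(n_tanks, total_volume, flow, dt=0.01, t_end=10.0):
    """Pulse response of n_tanks identical CSTRs in series,
    integrated with explicit Euler."""
    v = total_volume / n_tanks            # volume of each tank
    steps = int(t_end / dt)
    c = np.zeros(n_tanks)
    c[0] = 1.0 / v                        # unit tracer pulse into tank 1
    out = np.empty(steps)
    for k in range(steps):
        out[k] = c[-1]                    # effluent concentration
        inflow = np.concatenate(([0.0], c[:-1]))
        c = c + dt * flow / v * (inflow - c)
    return np.arange(steps) * dt, out

t, c_out = virtual_tracer(3, 1.0, 1.0)    # 3 tanks, mean residence time 1
```

Integrating flow times effluent concentration over time recovers (numerically) the injected tracer mass, and the delayed, smeared peak is the signature that distinguishes a tanks-in-series structure from a single CSTR.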
Sun, Gang; Hoff, Steven J; Zelle, Brian C; Nelson, Minda A
2008-12-01
It is vital to forecast gas and particulate matter concentrations and emission rates (GPCER) from livestock production facilities to assess the impact of airborne pollutants on human health, the ecological environment, and global warming. Modeling source air quality is a complex process because of abundant nonlinear interactions between GPCER and other factors. The objective of this study was to introduce statistical methods and a radial basis function (RBF) neural network to predict daily source air quality in Iowa swine deep-pit finishing buildings. The results show that four variables (outdoor and indoor temperature, animal units, and ventilation rates) were identified as relatively important model inputs using statistical methods. It was further demonstrated that only two factors, the environment factor and the animal factor, were capable of explaining more than 94% of the total variability after performing principal component analysis. The introduction of fewer uncorrelated variables to the neural network reduces model structure complexity, minimizes computation cost, and eliminates model overfitting problems. The RBF network predictions were in good agreement with the actual measurements, with correlation coefficients between 0.741 and 0.995 and very low values of the systemic performance indexes for all the models. These good results indicated that the RBF network could be trained to model these highly nonlinear relationships. Thus, the RBF neural network technology combined with multivariate statistical methods is a promising tool for air pollutant emissions modeling.
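Once centers are fixed, a Gaussian RBF network of the general type used here reduces to a linear least-squares problem. The two-factor inputs and the target function below are synthetic stand-ins (assumed for illustration), not the swine-building data.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-ins for the two PCA factors (environment, animal)
X = rng.normal(size=(40, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]       # assumed nonlinear target

def rbf_design(X, centers, sigma=1.0):
    """Gaussian RBF design matrix: one basis function per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

centers = X[::4]                          # 10 centers taken from the data
w, *_ = np.linalg.lstsq(rbf_design(X, centers), y, rcond=None)
pred = rbf_design(X, centers) @ w
r = np.corrcoef(pred, y)[0, 1]            # training-fit correlation
```

Reducing inputs first (here, to two factors) keeps the design matrix small, which is the complexity and overfitting argument made in the abstract.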
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trent, D.S.; Eyler, L.L.
In this study several aspects of simulating hydrogen distribution in geometric configurations relevant to reactor containment structures were investigated using the TEMPEST computer code. Of particular interest was the performance of the TEMPEST turbulence model in a density-stratified environment. Computed results illustrated that the TEMPEST numerical procedures predicted the measured phenomena with good accuracy under a variety of conditions and that the turbulence model used is a viable approach in complex turbulent flow simulation.
Characterization of the Body-to-Body Propagation Channel for Subjects during Sports Activities.
Mohamed, Marshed; Cheffena, Michael; Moldsvor, Arild
2018-02-18
Body-to-body wireless networks (BBWNs) have great potential for applications in team sports activities, among others. However, successful design of such systems requires a thorough understanding of the communication channel, as the movement of body components causes time-varying shadowing and fading effects. In this study, we present results of a measurement campaign on BBWNs during running and cycling activities. Among other findings, the results indicated the presence of good and bad states, with each state following a specific distribution for the considered propagation scenarios. This motivated the development of a two-state semi-Markov model for simulation of the communication channels. The simulation model was validated against the available measurement data in terms of first- and second-order statistics and showed good agreement. The first-order statistics obtained from the simulation model, as well as the measured results, were then used to analyze the performance of the BBWN channels during running and cycling in terms of capacity and outage probability. Cycling channels showed better performance than running channels, with higher channel capacity and lower outage probability, regardless of the speed of the subjects involved in the measurement campaign.
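The two-state good/bad structure can be sketched as a semi-Markov simulator: draw a dwell time for the current state, emit fading samples with a state-dependent gain, then switch. Dwell times are geometric and fading is Rayleigh here for simplicity; the paper fits state-specific distributions to measurements, and every number below is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(1)

def semi_markov_channel(n, mean_dwell=(200, 50), gain_db=(0.0, -10.0)):
    """Two-state (good/bad) semi-Markov shadowing/fading sketch.
    Placeholder dwell-time and fading distributions."""
    out, state = [], 0                    # start in the good state
    while len(out) < n:
        dwell = rng.geometric(1.0 / mean_dwell[state])
        sigma = 10 ** (gain_db[state] / 20)
        out.extend(sigma * rng.rayleigh(size=dwell))
        state = 1 - state                 # alternate good <-> bad
    return np.array(out[:n])

h = semi_markov_channel(10000)            # simulated channel envelope
```

The semi-Markov form matters because dwell times need not be geometric in general; swapping in the measured distributions changes only the `dwell` draw.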
NASA Astrophysics Data System (ADS)
Shcherbakova, D. A.; Debusschere, N.; Caenen, A.; Iannaccone, F.; Pernot, M.; Swillens, A.; Segers, P.
2017-07-01
Shear wave elastography (SWE) is an ultrasound (US) diagnostic method for measuring the stiffness of soft tissues based on generated shear waves (SWs). SWE has been applied to bulk tissues, but in arteries it is still under investigation. Previously performed studies in arteries or arterial phantoms demonstrated the potential of SWE to measure arterial wall stiffness, a relevant marker in the prediction of cardiovascular diseases. This study is focused on numerical modelling of SWs in ex vivo equine aortic tissue, based on experimental SWE measurements with the tissue dynamically loaded while rotating the US probe to investigate the sensitivity of SWE to the anisotropic structure. A good match with experimental shear wave group speed results was obtained. SWs were sensitive to the orthotropy and nonlinearity of the material. The model also allowed us to study the nature of the SWs by performing 2D FFT-based and analytical phase analyses. A good match between numerical group velocities derived using the time-of-flight algorithm and those derived from the dispersion curves was found in the cross-sectional and axial arterial views. The complexity of solving analytical equations for nonlinear orthotropic stressed plates was discussed.
Itteboina, Ramesh; Ballu, Srilata; Sivan, Sree Kanth; Manga, Vijjulatha
2016-10-01
Janus kinase 1 (JAK 1) plays a critical role in initiating responses to cytokines by the JAK-signal transducer and activator of transcription (JAK-STAT) pathway. This controls survival, proliferation and differentiation of a variety of cells. Docking, 3D quantitative structure-activity relationship (3D-QSAR) and molecular dynamics (MD) studies were performed on a series of imidazo-pyrrolopyridine derivatives reported as JAK 1 inhibitors. The QSAR model was generated using 30 molecules in the training set; the developed model showed good statistical reliability, evident from the r²(ncv) and r²(loo) values. The predictive ability of this model was determined using a test set of 13 molecules that gave acceptable predictive correlation (r²(pred)) values. Finally, molecular dynamics simulation was performed to validate the docking results and MM/GBSA calculations. This allowed us to compare the binding free energies of the cocrystal ligand and the newly designed molecule R1. The good concordance between the docking results and the CoMFA/CoMSIA contour maps afforded useful clues for the rational modification of molecules to design more potent JAK 1 inhibitors. Copyright © 2016 Elsevier Ltd. All rights reserved.
Assessment and prediction of drying shrinkage cracking in bonded mortar overlays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beushausen, Hans, E-mail: hans.beushausen@uct.ac.za; Chilwesa, Masuzyo
2013-11-15
Restrained drying shrinkage cracking was investigated on composite beams consisting of substrate concrete and bonded mortar overlays, and compared to the performance of the same mortars when subjected to the ring test. Stress development and cracking in the composite specimens were analytically modeled and predicted based on the measurement of relevant time-dependent material properties such as drying shrinkage, elastic modulus, tensile relaxation and tensile strength. Overlay cracking in the composite beams could be very well predicted with the analytical model. The ring test provided a useful qualitative comparison of the cracking performance of the mortars. The duration of curing was found to have only a minor influence on crack development. This was ascribed to the fact that prolonged curing has a beneficial effect on tensile strength at the onset of stress development, but at the same time is not beneficial to the values of tensile relaxation and elastic modulus. -- Highlights: •Parameter study on material characteristics influencing overlay cracking. •Analytical model gives good quantitative indication of overlay cracking. •Ring test presents good qualitative indication of overlay cracking. •Curing duration has little effect on overlay cracking.
Evolution of collectivity in the N = 100 isotones near 170Yb
NASA Astrophysics Data System (ADS)
Karayonchev, V.; Régis, J.-M.; Jolie, J.; Blazhev, A.; Altenkirch, R.; Ansari, S.; Dannhoff, M.; Diel, F.; Esmaylzadeh, A.; Fransen, C.; Gerst, R.-B.; Moschner, K.; Müller-Gatermann, C.; Saed-Samii, N.; Stegemann, S.; Warr, N.; Zell, K. O.
2017-03-01
An experiment using the electronic γ-γ fast-timing technique was performed to measure lifetimes of the yrast states in 170Yb. The lifetime of the yrast 2+ state was determined using the slope method. The value of τ = 2.33(3) ns is in good agreement with the lifetimes measured using other techniques. The lifetimes of the first 4+ and 6+ states were determined using the generalized centroid difference method. The derived B(E2) values are compared to calculations done using the confined beta soft model and show good agreement with the experimental values. These calculations were extended to the isotonic chain N = 100 around 170Yb and show a good quantitative description of the collectivity observed along it.
Freitas, Sandra; Prieto, Gerardo; Simões, Mário R; Nogueira, Joana; Santana, Isabel; Martins, Cristina; Alves, Lara
2018-05-03
The present study aims to analyze the psychometric characteristics of the TeLPI (Irregular Words Reading Test), a Portuguese premorbid intelligence test, using the Rasch model for dichotomous items. The results reveal overall adequacy and good fit values for both items and persons. A high variability of cognitive performance level and a good quality of the measurements were also found. The TeLPI proved to be a unidimensional measure with reduced DIF effects. The present findings help to overcome an important gap in the psychometric validation of this instrument and provide good evidence of the overall psychometric validity of TeLPI results.
Bohmanova, J; Miglior, F; Jamrozik, J; Misztal, I; Sullivan, P G
2008-09-01
A random regression model with both random and fixed regressions fitted by Legendre polynomials of order 4 was compared with 3 alternative models fitting linear splines with 4, 5, or 6 knots. The effects common for all models were a herd-test-date effect, fixed regressions on days in milk (DIM) nested within region-age-season of calving class, and random regressions for additive genetic and permanent environmental effects. Data were test-day milk, fat and protein yields, and SCS recorded from 5 to 365 DIM during the first 3 lactations of Canadian Holstein cows. A random sample of 50 herds consisting of 96,756 test-day records was generated to estimate variance components within a Bayesian framework via Gibbs sampling. Two sets of genetic evaluations were subsequently carried out to investigate performance of the 4 models. Models were compared by graphical inspection of variance functions, goodness of fit, error of prediction of breeding values, and stability of estimated breeding values. Models with splines gave lower estimates of variances at extremes of lactations than the model with Legendre polynomials. Differences among models in goodness of fit measured by percentages of squared bias, correlations between predicted and observed records, and residual variances were small. The deviance information criterion favored the spline model with 6 knots. Smaller error of prediction and higher stability of estimated breeding values were achieved by using spline models with 5 and 6 knots compared with the model with Legendre polynomials. In general, the spline model with 6 knots had the best overall performance based upon the considered model comparison criteria.
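The two regression bases compared in this study, an order-4 Legendre polynomial and a linear spline (hat-function basis at the knots), can be sketched side by side. The test-day curve below is a made-up lactation-shaped function for illustration, not Canadian Holstein data.

```python
import numpy as np

dim = np.linspace(5, 365, 73)                        # days in milk
x = 2 * (dim - 5) / 360 - 1                          # rescaled to [-1, 1]
y = 20 + 8 * np.exp(-dim / 120) * np.sin(dim / 60)   # made-up test-day curve

# fixed regression: Legendre polynomial of order 4
leg_fit = np.polynomial.legendre.legval(
    x, np.polynomial.legendre.legfit(x, y, 4))

# alternative: linear spline with 6 knots (hat-function basis)
knots = np.linspace(5, 365, 6)
B = np.column_stack([np.interp(dim, knots, np.eye(6)[i]) for i in range(6)])
spl_fit = B @ np.linalg.lstsq(B, y, rcond=None)[0]
```

The spline basis is local (each coefficient only affects the curve near its knot), which is one reason spline models behave better at the extremes of lactation than a global polynomial.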
Multi-objective optimization for generating a weighted multi-model ensemble
NASA Astrophysics Data System (ADS)
Lee, H.
2017-12-01
Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies the multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. 
Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
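The single-metric baseline described above (weights inversely proportional to each model's error) can be sketched in a few lines; the multi-objective extension replaces this one-metric weighting with a Pareto trade-off across metrics. Observations and model errors below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
obs = np.sin(np.linspace(0, 6, 200))                 # synthetic observations

# three hypothetical simulations with different error levels
models = [obs + rng.normal(0, s, obs.size) for s in (0.1, 0.3, 0.8)]

rmse = np.array([np.sqrt(np.mean((m - obs) ** 2)) for m in models])
w = (1 / rmse) / (1 / rmse).sum()                    # skill-based weights

weighted = sum(wi * m for wi, m in zip(w, models))
simple = np.mean(models, axis=0)                     # arithmetic ensemble mean

err_w = np.sqrt(np.mean((weighted - obs) ** 2))
err_s = np.sqrt(np.mean((simple - obs) ** 2))
```

Because the poorest model is heavily down-weighted, the weighted ensemble beats the arithmetic mean, the same qualitative result reported for the optimally weighted ensemble.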
Using remote sensing for validation of a large scale hydrologic and hydrodynamic model in the Amazon
NASA Astrophysics Data System (ADS)
Paiva, R. C.; Bonnet, M.; Buarque, D. C.; Collischonn, W.; Frappart, F.; Mendes, C. B.
2011-12-01
We present the validation of the large-scale, catchment-based hydrological MGB-IPH model in the Amazon River basin. In this model, physically-based equations are used to simulate the hydrological processes, such as the Penman-Monteith method to estimate evapotranspiration, or the Moore and Clarke infiltration model. A new feature recently introduced in the model is a 1D hydrodynamic module for river routing. It uses the full Saint-Venant equations and a simple floodplain storage model. River and floodplain geometry parameters are extracted from the SRTM DEM using specially developed GIS algorithms that provide catchment discretization, estimation of river cross-section geometry and water storage volume variations in the floodplains. The model was forced using satellite-derived daily rainfall TRMM 3B42, calibrated against discharge data and first validated using daily discharges and water levels from 111 and 69 stream gauges, respectively. Then, we performed a validation against remote sensing derived hydrological products, including (i) monthly Terrestrial Water Storage (TWS) anomalies derived from GRACE, (ii) river water levels derived from ENVISAT satellite altimetry data (212 virtual stations from Santos da Silva et al., 2010) and (iii) a multi-satellite monthly global inundation extent dataset at ~25 x 25 km spatial resolution (Papa et al., 2010). Validation against river discharges shows good performance of the MGB-IPH model. For 70% of the stream gauges, the Nash-Sutcliffe efficiency index (ENS) is higher than 0.6, and at Óbidos, close to the Amazon river outlet, ENS equals 0.9 and the model bias equals -4.6%. The largest errors are located in drainage areas outside Brazil, and we speculate that this is due to the poor quality of rainfall datasets in these poorly monitored and/or mountainous areas. Validation against water levels shows that the model performs well in the major tributaries. For 60% of virtual stations, ENS is higher than 0.6.
But, similarly, the largest errors are also located in drainage areas outside Brazil, mostly the Japurá River, and in the lower Amazon River. In the latter, correlation with observations is high but the model underestimates the amplitude of water levels. We also found a large bias between model and ENVISAT water levels, ranging from -3 to -15 m. The model provided TWS in good accordance with GRACE estimates. The ENS value for TWS over the whole Amazon equals 0.93. We also analyzed results in 21 sub-regions of 4 x 4°. ENS is smaller than 0.8 in only 5 areas, found mostly in the northwest part of the Amazon, possibly due to the same errors reported in the discharge results. Flood extent validation is under development, but a previous analysis in the Brazilian part of the Solimões River basin suggests good model performance. The authors are grateful for the financial and operational support from the Brazilian agencies FINEP, CNPq and ANA and from the French observatories HYBAM and SOERE RBV.
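The Nash-Sutcliffe efficiency (ENS) used throughout this validation compares model errors against the variance of the observations. A minimal implementation, with made-up discharge numbers for illustration:

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    does no better than predicting the observed mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

# hypothetical daily discharge series (m³/s)
obs = [120.0, 150.0, 200.0, 180.0, 140.0]
sim = [115.0, 160.0, 190.0, 175.0, 150.0]
```

Unlike correlation, ENS penalizes bias and amplitude errors, which is why a station can show high correlation yet a modest ENS when water-level amplitude is underestimated.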
Cevenini, Gabriele; Barbini, Emanuela; Scolletta, Sabino; Biagioli, Bonizella; Giomarelli, Pierpaolo; Barbini, Paolo
2007-11-22
Popular predictive models for estimating morbidity probability after heart surgery are compared critically in a unitary framework. The study is divided into two parts. In the first part modelling techniques and intrinsic strengths and weaknesses of different approaches were discussed from a theoretical point of view. In this second part the performances of the same models are evaluated in an illustrative example. Eight models were developed: Bayes linear and quadratic models, k-nearest neighbour model, logistic regression model, Higgins and direct scoring systems and two feed-forward artificial neural networks with one and two layers. Cardiovascular, respiratory, neurological, renal, infectious and hemorrhagic complications were defined as morbidity. Training and testing sets each of 545 cases were used. The optimal set of predictors was chosen among a collection of 78 preoperative, intraoperative and postoperative variables by a stepwise procedure. Discrimination and calibration were evaluated by the area under the receiver operating characteristic curve and Hosmer-Lemeshow goodness-of-fit test, respectively. Scoring systems and the logistic regression model required the largest set of predictors, while Bayesian and k-nearest neighbour models were much more parsimonious. In testing data, all models showed acceptable discrimination capacities, however the Bayes quadratic model, using only three predictors, provided the best performance. All models showed satisfactory generalization ability: again the Bayes quadratic model exhibited the best generalization, while artificial neural networks and scoring systems gave the worst results. Finally, poor calibration was obtained when using scoring systems, k-nearest neighbour model and artificial neural networks, while Bayes (after recalibration) and logistic regression models gave adequate results. 
Although all the predictive models showed acceptable discrimination performance in the example considered, the Bayes and logistic regression models seemed better than the others, because they also had good generalization and calibration. The Bayes quadratic model seemed to be a convincing alternative to the much more usual Bayes linear and logistic regression models. It showed its capacity to identify a minimum core of predictors generally recognized as essential to pragmatically evaluate the risk of developing morbidity after heart surgery.
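The discrimination measure used to compare these models, the area under the ROC curve, can be computed directly from predicted probabilities via the rank-sum identity. The patient labels and probabilities below are invented for illustration.

```python
import numpy as np

def auc(y_true, p):
    """Area under the ROC curve via the Mann-Whitney rank identity:
    the probability a random positive case outranks a random negative."""
    y_true, p = np.asarray(y_true), np.asarray(p, float)
    pos, neg = p[y_true == 1], p[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# hypothetical predicted morbidity probabilities for six patients
y = [0, 0, 1, 0, 1, 1]
p = [0.1, 0.3, 0.35, 0.4, 0.8, 0.9]
```

Note that AUC measures discrimination only; calibration (how well predicted probabilities match observed frequencies) needs a separate check such as the Hosmer-Lemeshow test used in this study.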
Longo, Maria Cristina
2015-01-01
The research analyzes good practices in health care "management experimentation models," which fall within the broader range of the integrative public-private partnerships (PPPs). Introduced by the Italian National Healthcare System in 1991, the "management experimentation models" are based on a public governance system mixed with a private management approach, a patient-centric orientation, a shared financial risk, and payment mechanisms correlated with clinical outcomes, quality, and cost-savings. This model makes public hospitals more competitive and efficient without affecting the principles of universal coverage, solidarity, and equity of access, but requires higher financial responsibility for managers and more flexibility in operations. In Italy the experience of such experimental models is limited but successful. The study adopts the case study methodology and refers to the international collaboration started in 1997 between two Italian hospitals and the University of Pittsburgh Medical Center (UPMC - Pennsylvania, USA) in the field of organ transplants and biomedical advanced therapies. The research identifies what constitutes good management practices and the factors associated with higher clinical performance. It thus makes it possible to understand whether and how the management experimentation model can be implemented on a broader basis, both nationwide and internationally. However, the implementation of integrative PPPs requires strategic, cultural, and managerial changes in the way in which a hospital operates; these transformations are not always sustainable. The recognition of ISMETT's good management practices is useful for competitive benchmarking among hospitals specialized in organ transplants and for its insights on the strategies concerning the governance reorganization in the hospital setting. Findings can be used in the future for analyzing the cross-country differences in productivity among well-managed public hospitals.
NAVO MSRC Navigator. Spring 2001
2001-01-01
preparations for UGC 2001 are almost complete. This year's conference promises to be a good one, affording us the opportunity to extend some Gulf Coast...current market pricing, and other reasonable estimates would not significantly alter the predicted trends. The performance model estimates CPU
Analogue based design of MMP-13 (Collagenase-3) inhibitors.
Sarma, J A R P; Rambabu, G; Srikanth, K; Raveendra, D; Vithal, M
2002-10-07
3D-QSAR studies using MFA and RSA methods were performed on a series of 39 MMP-13 inhibitors. The model developed by the MFA method has an r²(cv) (cross-validated) of 0.616, while its r² (conventional) value is 0.822. For the RSA model, r²(cv) and r² are 0.681 and 0.847, respectively. Both models indicate good internal as well as external predictive abilities. These models provide crucial information about the field descriptors for the design of potential inhibitors of MMP-13.
Buczinski, S; Vandeweerd, J M
2016-09-01
Provision of good quality colostrum [i.e., immunoglobulin G (IgG) concentration ≥50g/L] is the first step toward ensuring proper passive transfer of immunity for young calves. Precise quantification of colostrum IgG levels cannot be easily performed on the farm. Assessment of the refractive index using a Brix scale with a refractometer has been described as being highly correlated with IgG concentration in colostrum. The aim of this study was to perform a systematic review of the diagnostic accuracy of Brix refractometry to diagnose good quality colostrum. From 101 references initially obtained, 11 were included in the systematic review and meta-analysis, representing 4,251 colostrum samples. The prevalence of good colostrum samples with IgG ≥50g/L varied from 67.3 to 92.3% (median 77.9%). Specific estimates of accuracy [sensitivity (Se) and specificity (Sp)] were obtained for different reported cut-points using a hierarchical summary receiver operating characteristic curve model. For the cut-point of 22% (n=8 studies), Se=80.2% (95% CI: 71.1-87.0%) and Sp=82.6% (71.4-90.0%). Decreasing the cut-point to 18% increased Se [96.1% (91.8-98.2%)] and decreased Sp [54.5% (26.9-79.6%)]. Modeling the effect of these Brix accuracy estimates using a stochastic simulation and Bayes' theorem showed that a positive result with the 22% Brix cut-point can be used to diagnose good quality colostrum [posttest probability of a good colostrum: 94.3% (90.7-96.9%)]. The posttest probability of good colostrum with a Brix value <18% was only 22.7% (12.3-39.2%). Based on this study, the 2 cut-points could be used alternatively to select good quality colostrum (sample with Brix ≥22%) or to discard poor quality colostrum (sample with Brix <18%). When sample results are between these 2 values, colostrum supplementation should be considered. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
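The Bayes'-theorem step behind these posttest probabilities can be reproduced from the review's point estimates. This is a plain point calculation; the paper's intervals come from a stochastic simulation over parameter uncertainty, which this sketch does not attempt.

```python
def posttest_good(prev, se, sp, positive=True):
    """P(good colostrum | Brix test result) by Bayes' theorem.
    positive=True: P = Se*p / (Se*p + (1-Sp)*(1-p));
    positive=False: the complements of Se and Sp swap roles."""
    if positive:
        return se * prev / (se * prev + (1 - sp) * (1 - prev))
    return (1 - se) * prev / ((1 - se) * prev + sp * (1 - prev))

# 22% cut-point, positive result (Se = 80.2%, Sp = 82.6%, prev = 77.9%)
p_pos = posttest_good(0.779, 0.802, 0.826)
# 18% cut-point, negative result (Se = 96.1%, Sp = 54.5%)
p_neg = posttest_good(0.779, 0.961, 0.545, positive=False)
```

With the median prevalence, `p_pos` lands at about 0.94, matching the reported 94.3%, and `p_neg` at about 0.20, close to the reported 22.7% (the gap reflects the simulation over the uncertainty intervals).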
WHO Expert Committee on Specifications for Pharmaceutical Preparations. Fiftieth report.
2016-01-01
The Expert Committee on Specifications for Pharmaceutical Preparations works towards clear, independent and practical standards and guidelines for the quality assurance of medicines. Standards are developed by the Committee through worldwide consultation and an international consensus-building process. The following new guidelines were adopted and recommended for use. Good pharmacopoeial practices; FIP-WHO technical guidelines: points to consider in the provision by health-care professionals of children-specific preparations that are not available as authorized products; Guidance on good manufacturing practices for biological products; Guidance on good manufacturing practices: inspection report, including Appendix 1: Model inspection report; Guidance on good data and record management practices; Good trade and distribution practices for starting materials; Guidelines on the conduct of surveys of the quality of medicines; Collaborative procedure between the World Health Organization (WHO) prequalification team and national regulatory authorities in the assessment and accelerated national registration of WHO-prequalified pharmaceutical products and vaccines; Guidance for organizations performing in vivo bioequivalence studies; and World Health Organization (WHO) general guidance on variations to multisource pharmaceutical products.
NASA Astrophysics Data System (ADS)
Falsafioon, Mehdi; Aidoun, Zine; Poirier, Michel
2017-12-01
A wide range of industrial refrigeration systems are good candidates to benefit from the cooling and refrigeration potential of supersonic ejectors. These are thermally activated and can use waste heat recovered from industrial processes, where it is abundantly generated and rejected to the environment. In other circumstances, low-cost heat from biomass or solar energy may also be used to produce a cooling effect. Ejector performance is however typically modest and needs to be maximized in order to take full advantage of the simplicity and low cost of the technology. In the present work, the behavior of ejectors with different nozzle exit positions has been investigated using a prototype as well as a CFD model. The prototype was used to measure the performance of refrigerant (R-134a) flowing inside the ejector. For the CFD model, the ejectors are assumed to be axisymmetric about the x-axis, so the generated model is 2D. The preliminary CFD results are validated with experimental data over a wide range of conditions and are in good accordance in terms of entrainment and compression ratios. Next, the flow patterns of four different topologies are studied in order to identify the optimum geometry in terms of ejector entrainment improvement. Finally, the numerical simulations were used to find an optimum value corresponding to a maximized entrainment ratio for fixed operating conditions.
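The two figures of merit used to validate the CFD model are standard definitions: the entrainment ratio compares entrained (secondary) to motive (primary) mass flow, and the compression ratio compares diffuser outlet pressure to suction pressure. The numbers in the example are arbitrary placeholders.

```python
def ejector_performance(m_primary, m_secondary, p_diffuser, p_suction):
    """Standard ejector figures of merit (definitions only):
    entrainment ratio omega and compression ratio cr."""
    omega = m_secondary / m_primary       # entrained / motive mass flow
    cr = p_diffuser / p_suction           # outlet / suction pressure
    return omega, cr

# placeholder values: 0.05 and 0.02 kg/s flows, 500 and 200 kPa pressures
omega, cr = ejector_performance(0.05, 0.02, 500.0, 200.0)
```

Maximizing `omega` at fixed operating pressures is exactly the optimization target described for the nozzle-exit-position study.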
On the necessity of U-shaped learning.
Carlucci, Lorenzo; Case, John
2013-01-01
A U-shaped curve in a cognitive-developmental trajectory refers to a three-step process: good performance followed by bad performance followed by good performance once again. U-shaped curves have been observed in a wide variety of cognitive-developmental and learning contexts. U-shaped learning seems to contradict the idea that learning is a monotonic, cumulative process and thus constitutes a challenge for competing theories of cognitive development and learning. U-shaped behavior in language learning (in particular in learning the English past tense) has become a central topic in the Cognitive Science debate about learning models. Antagonist models (e.g., connectionism versus nativism) are often judged on their ability to model or account for U-shaped behavior. The prior literature is mostly occupied with explaining how U-shaped behavior occurs. Instead, we are interested in the necessity of this kind of apparently inefficient strategy. We present and discuss a body of results in the abstract mathematical setting of (extensions of) Gold-style computational learning theory addressing a mathematically precise version of the following question: Are there learning tasks that require U-shaped behavior? All notions considered are learning in the limit from positive data. We present results about the necessity of U-shaped learning in classical models of learning as well as in models with bounds on the memory of the learner. The pattern emerges that, for parameterized, cognitively relevant learning criteria, beyond very few initial parameter values, U-shapes are necessary for full learning power! We discuss the possible relevance of the above results for the Cognitive Science debate about learning models as well as directions for future research. Copyright © 2013 Cognitive Science Society, Inc.
Prospective comparison of severity scores for predicting mortality in community-acquired pneumonia.
Luque, Sonia; Gea, Joaquim; Saballs, Pere; Ferrández, Olivia; Berenguer, Nuria; Grau, Santiago
2012-06-01
Specific prognostic models for community-acquired pneumonia (CAP) have been developed to guide treatment decisions, such as the Pneumonia Severity Index (PSI) and the Confusion, Urea nitrogen, Respiratory rate, Blood pressure and age ≥ 65 years index (CURB-65). Additionally, general models are available, such as the Mortality Probability Model (MPM-II). So far, which score performs best in CAP remains controversial. The objective was to compare PSI, CURB-65 and the general model MPM-II for predicting 30-day mortality in patients admitted with CAP. Prospective observational study including all consecutive patients hospitalised with a confirmed diagnosis of CAP and treated according to the hospital guidelines. The overall discriminatory power of the models was compared by calculating the area under the receiver operating characteristic (ROC) curve (AUC), and calibration was assessed through the goodness-of-fit test. One hundred and fifty-two patients were included (mean age 73.0 years; 69.1% male; 75.0% with more than one comorbid condition). Seventy-five percent of the patients were classified as high-risk subjects according to the PSI, versus 61.2% according to the CURB-65. The 30-day mortality rate was 11.8%. All three scores achieved acceptable and similar AUC values for predicting mortality. Although all rules showed good calibration, calibration seemed best for CURB-65, which also showed the highest positive likelihood ratio. CURB-65 performs similarly to PSI and MPM-II for predicting 30-day mortality in patients with CAP. Consequently, this simple model can be regarded as a valid alternative to the more complex rules.
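Since the CURB-65 index is fully specified by its five clinical criteria, its scoring can be sketched directly. The thresholds below follow the usual published definition (urea > 7 mmol/L, respiratory rate ≥ 30/min, SBP < 90 or DBP ≤ 60 mmHg, age ≥ 65); the function name and example values are illustrative, not taken from the study:

```python
# Hedged sketch: CURB-65 assigns one point each for Confusion, elevated Urea,
# high Respiratory rate, low Blood pressure, and age >= 65.
def curb65(confusion, urea_mmol_l, resp_rate, sbp, dbp, age):
    score = 0
    score += 1 if confusion else 0                  # C: new-onset confusion
    score += 1 if urea_mmol_l > 7.0 else 0          # U: urea > 7 mmol/L
    score += 1 if resp_rate >= 30 else 0            # R: respiratory rate >= 30/min
    score += 1 if (sbp < 90 or dbp <= 60) else 0    # B: low blood pressure
    score += 1 if age >= 65 else 0                  # 65: age >= 65 years
    return score

# a 73-year-old with urea 8.1 mmol/L and respiratory rate 32/min
example_score = curb65(False, 8.1, 32, 100, 70, 73)   # -> 3
```

Scores of 0-1 are commonly read as low risk and ≥ 3 as high risk, though the study above should be consulted for the exact risk strata used.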
Lustgarten, Jonathan Lyle; Balasubramanian, Jeya Balaji; Visweswaran, Shyam; Gopalakrishnan, Vanathi
2017-03-01
The comprehensibility of good predictive models learned from high-dimensional gene expression data is attractive because it can lead to biomarker discovery. Several good classifiers provide comparable predictive performance but differ in their abilities to summarize the observed data. We extend a Bayesian Rule Learning (BRL-GSS) algorithm, previously shown to be a significantly better predictor than other classical approaches in this domain. It searches a space of Bayesian networks using a decision tree representation of its parameters with global constraints, and infers a set of IF-THEN rules. The number of parameters, and therefore the number of rules, is combinatorial in the number of predictor variables in the model. We relax these global constraints to a more generalizable local structure (BRL-LSS). BRL-LSS entails a more parsimonious set of rules because it does not have to generate all combinatorial rules. The search space of local structures is much richer than the space of global structures. We design the BRL-LSS with the same worst-case time complexity as BRL-GSS while exploring a richer and more complex model space. We measure predictive performance using the area under the ROC curve (AUC) and accuracy. We measure model parsimony performance by noting the average number of rules and variables needed to describe the observed data. We evaluate the predictive and parsimony performance of BRL-GSS, BRL-LSS and the state-of-the-art C4.5 decision tree algorithm, across 10-fold cross-validation using ten microarray gene-expression diagnostic datasets. In these experiments, we observe that BRL-LSS is similar to BRL-GSS in terms of predictive performance, while generating a much more parsimonious set of rules to explain the same observed data. BRL-LSS also needs fewer variables than C4.5 to explain the data with similar predictive performance.
We also conduct a feasibility study to demonstrate the general applicability of our BRL methods on the newer RNA sequencing gene-expression data.
A side-by-side comparison of CPV module and system performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, Matthew; Marion, Bill; Kurtz, Sarah
A side-by-side comparison is made between concentrator photovoltaic module and system direct current aperture efficiency data with a focus on quantifying system performance losses. The individual losses measured/calculated, when combined, are in good agreement with the total loss seen between the module and the system. Results indicate that for the given test period, the largest individual loss of 3.7% relative is due to the baseline performance difference between the individual module and the average for the 200 modules in the system. A basic empirical model is derived based on module spectral performance data and the tabulated losses between the module and the system. The model predicts instantaneous system direct current aperture efficiency with a root mean square error of 2.3% relative.
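The idea that individually measured relative losses, when combined, should match the total module-to-system loss can be sketched by multiplying the loss complements; only the 3.7% figure comes from the abstract, the other loss values are illustrative:

```python
# Hedged sketch: relative losses combine multiplicatively, so the
# system-to-module efficiency ratio is the product of (1 - loss_i).
losses = [0.037, 0.02, 0.015]   # fractions; only 3.7% is from the abstract
system_over_module = 1.0
for loss in losses:
    system_over_module *= (1.0 - loss)   # each loss scales what remains
total_relative_loss = 1.0 - system_over_module
# simply summing the losses would slightly overstate the combined loss
```

This multiplicative form is why small individual losses sum approximately, but not exactly, to the observed total.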
NASA Technical Reports Server (NTRS)
Arenstorf, Norbert S.; Jordan, Harry F.
1987-01-01
A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree-structured barriers show good performance when synchronizing fixed-length work, while linear self-scheduled barriers show better performance when synchronizing fixed-length work with an embedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments that support these conclusions, performed on an eighteen-processor Flex/32 shared-memory multiprocessor, are detailed.
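A minimal sketch of a linear (counter-based) barrier of the kind discussed above, written with a condition variable. Python's standard `threading.Barrier` provides the same service; a logarithmic tree barrier would replace the single shared counter with a tree of pairwise synchronizations:

```python
import threading

# A linear barrier: every arriving thread touches one shared counter,
# so arrival cost grows linearly with the number of threads.
class LinearBarrier:
    def __init__(self, n):
        self.n = n
        self.count = 0
        self.generation = 0
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            gen = self.generation
            self.count += 1
            if self.count == self.n:           # last arriver releases everyone
                self.count = 0
                self.generation += 1
                self.cond.notify_all()
            else:
                while gen == self.generation:  # guard against spurious wakeups
                    self.cond.wait()

results = []
barrier = LinearBarrier(4)

def worker(i):
    barrier.wait()        # no thread proceeds until all four have arrived
    results.append(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The `generation` counter lets the barrier be reused across synchronization phases, which the paper's profiling model assumes.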
Why Bother to Calibrate? Model Consistency and the Value of Prior Information
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal
2015-04-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performance. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints" inferred from expert knowledge, to ensure a model that behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase the predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.
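Hydrological signatures used in such consistency tests are typically simple functionals of the simulated series. The two below (flow-duration percentiles and runoff ratio) are common examples and are only illustrative of the 20 signatures used in the study:

```python
# Hedged sketch of two simple hydrological signatures; the study's own
# signature set is richer and defined on real catchment data.
def flow_percentile(q, p):
    """Flow exceeded p percent of the time (exceedance percentile)."""
    s = sorted(q, reverse=True)                    # highest flow first
    idx = min(int(p / 100.0 * len(s)), len(s) - 1)
    return s[idx]

def runoff_ratio(q, precip):
    """Fraction of precipitation leaving the catchment as streamflow."""
    return sum(q) / sum(precip)

q = [5.0, 3.0, 2.0, 8.0, 1.0, 4.0, 2.5, 6.0]   # illustrative discharge series
high = flow_percentile(q, 5)    # Q5: high-flow signature
low = flow_percentile(q, 95)    # Q95: low-flow signature
```

Comparing such signatures between simulated and observed series, in addition to hydrograph fit, is what exposes the inconsistency the abstract describes.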
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.
2014-09-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.
Development and control of a magnetorheological haptic device for robot assisted surgery.
Shokrollahi, Elnaz; Goldenberg, Andrew A; Drake, James M; Eastwood, Kyle W; Kang, Matthew
2017-07-01
A prototype magnetorheological (MR) fluid-based actuator has been designed for tele-robotic surgical applications. This device is capable of generating forces up to 47 N, with input currents ranging from 0 to 1.5 A. We begin by outlining the physical design of the device, and then discuss a novel nonlinear model of the device's behavior. The model was developed using the Hammerstein-Wiener (H-W) nonlinear black-box technique and is intended to accurately capture the hysteresis behavior of the MR-fluid. Several experiments were conducted on the device to collect estimation and validation datasets to construct the model and assess its performance. Different estimating functions were used to construct the model, and their effectiveness is assessed based on goodness-of-fit and final-prediction-error measurements. A sigmoid network was found to have a goodness-of-fit of 95%. The model estimate was then used to tune a PID controller. Two control schemes were proposed to eliminate the hysteresis behavior present in the MR fluid device. One method uses a traditional force feedback control loop and the other is based on measuring the magnetic field using a Hall-effect sensor embedded within the device. The Hall-effect sensor scheme was found to be superior in terms of cost, simplicity and real-time control performance compared to the force control strategy.
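The Hammerstein-Wiener structure itself is simple to sketch: a static input nonlinearity feeding a linear dynamic block feeding a static output nonlinearity. The functions and coefficients below are illustrative, not the identified MR-device model (which used a sigmoid network estimator):

```python
import math

# Hedged sketch of the Hammerstein-Wiener block structure.
def hw_simulate(u_seq, a=0.8, b=0.2):
    y_out = []
    x = 0.0
    for u in u_seq:
        v = u * abs(u)                 # static input nonlinearity
        x = a * x + b * v              # first-order linear dynamics
        y_out.append(math.tanh(x))     # static output nonlinearity
    return y_out

y = hw_simulate([1.0] * 50)
# the unit-step response settles toward tanh(b / (1 - a)) = tanh(1.0)
```

Identification then amounts to fitting the two static maps and the linear block to estimation data, which is what the black-box toolchain described above automates.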
Development and evaluation of a physics-based windblown ...
A new windblown dust emission treatment was incorporated in the Community Multiscale Air Quality (CMAQ) modeling system. This new model treatment has been built upon previously developed physics-based parameterization schemes from the literature. A distinct and novel feature of this scheme, however, is the incorporation of a newly developed dynamic relation for the surface roughness length relevant to small-scale dust generation processes. Through this implementation, the effect of nonerodible elements on the local flow acceleration, drag partitioning, and surface coverage protection is modeled in a physically based and consistent manner. Careful attention is paid in integrating the new windblown dust treatment in the CMAQ model to ensure that the required input parameters are correctly configured. To test the performance of the new dust module in CMAQ, the entire year 2011 is simulated for the continental United States, with particular emphasis on the southwestern United States (SWUS) where windblown dust concentrations are relatively large. Overall, the model shows good performance, with the daily mean bias of soil concentrations fluctuating in the range of ±1 µg m−3 for the entire year. Springtime soil concentrations are in quite good agreement (normalized mean bias of 8.3%) with observations, while moderate to high underestimation of soil concentration is seen in the summertime. The latter is attributed to the issue of representing convective dust storms.
NASA Astrophysics Data System (ADS)
Zhang, Ling; Min, Junying; Wang, Bin; Lin, Jianping; Li, Fangfang; Liu, Jing
2016-03-01
In practical engineering, finite element (FE) modeling of weld seams is commonly simplified by neglecting their inhomogeneous mechanical properties. This can cause a significant loss in accuracy of FE forming analysis, in particular for friction stir welded (FSW) blanks, due to the large width and good formability of the weld seam. The inhomogeneous mechanical properties across the weld seam need to be well characterized for an accurate FE analysis. Based on a similar AA5182 FSW blank, metallographic observation and micro-Vickers hardness analysis of the weld cross-section are performed to identify the interfaces of the different sub-zones, i.e., the heat-affected zone (HAZ), thermo-mechanically affected zone (TMAZ) and weld nugget (WN). Based on the rule of mixtures and the hardness distribution, a constitutive model is established for each sub-zone to characterize the inhomogeneous mechanical properties across the weld seam. Uniaxial tensile tests of the AA5182 FSW blank are performed with the aid of digital image correlation (DIC) techniques. Experimental local stress-strain curves are obtained for the different weld sub-zones. The experimental results show good agreement with those derived from the constitutive models, which demonstrates the feasibility and accuracy of these models. The proposed research gives an accurate characterization of the inhomogeneous mechanical properties across a weld seam produced by FSW, which provides solutions for improving the FE simulation accuracy of FSW sheet forming.
NASA Technical Reports Server (NTRS)
Stoll, F.; Koenig, D. G.
1983-01-01
Data obtained through very high angles of attack from a large-scale, subsonic wind-tunnel test of a close-coupled canard-delta-wing fighter model are analyzed. The canard delays wing leading-edge vortex breakdown, even for angles of attack at which the canard is completely stalled. A vortex-lattice method was applied which gave good predictions of lift and pitching moment up to an angle of attack of about 20 deg, where vortex-breakdown effects on performance become significant. Pitch-control inputs generally retain full effectiveness up to the angle of attack of maximum lift, beyond which, effectiveness drops off rapidly. A high-angle-of-attack prediction method gives good estimates of lift and drag for the completely stalled aircraft. Roll asymmetry observed at zero sideslip is apparently caused by an asymmetry in the model support structure.
A flamelet model for transcritical LOx/GCH4 flames
NASA Astrophysics Data System (ADS)
Müller, Hagen; Pfitzner, Michael
2017-03-01
This work presents a numerical framework to efficiently simulate methane combustion at supercritical pressures. An LES flamelet approach is adapted to account for real-gas thermodynamic effects, which are a prominent feature of flames at near-critical injection conditions. The thermodynamics model is based on the Peng-Robinson equation of state (PR-EoS) in conjunction with a novel volume-translation method to correct deficiencies in the transcritical regime. The resulting formulation is more accurate than standard cubic EoSs without sacrificing their good computational performance. To consistently account for pressure and strain fluctuations in the flamelet model, an additional enthalpy equation is solved along with the transport equations for mixture fraction and mixture fraction variance. The method is validated against available experimental data for a laboratory-scale LOx/GCH4 flame at conditions that resemble those in liquid-propellant rocket engines. The LES result is in good agreement with the measured OH* radiation.
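The Peng-Robinson EoS underlying the thermodynamics model can be sketched as a cubic in the compressibility factor Z, solved here by Newton iteration from the ideal-gas guess. The critical constants below are for methane, and the paper's volume-translation correction is not included:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

# Hedged sketch of the standard Peng-Robinson EoS (no volume translation).
def pr_Z(T, P, Tc=190.56, Pc=4.599e6, omega=0.011):
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha   # attraction parameter
    b = 0.07780 * R * Tc / Pc                 # co-volume
    A = a * P / (R * T)**2
    B = b * P / (R * T)
    Z = 1.0                                    # ideal-gas starting guess
    for _ in range(50):                        # Newton iteration on the cubic
        f = Z**3 - (1 - B) * Z**2 + (A - 3*B**2 - 2*B) * Z - (A*B - B**2 - B**3)
        df = 3 * Z**2 - 2 * (1 - B) * Z + (A - 3*B**2 - 2*B)
        Z -= f / df
    return Z
```

At transcritical conditions the cubic has multiple real roots and root selection matters, which is part of what the paper's volume-translation method addresses.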
Prediction of pelvic organ prolapse using an artificial neural network.
Robinson, Christopher J; Swift, Steven; Johnson, Donna D; Almeida, Jonas S
2008-08-01
The objective of this investigation was to test the ability of a feedforward artificial neural network (ANN) to differentiate patients who have pelvic organ prolapse (POP) from those who retain good pelvic organ support. Following institutional review board approval, patients with POP (n = 87) and controls with good pelvic organ support (n = 368) were identified from the urogynecology research database. Historical and clinical information was extracted from the database. Data analysis included the training of a feedforward ANN, variable selection, and external validation of the model with an independent data set. Twenty variables were used. The median-performing ANN model used a median of 3 (quartile 1: 3 to quartile 3: 5) variables and achieved an area under the receiver operating characteristic curve of 0.90 on the external, independent validation set. Ninety percent sensitivity and 83% specificity were obtained in the external validation by ANN classification. Feedforward ANN modeling is applicable to the identification and prediction of POP.
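The reported area under the ROC curve can be computed from classifier scores via the rank-sum (Mann-Whitney) identity, AUC = P(score_case > score_control), with ties counted as one half; the scores below are illustrative, not the study's data:

```python
# Hedged sketch of AUC via pairwise comparisons between case and
# control scores (the Mann-Whitney identity).
def auc(case_scores, control_scores):
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0        # case ranked above control
            elif c == k:
                wins += 0.5        # ties split evenly
    return wins / (len(case_scores) * len(control_scores))

# a classifier separating cases (POP) from controls fairly well
a = auc([0.9, 0.8, 0.6], [0.7, 0.3, 0.2])
```

An AUC of 0.5 corresponds to chance ranking and 1.0 to perfect separation, which is the scale on which the study's 0.90 should be read.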
Slushy weightings for the optimal pilot model. [considering visual tracking task
NASA Technical Reports Server (NTRS)
Dillow, J. D.; Picha, D. G.; Anderson, R. O.
1975-01-01
A pilot model is described which accounts for the effect of motion cues in a well-defined visual tracking task. The effects of visual and motion cues are accounted for in the model in two ways. First, the observation matrix in the pilot model is structured to account for the visual and motion inputs presented to the pilot. Second, the weightings in the quadratic cost function associated with the pilot model are modified to account for the pilot's perception of the variables he considers important in the task. Analytic results obtained using the pilot model are compared to experimental results and, in general, good agreement is demonstrated. The analytic model yields small improvements in tracking performance with the addition of motion cues for easily controlled task dynamics and large improvements in tracking performance with the addition of motion cues for difficult task dynamics.
Schaefferkoetter, Joshua; Casey, Michael; Townsend, David; Fakhri, Georges El
2013-01-01
Time-of-flight (TOF) and point spread function (PSF) modeling have been shown to improve PET reconstructions, but the impact on physicians in the clinical setting has not been thoroughly investigated. A lesion detection and localization study was performed using simulated lesions in real patient images. Four reconstruction schemes were considered: ordinary Poisson OSEM (OP) alone and combined with TOF, PSF, and TOF+PSF. The images were presented to physicians experienced in reading PET images, and the performance of each was quantified using localization receiver operating characteristic (LROC). Numerical observers (non-prewhitening and Hotelling) were used to identify optimal reconstruction parameters, and observer SNR was compared to the performance of the physicians. The numerical models showed good agreement with human performance, and best performance was achieved by both when using TOF+PSF. These findings suggest a large potential benefit of TOF+PSF for oncology PET studies, especially in the detection of small, low-intensity, focal disease in larger patients. PMID:23403399
NASA Astrophysics Data System (ADS)
Hosseinalipour, S. M.; Raja, A.; Hajikhani, S.
2012-06-01
A full three-dimensional Navier-Stokes numerical simulation has been performed for the performance analysis of a Kaplan turbine installed in one of Iran's southern dams. No simplifications have been imposed in the simulation. The numerical results have been evaluated using integral parameters such as the turbine efficiency, by comparing the results with existing experimental data from the prototype Hill chart. In part of this study, the numerical simulations were performed in order to calculate the prototype turbine efficiencies at some specific points, obtained by scaling up the model efficiencies available in the experimental model Hill chart. The results are very promising and demonstrate the ability of numerical techniques to resolve the flow characteristics in this kind of complex geometry. A parametric study evaluating the turbine performance at three different runner angles of the prototype is also performed, and the results are reported in this paper.
NASA Astrophysics Data System (ADS)
Schaefferkoetter, Joshua; Casey, Michael; Townsend, David; El Fakhri, Georges
2013-03-01
Time-of-flight (TOF) and point spread function (PSF) modeling have been shown to improve PET reconstructions, but the impact on physicians in the clinical setting has not been thoroughly investigated. A lesion detection and localization study was performed using simulated lesions in real patient images. Four reconstruction schemes were considered: ordinary Poisson OSEM (OP) alone and combined with TOF, PSF, and TOF + PSF. The images were presented to physicians experienced in reading PET images, and the performance of each was quantified using localization receiver operating characteristic. Numerical observers (non-prewhitening and Hotelling) were used to identify optimal reconstruction parameters, and observer SNR was compared to the performance of the physicians. The numerical models showed good agreement with human performance, and best performance was achieved by both when using TOF + PSF. These findings suggest a large potential benefit of TOF + PSF for oncology PET studies, especially in the detection of small, low-intensity, focal disease in larger patients.
Khan, Taimoor; De, Asok
2014-01-01
In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural network model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving the desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in the neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporating prior knowledge can be used for predicting the response in the extrapolation region beyond the training patterns. For validation, a prototype is also fabricated and its performance parameters are measured. A very good agreement is attained between measured, simulated, and predicted results.
Weather model performance on extreme rainfall events simulation's over Western Iberian Peninsula
NASA Astrophysics Data System (ADS)
Pereira, S. C.; Carvalho, A. C.; Ferreira, J.; Nunes, J. P.; Kaiser, J. J.; Rocha, A.
2012-08-01
This study evaluates the performance of the WRF-ARW numerical weather model in simulating the spatial and temporal patterns of an extreme rainfall period over a complex orographic region in north-central Portugal. The analysis was performed for December 2009, during the Portugal Mainland rainy season. The heavy to extreme rainfall periods were due to several low surface pressure systems associated with frontal surfaces. The total amount of precipitation for December exceeded the climatological mean for the 1971-2000 period by +89 mm on average, varying from 190 mm (southern part of the country) to 1175 mm (northern part of the country). Three model runs were conducted to assess possible improvements in model performance: (1) the WRF-ARW is forced with the initial fields from a global domain model (RunRef); (2) data assimilation for a specific location (RunObsN) is included; (3) nudging is used to adjust the analysis field (RunGridN). Model performance was evaluated against an observed hourly precipitation dataset of 15 rainfall stations using several statistical parameters. The WRF-ARW model reproduced the temporal rainfall patterns well but tended to overestimate precipitation amounts. The RunGridN simulation provided the best results, but the model performance of the other two runs was also good, so that the selected extreme rainfall episode was successfully reproduced.
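Station-wise evaluation statistics of the kind used here reduce to simple aggregates over paired model and observed series; the values below are illustrative, with the model run overestimating as reported:

```python
import math

# Hedged sketch of two common model-evaluation statistics.
def mean_bias(model, obs):
    """Mean of (model - obs); positive values indicate overestimation."""
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    """Root mean square error of paired model/observed values."""
    return math.sqrt(sum((m - o)**2 for m, o in zip(model, obs)) / len(obs))

obs   = [0.0, 2.0, 5.0, 1.0]   # illustrative hourly rainfall at one station, mm/h
model = [0.5, 3.0, 6.0, 1.5]   # a run that overestimates, as reported
```

Computing such statistics per station and per run is what allows ranking RunRef, RunObsN, and RunGridN against the gauge network.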
A Lagrangian mixing frequency model for transported PDF modeling
NASA Astrophysics Data System (ADS)
Turkeri, Hasret; Zhao, Xinyu
2017-11-01
In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipation rates of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing optimal model constants when using conventional mixing frequency models. The model is implemented in combination with the interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and a turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver, an LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.
Vaporization and Zonal Mixing in Performance Modeling of Advanced LOX-Methane Rockets
NASA Technical Reports Server (NTRS)
Williams, George J., Jr.; Stiegemeier, Benjamin R.
2013-01-01
Initial modeling of LOX-Methane reaction control engine (RCE) 100 lbf thrusters and larger, 5500 lbf thrusters with the TDK/VIPER code has shown good agreement with sea-level and altitude test data. However, the vaporization and zonal mixing upstream of the compressible-flow stage of the models leveraged empirical trends to match the sea-level data. This was necessary in part because the codes are designed primarily to handle the compressible part of the flow (i.e., contraction through expansion) and in part because there was limited data on the thrusters themselves on which to base a rigorous model. A more rigorous model has been developed which includes detailed vaporization trends based on element type and geometry, radial variations in mixture ratio within each of the "zones" associated with elements and not just between zones of different element types, and, to the extent possible, updated kinetic rates. The Spray Combustion Analysis Program (SCAP) was leveraged to support assumptions in the vaporization trends. Data from both thrusters are revisited, and the model maintains good predictive capability while addressing some of the major limitations of the previous version.
Reflexion on linear regression trip production modelling method for ensuring good model quality
NASA Astrophysics Data System (ADS)
Suprayitno, Hitapriya; Ratnasari, Vita
2017-11-01
Transport modelling is important. For certain cases the conventional model still has to be used, in which a good trip production model is essential. A good model can only be obtained from a good sample. Two basic principles of good sampling are that the sample must be able to represent the population characteristics and must produce an acceptable error at a certain confidence level. These principles do not yet seem to be well understood or applied in trip production modelling. It is therefore necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method that ensures model quality. The results are as follows. Statistics provides a method for calculating the span of a predicted value at a certain confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R2 as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample can already give an excellent R2 value and that the sample composition can significantly change the model. Hence a good R2 value does not always mean good model quality. These findings lead to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. The quality measure is defined as having both a good R2 value and a good confidence interval of the predicted value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must use random, well-distributed stratified sampling with a certain minimum number of samples. These three ideas need further development and testing.
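The confidence interval of a predicted value for simple linear regression, proposed above as a quality measure alongside R2, can be sketched with the standard textbook formula. This is stdlib-only; the normal quantile is used as a large-sample approximation to the t quantile, and the trip-production data are made up.

```python
from statistics import NormalDist

def ols_prediction_interval(xs, ys, x_new, conf=0.95):
    """Fit y = a + b*x by least squares and return the interval expected
    to contain a new observation at x_new with the given confidence
    (normal approximation to the t quantile; fine for moderate n)."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    a = ybar - b * xbar
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    s2 = sum(r * r for r in resid) / (n - 2)        # residual variance
    se = (s2 * (1 + 1 / n + (x_new - xbar) ** 2 / sxx)) ** 0.5
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    yhat = a + b * x_new
    return yhat - z * se, yhat + z * se

# hypothetical data: trips against a zone attribute, true line y = 2x + 1
xs = list(range(20))
ys = [2.0 * x + 1.0 + ((-1) ** x) * 0.5 for x in xs]
lo, hi = ols_prediction_interval(xs, ys, x_new=10.0)
```

The width of (lo, hi) is the quantity the abstract argues should complement R2: it grows with residual scatter, shrinks with sample size, and widens as x_new moves away from the sample mean, so a model with excellent R2 from a small or ill-composed sample still shows a wide, honest interval.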
Evaluation of 3D-Jury on CASP7 models.
Kaján, László; Rychlewski, Leszek
2007-08-21
3D-Jury, the structure prediction consensus method publicly available in the Meta Server http://meta.bioinfo.pl/, was evaluated using models gathered in the 7th round of the Critical Assessment of Techniques for Protein Structure Prediction (CASP7). 3D-Jury is an automated expert process that generates protein structure meta-predictions from sets of models obtained from partner servers. The performance of 3D-Jury was analysed for three aspects. First, we examined the correlation between the 3D-Jury score and a model quality measure: the number of correctly predicted residues. The 3D-Jury score was shown to correlate significantly with the number of correctly predicted residues; the correlation is good enough to be used for prediction. 3D-Jury was also found to improve upon the competing servers' choice of the best structure model in most cases. The value of the 3D-Jury score as a generic reliability measure was also examined. We found that the 3D-Jury score separates bad models from good models better than the reliability score of the original server in 27 cases and falls short of it in only 5 cases out of a total of 38. We report the release of a new Meta Server feature: instant 3D-Jury scoring of uploaded user models. The 3D-Jury score continues to be a good indicator of structural model quality. It also provides a generic reliability score, especially important for models that were not assigned one by the original server. Individual structure modellers can also benefit from the 3D-Jury scoring system by testing their models in the new instant scoring feature http://meta.bioinfo.pl/compare_your_model_example.pl available in the Meta Server.
Magretta, Joan
2002-05-01
"Business model" was one of the great buzz-words of the Internet boom. A company didn't need a strategy, a special competence, or even any customers--all it needed was a Web-based business model that promised wild profits in some distant, ill-defined future. Many people--investors, entrepreneurs, and executives alike--fell for the fantasy and got burned. And as the inevitable counterreaction played out, the concept of the business model fell out of fashion nearly as quickly as the .com appendage itself. That's a shame. As Joan Magretta explains, a good business model remains essential to every successful organization, whether it's a new venture or an established player. To help managers apply the concept successfully, she defines what a business model is and how it complements a smart competitive strategy. Business models are, at heart, stories that explain how enterprises work. Like a good story, a robust business model contains precisely delineated characters, plausible motivations, and a plot that turns on an insight about value. It answers certain questions: Who is the customer? How do we make money? What underlying economic logic explains how we can deliver value to customers at an appropriate cost? Every viable organization is built on a sound business model, but a business model isn't a strategy, even though many people use the terms interchangeably. Business models describe, as a system, how the pieces of a business fit together. But they don't factor in one critical dimension of performance: competition. That's the job of strategy. Illustrated with examples from companies like American Express, EuroDisney, WalMart, and Dell Computer, this article clarifies the concepts of business models and strategy, which are fundamental to every company's performance.
NASA Astrophysics Data System (ADS)
Jeziorska, Justyna; Niedzielski, Tomasz
2018-03-01
River basins located in the Central Sudetes (SW Poland) demonstrate a high vulnerability to flooding. Four mountainous basins and the corresponding outlets have been chosen for modeling the streamflow dynamics using TOPMODEL, a physically based semi-distributed topohydrological model. The model has been calibrated using the Monte Carlo approach, with discharge, rainfall, and evapotranspiration data used to estimate the parameters. The overall performance of the model was judged by interpreting the efficiency measures. TOPMODEL was able to reproduce the main pattern of the hydrograph with acceptable accuracy for two of the investigated catchments. However, it failed to simulate the hydrological response in the remaining two catchments. The best performing data set obtained a Nash-Sutcliffe efficiency of 0.78. This data set was chosen for a detailed analysis aiming to estimate the optimal timespan of input data for which TOPMODEL performs best. The best fit was attained for the half-year time span. The model was validated and found to show good skill.
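The Nash-Sutcliffe efficiency used above to judge TOPMODEL's performance is a one-line statistic; a minimal sketch with made-up observed and simulated discharge series:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / total variance of the
    observations. NSE = 1 is a perfect fit; NSE <= 0 means the model
    predicts no better than the mean of the observations."""
    obar = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - obar) ** 2 for o in observed)
    return 1.0 - sse / sst

# hypothetical discharge values (e.g. m^3/s)
obs = [1.0, 3.0, 5.0, 4.0, 2.0]
sim = [1.2, 2.8, 4.9, 4.2, 2.1]
nse = nash_sutcliffe(obs, sim)
```

A value of 0.78, as reported for the best data set, thus means the simulation removes 78% of the squared-error variance relative to simply predicting the mean discharge.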
Study on the CFD simulation of refrigerated container
NASA Astrophysics Data System (ADS)
Arif Budiyanto, Muhammad; Shinoda, Takeshi; Nasruddin
2017-10-01
The objective of this study is to perform a Computational Fluid Dynamics (CFD) simulation of a refrigerated container in a container port. A refrigerated container is a thermal cargo container constructed with insulated walls to carry perishable goods. The CFD simulation was carried out on a cross-section of the container walls to predict the surface temperatures of the refrigerated container and to estimate its cooling load. The simulation model is based on the solution of the partial differential equations governing the fluid flow and heat transfer processes. The physical heat-transfer processes considered in this simulation consist of solar radiation from the sun, heat conduction in the container walls, heat convection at the container surfaces, and thermal radiation among the solid surfaces. The simulation model was validated using surface temperatures at the center points of each container wall obtained from measurements in a previous experimental study. The results show that the surface temperatures of the simulation model are in good agreement with the measurement data on all container walls.
Amagasa, Takashi; Nakayama, Takeo
2013-08-01
To clarify how long working hours affect the likelihood of current and future depression. Using data from four repeated measurements collected from 218 clerical workers, four models associating work-related factors to the depressive mood scale were established. The final model was constructed after comparing and testing the goodness-of-fit index using structural equation modeling. Multiple logistic regression analysis was also performed. The final model showed the best fit (normed fit index = 0.908; goodness-of-fit index = 0.936; root-mean-square error of approximation = 0.018). Its standardized total effect indicated that long working hours affected depression at the time of evaluation and 1 to 3 years later. The odds ratio for depression risk was 14.7 in employees who were not long-hours overworked according to the initial survey but who were long-hours overworked according to the second survey. Long working hours increase current and future risks of depression.
Benoit, Gaëlle; Heinkélé, Christophe; Gourdon, Emmanuel
2013-12-01
This paper deals with a numerical procedure to identify the acoustical parameters of road pavement from surface impedance measurements. This procedure comprises three steps. First, a suitable equivalent fluid model for the acoustical properties porous media is chosen, the variation ranges for the model parameters are set, and a sensitivity analysis for this model is performed. Second, this model is used in the parameter inversion process, which is performed with simulated annealing in a selected frequency range. Third, the sensitivity analysis and inversion process are repeated to estimate each parameter in turn. This approach is tested on data obtained for porous bituminous concrete and using the Zwikker and Kosten equivalent fluid model. This work provides a good foundation for the development of non-destructive in situ methods for the acoustical characterization of road pavements.
Evaluating and Optimizing Online Advertising: Forget the Click, but There Are Good Proxies.
Dalessandro, Brian; Hook, Rod; Perlich, Claudia; Provost, Foster
2015-06-01
Online systems promise to improve advertisement targeting via the massive and detailed data available. However, there is often too little data on exactly the outcome of interest, such as purchases, for accurate campaign evaluation and optimization (due to low conversion rates, cold start periods, lack of instrumentation of offline purchases, and long purchase cycles). This paper presents a detailed treatment of proxy modeling, which is based on the identification of a suitable alternative (proxy) target variable when data on the true objective is in short supply (or even completely nonexistent). The paper has a two-fold contribution. First, the potential of proxy modeling is demonstrated clearly, based on a massive-scale experiment across 58 real online advertising campaigns. Second, we assess the value of different specific proxies for evaluating and optimizing online display advertising, showing striking results. The results include bad news and good news. The most commonly cited and used proxy is a click on an ad. The bad news is that across a large number of campaigns, clicks are not good proxies for evaluation or for optimization: clickers do not resemble buyers. The good news is that an alternative sort of proxy performs remarkably well: observed visits to the brand's website. Specifically, predictive models built based on brand site visits (which are much more common than purchases) do a remarkably good job of predicting which browsers will make a purchase. The practical bottom line: evaluating and optimizing campaigns using clicks seems wrongheaded; however, there is an easy and attractive alternative - use a well-chosen site-visit proxy instead.
Ion thruster performance model
NASA Technical Reports Server (NTRS)
Brophy, J. R.
1984-01-01
A model of ion thruster performance is developed for high flux density, cusped magnetic field thruster designs. This model is formulated in terms of the average energy required to produce an ion in the discharge chamber plasma and the fraction of these ions that are extracted to form the beam. The direct loss of high energy (primary) electrons from the plasma to the anode is shown to have a major effect on thruster performance. The model provides simple algebraic equations enabling one to calculate the beam ion energy cost, the average discharge chamber plasma ion energy cost, the primary electron density, the primary-to-Maxwellian electron density ratio and the Maxwellian electron temperature. Experiments indicate that the model correctly predicts the variation in plasma ion energy cost for changes in propellant gas (Ar, Kr and Xe), grid transparency to neutral atoms, beam extraction area, discharge voltage, and discharge chamber wall temperature. The model and experiments indicate that thruster performance may be described in terms of only four thruster configuration dependent parameters and two operating parameters. The model also suggests that improved performance should be exhibited by thruster designs which extract a large fraction of the ions produced in the discharge chamber, which have good primary electron and neutral atom containment and which operate at high propellant flow rates.
Simulation-based performance analysis of EC-Earth 3.2.0 using Dimemas
NASA Astrophysics Data System (ADS)
Yepes Arbós, Xavier; César Acosta Cobos, Mario; Serradell Maronda, Kim; Sanchez Lorente, Alicia; Doblas Reyes, Francisco Javier
2017-04-01
Earth System Models (ESMs) are complex applications executed in supercomputing facilities due to their high demand for computing resources. However, not all these models make good use of those resources, and their energy efficiency can be well below an acceptable minimum. One example is EC-Earth, a global coupled climate model which integrates different component models to simulate the Earth system. The two main components used in this analysis are IFS as the atmospheric model and NEMO as the ocean model, coupled via the OASIS3-MCT coupler. Preliminary results showed that EC-Earth does not have good computational performance. For example, the model using the T255L91 grid with 512 MPI processes for IFS and the ORCA1L75 grid with 128 MPI processes for NEMO achieves a speedup of 40.3, which means that 81.2% of the resources are wasted. A performance analysis is therefore necessary to find the bottlenecks of the model and thus determine the most appropriate optimization techniques. Traces of the model collected with profiling tools such as Extrae, Paraver and Dimemas allow us to simulate the model's behaviour on a configurable parallel platform and extrapolate the impact of hardware changes on the performance of EC-Earth. In this document we propose a state-of-the-art procedure which makes it possible to evaluate the different characteristics of climate models in a very efficient way. Accordingly, the performance of EC-Earth has been examined in different scenarios: assuming an ideal machine, model sensitivity, and the limiting model due to coupling. By simulating these scenarios, we found that each model has different characteristics. With the ideal machine, we have seen that there are several sources of inefficiency: about 20.59% of the execution time is communication, and there are workload imbalances produced by data dependences both between IFS and NEMO and within each model.
In addition, in the model sensitivity simulations, we have characterized the types of messages and detected data dependencies. In IFS, we have observed that latency affects the coupling between models, due to a large number of small communications, whereas bandwidth affects another region of the code with a few big messages. In NEMO, the results show that the simulated latencies and bandwidths only slightly affect its execution time; however, it has data dependencies resolved inefficiently and workload imbalances. The last simulation, performed to detect the slower model due to coupling, revealed that IFS is slower than NEMO. Moreover, there is not enough bandwidth to transfer all the data in IFS, whereas in NEMO there is almost no contention. This study is useful for improving the computational efficiency of the model, adapting it to support ultra-high resolution (UHR) experiments and future exascale supercomputers, and helping code developers design new algorithms that are more machine-independent.
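The resource-waste figure quoted for EC-Earth follows from comparing measured speedup against ideal scaling. A minimal sketch of that arithmetic (the reference-run size `n_ref` is an assumption of this example, not stated in the abstract; the paper's 81.2% figure implies a reference run on more than one process, so the numbers below are purely illustrative):

```python
def parallel_efficiency(speedup, n_procs, n_ref=1):
    """Parallel efficiency relative to ideal scaling from a reference
    run on n_ref processes: ideal speedup would be n_procs / n_ref."""
    ideal = n_procs / n_ref
    return speedup / ideal

# illustrative numbers: 512 (IFS) + 128 (NEMO) = 640 MPI processes,
# measured speedup 40.3, efficiency taken against a single-process baseline
eff = parallel_efficiency(speedup=40.3, n_procs=640, n_ref=1)
wasted = 1.0 - eff
```

The wasted fraction is simply one minus the efficiency; changing `n_ref` to the actual baseline run size rescales the result, which is why the choice of reference matters when quoting such percentages.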
A nonparametric smoothing method for assessing GEE models with longitudinal binary data.
Lin, Kuo-Chin; Chen, Yi-Ju; Shyr, Yu
2008-09-30
Studies involving longitudinal binary responses are widely applied in health and biomedical sciences research and frequently analyzed by the generalized estimating equations (GEE) method. This article proposes an alternative goodness-of-fit test based on the nonparametric smoothing approach for assessing the adequacy of GEE fitted models, which can be regarded as an extension of the goodness-of-fit test of le Cessie and van Houwelingen (Biometrics 1991; 47:1267-1282). The expectation and approximate variance of the proposed test statistic are derived. The asymptotic distribution of the proposed test statistic in terms of a scaled chi-squared distribution and the power performance of the proposed test are discussed via simulation studies. The testing procedure is demonstrated on two real data sets. Copyright (c) 2008 John Wiley & Sons, Ltd.
Estimation of stochastic volatility by using Ornstein-Uhlenbeck type models
NASA Astrophysics Data System (ADS)
Mariani, Maria C.; Bhuiyan, Md Al Masum; Tweneboah, Osei K.
2018-02-01
In this study, we develop a technique for estimating the stochastic volatility (SV) of a financial time series by using Ornstein-Uhlenbeck type models. Using the daily closing prices from developed and emergent stock markets, we conclude that the incorporation of stochastic volatility into the time-varying parameter estimation significantly improves the forecasting performance via Maximum Likelihood Estimation. Furthermore, our estimation algorithm is feasible with large data sets and has good convergence properties.
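A common way to fit Ornstein-Uhlenbeck parameters to a series is least squares on the exact AR(1) discretization; the sketch below is a generic estimator of this kind, not necessarily the paper's algorithm, and it runs on simulated rather than market data.

```python
import math
import random

def simulate_ou(theta, mu, sigma, dt, n, x0=0.0, seed=42):
    """Sample an Ornstein-Uhlenbeck path dX = theta*(mu - X) dt + sigma dW
    using its exact discretization (no Euler error)."""
    rng = random.Random(seed)
    rho = math.exp(-theta * dt)
    sd = sigma * math.sqrt((1.0 - rho * rho) / (2.0 * theta))
    xs = [x0]
    for _ in range(n):
        xs.append(mu + rho * (xs[-1] - mu) + sd * rng.gauss(0.0, 1.0))
    return xs

def estimate_ou(xs, dt):
    """Recover (theta, mu) by least squares on the AR(1) form
    X_{k+1} = mu*(1 - rho) + rho*X_k + noise, rho = exp(-theta*dt)."""
    x, y = xs[:-1], xs[1:]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    rho = (sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
           / sum((a - xbar) ** 2 for a in x))
    theta = -math.log(rho) / dt
    mu = (ybar - rho * xbar) / (1.0 - rho)
    return theta, mu

xs = simulate_ou(theta=2.0, mu=1.0, sigma=0.3, dt=0.01, n=20000)
theta_hat, mu_hat = estimate_ou(xs, dt=0.01)
```

Because the discretization is exact, this regression coincides with the conditional maximum likelihood estimator for theta and mu; for an SV model the same recursion would be applied to log-volatility rather than to the price itself.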
Load Measurement in Structural Members Using Guided Acoustic Waves
NASA Astrophysics Data System (ADS)
Chen, Feng; Wilcox, Paul D.
2006-03-01
A non-destructive technique to measure load in structures such as rails and bridge cables by using guided acoustic waves is investigated both theoretically and experimentally. Robust finite element models for predicting the effect of load on guided wave propagation are developed and example results are presented for rods. Reasonably good agreement of experimental results with modelling prediction is obtained. The measurement technique has been developed to perform tests on larger specimens.
Optimization of CW Fiber Lasers With Strong Nonlinear Cavity Dynamics
NASA Astrophysics Data System (ADS)
Shtyrina, O. V.; Efremov, S. A.; Yarutkina, I. A.; Skidin, A. S.; Fedoruk, M. P.
2018-04-01
In the present work, the equation for the saturated gain is derived from one-level gain equations describing the energy evolution inside the laser cavity. It is shown how to derive the parameters of the mathematical model from experimental results. The numerically estimated energy and spectrum of the signal are in good agreement with the experiment. The optimization of the output energy is also performed for a given set of model parameters.
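The classic reduction of one-level gain equations is the homogeneously saturated gain g = g0 / (1 + E/Esat). The paper derives its own expression, so the round-trip map below is only an assumed illustrative form (all parameter values are hypothetical) showing how the intracavity energy settles where saturated gain balances loss:

```python
import math

def round_trip(energy, g0, e_sat, loss):
    """One cavity round trip: amplify by the homogeneously saturated
    gain g = g0 / (1 + E/Esat), then apply a fixed linear loss."""
    gain = g0 / (1.0 + energy / e_sat)
    return energy * math.exp(gain - loss)

e = 1e-6                                   # start from noise
for _ in range(2000):
    e = round_trip(e, g0=2.0, e_sat=1.0, loss=0.5)
# steady state where gain equals loss: g0/(1 + E/Esat) = loss
# => E = Esat * (g0/loss - 1) = 3.0 for these parameters
```

Iterating the map to its fixed point is the discrete analogue of the energy-evolution equations the abstract refers to, and sweeping the cavity parameters of such a map is one simple way to optimize the output energy.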
Teaching project: a low-cost swine model for chest tube insertion training.
Netto, Fernando Antonio Campelo Spencer; Sommer, Camila Garcia; Constantino, Michael de Mello; Cardoso, Michel; Cipriani, Raphael Flávio Fachini; Pereira, Renan Augusto
2016-02-01
To describe and evaluate the acceptance of a low-cost chest tube insertion porcine model in a medical education project in the southwest of Paraná, Brazil. We developed a low-cost, low-technology porcine model for teaching chest tube insertion and used it in a teaching project. Medical trainees - students and residents - received theoretical instructions about the procedure and performed thoracic drainage in this porcine model. After performing the procedure, the participants filled in a feedback questionnaire about the proposed experimental model. This study presents the model and analyzes the questionnaire responses. Seventy-nine medical trainees used and evaluated the model. The anatomical correlation between the porcine model and human anatomy was considered high and averaged 8.1±1.0 among trainees. All study participants approved the low-cost porcine model for chest tube insertion. The presented low-cost porcine model for chest tube insertion training was feasible and had good acceptability among trainees. This model has potential use as a teaching tool in medical education.
NASA Astrophysics Data System (ADS)
Munusami, Ravindiran; Yakkala, Bhaskar Rao; Prabhakar, Shankar
2013-12-01
Magnetic tunnel junctions were made by inserting magnetic materials between the source, channel and drain of a High Electron Mobility Transistor (HEMT) to enhance its performance. The Material Studio software package was used to design the superlattice layers. Different cases were analyzed to optimize the performance of the device by placing the magnetic material at different positions in the device. Simulation results based on conductivity reveal that the device has very good electron transport due to the magnetic materials and will amplify very low frequency signals.
Nabawy, Mostafa R. A.; Crowther, William J.
2014-01-01
This paper introduces a generic, transparent and compact model for the evaluation of the aerodynamic performance of insect-like flapping wings in hovering flight. The model is generic in that it can be applied to wings of arbitrary morphology and kinematics without the use of experimental data, is transparent in that the aerodynamic components of the model are linked directly to morphology and kinematics via physical relationships and is compact in the sense that it can be efficiently evaluated for use within a design optimization environment. An important aspect of the model is the method by which translational force coefficients for the aerodynamic model are obtained from first principles; however important insights are also provided for the morphological and kinematic treatments that improve the clarity and efficiency of the overall model. A thorough analysis of the leading-edge suction analogy model is provided and comparison of the aerodynamic model with results from application of the leading-edge suction analogy shows good agreement. The full model is evaluated against experimental data for revolving wings and good agreement is obtained for lift and drag up to 90° incidence. Comparison of the model output with data from computational fluid dynamics studies on a range of different insect species also shows good agreement with predicted weight support ratio and specific power. The validated model is used to evaluate the relative impact of different contributors to the induced power factor for the hoverfly and fruitfly. It is shown that the assumption of an ideal induced power factor (k = 1) for a normal hovering hoverfly leads to a 23% overestimation of the generated force owing to flapping. PMID:24554578
Evaluation of 3D-Jury on CASP7 models
Kaján, László; Rychlewski, Leszek
2007-01-01
Background 3D-Jury, the structure prediction consensus method publicly available in the Meta Server, was evaluated using models gathered in the 7th round of the Critical Assessment of Techniques for Protein Structure Prediction (CASP7). 3D-Jury is an automated expert process that generates protein structure meta-predictions from sets of models obtained from partner servers. Results The performance of 3D-Jury was analysed for three aspects. First, we examined the correlation between the 3D-Jury score and a model quality measure: the number of correctly predicted residues. The 3D-Jury score was shown to correlate significantly with the number of correctly predicted residues; the correlation is good enough to be used for prediction. 3D-Jury was also found to improve upon the competing servers' choice of the best structure model in most cases. The value of the 3D-Jury score as a generic reliability measure was also examined. We found that the 3D-Jury score separates bad models from good models better than the reliability score of the original server in 27 cases and falls short of it in only 5 cases out of a total of 38. We report the release of a new Meta Server feature: instant 3D-Jury scoring of uploaded user models. Conclusion The 3D-Jury score continues to be a good indicator of structural model quality. It also provides a generic reliability score, especially important for models that were not assigned one by the original server. Individual structure modellers can also benefit from the 3D-Jury scoring system by testing their models in the new instant scoring feature available in the Meta Server. PMID:17711571
Nabawy, Mostafa R A; Crowther, William J
2014-05-06
This paper introduces a generic, transparent and compact model for the evaluation of the aerodynamic performance of insect-like flapping wings in hovering flight. The model is generic in that it can be applied to wings of arbitrary morphology and kinematics without the use of experimental data, is transparent in that the aerodynamic components of the model are linked directly to morphology and kinematics via physical relationships and is compact in the sense that it can be efficiently evaluated for use within a design optimization environment. An important aspect of the model is the method by which translational force coefficients for the aerodynamic model are obtained from first principles; however important insights are also provided for the morphological and kinematic treatments that improve the clarity and efficiency of the overall model. A thorough analysis of the leading-edge suction analogy model is provided and comparison of the aerodynamic model with results from application of the leading-edge suction analogy shows good agreement. The full model is evaluated against experimental data for revolving wings and good agreement is obtained for lift and drag up to 90° incidence. Comparison of the model output with data from computational fluid dynamics studies on a range of different insect species also shows good agreement with predicted weight support ratio and specific power. The validated model is used to evaluate the relative impact of different contributors to the induced power factor for the hoverfly and fruitfly. It is shown that the assumption of an ideal induced power factor (k = 1) for a normal hovering hoverfly leads to a 23% overestimation of the generated force owing to flapping.
Do we need animal hands-on courses for transplantation surgery?
Golriz, Mohammad; Hafezi, Mohammadreza; Garoussi, Camelia; Fard, Nassim; Arvin, Jalal; Fonouni, Hamidreza; Nickkholgh, Arash; Kulu, Yakob; Frongia, Giovani; Schemmer, Peter; Mehrabi, Arianeb
2013-01-01
Transplantation surgery requires many years of training. This study evaluates and presents the results of our recent four-year animal hands-on courses of transplantation surgery on participants' training. Since 2008, five two-day hands-on courses of transplantation surgery were performed on swine models at our department. Sixty-one participants were asked to answer three questionnaires (pre-course, immediate post-course, subsequent post-course). The questions pertained to their past education, expectations, and evaluation of our courses, as well as our courses' effectiveness in advancing their surgical abilities. The results were analyzed, compared and are presented herein. On average, 1.8 multiorgan procurements, 2.3 kidney, 1.5 liver, and 0.7 pancreas transplantations were performed by each participant. 41.7% of participants considered their previous practical training only satisfactory; 85% hoped for more opportunities to practice surgery; 73.3% evaluated our courses as very good; and 95.8% believed that our courses had fulfilled their expectations. 66% found the effectiveness of our course in advancing their surgical abilities very good; 30% good, and 4% satisfactory. Animal hands-on courses of transplantation surgery are one of the best options to learn and practice different operations and techniques in a near-to-clinical simulated model. Regular participation in such courses with a focus on practical issues can provide optimal opportunities for trainees, with the advantage of direct mentoring and feedback. © 2013 John Wiley & Sons A/S.
Stage-discharge relationship in tidal channels
NASA Astrophysics Data System (ADS)
Kearney, W. S.; Mariotti, G.; Deegan, L.; Fagherazzi, S.
2016-12-01
Long-term records of the flow of water through tidal channels are essential to constrain the budgets of sediments and biogeochemical compounds in salt marshes. Statistical models which relate discharge to water level allow the estimation of such records from more easily obtained records of water stage in the channel. While there is clearly structure in the stage-discharge relationship, nonlinearity and nonstationarity of the relationship complicates the construction of statistical stage-discharge models with adequate performance for discharge estimation and uncertainty quantification. Here we compare four different types of stage-discharge models, each of which is designed to capture different characteristics of the stage-discharge relationship. We estimate and validate each of these models on a two-month long time series of stage and discharge obtained with an Acoustic Doppler Current Profiler in a salt marsh channel. We find that the best performance is obtained by models which account for the nonlinear and time-varying nature of the stage-discharge relationship. Good performance can also be obtained from a simplified version of these models which approximates the fully nonlinear and time-varying models with a piecewise linear formulation.
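The piecewise linear formulation mentioned above can be sketched as a two-segment stage-discharge rating, continuous at the breakpoint; all coefficients below are hypothetical and chosen only to make the behaviour visible.

```python
def piecewise_rating(stage, breakpoint, b0, b1_low, b1_high):
    """Two-segment stage-discharge rating: discharge responds with
    slope b1_low below the breakpoint stage and slope b1_high above it,
    with the two segments joined continuously at the breakpoint."""
    if stage <= breakpoint:
        return b0 + b1_low * stage
    q_break = b0 + b1_low * breakpoint
    return q_break + b1_high * (stage - breakpoint)

# hypothetical rating: gentle in-channel response, steeper over-marsh response
q1 = piecewise_rating(0.5, breakpoint=1.0, b0=0.0, b1_low=2.0, b1_high=8.0)
q2 = piecewise_rating(1.5, breakpoint=1.0, b0=0.0, b1_low=2.0, b1_high=8.0)
```

The breakpoint plays the physical role of the bankfull or marsh-platform elevation: once the water level exceeds it, the effective flow area changes and the stage-discharge slope changes with it, which is the nonlinearity the fully nonlinear models in the study capture more flexibly.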
Risk assessment model for development of advanced age-related macular degeneration.
Klein, Michael L; Francis, Peter J; Ferris, Frederick L; Hamon, Sara C; Clemons, Traci E
2011-12-01
To design a risk assessment model for development of advanced age-related macular degeneration (AMD) incorporating phenotypic, demographic, environmental, and genetic risk factors. We evaluated longitudinal data from 2846 participants in the Age-Related Eye Disease Study. At baseline, these individuals had all levels of AMD, ranging from none to unilateral advanced AMD (neovascular or geographic atrophy). Follow-up averaged 9.3 years. We performed a Cox proportional hazards analysis with demographic, environmental, phenotypic, and genetic covariates and constructed a risk assessment model for development of advanced AMD. Performance of the model was evaluated using the C statistic and the Brier score and externally validated in participants in the Complications of Age-Related Macular Degeneration Prevention Trial. The final model included the following independent variables: age, smoking history, family history of AMD (first-degree member), phenotype based on a modified Age-Related Eye Disease Study simple scale score, and genetic variants CFH Y402H and ARMS2 A69S. The model did well on performance measures, with very good discrimination (C statistic = 0.872) and excellent calibration and overall performance (Brier score at 5 years = 0.08). Successful external validation was performed, and a risk assessment tool was designed for use with or without the genetic component. We constructed a risk assessment model for development of advanced AMD. The model performed well on measures of discrimination, calibration, and overall performance and was successfully externally validated. This risk assessment tool is available for online use.
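The two performance measures reported for the AMD risk model, the C statistic and the Brier score, are straightforward to compute for binary outcomes. The sketch below uses made-up risks; note that the survival-analysis versions used in the study additionally handle censoring and follow-up time, which this illustration omits.

```python
def c_statistic(risks, events):
    """Concordance (C statistic) for binary outcomes: the fraction of
    (event, non-event) pairs in which the event case was assigned the
    higher predicted risk, counting ties as one half."""
    conc = ties = n_pairs = 0
    for ri, ei in zip(risks, events):
        for rj, ej in zip(risks, events):
            if ei == 1 and ej == 0:
                n_pairs += 1
                if ri > rj:
                    conc += 1
                elif ri == rj:
                    ties += 1
    return (conc + 0.5 * ties) / n_pairs

def brier_score(risks, events):
    """Mean squared difference between predicted risk and outcome;
    lower is better, 0.25 is the score of a constant 0.5 prediction."""
    return sum((r - e) ** 2 for r, e in zip(risks, events)) / len(risks)

# hypothetical predicted 5-year risks and observed outcomes
risks = [0.9, 0.7, 0.6, 0.3, 0.2, 0.1]
events = [1, 1, 0, 1, 0, 0]
c = c_statistic(risks, events)
bs = brier_score(risks, events)
```

The C statistic measures discrimination only (ranking), while the Brier score also penalizes miscalibrated probabilities, which is why the study reports both: a model can rank patients well yet still assign systematically wrong absolute risks.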
Real-Time Simulation of the X-33 Aerospace Engine
NASA Technical Reports Server (NTRS)
Aguilar, Robert
1999-01-01
This paper discusses the development and performance of the X-33 Aerospike Engine Real-Time Model. This model was developed for the purposes of control law development, six degree-of-freedom trajectory analysis, vehicle system integration testing, and hardware-in-the-loop controller verification. The Real-Time Model uses a time-step marching solution of non-linear differential equations representing the physical processes involved in the operation of a liquid propellant rocket engine, albeit in a simplified form. These processes include heat transfer, fluid dynamics, combustion, and turbomachine performance. Two engine models are typically employed in order to accurately model maneuvering and the powerpack-out condition, where the power section of one engine is used to supply propellants to both engines if one engine malfunctions. The X-33 Real-Time Model has been compared to actual hot fire test data and found to be in good agreement.
A charge-based model of Junction Barrier Schottky rectifiers
NASA Astrophysics Data System (ADS)
Latorre-Rey, Alvaro D.; Mudholkar, Mihir; Quddus, Mohammed T.; Salih, Ali
2018-06-01
A new charge-based model of the electric field distribution for Junction Barrier Schottky (JBS) diodes is presented, based on the description of the charge-sharing effect between the vertical Schottky junction and the lateral pn-junctions that constitute the active cell of the device. In our model, the inherently 2-D problem is transformed into a simple but accurate 1-D problem which has a closed analytical solution that captures the reshaping and reduction of the electric field profile responsible for the improved electrical performance of these devices, while preserving physically meaningful expressions that depend on relevant device parameters. The model is validated by comparing calculated electric field profiles with drift-diffusion simulations of a JBS device, showing good agreement. Even though other fully 2-D models already available provide higher accuracy, they lack physical insight, making the proposed model a useful tool for device design.
Integrated care in the management of chronic diseases: an Italian perspective.
Stefani, Ilario; Scolari, Francesca; Croce, Davide; Mazzone, Antonino
2016-12-01
This letter provides a view on the issue of the organizational model of Primary Care Groups (PCGs), which represent a best practice in continuity and appropriateness of care for chronic patients. Our analysis aimed at estimating the impact of the introduction of PCGs in terms of efficiency and effectiveness. The results of our study showed a better performance of PCGs compared with the other General Practitioners of Local Health Authority Milano 1, supporting the conclusion that good care cannot be delivered without good organization of care. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
Deep generative learning for automated EHR diagnosis of traditional Chinese medicine.
Liang, Zhaohui; Liu, Jun; Ou, Aihua; Zhang, Honglai; Li, Ziping; Huang, Jimmy Xiangji
2018-05-04
Computer-aided medical decision-making (CAMDM) is the method to utilize massive EMR data as both empirical and evidence support for the decision procedure of healthcare activities. Well-developed information infrastructure, such as hospital information systems and disease surveillance systems, provides abundant data for CAMDM. However, the complexity of EMR data with abstract medical knowledge makes conventional models incompetent for the analysis. Thus a deep belief network (DBN)-based model is proposed to simulate the information analysis and decision-making procedure in medical practice. The purpose of this paper is to evaluate a deep learning architecture as an effective solution for CAMDM. A two-step model is applied in our study. In the first step, an optimized seven-layer DBN is applied as an unsupervised learning algorithm to perform model training and acquire feature representations. A support vector machine (SVM) model is then applied to the DBN features in the second, supervised learning step. There are two data sets used in the experiments. One is a plain text data set indexed by medical experts. The other is a structured dataset on primary hypertension. The data are randomly divided to generate the training set for the unsupervised learning and the testing set for the supervised learning. Model performance is evaluated by mean and variance statistics, and by average precision and coverage on the data sets. Two conventional shallow models (support vector machine / SVM and decision tree / DT) are applied as comparisons to show the superiority of our proposed approach. The deep learning (DBN + SVM) model outperforms simple SVM and DT on both data sets in terms of all the evaluation measures, which confirms our motivation: the deep model is good at capturing key features, with less dependence on manually built indexes.
Our study shows that the two-step deep learning model achieves high performance for medical information retrieval compared with conventional shallow models. It is able to capture the features of both plain text and the highly structured databases of EMR data. The performance of the deep model is superior to that of conventional shallow learning models such as SVM and DT, making it an appropriate knowledge-learning model for information retrieval in EMR systems. Therefore, deep learning provides a good solution to improve the performance of CAMDM systems. Copyright © 2018. Published by Elsevier B.V.
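The two-step architecture described above (unsupervised DBN feature learning, then a supervised SVM) can be sketched in miniature. The code below is not the paper's seven-layer model: it trains a single restricted Boltzmann machine layer with one-step contrastive divergence (CD-1) on made-up binary "symptom" vectors, and exposes its hidden activations as the features a downstream SVM would consume. The data, layer sizes, and learning rate are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """One restricted Boltzmann machine layer, trained with CD-1.
    A DBN stacks several such layers, each trained on the previous
    layer's hidden activations."""
    def __init__(self, n_vis, n_hid, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_vis, n_hid))
        self.b_vis = np.zeros(n_vis)
        self.b_hid = np.zeros(n_hid)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_hid)

    def cd1_step(self, v0):
        # Positive phase: hidden activations driven by the data
        h0 = self.hidden_probs(v0)
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        # Negative phase: one reconstruction step
        v1 = sigmoid(h_sample @ self.W.T + self.b_vis)
        h1 = self.hidden_probs(v1)
        # Contrastive-divergence parameter updates
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.b_vis += self.lr * (v0 - v1).mean(axis=0)
        self.b_hid += self.lr * (h0 - h1).mean(axis=0)

# Unsupervised step on toy EMR-style binary feature vectors
X = (rng.random((200, 12)) < 0.3).astype(float)
rbm = RBM(n_vis=12, n_hid=6)
for _ in range(50):
    rbm.cd1_step(X)
features = rbm.hidden_probs(X)  # step-2 input for an SVM classifier
```

In the supervised step, `features` (rather than the raw vectors) would be fed to an SVM together with the expert labels.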
PSO-Assisted Development of New Transferable Coarse-Grained Water Models.
Bejagam, Karteek K; Singh, Samrendra; An, Yaxin; Berry, Carter; Deshmukh, Sanket A
2018-02-15
We have employed a two-to-one mapping scheme to develop three coarse-grained (CG) water models, namely 1-, 2-, and 3-site CG models. Here, for the first time, particle swarm optimization (PSO) and gradient descent methods were coupled to optimize the force-field parameters of the CG models to reproduce the density, self-diffusion coefficient, and dielectric constant of real water at 300 K. The CG MD simulations of these new models conducted with various timesteps, for different system sizes, and at a range of different temperatures are able to predict the density, self-diffusion coefficient, dielectric constant, surface tension, heat of vaporization, hydration free energy, and isothermal compressibility of real water with excellent accuracy. The 1-site model is ∼3 and ∼4.5 times computationally more efficient than the 2- and 3-site models, respectively. To utilize the speed of the 1-site model and the electrostatic interactions offered by the 2- and 3-site models, CG MD simulations of a 1:1 combination of the 1- and 2-/3-site models were performed at 300 K. These mixture simulations could also predict the properties of real water with good accuracy. Two new CG models of benzene, consisting of beads with and without partial charges, were developed. All three water models showed good capacity to solvate these benzene models.
Risk prediction models of breast cancer: a systematic review of model performances.
Anothaisintawee, Thunyarat; Teerawattananon, Yot; Wiratkapun, Chollathip; Kasamesup, Vijj; Thakkinstian, Ammarin
2012-05-01
A growing number of risk prediction models have been developed for estimating breast cancer risk in individual women. However, the performance of these models is questionable. We therefore conducted a study to systematically review previous risk prediction models. The results of this review help to identify the most reliable model and indicate the strengths and weaknesses of each model, guiding future model development. We searched MEDLINE (PubMed) from 1949 and EMBASE (Ovid) from 1974 until October 2010. Observational studies which constructed models using regression methods were selected. Information about model development and performance was extracted. Twenty-five out of 453 studies were eligible. Of these, 18 developed prediction models and 7 validated existing prediction models. Up to 13 variables were included in the models, and sample sizes for each study ranged from 550 to 2,404,636. Internal validation was performed in four models, while five models had external validation. The Gail and the Rosner and Colditz models were the seminal models, subsequently modified by other scholars. Calibration performance of most models was fair to good (expected/observed ratio: 0.87-1.12), but discriminatory accuracy was poor to fair both in internal validation (concordance statistics: 0.53-0.66) and in external validation (concordance statistics: 0.56-0.63). Most models yielded relatively poor discrimination in both internal and external validation. This poor discriminatory accuracy of existing models might be due to a lack of knowledge about risk factors, heterogeneous subtypes of breast cancer, and different distributions of risk factors across populations. In addition, the concordance statistic itself is insensitive to improvements in discrimination. Therefore, newer methods such as the net reclassification index should be considered to evaluate the improvement in performance of newly developed models.
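The two performance measures quoted in this review have standard definitions that are easy to state in code: the concordance (C) statistic is the fraction of case/non-case pairs that the model ranks correctly, and the expected/observed ratio compares the predicted event count to the actual one. The sketch below uses invented risks and outcomes, not data from any of the reviewed models.

```python
def c_statistic(risks, outcomes):
    """Probability that a randomly chosen case gets a higher predicted
    risk than a randomly chosen non-case (ties score 0.5).
    0.5 = no discrimination, 1.0 = perfect."""
    cases = [r for r, y in zip(risks, outcomes) if y == 1]
    noncases = [r for r, y in zip(risks, outcomes) if y == 0]
    pairs = [(c > n) + 0.5 * (c == n) for c in cases for n in noncases]
    return sum(pairs) / len(pairs)

def expected_observed_ratio(risks, outcomes):
    """Calibration-in-the-large: predicted event count / observed count.
    1.0 is perfect; >1 over-predicts, <1 under-predicts."""
    return sum(risks) / sum(outcomes)
```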
Novel associative-memory-based self-learning neurocontrol model
NASA Astrophysics Data System (ADS)
Chen, Ke
1992-09-01
Intelligent control is an important field of AI application, closely related to machine learning; neurocontrol is a kind of intelligent control that governs the actions of a physical system or plant. The linear associative memory model is a good analytic tool for artificial neural networks. In this paper, we present a novel self-learning neurocontrol scheme based on the linear associative memory model to support intelligent control. In our model, the learning process is viewed as an extension of one of J. Piaget's developmental stages. After presenting a particular linear associative model developed by us, we give a brief introduction to J. Piaget's cognitive theory as the basis of our self-learning control. The neurocontrol model is then presented; it typically includes two learning stages, namely primary learning and high-level learning. As a demonstration of our neurocontrol model, an example in which a simulated `bird' catches a target is presented using simulation techniques. The tentative experimental results show that the learning and control performance of this approach is surprisingly good. In conclusion, future research directions are pointed out for improving our self-learning neurocontrol model and exploring other areas of application.
Bayesian Evaluation of Dynamical Soil Carbon Models Using Soil Carbon Flux Data
NASA Astrophysics Data System (ADS)
Xie, H. W.; Romero-Olivares, A.; Guindani, M.; Allison, S. D.
2017-12-01
2016 was Earth's hottest year in the modern temperature record and the third consecutive record-breaking year. As the planet continues to warm, temperature-induced changes in respiration rates of soil microbes could reduce the amount of carbon sequestered in the soil organic carbon (SOC) pool, one of the largest terrestrial stores of carbon. This would accelerate temperature increases. In order to predict the future size of the SOC pool, mathematical soil carbon models (SCMs) describing interactions between the biosphere and atmosphere are needed. SCMs must be validated before they can be chosen for predictive use. In this study, we check two SCMs, called CON and AWB, for consistency with observed data using Bayesian goodness-of-fit testing that can be used in the future to compare other models. We compare the fit of the models to longitudinal soil respiration data from a meta-analysis of soil heating experiments using a family of Bayesian goodness-of-fit metrics called information criteria (ICs), including the Widely Applicable Information Criterion (WAIC), the Leave-One-Out Information Criterion (LOOIC), and the Log Pseudo Marginal Likelihood (LPML). These ICs take the entire posterior distribution into account, rather than just a single fitted model output. A lower WAIC and LOOIC and a larger LPML indicate a better fit. We compare AWB and CON with fixed steady-state model pool sizes. At equivalent SOC, dissolved organic carbon, and microbial pool sizes, CON always outperforms AWB quantitatively by all three ICs used. AWB improves monotonically in fit as we reduce the SOC steady-state pool size while fixing all other pool sizes, and the same is almost true for CON. The AWB model with the lowest SOC is the best-performing AWB model, while the CON model with the second-lowest SOC is the best-performing model overall.
We observe that AWB displays more changes in slope sign and qualitatively more adaptive dynamics, which prevents AWB from being fully ruled out for predictive use, but based on the ICs, CON is clearly the superior model for fitting the data. Hence, we demonstrate that Bayesian goodness-of-fit testing with information criteria helps us rigorously determine the consistency of models with data. Models that demonstrate their consistency with multiple data sets under our approach can then be selected for further refinement.
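Of the information criteria named in this abstract, WAIC has a compact pointwise formula: the log pointwise predictive density (lppd) minus an effective-parameter penalty, scaled by -2 so that lower is better. The sketch below follows the standard Gelman-style definition and runs on an invented log-likelihood matrix; it is a generic illustration, not the study's actual computation.

```python
import math
from statistics import variance

def waic(loglik):
    """WAIC on the deviance scale (lower = better fit).

    loglik[s][i] = log p(y_i | theta_s) for posterior draw s and
    data point i. lppd sums, over points, the log of the posterior-mean
    likelihood; p_waic sums the per-point variance of the log-likelihood
    across draws (the effective number of parameters)."""
    S = len(loglik)
    n = len(loglik[0])
    lppd = 0.0
    p_waic = 0.0
    for i in range(n):
        draws = [loglik[s][i] for s in range(S)]
        m = max(draws)  # log-sum-exp trick for numerical stability
        lppd += m + math.log(sum(math.exp(d - m) for d in draws) / S)
        p_waic += variance(draws)
    return -2.0 * (lppd - p_waic)
```

With no variation across posterior draws the penalty is zero and WAIC reduces to -2 times the total log-likelihood.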
Modelling of TES X-ray Microcalorimeters with a Novel Absorber Design
NASA Technical Reports Server (NTRS)
Iyomoto, Naoko; Bandler, Simon; Brefosky, Regis; Brown, Ari; Chervenak, James; Figueroa-Feliciano, Enectali; Finkbeiner, Frederick; Kelley, Richard; Kilbourne, Caroline; Lindeman, Mark;
2007-01-01
Our development of a novel x-ray absorber design that has enabled the incorporation of high-conductivity electroplated gold into our absorbers has yielded devices that not only have achieved breakthrough performance at 6 keV, but also are extraordinarily well modelled. We have determined device parameters that reproduce complex impedance curves and noise spectra throughout the transition. Observed pulse heights, decay times, and baseline energy resolution were in good agreement with simulated results using the same parameters. In the presentation, we will show these results in detail, and we will also show highlights of the characterization of our gold/bismuth-absorber devices. We will discuss possible improvements of our current devices and the expected performance of future devices using the modelling results.
Thrust performance of a variable-geometry, divergent exhaust nozzle on a turbojet engine at altitude
NASA Technical Reports Server (NTRS)
Straight, D. M.; Collom, R. R.
1983-01-01
A variable geometry, low aspect ratio, nonaxisymmetric, two dimensional, convergent-divergent exhaust nozzle was tested at simulated altitude on a turbojet engine to obtain baseline axial, dry thrust performance over wide ranges of operating nozzle pressure ratios, throat areas, and internal expansion area ratios. The thrust data showed good agreement with theory and scale model test results after the data were corrected for seal leakage and coolant losses. Wall static pressure profile data were also obtained and compared with one dimensional theory and scale model data. The pressure data indicate greater three dimensional flow effects in the full scale tests than with models. The leakage and coolant penalties were substantial, and the method to determine them is included.
Aerial robot intelligent control method based on back-stepping
NASA Astrophysics Data System (ADS)
Zhou, Jian; Xue, Qian
2018-05-01
Aerial robots are characterized by strong nonlinearity, high coupling, and parameter uncertainty; accordingly, a self-adaptive back-stepping control method based on a neural network is proposed in this paper. The uncertain part of the aerial robot model is compensated online by a Cerebellar Model Articulation Controller neural network, and robust control terms are designed to overcome the uncertainty error of the system during online learning. At the same time, a particle swarm algorithm is used to optimize and tune parameters so as to improve the dynamic performance, and the control law is obtained by back-stepping recursion. Simulation results show that the designed control law achieves the desired attitude tracking performance and good robustness in the presence of uncertainties and large errors in the model parameters.
Lai, Lei-Jie; Gu, Guo-Ying; Zhu, Li-Min
2012-04-01
This paper presents a novel decoupled two degrees of freedom (2-DOF) translational parallel micro-positioning stage. The stage consists of a monolithic compliant mechanism driven by two piezoelectric actuators. The end-effector of the stage is connected to the base by four independent kinematic limbs. Two types of compound flexure module are serially connected to provide 2-DOF for each limb. The compound flexure modules and the mirror-symmetric distribution of the four limbs significantly reduce the input and output cross couplings and the parasitic motions. Based on the stiffness matrix method, static and dynamic models are constructed and optimal design is performed under certain constraints. The finite element analysis results are then given to validate the design model, and a prototype of the XY stage is fabricated for performance tests. Open-loop tests show that maximum static and dynamic cross couplings between the two linear motions are below 0.5% and -45 dB, which are low enough to utilize single-input-single-output control strategies. Finally, according to the identified dynamic model, an inversion-based feedforward controller in conjunction with a proportional-integral-derivative controller is applied to compensate for the nonlinearities and uncertainties. The experimental results show that good positioning and tracking performances are achieved, which verifies the effectiveness of the proposed mechanism and controller design. The resonant frequencies of the loaded stage at 2 kg and 5 kg are 105 Hz and 68 Hz, respectively. Therefore, the performance of the stage is reasonably good given its 200 N load capacity. © 2012 American Institute of Physics
Schetelig, J; de Wreede, L C; van Gelder, M; Andersen, N S; Moreno, C; Vitek, A; Karas, M; Michallet, M; Machaczka, M; Gramatzki, M; Beelen, D; Finke, J; Delgado, J; Volin, L; Passweg, J; Dreger, P; Henseler, A; van Biezen, A; Bornhäuser, M; Schönland, S O; Kröger, N
2017-04-01
For young patients with high-risk CLL, BTK-/PI3K-inhibitors or allogeneic stem cell transplantation (alloHCT) are considered. Patients with a low risk of non-relapse mortality (NRM) but a high risk of failure of targeted therapy may benefit most from alloHCT. We performed Cox regression analyses to identify risk factors for 2-year NRM and 5-year event-free survival (using EFS as a surrogate for long-term disease control) in a large, updated EBMT registry cohort (n = 694). For the whole cohort, 2-year NRM was 28% and 5-year EFS 37%. Higher age, lower performance status, unrelated donor type and unfavorable sex-mismatch had a significant adverse impact on 2-year NRM. Two-year NRM was calculated for good- and poor-risk reference patients. Predicted 2-year NRM was 11 and 12% for male and female good-risk patients compared with 42 and 33% for male and female poor-risk patients. For 5-year EFS, age, performance status, prior autologous HCT, remission status and sex-mismatch had a significant impact, whereas del(17p) did not. The model-based prediction of 5-year EFS was 55% and 64%, respectively, for male and female good-risk patients. Good-risk transplant candidates with high-risk CLL and limited prognosis either on or after failure of targeted therapy should still be considered for alloHCT.
Neng, N R; Mestre, A S; Carvalho, A P; Nogueira, J M F
2011-09-16
In this contribution, powdered activated carbons (ACs) from cork waste were employed as novel adsorbent phases for bar adsorptive micro-extraction (BAμE) in the analysis of polar compounds. By combining this approach with liquid desorption followed by high performance liquid chromatography with diode array detection (BAμE(AC)-LD/HPLC-DAD), good analytical performance was achieved using clofibric acid (CLOF) and ibuprofen (IBU) as model compounds in environmental and biological matrices. Assays performed on 30 mL water samples spiked at the 25.0 μg L(-1) level yielded recoveries around 80% for CLOF and 95% for IBU under optimized experimental conditions. The ACs' textural and surface chemistry properties were correlated with the results obtained. The analytical performance showed good precision (<15%), suitable detection limits (0.24 and 0.78 μg L(-1) for CLOF and IBU, respectively) and good linear dynamic ranges (r(2)>0.9922) from 1.0 to 600.0 μg L(-1). By using the standard addition methodology, the application of the present approach to environmental water and urine matrices allowed remarkable performance at the trace level. The proposed methodology proved to be a viable alternative for the analysis of acidic pharmaceuticals, showing itself to be easy to implement, reliable, sensitive, and requiring low sample volume to monitor these priority compounds in environmental and biological matrices. Copyright © 2011 Elsevier B.V. All rights reserved.
Roux, Paul; Passerieux, Christine; Fleury, Marie-Josée
2016-12-01
Needs and service performance assessment are key components in improving recovery among individuals with mental disorders. To test the role of service performance as a mediating factor between severity of patients' needs and outcomes. A total of 339 adults with mental disorders were interviewed. A mediation analysis between severity of needs, service performance (adequacy of help, continuity of care and recovery orientation of services) and outcomes (personal recovery and quality of life) was carried out using structural equation modelling. The structural equation model provided a good fit with the data. An increase in needs was associated with lower service performance and worse outcomes, whereas higher service performance was associated with better outcomes. Service performance partially mediated the effect of patient needs on outcomes. Poorer service performance has a negative impact on outcomes for patients with the highest needs. Ensuring more efficient services for patients with high needs may help improve their recovery and quality of life. © The Royal College of Psychiatrists 2016.
Application of remote sensing in estimating evapotranspiration in the Platte river basin
NASA Technical Reports Server (NTRS)
Blad, B. L.; Rosenberg, N. J.
1976-01-01
A 'resistance model' and a mass transport model for estimating evapotranspiration (ET) were tested on large fields of naturally subirrigated alfalfa. Both models make use of crop canopy temperature data. Temperature data were obtained with an IR thermometer and with leaf thermocouples. A Bowen ratio-energy balance (BREB) model, adjusted to account for underestimation of ET during periods of strong sensible heat advection, was used as the standard against which the resistance and mass transport models were compared. Daily estimates by the resistance model were within 10% of estimates made by the BREB model. Daily estimates by the mass transport model did not agree quite as well. Performance was good on clear and cloudy days and also during periods of non-advection and strong advection of sensible heat. The performance of the mass transport and resistance models was less satisfactory for estimation of fluxes of latent heat for short term periods. Both models tended to overestimate at low LE fluxes.
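The Bowen ratio-energy balance (BREB) reference used above partitions available energy between latent and sensible heat. The sketch below shows the textbook form of that partitioning, LE = (Rn - G) / (1 + β), not the paper's advection-adjusted version; the flux values are invented for illustration.

```python
def latent_heat_flux(rn, g, bowen_ratio):
    """Textbook BREB latent heat flux (W/m^2).

    rn          : net radiation (W/m^2)
    g           : soil heat flux (W/m^2)
    bowen_ratio : beta = sensible / latent heat flux (dimensionless)
    Available energy (Rn - G) is split so that LE gets the
    fraction 1 / (1 + beta)."""
    return (rn - g) / (1.0 + bowen_ratio)

# Hypothetical midday values over well-watered alfalfa
le = latent_heat_flux(rn=500.0, g=50.0, bowen_ratio=0.2)
```

Under strong sensible heat advection β becomes negative, which is why the study applied a correction before using BREB as the standard.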
NASA Astrophysics Data System (ADS)
Dang, Jie; Chen, Hao
2016-12-01
The methodology and procedures are discussed on designing merchant ships to achieve fully-integrated and optimized hull-propulsion systems by using asymmetric aftbodies. Computational fluid dynamics (CFD) has been used to evaluate the powering performance through massive calculations with automatic deformation algorithms for the hull forms and the propeller blades. Comparative model tests of the designs against the optimized symmetric hull forms have been carried out to verify the efficiency gain. More than 6% improvement in the propulsive efficiency of an oil tanker has been measured during the model tests. Dedicated sea trials show good agreement with the performance predicted from the test results.
Experimental evaluation of joint designs for a space-shuttle orbiter ablative leading edge
NASA Technical Reports Server (NTRS)
Tompkins, S. S.; Kabana, W. P.
1975-01-01
The thermal performance of two types of ablative leading-edge joints for a space-shuttle orbiter was tested and evaluated. Chordwise joints between ablative leading-edge segments, and spanwise joints between ablative leading-edge segments and reusable surface insulation tiles, were exposed to simulated shuttle heating environments. The data show the thermal performance of models with chordwise joints to be as good as that of jointless models in simulated ascent-heating and orbital cold-soak environments. Additional work is suggested on the joint seals, and, in particular, on the effects of heat-induced seal-material surface irregularities on the local flow.
Constraints on Smoke Injection Height, Source Strength, and Transports from MISR and MODIS
NASA Technical Reports Server (NTRS)
Kahn, Ralph A.; Petrenko, Mariya; Val Martin, Maria; Chin, Mian
2014-01-01
The AeroCom BB (Biomass Burning) Experiment AOD (Aerosol Optical Depth) motivation: We have a substantial set of satellite wildfire plume AOD snapshots and injection heights to help calibrate model/inventory performance. We are (1) adding more fire source-strength cases, (2) using MISR to improve the AOD constraints, and (3) adding 2008 global injection heights. We selected GFED3-daily due to good overall source-strength performance, but any inventory can be tested. This is a joint effort to test multiple global models, to draw robust BB injection height and emission strength conclusions. We provide satellite-based injection height and smoke plume AOD climatologies.
Regime-based evaluation of cloudiness in CMIP5 models
NASA Astrophysics Data System (ADS)
Jin, Daeho; Oreopoulos, Lazaros; Lee, Dongmin
2017-01-01
The concept of cloud regimes (CRs) is used to develop a framework for evaluating the cloudiness of 12 models from phase 5 of the Coupled Model Intercomparison Project (CMIP5). Reference CRs come from existing global International Satellite Cloud Climatology Project (ISCCP) weather states. The evaluation is made possible by the implementation in several CMIP5 models of the ISCCP simulator, which generates in each grid cell daily joint histograms of cloud optical thickness and cloud top pressure. Model performance is assessed with several metrics, such as CR global cloud fraction (CF), CR relative frequency of occurrence (RFO), their product [long-term average total cloud amount (TCA)], cross-correlations of CR RFO maps, and a metric of resemblance between model and ISCCP CRs. In terms of CR global RFO, arguably the most fundamental metric, the models perform unsatisfactorily overall, except for CRs representing thick storm clouds. Because model CR CF is internally constrained by our method, RFO discrepancies also yield substantial TCA errors. Our results support previous findings that CMIP5 models underestimate cloudiness. The multi-model mean performs well in matching observed RFO maps for many CRs, but is still not the best for this or other metrics. When overall performance across all CRs is assessed, some models, despite shortcomings, apparently outperform Moderate Resolution Imaging Spectroradiometer cloud observations evaluated against ISCCP as if they were another model's output. Lastly, contrasting cloud simulation performance against each model's equilibrium climate sensitivity, in order to gain insight into whether good cloud simulation pairs with particular values of this parameter, yields no clear conclusions.
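The TCA metric mentioned above is, per regime, the product of cloud fraction and relative frequency of occurrence, accumulated over all regimes. A minimal sketch, with invented regime values:

```python
def total_cloud_amount(cf, rfo):
    """Long-term average total cloud amount:
    TCA = sum over cloud regimes k of CF_k * RFO_k.
    RFOs must partition occurrence, i.e. sum to 1."""
    assert abs(sum(rfo) - 1.0) < 1e-9, "RFOs should sum to 1"
    return sum(c * r for c, r in zip(cf, rfo))

# Hypothetical three-regime split: thick storm clouds,
# shallow cumulus, and near-clear conditions
tca = total_cloud_amount(cf=[0.95, 0.40, 0.10], rfo=[0.2, 0.5, 0.3])
```

This decomposition is what lets RFO errors propagate into TCA errors even when each regime's CF is constrained.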
NASA Astrophysics Data System (ADS)
Vollant, A.; Balarac, G.; Corre, C.
2017-09-01
New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model displaying good structural performance, which yields LES results very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, where the model's functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation designed to control both structural and functional performance. The model derived from this second procedure proves to be more robust. It also provides stable LES for a turbulent plane jet flow configuration very far from the training database, but over-estimates the mixing process in that case.
Pohjola, Mikko V; Pohjola, Pasi; Tainio, Marko; Tuomisto, Jouni T
2013-06-26
The calls for knowledge-based policy and policy-relevant research invoke a need to evaluate and manage environment and health assessments and models according to their societal outcomes. This review explores how well the existing approaches to assessment and model performance serve this need. The perspectives on assessment and model performance in the scientific literature can be called: (1) quality assurance/control, (2) uncertainty analysis, (3) technical assessment of models, (4) effectiveness and (5) other perspectives, according to what is primarily seen to constitute the goodness of assessments and models. The categorization is not strict, and methods, tools and frameworks in different perspectives may overlap. However, altogether it seems that most approaches to assessment and model performance are relatively narrow in scope. The focus in most approaches is on the outputs and making of assessments and models. Practical application of the outputs and the consequential outcomes are often left unaddressed. It appears that more comprehensive approaches that combine the essential characteristics of different perspectives are needed. This necessitates a better account of the mechanisms of collective knowledge creation and the relations between knowledge and practical action. Some new approaches to assessment, modeling and their evaluation and management span the chain from knowledge creation to societal outcomes, but the complexity of evaluating societal outcomes remains a challenge.
NASA Astrophysics Data System (ADS)
Jean, Ming-Der; Lei, Peng-Da; Kong, Ling-Hua; Liu, Cheng-Wu
2018-05-01
This study optimizes the thermal dissipation ability of aluminum nitride (AlN) ceramics to increase the thermal performance of light-emitting diode (LED) modules. AlN powders are deposited on a heat sink as a thermal interface material using an electrostatic spraying process. The junction temperature of the heat sink is modeled by response surface methodology (RSM) based on Taguchi methods. In addition, the structure and properties of the AlN coating are examined using X-ray photoelectron spectroscopy (XPS). In the XPS analysis, the AlN sub-peaks are observed at 72.79 eV for Al2p and 398.88 eV for N1s, and N1s sub-peaks are assigned to N-O bonding at 398.60 eV and Al-N bonding at 395.95 eV, consistent with good thermal properties. The results show that the use of AlN ceramic material on a heat sink can enhance the thermal performance of LED modules. In addition, the percentage error between the predicted and experimental results for the quadratic model, compared with the linear and interaction models, was found to be within 7.89%, indicating that it is a good predictor. Accordingly, RSM can effectively enhance the thermal performance of an LED, and the beneficial heat dissipation effects of AlN are improved by electrostatic spraying.
Performance of DIMTEST-and NOHARM-Based Statistics for Testing Unidimensionality
ERIC Educational Resources Information Center
Finch, Holmes; Habing, Brian
2007-01-01
This Monte Carlo study compares the ability of the parametric bootstrap version of DIMTEST with three goodness-of-fit tests calculated from a fitted NOHARM model to detect violations of the assumption of unidimensionality in testing data. The effectiveness of the procedures was evaluated for different numbers of items, numbers of examinees,…
Comparing Three Models of Achievement Goals: Goal Orientations, Goal Standards, and Goal Complexes
ERIC Educational Resources Information Center
Senko, Corwin; Tropiano, Katie L.
2016-01-01
Achievement goal theory (Dweck, 1986) initially characterized mastery goals and performance goals as opposites in a good-bad dualism of student motivation. A later revision (Harackiewicz, Barron, & Elliot, 1998) contended that both goals can provide benefits and be pursued together. Perhaps both frameworks are correct: Their contrasting views…
Neoliberal Competition in Higher Education Today: Research, Accountability and Impact
ERIC Educational Resources Information Center
Olssen, Mark
2016-01-01
Drawing on Foucault's elaboration of neoliberalism as a positive form of state power, the ascendancy of neoliberalism in higher education in Britain is examined in terms of the displacement of public good models of governance, and their replacement with individualised incentives and performance targets, heralding new and more stringent conceptions…
Essays on Participative Web and Social Media for Information Goods
ERIC Educational Resources Information Center
Lee, Young Jin
2010-01-01
Tremendous growth in online consumer participation has facilitated new business models by firms trying to leverage User-Generated Content (UGC). As a type of the outcomes of UGC, social media is one of the fastest-growing media forms and may significantly affect firm's economic actions or performances. My dissertation investigates several…
Route towards cylindrical cloaking at visible frequencies using an optimization algorithm
NASA Astrophysics Data System (ADS)
Rottler, Andreas; Krüger, Benjamin; Heitmann, Detlef; Pfannkuche, Daniela; Mendach, Stefan
2012-12-01
We derive a model based on the Maxwell-Garnett effective-medium theory that describes a cylindrical cloaking shell composed of metal rods which are radially aligned in a dielectric host medium. We propose and demonstrate a minimization algorithm that calculates, for given material parameters, the optimal geometrical parameters of the cloaking shell such that its effective optical parameters best fit the required permittivity distribution for cylindrical cloaking. By means of sophisticated full-wave simulations we find that a cylindrical cloak with good performance using silver as the metal can be designed with our algorithm for wavelengths in the red part of the visible spectrum (623 nm < λ < 773 nm). We also present a full-wave simulation of such a cloak at an exemplary wavelength of λ = 729 nm (ℏω = 1.7 eV) which indicates that our model is useful for finding design rules for cloaks with good cloaking performance. Our calculations investigate a structure that is easy to fabricate using standard preparation techniques and therefore pave the way to a realization of guiding light around an object at visible frequencies, thus rendering it invisible.
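The Maxwell-Garnett mixing rule underlying such effective-medium models can be sketched numerically. A minimal illustration using the classic isotropic rule for inclusions in a host; note the paper's model for radially aligned rods is anisotropic and more involved, and the permittivity values and fill fraction below are hypothetical:

```python
# Classic Maxwell-Garnett effective permittivity for inclusions in a host medium.
# Illustrative sketch only; all parameter values below are made up.

def maxwell_garnett(eps_inclusion, eps_host, fill_fraction):
    """eps_eff = eps_host * (eps_i + 2*eps_h + 2f(eps_i - eps_h)) /
                            (eps_i + 2*eps_h -  f(eps_i - eps_h))"""
    diff = eps_inclusion - eps_host
    num = eps_inclusion + 2 * eps_host + 2 * fill_fraction * diff
    den = eps_inclusion + 2 * eps_host - fill_fraction * diff
    return eps_host * num / den

# Silver-like inclusion (negative real permittivity in the red part of the
# visible spectrum) at 10% fill fraction in a glass-like host:
eps_eff = maxwell_garnett(complex(-20.0, 0.4), 2.25, 0.1)
print(eps_eff)
```

Sweeping the fill fraction in such a rule is what lets an optimizer tune the shell's effective permittivity toward the profile required for cloaking.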
Kim, Hwi Young; Lee, Dong Hyeon; Lee, Jeong-Hoon; Cho, Young Youn; Cho, Eun Ju; Yu, Su Jong; Kim, Yoon Jun; Yoon, Jung-Hwan
2018-03-20
Prediction of the outcome of sorafenib therapy using biomarkers is an unmet clinical need in patients with advanced hepatocellular carcinoma (HCC). The aim was to develop and validate a biomarker-based model for predicting sorafenib response and overall survival (OS). This prospective cohort study included 124 consecutive HCC patients (44 with disease control, 80 with progression) with Child-Pugh class A liver function, who received sorafenib. Potential serum biomarkers (namely, hepatocyte growth factor [HGF], fibroblast growth factor [FGF], vascular endothelial growth factor receptor-1, CD117, and angiopoietin-2) were tested. After identifying independent predictors of tumor response, a risk scoring system for predicting OS was developed and 3-fold internal validation was conducted. A risk scoring system was developed with six covariates: etiology, platelet count, Barcelona Clinic Liver Cancer stage, protein induced by vitamin K absence-II, HGF, and FGF. When patients were stratified into low-risk (score ≤ 5), intermediate-risk (score 6), and high-risk (score ≥ 7) groups, the model provided good discriminant functions on tumor response (concordance [c]-index, 0.884) and 12-month survival (area under the curve [AUC], 0.825). The median OS was 19.0, 11.2, and 6.1 months in the low-, intermediate-, and high-risk group, respectively (P < 0.001). In internal validation, the model maintained good discriminant functions on tumor response (c-index, 0.825) and 12-month survival (AUC, 0.803), and good calibration functions (all P > 0.05 between expected and observed values). This new model including serum FGF and HGF showed good performance in predicting the response to sorafenib and survival in patients with advanced HCC.
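The stratification reported above follows mechanically once a patient's total score is known. A minimal sketch reproducing only the published cut-offs (low ≤ 5, intermediate = 6, high ≥ 7) and the reported median survival per stratum; the per-covariate point assignments are not given in the abstract, so the total score is taken as an input:

```python
# Map a total risk score to the strata reported in the abstract.
# The six covariates' point weights are not stated there, so this sketch
# starts from a pre-computed total score.

def risk_group(total_score: int) -> str:
    """Reported cut-offs: low (score <= 5), intermediate (== 6), high (>= 7)."""
    if total_score <= 5:
        return "low"
    if total_score == 6:
        return "intermediate"
    return "high"

# Median overall survival (months) per stratum, as reported:
MEDIAN_OS_MONTHS = {"low": 19.0, "intermediate": 11.2, "high": 6.1}

group = risk_group(4)
print(group, MEDIAN_OS_MONTHS[group])  # prints: low 19.0
```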
Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli
2014-01-01
The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way so as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike's Information Criterion (AIC) and Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which not necessarily need be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we also should not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size.
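The kind of penalized model selection the paper evaluates can be illustrated with a toy example: between a mean-only model and a linear model fit to noisy linear data, BIC (like AIC and crude MDL, a trade-off between goodness of fit and complexity) should favor the linear model. This is a generic sketch, not the paper's Bayesian-network experiment:

```python
# Toy penalized model selection: mean-only vs. linear fit, scored by BIC.
import math
import random

random.seed(42)
n = 50
xs = [i / 10 for i in range(n)]
ys = [2.0 * x - 1.0 + random.gauss(0.0, 0.5) for x in xs]  # truth is linear

def rss_mean(ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

def rss_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx            # ordinary least-squares slope
    a = my - b * mx          # intercept
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

def bic(rss, n, k):
    # Gaussian-likelihood BIC up to an additive constant:
    # n*log(rss/n) + k*log(n), where k counts the mean/slope parameters.
    return n * math.log(rss / n) + k * math.log(n)

bic_mean = bic(rss_mean(ys), n, k=1)
bic_lin = bic(rss_linear(xs, ys), n, k=2)
print(bic_mean, bic_lin)  # the linear model wins (lower BIC)
```

The penalty term k*log(n) is what keeps the criterion from always preferring the more complex model; the paper's point is that such criteria land on balanced, not necessarily gold-standard, models.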
SWAT use of gridded observations for simulating runoff - a Vietnam river basin study
NASA Astrophysics Data System (ADS)
Vu, M. T.; Raghavan, S. V.; Liong, S. Y.
2012-08-01
Many research studies that focus on basin hydrology have applied the SWAT model using station data to simulate runoff. Over regions lacking robust station data, however, applying the model to study hydrological responses is problematic. In some countries and remote areas, rainfall data availability may be constrained for many reasons, such as lack of technology, wartime conditions and financial limitations, which make it difficult to construct runoff records. To overcome this limitation, this study uses some of the available globally gridded high-resolution precipitation datasets to simulate runoff. Five popular gridded observation precipitation datasets: (1) Asian Precipitation Highly Resolved Observational Data Integration Towards the Evaluation of Water Resources (APHRODITE), (2) Tropical Rainfall Measuring Mission (TRMM), (3) Precipitation Estimation from Remote Sensing Information using Artificial Neural Network (PERSIANN), (4) Global Precipitation Climatology Project (GPCP), (5) a modified version of the Global Historical Climatology Network (GHCN2), and one reanalysis dataset, National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR), are used to simulate runoff over the Dak Bla river (a small tributary of the Mekong River) in Vietnam. Wherever possible, available station data are also used for comparison. Bilinear interpolation of these gridded datasets is used to input the precipitation data at the grid points closest to the station locations. Sensitivity analysis and auto-calibration are performed for the SWAT model. The Nash-Sutcliffe Efficiency (NSE) and coefficient of determination (R2) indices are used to benchmark model performance. Results indicate that the APHRODITE dataset performed very well in daily-scale discharge simulation, with an NSE of 0.54 and R2 of 0.55, compared to 0.68 and 0.71 for the simulation using station data.
The GPCP proved to be the next best dataset applied to the runoff modelling, with NSE and R2 of 0.46 and 0.51, respectively. The PERSIANN and TRMM rainfall-driven runoff did not show good agreement with the station data, as both the NSE and R2 indices showed a low value of 0.3. GHCN2 and NCEP also did not show good correlations. The varied results obtained with these datasets indicate that although the gauge-based and satellite-gauge merged products use some ground truth data, the different interpolation techniques and merging algorithms can also be a source of uncertainty. This calls for a good understanding of the response of the hydrological model to different datasets and a quantification of the uncertainties in these datasets. Such a methodology is also useful for rainfall-runoff modelling and reservoir/river management at both rural and urban scales.
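The two benchmarking indices used above are straightforward to compute. A minimal sketch with made-up observed and simulated discharge series (the actual study compares daily series over the Dak Bla basin):

```python
# Nash-Sutcliffe Efficiency (NSE) and coefficient of determination (R2)
# for a toy observed/simulated discharge pair. Values are illustrative.

def nse(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    mo = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mo) ** 2 for o in obs)
    return 1.0 - num / den

def r_squared(obs, sim):
    """Squared Pearson correlation between the two series."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    vo = sum((o - mo) ** 2 for o in obs)
    vs = sum((s - ms) ** 2 for s in sim)
    return cov * cov / (vo * vs)

obs = [12.0, 18.0, 30.0, 55.0, 41.0, 25.0, 16.0]  # hypothetical discharge (m3/s)
sim = [14.0, 17.0, 27.0, 49.0, 45.0, 22.0, 15.0]
print(nse(obs, sim), r_squared(obs, sim))
```

NSE penalizes bias as well as scatter (a perfect simulation scores 1, and a simulation no better than the observed mean scores 0), whereas R2 only measures linear association, which is why the two indices are reported together.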
A speech processing study using an acoustic model of a multiple-channel cochlear implant
NASA Astrophysics Data System (ADS)
Xu, Ying
1998-10-01
A cochlear implant is an electronic device designed to provide sound information for adults and children who have bilateral profound hearing loss. The task of representing speech signals as electrical stimuli is central to the design and performance of cochlear implants. Studies have shown that current speech-processing strategies provide significant benefits to cochlear implant users. However, the evaluation and development of speech-processing strategies have been complicated by hardware limitations and large variability in user performance. To alleviate these problems, an acoustic model of a cochlear implant with the SPEAK strategy is implemented in this study, in which a set of acoustic stimuli whose psychophysical characteristics are as close as possible to those produced by a cochlear implant is presented to normal-hearing subjects. To test the effectiveness and feasibility of this acoustic model, a psychophysical experiment was conducted to match the performance of a normal-hearing listener using model-processed signals to that of a cochlear implant user. Good agreement was found between an implanted patient and an age-matched normal-hearing subject in a dynamic signal discrimination experiment, indicating that this acoustic model is a reasonably good approximation of a cochlear implant with the SPEAK strategy. The acoustic model was then used to examine the potential of the SPEAK strategy in terms of its temporal and frequency encoding of speech. It was hypothesized that better temporal and frequency encoding of speech can be accomplished by higher stimulation rates and a larger number of activated channels. Vowel and consonant recognition tests were conducted on normal-hearing subjects using speech tokens processed by the acoustic model, with different combinations of stimulation rate and number of activated channels.
The results showed that vowel recognition was best at 600 pps and 8 activated channels, but further increases in stimulation rate and channel numbers were not beneficial. Manipulations of stimulation rate and number of activated channels did not appreciably affect consonant recognition. These results suggest that overall speech performance may improve by appropriately increasing stimulation rate and number of activated channels. Future revision of this acoustic model is necessary to provide more accurate amplitude representation of speech.
NASA Astrophysics Data System (ADS)
Nie, Shida; Zhuang, Ye; Wang, Yong; Guo, Konghui
2018-01-01
The performance of a velocity- and displacement-dependent damper (VDD), inspired by semi-active control, is analyzed. The main differences among passive, displacement-dependent and semi-active dampers are compared in terms of their damping properties. The valve assemblies of the VDD are modelled to gain insight into its working principle. The mechanical structure, composed of four valve assemblies, enables the VDD to approach the performance of semi-active control dampers. The valve structure parameters are determined by the suggested two-step process. A hydraulic model of the damper is built in AMESim. The simulated F-V curves, similar to those of a semi-active control damper, demonstrate that the VDD can achieve similar performance. The performance of a quarter-vehicle model employing the VDD is analyzed and compared with a semi-active suspension. Simulation results show that the VDD performs as well as a semi-active control damper. In addition, no add-on hardware or energy consumption is needed for the VDD to achieve this performance.
McAllister, Katherine S L; Ludman, Peter F; Hulme, William; de Belder, Mark A; Stables, Rodney; Chowdhary, Saqib; Mamas, Mamas A; Sperrin, Matthew; Buchan, Iain E
2016-05-01
The current risk model for percutaneous coronary intervention (PCI) in the UK is based on outcomes of patients treated in a different era of interventional cardiology. This study aimed to create a new model, based on a contemporary cohort of PCI-treated patients, which would: predict 30-day mortality; provide good discrimination; and be well calibrated across a broad risk spectrum. The model was derived from a training dataset of 336,433 PCI cases carried out between 2007 and 2011 in England and Wales, with 30-day mortality provided by record linkage. Candidate variables were selected on the basis of clinical consensus and data quality. Procedures in 2012 were used to perform temporal validation of the model. The strongest predictors of 30-day mortality were: cardiogenic shock; dialysis; and the indication for PCI and the degree of urgency with which it was performed. The model had an area under the receiver operator characteristic curve of 0.85 on the training data and 0.86 on validation. Calibration plots indicated a good model fit on development which was maintained on validation. We have created a contemporary model for PCI that encompasses a range of clinical risk, from stable elective PCI to emergency primary PCI and cardiogenic shock. The model is easy to apply and based on data reported in national registries. It has a high degree of discrimination and is well calibrated across the risk spectrum. The examination of key outcomes in PCI audit can be improved with this risk-adjusted model. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
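Discrimination here is summarized by the area under the ROC curve, which for a binary outcome equals the probability that a randomly chosen event receives a higher predicted risk than a randomly chosen non-event. A minimal sketch of that rank-based computation, with made-up predicted risks rather than data from the study:

```python
# Concordance (c) statistic / ROC AUC via pairwise comparison.
# Risks and outcomes below are hypothetical.

def c_statistic(risks, outcomes):
    """Fraction of event/non-event pairs where the event has the higher risk
    (ties count half)."""
    events = [r for r, y in zip(risks, outcomes) if y == 1]
    nonevents = [r for r, y in zip(risks, outcomes) if y == 0]
    concordant = ties = 0
    for e in events:
        for ne in nonevents:
            if e > ne:
                concordant += 1
            elif e == ne:
                ties += 1
    return (concordant + 0.5 * ties) / (len(events) * len(nonevents))

risks = [0.02, 0.05, 0.10, 0.20, 0.40, 0.70]
outcomes = [0, 0, 0, 1, 0, 1]
print(c_statistic(risks, outcomes))  # prints 0.875
```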
Benkert, Pascal; Schwede, Torsten; Tosatto, Silvio Ce
2009-05-20
The selection of the most accurate protein model from a set of alternatives is a crucial step in protein structure prediction, both in template-based and ab initio approaches. Scoring functions have been developed which can either return a quality estimate for a single model or derive a score from the information contained in the ensemble of models for a given sequence. Local structural features occurring more frequently in the ensemble have a greater probability of being correct. Within the context of the CASP experiment, these so-called consensus methods have been shown to perform considerably better in selecting good candidate models, but tend to fail if the best models are far from the dominant structural cluster. In this paper we show that model selection can be improved if both approaches are combined by pre-filtering the models used during the calculation of the structural consensus. Our recently published QMEAN composite scoring function has been improved by including an all-atom interaction potential term. The preliminary model ranking based on the new QMEAN score is used to select a subset of reliable models against which the structural consensus score is calculated. This scoring function, called QMEANclust, achieves a correlation coefficient between predicted quality score and GDT_TS of 0.9 averaged over the 98 CASP7 targets, and performs significantly better in selecting good models from the ensemble of server models than any other group participating in the quality estimation category of CASP7. Both scoring functions are also benchmarked on the MOULDER test set consisting of 20 target proteins, each with 300 alternative models generated by MODELLER. QMEAN outperforms all other tested scoring functions operating on individual models, while the consensus method QMEANclust only works properly on decoy sets containing a certain fraction of near-native conformations.
We also present a local version of QMEAN for the per-residue estimation of model quality (QMEANlocal) and compare it to a new local consensus-based approach. Improved model selection is obtained by using a composite scoring function operating on single models in order to enrich higher quality models which are subsequently used to calculate the structural consensus. The performance of consensus-based methods such as QMEANclust highly depends on the composition and quality of the model ensemble to be analysed. Therefore, performance estimates for consensus methods based on large meta-datasets (e.g. CASP) might overrate their applicability in more realistic modelling situations with smaller sets of models based on individual methods.
NASA Astrophysics Data System (ADS)
Réveillet, Marion; Six, Delphine; Vincent, Christian; Rabatel, Antoine; Dumont, Marie; Lafaysse, Matthieu; Morin, Samuel; Vionnet, Vincent; Litt, Maxime
2018-04-01
This study focuses on simulations of the seasonal and annual surface mass balance (SMB) of Saint-Sorlin Glacier (French Alps) for the period 1996-2015 using the detailed SURFEX/ISBA-Crocus snowpack model. The model is forced by SAFRAN meteorological reanalysis data, adjusted with automatic weather station (AWS) measurements to ensure that simulations of all the energy balance components, in particular turbulent fluxes, are accurately represented with respect to the measured energy balance. Results indicate good model performance for the simulation of summer SMB when using meteorological forcing adjusted with in situ measurements. Model performance however strongly decreases without in situ meteorological measurements. The sensitivity of the model to meteorological forcing indicates a strong sensitivity to wind speed, higher than the sensitivity to ice albedo. Compared to an empirical approach, the model exhibited better performance for simulations of snow and firn melting in the accumulation area and similar performance in the ablation area when forced with meteorological data adjusted with nearby AWS measurements. When such measurements were not available close to the glacier, the empirical model performed better. Our results suggest that simulations of the evolution of future mass balance using an energy balance model require very accurate meteorological data. Given the uncertainties in the temporal evolution of the relevant meteorological variables and glacier surface properties in the future, empirical approaches based on temperature and precipitation could be more appropriate for simulations of glaciers in the future.
NASA Astrophysics Data System (ADS)
Tudor, Magdalena
In 2012, IATA estimated the environmental impact of air transport at about 2% of global carbon dioxide emissions, a consequence of the rapidly growing global demand for the movement of people and goods, which was taken into account in the development of the aviation industry. The historic achievements of scientific and technical progress in commercial aviation contributed to this estimate, and research continues to make progress in reducing greenhouse gas emissions. Advances in commercial aircraft and engine design technology have aimed to improve flight performance, and these improvements have enhanced global flight planning for these types of aircraft. Almost all of these advances rely on generated performance data as reference sources, most of which are classified as "confidential" by the aircraft manufacturers. Very few aero-propulsive models for the climb regime exist in the literature, and none was designed without access to an engine database and/or to climb and cruise performance data with direct applicability to flight optimization. In this thesis, aero-propulsive model methodologies are proposed for the climb and cruise regimes, using system identification and validation methods, through which airplane performance can be computed and stored in a compact, easily accessible format. Acquiring performance data in this format makes it possible to optimize the flight profiles used by on-board Flight Management Systems. The aero-propulsive models developed here were investigated on two commercial-class aircraft, and both offered very good accuracy. One advantage is that they can be adapted to any other aircraft of the same class, even without access to the corresponding engine flight data.
In addition, these models could save airlines a considerable amount of money, given that the number of flight tests could be drastically reduced. Lastly, academia, and in particular the Laboratory of Applied Research in Active Controls, Avionics and Aeroservoelasticity (LARCASE) team, gains direct access to these aircraft performance data to build experience with novel flight-profile optimization algorithms.
Evaluating an online pharmaceutical education system for pharmacy interns in critical care settings.
Yeh, Yu-Ting; Chen, Hsiang-Yin; Cheng, Kuei-Ju; Hou, Ssu-An; Yen, Yu-Hsuan; Liu, Chien-Tsai
2014-02-01
Incorporating an electronic learning (eLearning) system into professional experiential programs such as pharmacy internships is a challenge, and none of the current systems can fully support the unique needs of clinical pharmacy internships. In this study we enhanced a commercial eLearning system for clinical pharmacy internships (the Clinical Pharmacy Internship eLearning System, CPIES). A knowledge-attitude-practice (KAP) questionnaire was used to compare the performance of group A, taught with the traditional teaching model, and group B, taught with the CPIES teaching model. The CPIES teaching model showed significant improvement in interns' knowledge and practice (p = 0.002 and 0.031, respectively), whereas the traditional teaching model demonstrated significant improvement only in practice (p = 0.011). Moreover, professionalism, such as attitudes toward cooperating with other health professionals, is developed by learning from a good mentor. Online and traditional teaching methods should therefore be blended into a complete teaching model in order to improve learners' professional knowledge, foster correct attitudes and encourage good practice. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Amagasa, Takashi; Nakayama, Takeo
2012-07-01
To test the hypothesis that the relationship reported between long working hours and depression was inconsistent across previous studies because job demand was treated as a confounder. Structural equation modeling was used to construct five models, using work-related factors and a depressive mood scale obtained from 218 clerical workers, to test for goodness of fit; the models were externally validated with data obtained from 1160 sales workers. Multiple logistic regression analysis was also performed. The model in which long working hours increased depression risk with job demand treated as an intermediate variable was the best-fitting model (goodness-of-fit index/root-mean-square error of approximation: 0.981 to 0.996/0.042 to 0.044). The odds ratio for depression risk with high-demand work of 60 hours or more per week was estimated at 2 to 4 versus low-demand work of less than 60 hours per week. Long working hours increased depression risk, with job demand being an intermediate variable.
Yajima, Airi; Uesawa, Yoshihiro; Ogawa, Chiaki; Yatabe, Megumi; Kondo, Naoki; Saito, Shinichiro; Suzuki, Yoshihiko; Atsuda, Kouichiro; Kagaya, Hajime
2015-05-01
There exist various useful predictive models, such as the Cockcroft-Gault model, for estimating creatinine clearance (CLcr). However, the prediction of renal function is difficult in patients with cancer treated with cisplatin. Therefore, we attempted to construct a new model for predicting CLcr in such patients. Japanese patients with head and neck cancer who had received cisplatin-based chemotherapy were used as subjects. A multiple regression equation was constructed as a model for predicting CLcr values based on background and laboratory data. A model for predicting CLcr, which included body surface area, serum creatinine and albumin, was constructed. The model exhibited good performance prior to cisplatin therapy. In addition, it performed better than previously reported models after cisplatin therapy. The predictive model constructed in the present study displayed excellent potential and was useful for estimating the renal function of patients treated with cisplatin therapy. Copyright© 2015 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
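The Cockcroft-Gault estimate against which the new model is compared is a standard formula. A sketch of that reference formula only; the paper's own regression coefficients for body surface area, serum creatinine and albumin are not given in the abstract, so they are not reproduced here:

```python
# Cockcroft-Gault estimate of creatinine clearance (CLcr).
# CLcr (mL/min) = (140 - age) * weight / (72 * SCr), multiplied by 0.85 for women.

def cockcroft_gault(age_years, weight_kg, serum_cr_mg_dl, female=False):
    clcr = (140 - age_years) * weight_kg / (72.0 * serum_cr_mg_dl)
    return clcr * 0.85 if female else clcr

# Hypothetical patient: 60 years old, 70 kg, serum creatinine 1.0 mg/dL.
print(cockcroft_gault(60, 70, 1.0))               # ~77.8 mL/min
print(cockcroft_gault(60, 70, 1.0, female=True))  # ~66.1 mL/min
```

The paper's point is that this kind of general-purpose estimate degrades in cisplatin-treated patients, motivating a cohort-specific regression model.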
Iranian risk model as a predictive tool for retinopathy in patients with type 2 diabetes.
Azizi-Soleiman, Fatemeh; Heidari-Beni, Motahar; Ambler, Gareth; Omar, Rumana; Amini, Masoud; Hosseini, Sayed-Mohsen
2015-10-01
Diabetic retinopathy (DR) is the leading cause of blindness in patients with type 1 or type 2 diabetes. The gold standard for the detection of DR requires expensive equipment. This study was undertaken to develop a simple and practical scoring system to predict the probability of DR. A total of 1782 patients who had first-degree relatives with type 2 diabetes were selected. Eye examinations were performed by an expert ophthalmologist. Biochemical and anthropometric predictors of DR were measured. Logistic regression was used to develop a statistical model that can be used to predict DR. Goodness of fit was examined using the Hosmer-Lemeshow test and the area under the receiver operating characteristic (ROC) curve. The risk model demonstrated good calibration and discrimination (ROC area=0.76) in the validation sample. Factors associated with DR in our model were duration of diabetes (odds ratio [OR]=2.14, confidence interval [CI] 95%=1.87 to 2.45); glycated hemoglobin (A1C) (OR=1.21, CI 95%=1.13 to 1.30); fasting plasma glucose (OR=1.83, CI 95%=1.28 to 2.62); systolic blood pressure (OR=1.01, CI 95%=1.00 to 1.02); and proteinuria (OR=1.37, CI 95%=1.01 to 1.85). The only factors that had a protective effect against DR were body mass index and education level (OR=0.95, CI 95%=0.92 to 0.98). The good performance of our risk model suggests that it may be a useful risk-prediction tool for DR. It consists of positive predictors (A1C, diabetes duration, sex [male], fasting plasma glucose, systolic blood pressure and proteinuria) as well as negative risk factors (body mass index and education level). Copyright © 2015 Canadian Diabetes Association. Published by Elsevier Inc. All rights reserved.
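The reported odds ratios correspond to the slope coefficients of the logistic model (coefficient = ln OR per unit of each predictor). A sketch of how such a model turns covariates into a predicted probability; the intercept and the exact covariate units are not reported in the abstract, so the intercept and patient values below are purely illustrative:

```python
# Logistic predictor assembled from the reported odds ratios.
# Coefficients are ln(OR) from the abstract; the intercept is hypothetical.
import math

LN_OR = {
    "diabetes_duration": math.log(2.14),
    "a1c": math.log(1.21),
    "fasting_glucose": math.log(1.83),
    "systolic_bp": math.log(1.01),
    "proteinuria": math.log(1.37),
    "bmi": math.log(0.95),  # protective: OR < 1 gives a negative coefficient
}
INTERCEPT = -8.0  # hypothetical; not reported in the abstract

def dr_probability(covariates):
    """P(DR) = 1 / (1 + exp(-(intercept + sum(coef * value))))."""
    z = INTERCEPT + sum(LN_OR[k] * v for k, v in covariates.items())
    return 1.0 / (1.0 + math.exp(-z))

p = dr_probability({"diabetes_duration": 3, "a1c": 8, "fasting_glucose": 1,
                    "systolic_bp": 130, "proteinuria": 1, "bmi": 27})
print(p)
```

Note that the BMI term lowers the linear predictor, matching the protective effect the abstract describes.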
Children's comprehension skill and the understanding of nominal metaphors.
Seigneuric, Alix; Megherbi, Hakima; Bueno, Steve; Lebahar, Julie; Bianco, Maryse
2016-10-01
According to Levorato and Cacciari's global elaboration model, understanding figurative language is explained by the same processes and background knowledge that are required for literal language. In this study, we investigated the relation between children's comprehension skill and the ability to understand referential nominal metaphors. Two groups of poor versus good comprehenders (8- to 10-year-olds) matched for word reading and vocabulary skills were invited to identify the referent of nouns used metaphorically or literally in short texts. Compared with good comprehenders, performance of poor comprehenders showed a substantial decrease in the metaphoric condition. Moreover, their performance was strongly affected by the degree of semantic incongruence between the terms of the nominal metaphor. These findings are discussed in relation to several factors, in particular the ability to use contextual information and semantic processing. Copyright © 2016 Elsevier Inc. All rights reserved.
Thermal Testing and Analysis of an Efficient High-Temperature Multi-Screen Internal Insulation
NASA Technical Reports Server (NTRS)
Weiland, Stefan; Handrick, Karin; Daryabeigi, Kamran
2007-01-01
Conventional multi-layer insulations exhibit excellent insulation performance, but they are limited to the temperature range that their components, reflective foils and spacer materials, can tolerate. For high-temperature applications, the internal multi-screen insulation (IMI) has been developed, which utilizes unique ceramic material technology to produce reflective screens with high temperature stability. For analytical insulation sizing, a parametric material model was developed that includes the main contributors to heat flow: radiation and conduction. The adaptation of model parameters based on effective steady-state thermal conductivity measurements performed at NASA Langley Research Center (LaRC) allows for extrapolation to arbitrary stack configurations and temperature ranges beyond those covered in the conductivity measurements. Experimental validation of the parametric material model was performed during the thermal qualification test of the X-38 Chin-panel, where test results and predictions showed good agreement.
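A parametric radiation-plus-conduction model of the kind described can be sketched as a two-parameter fit of effective conductivity versus mean temperature: a constant term standing for solid/gas conduction and a cubic term for screen-to-screen radiation. The functional form and all numbers below are illustrative assumptions, not the paper's actual parameterization:

```python
import numpy as np

def fit_keff(T_mean, k_measured):
    """Least-squares fit of k(T) = k_cond + a*T**3 to steady-state
    effective-conductivity measurements (linear in both parameters)."""
    T = np.asarray(T_mean, dtype=float)
    A = np.column_stack([np.ones(len(T)), T ** 3])
    k_cond, a = np.linalg.lstsq(A, np.asarray(k_measured, float), rcond=None)[0]
    return k_cond, a

def k_eff(T, k_cond, a):
    """Extrapolate the fitted model to temperatures outside the measured range."""
    return k_cond + a * np.asarray(T, dtype=float) ** 3

# Synthetic "measurements" generated from known parameters, then recovered.
T = np.array([300.0, 500.0, 700.0, 900.0, 1100.0])   # mean temperature, K
k = 0.02 + 3e-11 * T ** 3                            # W/(m*K), illustrative
k_cond, a = fit_keff(T, k)
```

Once the two parameters are pinned down by measurements, the same expression extrapolates to other stack configurations and temperature ranges, which is the role the parametric model plays in the paper.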
NASA Technical Reports Server (NTRS)
Yung, C. S.; Lansing, F. L.
1983-01-01
A 37.85 cu m (10,000 gallons) per year (nominal) passive solar powered water distillation system was installed and is operational in the Venus Deep Space Station. The system replaced an old, electrically powered water distiller. The distilled water produced, with its high electrical resistivity, is used to cool the sensitive microwave equipment. A detailed thermal model was developed to simulate the performance of the distiller and study its sensitivity under varying environmental and load conditions. The quasi-steady-state portion of the model is presented together with the formulas for the heat and mass transfer coefficients used. Initial results indicated that a daily water evaporation efficiency of 30% can be achieved. A comparison between a full-day performance simulation and the actual field measurements gave good agreement between theory and experiment, which verified the model.
Description of the University of Auckland Global Mars Mesoscale Meteorological Model (GM4)
NASA Astrophysics Data System (ADS)
Wing, D. R.; Austin, G. L.
2005-08-01
The University of Auckland Global Mars Mesoscale Meteorological Model (GM4) is a numerical weather prediction model of the Martian atmosphere that has been developed through the conversion of the Penn State University / National Center for Atmospheric Research fifth-generation mesoscale model (MM5). The global aspect of this model is self-consistent, overlapping, and forms a continuous domain around the entire planet, removing the need to provide boundary conditions other than at initialisation and yielding independence from the constraint of a Mars general circulation model. A brief overview of the model is given, outlining the key physical processes and the setup of the model. Comparisons between data collected from Mars Pathfinder during its 1997 mission and simulated conditions using GM4 have been performed. Diurnal temperature variation as predicted by the model shows very good correspondence with the surface truth data, to within 5 K for the majority of the diurnal cycle. Mars Viking data are also compared with the model, with good agreement. As a further means of validation for the model, various seasonal comparisons of surface and vertical atmospheric structure are conducted with the European Space Agency AOPP/LMD Mars Climate Database. Selected simulations over regions of interest are also presented.
Multi-scale modelling of supercapacitors: From molecular simulations to a transmission line model
NASA Astrophysics Data System (ADS)
Pean, C.; Rotenberg, B.; Simon, P.; Salanne, M.
2016-09-01
We perform molecular dynamics simulations of a typical nanoporous-carbon based supercapacitor. The organic electrolyte consists of 1-ethyl-3-methylimidazolium and hexafluorophosphate ions dissolved in acetonitrile. We simulate systems at equilibrium, for various applied voltages. This allows us to determine the relevant thermodynamic (capacitance) and transport (in-pore resistivities) properties. These quantities are then injected into a transmission line model to test its ability to predict the charging properties of the device. The results from this macroscopic model are in good agreement with non-equilibrium molecular dynamics simulations, which validates its use for interpreting electrochemical impedance experiments.
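The transmission line idea can be illustrated with an explicit finite-difference RC ladder (a de Levie-type pore model): the pore's total ionic resistance and double-layer capacitance are split over segments, and a step voltage charges the line. All values below are illustrative assumptions, not quantities extracted from the molecular simulations:

```python
import numpy as np

def tl_charge(R, C, n_seg=30, V_app=1.0, t_max=5.0, dt=2e-4):
    """Charge fraction of a uniform RC transmission line with an open far
    end after time t_max, integrated by forward Euler. For stability of the
    explicit scheme, dt must satisfy dt < 0.5*(R/n_seg)*(C/n_seg)."""
    r, c = R / n_seg, C / n_seg
    v = np.zeros(n_seg)                   # node potentials along the pore
    for _ in range(int(t_max / dt)):
        i_in = np.empty(n_seg)
        i_in[0] = (V_app - v[0]) / r      # current from the bulk reservoir
        i_in[1:] = (v[:-1] - v[1:]) / r   # segment-to-segment currents
        i_out = np.zeros(n_seg)
        i_out[:-1] = i_in[1:]             # the open end passes no current on
        v += dt * (i_in - i_out) / c
    return v.mean() / V_app               # -> 1 when fully charged

frac = tl_charge(R=1.0, C=1.0)            # essentially fully charged by t=5*RC
```

The characteristic charging time scales with the R·C product, which is exactly what feeding in-pore resistivity and capacitance into the model lets one predict, and what an impedance experiment probes.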
NASA Astrophysics Data System (ADS)
Sun, Guo-Qin; Sun, Feng-Yang; Cao, Fang-Li; Chen, Shu-Jun; Barkey, Mark E.
2015-11-01
The numerical simulation of tensile fracture behavior on Al-Cu alloy friction stir-welded joint was performed with the Gurson-Tvergaard-Needleman (GTN) damage model. The parameters of the GTN model were studied in each region of the friction stir-welded joint by means of inverse identification. Based on the obtained parameters, the finite element model of the welded joint was built to predict the fracture behavior and tension properties. Good agreement can be found between the numerical and experimental results in the location of the tensile fracture and the mechanical properties.
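The GTN model enters such simulations through its porosity-dependent yield function. A minimal evaluation is sketched below, using the common Tvergaard default q-values rather than the inversely identified, weld-region-specific parameters of the paper:

```python
import numpy as np

def gtn_yield(sigma_eq, sigma_m, sigma_y, f_star, q1=1.5, q2=1.0, q3=2.25):
    """Gurson-Tvergaard-Needleman yield function:
        Phi = (sig_eq/sig_y)^2 + 2*q1*f* cosh(1.5*q2*sig_m/sig_y) - (1 + q3*f*^2)
    Phi < 0 is elastic, Phi = 0 lies on the yield surface; f* is the
    effective void volume fraction. q-values are the usual defaults."""
    return ((sigma_eq / sigma_y) ** 2
            + 2.0 * q1 * f_star * np.cosh(1.5 * q2 * sigma_m / sigma_y)
            - (1.0 + q3 * f_star ** 2))

# With no voids (f* = 0) GTN reduces to von Mises: sigma_eq = sigma_y yields.
phi_dense = gtn_yield(sigma_eq=200.0, sigma_m=50.0, sigma_y=200.0, f_star=0.0)
# Porosity shrinks the yield surface: the same stress state is now plastic.
phi_porous = gtn_yield(200.0, 50.0, 200.0, f_star=0.05)
```

Calibrating f*-related parameters per weld region, as the paper does by inverse identification, changes where Phi crosses zero and hence where damage localizes in the joint.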
Hosseinzadeh, M; Ghoreishi, M; Narooei, K
2016-06-01
In this study, the hyperelastic models of demineralized and deproteinized bovine cortical femur bone were investigated and appropriate models were developed. Using uniaxial compression test data, the strain energy versus stretch was calculated and the appropriate hyperelastic strain energy functions were fitted to the data in order to calculate the material parameters. To obtain the mechanical behavior in other loading conditions, the hyperelastic strain energy equations were investigated for pure shear and equi-biaxial tension loadings. The results showed that the Mooney-Rivlin and Ogden models cannot accurately predict the mechanical response of demineralized and deproteinized bovine cortical femur bone, while the general exponential-exponential and general exponential-power law models show good agreement with the experimental results. To investigate the sensitivity of the hyperelastic models, a variation of 10% in the material parameters was applied, and the results indicated acceptable stability for the general exponential-exponential and general exponential-power law models. Finally, the uniaxial tension and compression of cortical femur bone were studied using the finite element method in a VUMAT user subroutine of the ABAQUS software, and the computed stress-stretch curves showed good agreement with the experimental data. Copyright © 2016 Elsevier Ltd. All rights reserved.
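The fitting step — recovering strain-energy constants from uniaxial data — is linear for the two-parameter Mooney-Rivlin form and can be sketched as below. Note the paper found Mooney-Rivlin inadequate for this tissue; the synthetic data here (arbitrary placeholder constants, not bone parameters) only demonstrate the procedure:

```python
import numpy as np

def mr_stress(lam, C1, C2):
    """Uniaxial nominal stress of an incompressible Mooney-Rivlin solid."""
    lam = np.asarray(lam, dtype=float)
    return 2.0 * (lam - lam ** -2) * (C1 + C2 / lam)

def mr_fit(stretch, stress):
    """Least-squares fit of (C1, C2): the model is linear in both constants."""
    lam = np.asarray(stretch, dtype=float)
    g = 2.0 * (lam - lam ** -2)
    A = np.column_stack([g, g / lam])
    C1, C2 = np.linalg.lstsq(A, np.asarray(stress, float), rcond=None)[0]
    return C1, C2

# Synthetic "test data" from known constants, then recovered by the fit.
lam = np.linspace(1.05, 2.0, 20)
C1, C2 = mr_fit(lam, mr_stress(lam, 0.3, 0.1))
```

Predictions for pure shear or equi-biaxial tension then follow by inserting the corresponding deformation states into the same strain-energy function, which is how the abstract describes checking other loading conditions.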
Kernel spectral clustering with memory effect
NASA Astrophysics Data System (ADS)
Langone, Rocco; Alzate, Carlos; Suykens, Johan A. K.
2013-05-01
Evolving graphs describe many natural phenomena changing over time, such as social relationships, trade markets, metabolic networks, etc. In this framework, performing community detection and analyzing the cluster evolution represents a critical task. Here we propose a new model for this purpose, where the smoothness of the clustering results over time can be considered as valid prior knowledge. It is based on a constrained optimization formulation typical of Least Squares Support Vector Machines (LS-SVM), where the objective function is designed to explicitly incorporate temporal smoothness. The latter allows the model to cluster the current data well and to be consistent with the recent history. We also propose new model selection criteria in order to carefully choose the hyper-parameters of our model, which is a crucial issue for achieving good performance. We successfully test the model on four toy problems and on a real-world network. We also compare our model with Evolutionary Spectral Clustering, which is a state-of-the-art algorithm for community detection of evolving networks, illustrating that kernel spectral clustering with memory effect can achieve better or equal performance.
Stability and bifurcation for an SEIS epidemic model with the impact of media
NASA Astrophysics Data System (ADS)
Huo, Hai-Feng; Yang, Peng; Xiang, Hong
2018-01-01
A novel SEIS epidemic model with the impact of media is introduced. By analyzing the characteristic equation of the equilibrium, the basic reproduction number is obtained and the stability of the steady states is proved. The occurrence of forward, backward and Hopf bifurcations is derived. Numerical simulations and sensitivity analysis are performed. Our results show that media can be regarded as a good indicator for controlling the emergence and spread of the epidemic disease.
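The bare SEIS backbone (without the media-impact term, which is the paper's contribution) can be integrated by forward Euler; with transmission rate beta and recovery rate gamma, the basic reproduction number R0 = beta/gamma separates die-out from persistence. All rates below are illustrative:

```python
def seis_simulate(beta, sigma, gamma, s0, e0, i0, t_max=200.0, dt=0.01):
    """SEIS compartments as population fractions:
    S -> E at rate beta*S*I, E -> I at rate sigma, I -> S at rate gamma."""
    s, e, i = s0, e0, i0
    for _ in range(int(t_max / dt)):
        new_exposed = beta * s * i
        ds = gamma * i - new_exposed       # recovered return to susceptible
        de = new_exposed - sigma * e
        di = sigma * e - gamma * i
        s, e, i = s + ds * dt, e + de * dt, i + di * dt
    return s, e, i

# R0 = 5: the infection persists; susceptibles settle at s* = gamma/beta.
s, e, i = seis_simulate(beta=0.5, sigma=0.2, gamma=0.1,
                        s0=0.99, e0=0.0, i0=0.01)
```

A media term typically makes beta a decreasing function of i (reduced contacts when coverage is high), which is what produces the richer backward and Hopf bifurcation behavior the abstract reports.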
NASA Astrophysics Data System (ADS)
Liu, Lei; zhang, Zhihua; Wang, Ya; Qin, hao
2018-03-01
The study of the pressure resistance performance of emulsion explosives in deep water can provide a theoretical basis for underwater blasting, deep-hole blasting and emulsion explosive development. The sensitizer is an important component of emulsion explosives. Using reusable experimental devices to simulate the charge environment in deep water, the influence of the content of chemical sensitizer on the deep-water pressure resistance performance of emulsion explosives was studied. The experimental results show that as the content of chemical sensitizer increases, the deep-water pressure resistance performance of emulsion explosives gradually improves, and when the pressure is fairly large, the effect is particularly pronounced; within a certain range, as the content of chemical sensitizer increases, the explosion performance of the emulsion explosives also gradually improves, but when the content reaches a certain value, the explosion properties decline instead; under the same emulsion matrix condition, when the content of NaNO2 is 0.2%, the emulsion explosives have good resistance to water pressure and good explosion properties. The correctness of the above results was verified in model blasting.
Cavo, Marta; Scaglione, Silvia
2016-11-01
The truly nontrivial goal of tissue engineering is to combine all scaffold micro-architectural features, affecting both fluid-dynamical and mechanical performance, to obtain a fully functional implant. In this work we identified an optimal geometrical pattern for bone tissue engineering applications that best balances several graft needs corresponding to competing design goals. In particular, we investigated the changes in graft behavior induced by varying pore size (300μm, 600μm, 900μm), interpore distance (equal to pore size or fixed at 300μm) and pore interconnection (absent, 45°-oriented, 90°-oriented). Mathematical considerations and Computational Fluid Dynamics (CFD) tools, here combined in a complete theoretical model, were employed to this aim. Poly-lactic acid (PLA) based samples were realized by 3D printing, based on the modeled architectures. A collagen (COL) coating was also realized on the graft surface, and the interaction between PLA and COL, besides the protein's contribution to graft bioactivity, was evaluated. Scaffolds were extensively characterized; human articular cells were used to test their biocompatibility and to evaluate the theoretical model predictions. Grafts fulfilled both the chemical and physical requirements. Finally, good agreement was found between the theoretical model predictions and the experimental data, making these prototypes good candidates for bone graft replacements. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Priyadarshini, Lakshmi
Frequently transported packaged goods are prone to damage from impact, jolting or vibration in transit. Fragile goods, for example glass, ceramics and porcelain, are susceptible to mechanical stresses. Hence ancillary materials like cushions play an important role when utilized within a package. In this work, an analytical model of a 3D cellular structure is established based on the Kelvin model and a lattice structure. The research provides a comparative study between the 3D printed Kelvin unit structure and the 3D printed lattice structure. The comparative investigation is based on parameters defining cushion performance, such as cushion creep, indentation, and cushion curve analysis. 3D printing is applied here for rapid prototyping, and the study provides information on which model delivers the better form of energy absorption. 3D printed foam is shown to be a cost-effective approach for prototyping. The research also investigates the selection of material for the 3D printing process. As cushion development demands flexible material, three-dimensional printing with a material having elastomeric properties is required. Further, the concept of the cushion design is based on the Kelvin model structure and the lattice structure. The analytical solution provides the cushion curve analysis with respect to the results observed when a load is applied over the cushion. The results are reported on the basis of attenuation and amplification curves.
Hameed, Shilan S.; Aziz, Fakhra; Sulaiman, Khaulah; Ahmad, Zubair
2017-01-01
In this research work, numerical simulations are performed to correlate the photovoltaic parameters with various internal and external factors influencing the performance of solar cells. A single-diode modeling approach is utilized for this purpose, and theoretical investigations are compared with the reported experimental evidence for organic and inorganic solar cells at various electrical and thermal conditions. Electrical parameters include the parasitic resistances (Rs and Rp) and the ideality factor (n), while thermal parameters can be defined by the cell's temperature (T). A comprehensive analysis concerning broad spectral variations in the short-circuit current (Isc), open-circuit voltage (Voc), fill factor (FF) and efficiency (η) is presented and discussed. It was generally concluded that there is good agreement between the simulated results and experimental findings. Nevertheless, the controversial consequence of temperature impact on the performance of organic solar cells necessitates the development of a complementary model capable of properly simulating the temperature impact on these devices' performance. PMID:28793325
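The single-diode model at the heart of such simulations is an implicit equation in the current. A sketch with illustrative parameter values (placeholders, not those of any device in the paper), solved by bisection:

```python
import numpy as np

def diode_current(V, Iph=5.0, I0=1e-9, n=1.5, Rs=0.02, Rp=200.0, T=298.15):
    """Solve the single-diode equation for the cell current I at voltage V:
        I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rp
    The residual is monotone decreasing in I, so bisection is safe.
    All parameter values are illustrative placeholders."""
    k, q = 1.380649e-23, 1.602176634e-19
    Vt = k * T / q                            # thermal voltage
    def resid(I):
        return (Iph - I0 * np.expm1((V + I * Rs) / (n * Vt))
                - (V + I * Rs) / Rp - I)
    lo, hi = -2 * Iph, 2 * Iph                # bracket containing the root
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if resid(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

i_sc = diode_current(0.0)     # short-circuit current, approximately Iph
```

Sweeping V from 0 up to the open-circuit voltage then yields Isc, Voc, the fill factor and the efficiency; temperature enters through Vt (and, physically, through the saturation current I0).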
28 CFR 523.10 - Purpose and scope.
Code of Federal Regulations, 2010 CFR
2010-07-01
... TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.10 Purpose and scope. (a) The Bureau of Prisons awards extra good time credit for performing exceptionally meritorious service, or for performing duties... of extra good time award at a time (e.g., an inmate earning industrial or camp good time is not...
28 CFR 523.10 - Purpose and scope.
Code of Federal Regulations, 2014 CFR
2014-07-01
... TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.10 Purpose and scope. (a) The Bureau of Prisons awards extra good time credit for performing exceptionally meritorious service, or for performing duties... of extra good time award at a time (e.g., an inmate earning industrial or camp good time is not...
28 CFR 523.10 - Purpose and scope.
Code of Federal Regulations, 2011 CFR
2011-07-01
... TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.10 Purpose and scope. (a) The Bureau of Prisons awards extra good time credit for performing exceptionally meritorious service, or for performing duties... of extra good time award at a time (e.g., an inmate earning industrial or camp good time is not...
28 CFR 523.10 - Purpose and scope.
Code of Federal Regulations, 2013 CFR
2013-07-01
... TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.10 Purpose and scope. (a) The Bureau of Prisons awards extra good time credit for performing exceptionally meritorious service, or for performing duties... of extra good time award at a time (e.g., an inmate earning industrial or camp good time is not...
28 CFR 523.10 - Purpose and scope.
Code of Federal Regulations, 2012 CFR
2012-07-01
... TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.10 Purpose and scope. (a) The Bureau of Prisons awards extra good time credit for performing exceptionally meritorious service, or for performing duties... of extra good time award at a time (e.g., an inmate earning industrial or camp good time is not...
Bouvet, J-M; Makouanzi, G; Cros, D; Vigneron, Ph
2016-01-01
Hybrids are broadly used in plant breeding and accurate estimation of variance components is crucial for optimizing genetic gain. Genome-wide information may be used to explore models designed to assess the extent of additive and non-additive variance and test their prediction accuracy for genomic selection. Ten linear mixed models, involving pedigree- and marker-based relationship matrices among parents, were developed to estimate additive (A), dominance (D) and epistatic (AA, AD and DD) effects. Five complementary models, involving the gametic phase to estimate marker-based relationships among hybrid progenies, were developed to assess the same effects. The models were compared using tree height and 3303 single-nucleotide polymorphism markers from 1130 cloned individuals obtained via controlled crosses of 13 Eucalyptus urophylla females with 9 Eucalyptus grandis males. Akaike information criterion (AIC), variance ratios, asymptotic correlation matrices of estimates, goodness-of-fit, prediction accuracy and mean square error (MSE) were used for the comparisons. The variance components and variance ratios differed according to the model. Models with a parent marker-based relationship matrix performed better than those that were pedigree-based, that is, an absence of singularities, lower AIC, higher goodness-of-fit and accuracy and smaller MSE. However, AD and DD variances were estimated with high standard errors. Using the same criteria, progeny gametic phase-based models performed better in fitting the observations and predicting genetic values. However, DD variance could not be separated from the dominance variance and null estimates were obtained for AA and AD effects. This study highlighted the advantages of progeny models using genome-wide information. PMID:26328760
Multi-Complementary Model for Long-Term Tracking
Zhang, Deng; Zhang, Junchang; Xia, Chenyang
2018-01-01
In recent years, video target tracking algorithms have been widely used. However, many tracking algorithms do not achieve satisfactory performance, especially when dealing with problems such as object occlusions, background clutters, motion blur, low illumination color images, and sudden illumination changes in real scenes. In this paper, we incorporate an object model based on contour information into a Staple tracker that combines the correlation filter model and color model to greatly improve the tracking robustness. Since each model is responsible for tracking specific features, the three complementary models combine for more robust tracking. In addition, we propose an efficient object detection model with contour and color histogram features, which has good detection performance and better detection efficiency compared to the traditional target detection algorithm. Finally, we optimize the traditional scale calculation, which greatly improves the tracking execution speed. We evaluate our tracker on the Object Tracking Benchmarks 2013 (OTB-13) and Object Tracking Benchmarks 2015 (OTB-15) benchmark datasets. With the OTB-13 benchmark datasets, our algorithm is improved by 4.8%, 9.6%, and 10.9% on the success plots of OPE, TRE and SRE, respectively, in contrast to another classic LCT (Long-term Correlation Tracking) algorithm. On the OTB-15 benchmark datasets, when compared with the LCT algorithm, our algorithm achieves 10.4%, 12.5%, and 16.1% improvement on the success plots of OPE, TRE, and SRE, respectively. At the same time, it needs to be emphasized that, due to the high computational efficiency of the color model and the object detection model using efficient data structures, and the speed advantage of the correlation filters, our tracking algorithm could still achieve good tracking speed. PMID:29425170
Dégano, Irene R; Subirana, Isaac; Torre, Marina; Grau, María; Vila, Joan; Fusco, Danilo; Kirchberger, Inge; Ferrières, Jean; Malmivaara, Antti; Azevedo, Ana; Meisinger, Christa; Bongard, Vanina; Farmakis, Dimitros; Davoli, Marina; Häkkinen, Unto; Araújo, Carla; Lekakis, John; Elosua, Roberto; Marrugat, Jaume
2015-03-01
Hospital performance models in acute myocardial infarction (AMI) are useful to assess patient management. While models are available for individual countries, mainly US, cross-European performance models are lacking. Thus, we aimed to develop a system to benchmark European hospitals in AMI and percutaneous coronary intervention (PCI), based on predicted in-hospital mortality. We used the EURopean HOspital Benchmarking by Outcomes in ACS Processes (EURHOBOP) cohort to develop the models, which included 11,631 AMI patients and 8276 acute coronary syndrome (ACS) patients who underwent PCI. Models were validated with a cohort of 55,955 European ACS patients. Multilevel logistic regression was used to predict in-hospital mortality in European hospitals for AMI and PCI. Administrative and clinical models were constructed with patient- and hospital-level covariates, as well as hospital- and country-based random effects. Internal cross-validation and external validation showed good discrimination at the patient level and good calibration at the hospital level, based on the C-index (0.736-0.819) and the concordance correlation coefficient (55.4%-80.3%). Mortality ratios (MRs) showed excellent concordance between administrative and clinical models (97.5% for AMI and 91.6% for PCI). Exclusion of transfers and hospital stays ≤1day did not affect in-hospital mortality prediction in sensitivity analyses, as shown by MR concordance (80.9%-85.4%). Models were used to develop a benchmarking system to compare in-hospital mortality rates of European hospitals with similar characteristics. The developed system, based on the EURHOBOP models, is a simple and reliable tool to compare in-hospital mortality rates between European hospitals in AMI and PCI. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
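The benchmarking step reduces to comparing each hospital's observed deaths with the deaths its case mix predicts. A toy illustration of that observed/expected ratio — the patient-level predicted risks here are assumed numbers, not outputs of the EURHOBOP models:

```python
import numpy as np

def mortality_ratio(observed_deaths, predicted_risks):
    """Observed/expected in-hospital mortality ratio for one hospital:
    expected deaths = sum of patient-level predicted death probabilities
    from a risk model (here hypothetical)."""
    expected = float(np.sum(predicted_risks))
    return observed_deaths / expected

# 100 AMI patients, each with a model-predicted death risk of 10%:
# 10 deaths are expected, so 12 observed deaths give a ratio above 1,
# i.e. worse than hospitals with a similar case mix.
mr = mortality_ratio(12, np.full(100, 0.10))
```

Risk adjustment via patient- and hospital-level covariates (plus random effects for hospital and country, as in the paper) is what makes such ratios comparable across institutions.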
Forecasting of dissolved oxygen in the Guanting reservoir using an optimized NGBM (1,1) model.
An, Yan; Zou, Zhihong; Zhao, Yanfei
2015-03-01
An optimized nonlinear grey Bernoulli model was proposed, using a particle swarm optimization algorithm to solve the parameter optimization problem. In addition, each item in the first-order accumulated generating sequence was set in turn as the initial condition to determine which alternative would yield the highest forecasting accuracy. To test the forecasting performance, the optimized models with different initial conditions were then used to simulate dissolved oxygen concentrations at the Guanting reservoir inlet and outlet (China). The empirical results show that the optimized model can remarkably improve forecasting accuracy, and that the particle swarm optimization technique is a good tool for solving parameter optimization problems. Moreover, an optimized model whose initial condition performs well in in-sample simulation may not do as well in out-of-sample forecasting. Copyright © 2015. Published by Elsevier B.V.
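For orientation: the NGBM(1,1) reduces to the classical GM(1,1) grey model when its Bernoulli exponent is zero; the paper's contribution is tuning that exponent (and the initial condition) with particle swarm optimization. A GM(1,1) sketch on synthetic data — nothing here uses the reservoir series:

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """GM(1,1) grey forecasting for a short positive series x0.
    Returns fitted values followed by `steps` out-of-sample forecasts."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # first-order accumulated sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])             # background (mean) values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])

# A geometric series is nearly exponential, so GM(1,1) fits it very closely.
series = 2.0 * 1.1 ** np.arange(8)
fit = gm11_forecast(series, steps=2)          # 8 fitted values + 2 forecasts
```

NGBM(1,1) replaces the constant right-hand side b with b·z1^γ; γ and the choice of which accumulated item serves as the initial condition are then the quantities the paper optimizes.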
Prediction of wastewater treatment plants performance based on artificial fish school neural network
NASA Astrophysics Data System (ADS)
Zhang, Ruicheng; Li, Chong
2011-10-01
A reliable model of a wastewater treatment plant is essential for predicting its performance and forms a basis for controlling the operation of the process. This would minimize operating costs and assess the stability of the environmental balance. Given the multi-variable, uncertain, non-linear characteristics of the wastewater treatment system, an artificial fish school neural network prediction model is established based on actual operating data from the wastewater treatment system. The model overcomes several disadvantages of the conventional BP neural network. The results of the model calculations show that predicted values match measured values well, that the model is effective for simulation and prediction, and that it can be used to optimize the operating status. The prediction model provides a simple and practical way to support operation and management in wastewater treatment plants, and has good research value and practical engineering value.
NASA Astrophysics Data System (ADS)
Adineh-Vand, A.; Torabi, M.; Roshani, G. H.; Taghipour, M.; Feghhi, S. A. H.; Rezaei, M.; Sadati, S. M.
2013-09-01
This paper presents a soft-computing-based artificial intelligence technique, the adaptive neuro-fuzzy inference system (ANFIS), to predict the neutron production rate (NPR) of the IR-IECF device over wide discharge current and voltage ranges. A hybrid learning algorithm consisting of back-propagation and least-squares estimation is used for training the ANFIS model. The performance of the proposed ANFIS model is tested against the experimental data using four performance measures: correlation coefficient, mean absolute error, mean relative error percentage (MRE%) and root mean square error. The obtained results show that the proposed ANFIS model achieves good agreement with the experimental results: in comparison to the experimental data, the proposed ANFIS model has MRE% < 1.53% and 2.85% for training and testing data, respectively. Therefore, this model can be used as an efficient tool to predict the NPR in the IR-IECF device.
Forecasting stochastic neural network based on financial empirical mode decomposition.
Wang, Jie; Wang, Jun
2017-06-01
In an attempt to improve the forecasting accuracy of stock price fluctuations, a new one-step-ahead model is developed in this paper which combines empirical mode decomposition (EMD) with a stochastic time strength neural network (STNN). EMD is a processing technique introduced to extract all the oscillatory modes embedded in a series, and the STNN model is established to account for the weight of the occurrence time of the historical data. Linear regression is used to assess the predictive capability of the proposed model, and the effectiveness of EMD-STNN is revealed clearly by comparing the predicted results with those of traditional models. Moreover, a new evaluation method (q-order multiscale complexity invariant distance) is applied to measure the predicted results of real stock index series, and the empirical results show that the proposed model indeed displays good performance in forecasting stock market fluctuations. Copyright © 2017 Elsevier Ltd. All rights reserved.
What makes a good clinical student and teacher? An exploratory study.
Goldie, John; Dowie, Al; Goldie, Anne; Cotton, Phil; Morrison, Jill
2015-03-10
What makes a good clinical student is an area that has received little coverage in the literature and much of the available literature is based on essays and surveys. It is particularly relevant as recent curricular innovations have resulted in greater student autonomy. We also wished to look in depth at what makes a good clinical teacher. A qualitative approach using individual interviews with educational supervisors and focus groups with senior clinical students was used. Data was analysed using a "framework" technique. Good clinical students were viewed as enthusiastic and motivated. They were considered to be proactive and were noted to be visible in the wards. They are confident, knowledgeable, able to prioritise information, flexible and competent in basic clinical skills by the time of graduation. They are fluent in medical terminology while retaining the ability to communicate effectively and are genuine when interacting with patients. They do not let exam pressure interfere with their performance during their attachments. Good clinical teachers are effective role models. The importance of teachers' non-cognitive characteristics such as inter-personal skills and relationship building was particularly emphasised. To be effective, teachers need to take into account individual differences among students, and the communicative nature of the learning process through which students learn and develop. Good teachers were noted to promote student participation in ward communities of practice. Other members of clinical communities of practice can be effective teachers, mentors and role models. Good clinical students are proactive in their learning; an important quality where students are expected to be active in managing their own learning. Good clinical students share similar characteristics with good clinical teachers. A teacher's enthusiasm and non-cognitive abilities are as important as their cognitive abilities. Student learning in clinical settings is a collective responsibility. Our findings could be used in tutor training and for formative assessment of both clinical students and teachers. This may promote early recognition and intervention when problems arise.
Numerical simulation of damage evolution for ductile materials and mechanical properties study
NASA Astrophysics Data System (ADS)
El Amri, A.; Hanafi, I.; Haddou, M. E. Y.; Khamlichi, A.
2015-12-01
This paper presents results of numerical modelling of the ductile fracture and failure of elements made of 5182H111 aluminium alloy subjected to dynamic traction. The analysis was performed using the Johnson-Cook model in the ABAQUS software. The difficulty in modelling ductile fracture mainly arises because there is a tremendous span of length scales from the structural problem to the micro-mechanics problem governing the material separation process. This study used the experimental results to calibrate simple crack propagation criteria for shell elements of the kind often used in practical analyses. The performance of the proposed model is in general good, and it is believed that the presented results and the experimental-numerical calibration procedure can be of use in practical finite-element simulations.
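The Johnson-Cook flow stress used in such analyses is a closed-form product of strain-hardening, strain-rate and thermal-softening terms. A sketch with illustrative constants (placeholders, not the calibrated 5182H111 values from the paper):

```python
import numpy as np

def johnson_cook_stress(eps, eps_rate, T, A=300.0, B=450.0, n=0.3,
                        C=0.015, m=1.0, eps_rate0=1.0,
                        T_room=293.0, T_melt=893.0):
    """Johnson-Cook flow stress (MPa), standard form:
        sigma = (A + B*eps^n) * (1 + C*ln(eps_rate/eps_rate0)) * (1 - T*^m)
    with homologous temperature T* = (T - T_room)/(T_melt - T_room).
    Constants are illustrative, not the paper's calibration."""
    T_star = (T - T_room) / (T_melt - T_room)
    return ((A + B * eps ** n)
            * (1.0 + C * np.log(eps_rate / eps_rate0))
            * (1.0 - T_star ** m))

# Strain hardening raises the flow stress; heating softens it.
s_cold = johnson_cook_stress(eps=0.1, eps_rate=1.0, T=293.0)
s_hot = johnson_cook_stress(eps=0.1, eps_rate=1.0, T=600.0)
```

In a finite-element run the solver evaluates this at every integration point; a companion Johnson-Cook damage criterion (or a calibrated crack propagation criterion, as here) then decides when elements fail.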
Topology-Aware Performance Optimization and Modeling of Adaptive Mesh Refinement Codes for Exascale
Chan, Cy P.; Bachan, John D.; Kenny, Joseph P.; ...
2017-01-26
Here, we introduce a topology-aware performance optimization and modeling workflow for AMR simulation that includes two new modeling tools, ProgrAMR and Mota Mapper, which interface with the BoxLib AMR framework and the SSTmacro network simulator. ProgrAMR allows us to generate and model the execution of task dependency graphs from high-level specifications of AMR-based applications, which we demonstrate by analyzing two example AMR-based multigrid solvers with varying degrees of asynchrony. Mota Mapper generates multiobjective, network topology-aware box mappings, which we apply to optimize the data layout for the example multigrid solvers. While the sensitivity of these solvers to layout and execution strategy appears to be modest for balanced scenarios, the impact of better mapping algorithms can be significant when performance is highly constrained by network hop latency. Furthermore, we show that network latency in the multigrid bottom solve is the main contributing factor preventing good scaling on exascale-class machines.
Financial Time Series Prediction Using Elman Recurrent Random Neural Networks
Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli
2016-01-01
In recent years, forecasting financial market dynamics has been a focus of economic research. To predict stock market price indices, we developed an architecture that combines Elman recurrent neural networks with a stochastic time-effective function. The proposed model was analyzed with linear regression, complexity invariant distance (CID) and multiscale CID (MCID) methods and compared with the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN) and the standard Elman recurrent neural network (ERNN); the empirical results show that the proposed network performs best among these networks in financial time series forecasting. Further, the predictive performance of the established model was tested on the SSE, TWSE, KOSPI and Nikkei225 indices, and the corresponding statistical comparisons of these market indices are also exhibited. The experimental results show that this approach performs well in predicting stock market index values. PMID:27293423
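The Elman recurrence at the heart of such a model can be sketched in a few lines. This is a generic Elman cell with a tanh activation, not the authors' stochastic time-effective variant; all weights, sizes and the toy input sequence are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

class ElmanCell:
    """Minimal Elman recurrent cell: h_t = tanh(Wx x_t + Wh h_{t-1} + b)."""
    def __init__(self, n_in, n_hidden):
        self.Wx = rng.normal(0.0, 0.1, (n_hidden, n_in))      # input weights
        self.Wh = rng.normal(0.0, 0.1, (n_hidden, n_hidden))  # recurrent weights
        self.b = np.zeros(n_hidden)

    def forward(self, xs):
        """Run the cell over a sequence, returning all hidden states."""
        h = np.zeros_like(self.b)
        states = []
        for x in xs:
            h = np.tanh(self.Wx @ x + self.Wh @ h + self.b)
            states.append(h)
        return np.array(states)

cell = ElmanCell(n_in=1, n_hidden=4)
seq = [np.array([v]) for v in (0.1, 0.2, 0.3)]  # a toy price series
states = cell.forward(seq)
print(states.shape)  # (3, 4)
```

The distinguishing feature versus a plain feed-forward network is the `Wh h` term, which feeds the previous hidden state back into the current step.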
Submillimetre wave imaging and security: imaging performance and prediction
NASA Astrophysics Data System (ADS)
Appleby, R.; Ferguson, S.
2016-10-01
Within the European Commission Seventh Framework Programme (FP7), CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) has designed and is fabricating a stand-off system operating at sub-millimetre wave frequencies for the detection of objects concealed on people. This system scans people as they walk by the sensor. This paper presents the top level system design which brings together both passive and active sensors to provide good performance. The passive system operates in two bands between 100 and 600GHz and is based on a cryogen free cooled focal plane array sensor whilst the active system is a solid-state 340GHz radar. A modified version of OpenFX was used for modelling the passive system. This model was recently modified to include realistic location-specific skin temperature and to accept animated characters wearing up to three layers of clothing that move dynamically, such as those typically found in cinematography. Targets under clothing have been modelled and the performance simulated. The strengths and weaknesses of this modelling approach are discussed.
Team performance in the Italian NHS: the role of reflexivity.
Urbini, Flavio; Callea, Antonino; Chirumbolo, Antonio; Talamo, Alessandra; Ingusci, Emanuela; Ciavolino, Enrico
2018-04-09
Purpose The purpose of this paper is twofold: first, to investigate the goodness of the input-process-output (IPO) model in order to evaluate work team performance within the Italian National Health Care System (NHS); and second, to test the mediating role of reflexivity as an overarching process factor between input and output. Design/methodology/approach The Italian version of the Aston Team Performance Inventory was administered to 351 employees working in teams in the Italian NHS. Mediation analyses with latent variables were performed via structural equation modeling (SEM); the significance of total, direct, and indirect effect was tested via bootstrapping. Findings Underpinned by the IPO framework, the results of SEM supported mediational hypotheses. First, the application of the IPO model in the Italian NHS showed adequate fit indices, showing that the process mediates the relationship between input and output factors. Second, reflexivity mediated the relationship between input and output, influencing some aspects of team performance. Practical implications The results provide useful information for HRM policies improving process dimensions of the IPO model via the mediating role of reflexivity as a key role in team performance. Originality/value This study is one of a limited number of studies that applied the IPO model in the Italian NHS. Moreover, no study has yet examined the role of reflexivity as a mediator between input and output factors in the IPO model.
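The bootstrapped indirect effect used to test mediation can be illustrated on a simple observed-variable model. The paper uses latent variables and SEM; this sketch uses plain OLS on synthetic data, and all coefficients, sample sizes and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)                                   # input factor
m = 0.5 * x + rng.normal(scale=0.5, size=n)              # mediator ("reflexivity")
y = 0.4 * m + 0.1 * x + rng.normal(scale=0.5, size=n)    # output (team performance)

def indirect_effect(x, m, y):
    """a*b indirect effect: (x -> m slope) times (m -> y slope controlling for x)."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap of the indirect effect, as in the significance testing above
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: ({ci_lo:.3f}, {ci_hi:.3f})")
```

A 95% bootstrap interval that excludes zero is the usual evidence for a significant indirect (mediated) effect.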
The Future of Drought in the Southeastern U.S.: Projections from downscaled CMIP5 models
NASA Astrophysics Data System (ADS)
Keellings, D.; Engstrom, J.
2017-12-01
The Southeastern U.S. has been repeatedly impacted by severe droughts that have affected the environment and economy of the region. In this study the ability of 32 downscaled CMIP5 models, bias corrected using localized constructed analogs (LOCA), to simulate historical observations of dry spells from 1950-2005 are assessed using Perkins skill scores and significance tests. The models generally simulate the distribution of dry days well but there are significant differences between the ability of the best and worst performing models, particularly when it comes to the upper tail of the distribution. The best and worst performing models are then projected through 2099, using RCP 4.5 and 8.5, and estimates of 20 year return periods are compared. Only the higher skill models provide a good estimate of extreme dry spell lengths with simulations of 20 year return values within ± 5 days of observed values across the region. Projected return values differ by model grouping, but all models exhibit significant increases.
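The Perkins skill score used above to rank models measures the overlap of two empirical PDFs as the sum, over bins, of the smaller bin probability. A minimal sketch on synthetic dry-spell-like data; the distributions and parameters are illustrative:

```python
import numpy as np

def perkins_skill_score(obs, model, bins=20):
    """Overlap of two empirical PDFs: sum over bins of min(p_obs, p_model).
    1.0 means identical distributions, 0.0 means disjoint support."""
    edges = np.linspace(min(obs.min(), model.min()),
                        max(obs.max(), model.max()), bins + 1)
    p_obs = np.histogram(obs, bins=edges)[0] / len(obs)
    p_mod = np.histogram(model, bins=edges)[0] / len(model)
    return float(np.minimum(p_obs, p_mod).sum())

rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=3.0, size=5000)    # "observed" dry-spell lengths
good = rng.gamma(shape=2.0, scale=3.0, size=5000)   # well-matched model
bad = rng.gamma(shape=2.0, scale=6.0, size=5000)    # biased model
s_good = perkins_skill_score(obs, good)
s_bad = perkins_skill_score(obs, bad)
print(s_good > s_bad)  # True
```

A model whose simulated distribution matches observations scores near 1; a biased model loses overlap, particularly in the tails, which is exactly where the study finds the best and worst models diverge.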
A model of clutter for complex, multivariate geospatial displays.
Lohrenz, Maura C; Trafton, J Gregory; Beck, R Melissa; Gendron, Marlin L
2009-02-01
A novel model of measuring clutter in complex geospatial displays was compared with human ratings of subjective clutter as a measure of convergent validity. The new model is called the color-clustering clutter (C3) model. Clutter is a known problem in displays of complex data and has been shown to affect target search performance. Previous clutter models are discussed and compared with the C3 model. Two experiments were performed. In Experiment 1, participants performed subjective clutter ratings on six classes of information visualizations. Empirical results were used to set two free parameters in the model. In Experiment 2, participants performed subjective clutter ratings on aeronautical charts. Both experiments compared and correlated empirical data to model predictions. The first experiment resulted in a .76 correlation between ratings and C3. The second experiment resulted in a .86 correlation, significantly better than results from a model developed by Rosenholtz et al. Outliers to our correlation suggest further improvements to C3. We suggest that (a) the C3 model is a good predictor of subjective impressions of clutter in geospatial displays, (b) geospatial clutter is a function of color density and saliency (primary C3 components), and (c) pattern analysis techniques could further improve C3. The C3 model could be used to improve the design of electronic geospatial displays by suggesting when a display will be too cluttered for its intended audience.
Monamele, Gwladys C.; Vernet, Marie-Astrid; Nsaibirni, Robert F. J.; Bigna, Jean Joel R.; Kenmoe, Sebastien; Njankouo, Mohamadou Ripa
2017-01-01
Influenza is associated with highly contagious respiratory infections. Previous research has found that influenza transmission is often associated with climate variables, especially in temperate regions. This study was performed to fill the knowledge gap regarding the relationship between influenza incidence and three meteorological parameters (temperature, rainfall and humidity) in a tropical setting. This was a retrospective study performed in Yaoundé, Cameroon from January 2009 to November 2015. Weekly proportions of confirmed influenza cases from five sentinel sites were considered as dependent variables, whereas weekly values of mean temperature, average relative humidity and accumulated rainfall were considered as independent variables. A univariate linear regression model was used to determine associations between influenza activity and weather covariates. A time-series method was used to predict future values of influenza activity. The data were divided into two parts; the first 71 months were used to calibrate the model, and the last 12 months to test predictions. Overall, there were 1173 confirmed infections with influenza virus. Linear regression analysis showed no statistically significant association between influenza activity and weather variables; only very weak relationships (-0.1 < r < 0.1) were observed. Three prediction models were obtained for the different viral types (overall positive, influenza A and influenza B). Model 1 (overall influenza) and model 2 (influenza A) fitted well during the estimation period; however, they failed to produce good forecasts. Accumulated rainfall was the only external covariate that enabled a good fit of both models. Based on the stationary R2, 29.5% and 41.1% of the variation in the series can be explained by models 1 and 2, respectively. This study reinforces the finding that influenza in Cameroon is characterized by year-round activity.
The meteorological variables selected in this study did not enable good forecast of future influenza activity and certainly acted as proxies to other factors not considered, such as, UV radiation, absolute humidity, air quality and wind. PMID:29088290
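The univariate regression step above can be sketched on synthetic weekly data. The weak built-in slope, series length and variable names are illustrative, not the study's data (which showed no significant association):

```python
import numpy as np

rng = np.random.default_rng(2)
weeks = 71                                        # calibration period length
rainfall = rng.gamma(2.0, 20.0, size=weeks)       # accumulated rainfall (mm), synthetic
flu_pct = 0.1 * rainfall + rng.normal(scale=5.0, size=weeks)  # % positive, synthetic

slope, intercept = np.polyfit(rainfall, flu_pct, 1)   # univariate linear regression
r = np.corrcoef(rainfall, flu_pct)[0, 1]              # Pearson correlation coefficient
print(f"slope={slope:.3f}, r={r:.2f}")
```

An |r| below 0.1, as reported in the study, indicates the covariate explains essentially none of the week-to-week variation in influenza positivity.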
NASA Astrophysics Data System (ADS)
Henderson, J. M.; Eluszkiewicz, J.; Mountain, M. E.; Nehrkorn, T.; Chang, R. Y.-W.; Karion, A.; Miller, J. B.; Sweeney, C.; Steiner, N.; Wofsy, S. C.; Miller, C. E.
2014-10-01
This paper describes the atmospheric modeling that underlies the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) science analysis, including its meteorological and atmospheric transport components (Polar variant of the Weather Research and Forecasting (WRF) and Stochastic Time Inverted Lagrangian Transport (STILT) models), and provides WRF validation for May-October 2012 and March-November 2013 - the first two years of the aircraft field campaign. A triply nested computational domain for WRF was chosen so that the innermost domain with 3.3 km grid spacing encompasses the entire mainland of Alaska and enables the substantial orography of the state to be represented by the underlying high-resolution topographic input field. Summary statistics of the WRF model performance on the 3.3 km grid indicate good overall agreement with quality-controlled surface and radiosonde observations. Two-meter temperatures are generally too cold by approximately 1.4 K in 2012 and 1.1 K in 2013, while 2 m dewpoint temperatures are too low (dry) by 0.2 K in 2012 and too high (moist) by 0.6 K in 2013. Wind speeds are biased too low by 0.2 m s-1 in 2012 and 0.3 m s-1 in 2013. Model representation of upper level variables is very good. These measures are comparable to model performance metrics of similar model configurations found in the literature. The high quality of these fine-resolution WRF meteorological fields inspires confidence in their use to drive STILT for the purpose of computing surface influences ("footprints") at commensurably increased resolution. Indeed, footprints generated on a 0.1° grid show increased spatial detail compared with those on the more common 0.5° grid, making them better suited for convolution with flux models for carbon dioxide and methane across the heterogeneous Alaskan landscape.
Ozone deposition rates computed using STILT footprints indicate good agreement with observations and exhibit realistic seasonal variability, further indicating that WRF-STILT footprints are of high quality and will support accurate estimates of CO2 and CH4 surface-atmosphere fluxes using CARVE observations.
NASA Astrophysics Data System (ADS)
Henderson, J. M.; Eluszkiewicz, J.; Mountain, M. E.; Nehrkorn, T.; Chang, R. Y.-W.; Karion, A.; Miller, J. B.; Sweeney, C.; Steiner, N.; Wofsy, S. C.; Miller, C. E.
2015-04-01
This paper describes the atmospheric modeling that underlies the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) science analysis, including its meteorological and atmospheric transport components (polar variant of the Weather Research and Forecasting (WRF) and Stochastic Time Inverted Lagrangian Transport (STILT) models), and provides WRF validation for May-October 2012 and March-November 2013 - the first 2 years of the aircraft field campaign. A triply nested computational domain for WRF was chosen so that the innermost domain with 3.3 km grid spacing encompasses the entire mainland of Alaska and enables the substantial orography of the state to be represented by the underlying high-resolution topographic input field. Summary statistics of the WRF model performance on the 3.3 km grid indicate good overall agreement with quality-controlled surface and radiosonde observations. Two-meter temperatures are generally too cold by approximately 1.4 K in 2012 and 1.1 K in 2013, while 2 m dewpoint temperatures are too low (dry) by 0.2 K in 2012 and too high (moist) by 0.6 K in 2013. Wind speeds are biased too low by 0.2 m s-1 in 2012 and 0.3 m s-1 in 2013. Model representation of upper level variables is very good. These measures are comparable to model performance metrics of similar model configurations found in the literature. The high quality of these fine-resolution WRF meteorological fields inspires confidence in their use to drive STILT for the purpose of computing surface influences ("footprints") at commensurably increased resolution. Indeed, footprints generated on a 0.1° grid show increased spatial detail compared with those on the more common 0.5° grid, better allowing for convolution with flux models for carbon dioxide and methane across the heterogeneous Alaskan landscape. 
Ozone deposition rates computed using STILT footprints indicate good agreement with observations and exhibit realistic seasonal variability, further indicating that WRF-STILT footprints are of high quality and will support accurate estimates of CO2 and CH4 surface-atmosphere fluxes using CARVE observations.
Detection of Organophosphorus Pesticides with Colorimetry and Computer Image Analysis.
Li, Yanjie; Hou, Changjun; Lei, Jincan; Deng, Bo; Huang, Jing; Yang, Mei
2016-01-01
Organophosphorus pesticides (OPs) represent a very important class of pesticides that are widely used in agriculture because of their relatively high performance and moderate environmental persistence; hence, the sensitive and specific detection of OPs is highly significant. Based on the inhibitory effect of inhibitors, including OPs and carbamates, on acetylcholinesterase (AChE), a colorimetric analysis was used for detection of OPs with computer image analysis of color density in CMYK (cyan, magenta, yellow and black) color space and non-linear modeling. The results showed that yellow intensity weakened gradually as the dichlorvos concentration increased. The quantitative analysis of dichlorvos was achieved by Artificial Neural Network (ANN) modeling, and the established model showed good predictive ability on both training and prediction sets. Real cabbage samples containing dichlorvos were analyzed by both colorimetry and gas chromatography (GC), with no significant difference between the two methods (P > 0.05). Experiments on accuracy, precision and repeatability revealed good performance for detection of OPs. Because AChE can also be inhibited by carbamates, this method has potential applications for detecting OPs and carbamates in real samples owing to its high selectivity and sensitivity.
NASA Astrophysics Data System (ADS)
Remón, Laura; Siedlecki, Damian; Cabeza-Gil, Iulen; Calvo, Begoña
2018-03-01
Intraocular lenses (IOLs) are used in cataract treatment for surgical replacement of the opacified crystalline lens. Before being implanted, they have to pass strict quality control to guarantee good biomechanical stability inside the capsular bag (avoiding rotation) and good optical quality. The goal of this study was to investigate the influence of the material and haptic design on the behavior of IOLs under dynamic compression conditions. For this purpose, the stress-strain characteristics of the hydrophobic and hydrophilic materials were estimated experimentally. Next, these data were used as input for a finite-element model (FEM) to analyze the stability of different IOL haptic designs, following the procedure described by the ISO standards. Finally, simulations of the effect of IOL tilt and decentration on optical performance were performed in an eye model using ray-tracing software. The results suggest that the haptic design matters more than the material for the postoperative behavior of an IOL. FEM appears to be a powerful tool for numerical studies of the biomechanical properties of IOLs and can assist manufacturers during the design phase.
Ballu, Srilata; Itteboina, Ramesh; Sivan, Sree Kanth; Manga, Vijjulatha
2018-02-01
Filamentous temperature-sensitive protein Z (FtsZ) is a protein encoded by the FtsZ gene that assembles into a Z-ring at the future site of the septum of bacterial cell division. Structurally, FtsZ is a homolog of eukaryotic tubulin but has low sequence similarity; this makes it possible to obtain FtsZ inhibitors without affecting eukaryotic cell division. Computational studies were performed on a series of substituted 3-arylalkoxybenzamide derivatives reported as inhibitors of FtsZ activity in Staphylococcus aureus. The quantitative structure-activity relationship (QSAR) models generated showed good statistical reliability, evident from the r²ncv and r²loo values. The predictive ability of these models was assessed, and acceptable predictive correlation (r²pred) values were obtained. Finally, we performed molecular dynamics simulations to examine the stability of protein-ligand interactions, which allowed us to compare the free binding energies of the cocrystal ligand and the newly designed molecule B1. The good concordance between the docking results and the comparative molecular field analysis (CoMFA)/comparative molecular similarity indices analysis (CoMSIA) contour maps afforded useful clues for the rational modification of molecules to design more potent FtsZ inhibitors.
Landeros-Martinez, Linda-Lucila; Glossman-Mitnik, Daniel; Orrantia-Borunda, Erasmo; Flores-Holguín, Norma
2017-10-19
The use of nanodiamonds as anticancer drug delivery vehicles has received much attention in recent years. In this theoretical paper, we propose using different esterification methods for nanodiamonds. The monomers proposed are 2-hydroxypropanal, polyethylene glycol, and polyglycolic acid. Specifically, the hydrogen bonds, infrared (IR) spectra, molecular polar surface area, and reactivity parameters are analyzed. The monomers proposed for use in esterification follow Lipinski's rule of five, meaning they have good permeability and high bioactivity. The results show that the complex formed between tamoxifen and nanodiamond esterified with polyglycolic acid presents the greatest number of hydrogen bonds and a favorable molecular polar surface area. Calculations on the esterified nanodiamond and reactivity parameters were performed using density functional theory with the M06 functional and the 6-31G(d) basis set; for the esterified nanodiamond-tamoxifen complexes, the semi-empirical method PM6 was used. The solvent effect has been taken into account by using implicit modelling and the conductor-like polarizable continuum model.
NASA Astrophysics Data System (ADS)
Ahn, Hyunjun; Jung, Younghun; Om, Ju-Seong; Heo, Jun-Haeng
2014-05-01
Selecting an appropriate probability distribution is very important in statistical hydrology. A goodness-of-fit test is a statistical method for selecting an appropriate probability model for given data. The probability plot correlation coefficient (PPCC) test, one of the goodness-of-fit tests, was originally developed for the normal distribution. Since then, this test has been widely applied to other probability models. The PPCC test is regarded as one of the best goodness-of-fit tests because it shows higher rejection power than most alternatives. In this study, we focus on PPCC tests for the GEV distribution, which is widely used worldwide. For the GEV model, several plotting position formulas have been suggested; however, the PPCC statistics are derived only for the plotting position formulas (Goel and De; In-na and Nguyen; Kim et al.) in which the skewness coefficient (or shape parameter) is included. Regression equations are then derived as a function of the shape parameter and sample size for a given significance level. In addition, the rejection powers of these formulas are compared using Monte Carlo simulation.
Keywords: goodness-of-fit test, probability plot correlation coefficient test, plotting position, Monte Carlo simulation
ACKNOWLEDGEMENTS: This research was supported by a grant 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-12-NH-57] from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
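The PPCC statistic correlates the ordered sample with quantiles implied by a plotting-position formula. A minimal sketch for the normal case (the test's original setting) using the Blom plotting position; the GEV-specific formulas cited above are not reproduced here, and the bisection quantile routine is just a SciPy-free convenience:

```python
import numpy as np
from math import erf, sqrt

def norm_ppf(p):
    """Standard normal quantile via bisection on the CDF (no SciPy needed)."""
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + erf(mid / sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ppcc_normal(sample, a=0.375):
    """Correlate the ordered sample with normal quantiles at the Blom
    plotting positions p_i = (i - a) / (n + 1 - 2a)."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    p = (np.arange(1, n + 1) - a) / (n + 1 - 2 * a)
    q = np.array([norm_ppf(pi) for pi in p])
    return float(np.corrcoef(x, q)[0, 1])

rng = np.random.default_rng(0)
r_normal = ppcc_normal(rng.normal(size=500))       # well-fitting data: r near 1
r_skewed = ppcc_normal(rng.exponential(size=500))  # skewed data scores lower
print(f"normal: {r_normal:.3f}, skewed: {r_skewed:.3f}")
```

The null hypothesis (the data come from the candidate distribution) is rejected when the statistic falls below a critical value that depends on sample size and, for the GEV, on the shape parameter, which is what the regression equations above provide.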
Computational Flow Modeling of Hydrodynamics in Multiphase Trickle-Bed Reactors
NASA Astrophysics Data System (ADS)
Lopes, Rodrigo J. G.; Quinta-Ferreira, Rosa M.
2008-05-01
This study aims to incorporate most recent multiphase models in order to investigate the hydrodynamic behavior of a TBR in terms of pressure drop and liquid holdup. Taking into account transport phenomena such as mass and heat transfer, an Eulerian k-fluid model was developed resulting from the volume averaging of the continuity and momentum equations and solved for a 3D representation of the catalytic bed. Computational fluid dynamics (CFD) model predicts hydrodynamic parameters quite well if good closures for fluid/fluid and fluid/particle interactions are incorporated in the multiphase model. Moreover, catalytic performance is investigated with the catalytic wet oxidation of a phenolic pollutant.
Ahadian, Samad; Kawazoe, Yoshiyuki
2009-06-04
Modeling of water flow in carbon nanotubes is still a challenge for the classic models of fluid dynamics. In this investigation, an adaptive-network-based fuzzy inference system (ANFIS) is presented to solve this problem. The proposed ANFIS approach can construct an input-output mapping based on both human knowledge in the form of fuzzy if-then rules and stipulated input-output data pairs. Good performance of the designed ANFIS ensures its capability as a promising tool for modeling and prediction of fluid flow at nanoscale where the continuum models of fluid dynamics tend to break down.
SATCOM antenna siting study on P-3C aircraft, volume 1
NASA Technical Reports Server (NTRS)
Bensman, D. A.; Marhefka, R. J.
1991-01-01
The NEC-BSC (Basic Scattering Code) was used to study the performance of a SATCOM antenna on a P-3C aircraft. After plate cylinder fields are added to version 3.1 of the NEC-BSC, it is shown that the NEC-BSC can be used to accurately predict the performance of a SATCOM antenna system on a P-3C aircraft. The study illustrates that the NEC-BSC gives good results when compared with scale model measurements provided by Boeing and Lockheed.
Elastic electron scattering from the DNA bases cytosine and thymine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colyer, C. J.; Bellm, S. M.; Lohmann, B.
2011-10-15
Cross-section data for electron scattering from biologically relevant molecules are important for the modeling of energy deposition in living tissue. Relative elastic differential cross sections have been measured for cytosine and thymine using the crossed-beam method. These measurements have been performed for six discrete electron energies between 60 and 500 eV and for detection angles between 15 deg. and 130 deg. Calculations have been performed via the screen-corrected additivity rule method and are in good agreement with the present experiment.
Filosso, Pier Luigi; Guerrera, Francesco; Evangelista, Andrea; Welter, Stefan; Thomas, Pascal; Casado, Paula Moreno; Rendina, Erino Angelo; Venuta, Federico; Ampollini, Luca; Brunelli, Alessandro; Stella, Franco; Nosotti, Mario; Raveglia, Federico; Larocca, Valentina; Rena, Ottavio; Margaritora, Stefano; Ardissone, Francesco; Travis, William D; Sarkaria, Inderpal; Sagan, Dariusz
2015-09-01
Typical carcinoids (TCs) are uncommon, slow-growing neoplasms, usually with high 5-year survival rates. As these are rare tumours, their management is still based on small clinical observations and no international guidelines exist. Based on the European Society of Thoracic Surgeon Neuroendocrine Tumours Working Group (NET-WG) Database, we evaluated factors that may influence TCs mortality. Using the NET-WG database, an analysis on TC survival was performed. Overall survival (OS) was calculated starting from the date of intervention. Predictors of OS were investigated using the Cox model with shared frailty (accounting for the within-centre correlation). Candidate predictors were: gender, age, smoking habit, tumour location, previous malignancy, Eastern Cooperative Oncology Group (ECOG) performance status (PS), pT, pN, TNM stage and tumour vascular invasion. The final model included predictors with P ≤ 0.15 after a backward selection. Missing data in the evaluated predictors were multiple-imputed and combined estimates were obtained from five imputed data sets. For 58 of 1167 TC patients vital status was unavailable and analyses were therefore performed on 1109 patients from 17 institutions worldwide. During a median follow-up of 50 months, 87 patients died, with a 5-year OS rate of 93.7% (95% confidence interval: 91.7-95.3). Backward selection resulted in a prediction model for mortality containing age, gender, previous malignancies, peripheral tumour, TNM stage and ECOG PS. The final model showed a good discrimination ability with a C-statistic equal to 0.836 (bootstrap optimism-corrected 0.806). We presented and validated a promising prognostic model for TC survival, showing good calibration and discrimination ability. Further analyses are needed and could be focused on an external validation of this model. © The Author 2015. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
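The C-statistic reported above measures concordance between predicted risk and observed survival. A minimal, censoring-free sketch; real survival data would need Harrell's estimator with censoring handled, and the toy times and risk scores are illustrative:

```python
import numpy as np

def c_statistic(time, risk):
    """Concordance index without censoring: the fraction of comparable pairs
    in which the higher-risk subject has the shorter survival time."""
    concordant = ties = usable = 0
    n = len(time)
    for i in range(n):
        for j in range(i + 1, n):
            if time[i] == time[j]:
                continue                      # tied times: not a usable pair
            usable += 1
            early, late = (i, j) if time[i] < time[j] else (j, i)
            if risk[early] > risk[late]:
                concordant += 1
            elif risk[early] == risk[late]:
                ties += 1                     # tied risks count half
    return (concordant + 0.5 * ties) / usable

time = np.array([2.0, 5.0, 9.0, 12.0])        # survival times (months)
risk = np.array([0.9, 0.7, 0.4, 0.1])         # model risk scores
print(c_statistic(time, risk))  # 1.0
```

A value of 0.5 is no better than chance and 1.0 is perfect discrimination; the 0.836 above therefore indicates good separation of high-risk and low-risk patients.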
NASA Astrophysics Data System (ADS)
Crowell, B.; Melgar, D.
2017-12-01
The 2016 Mw 7.8 Kaikoura earthquake is one of the most complex earthquakes in recent history, rupturing across at least 10 disparate faults with varying faulting styles and exhibiting intricate surface deformation patterns. The complexity of this event has motivated multidisciplinary geophysical studies to get at the underlying source physics and better inform earthquake hazard models in the future. However, events like Kaikoura raise the question of how well (or how poorly) such earthquakes can be modeled automatically in real time while still satisfying the general public and emergency managers. To investigate this question, we perform a retrospective real-time GPS analysis of the Kaikoura earthquake with the G-FAST early warning module. We first build simple point-source models of the earthquake using peak ground displacement scaling and a coseismic-offset-based centroid moment tensor (CMT) inversion. We predict ground motions from these point sources as well as from simple finite faults determined from source-scaling studies, and validate against recordings of peak ground acceleration and velocity. Second, we perform a slip inversion based on the CMT fault orientations and forward model near-field maximum expected tsunami wave heights for comparison against available tide gauge records. We find remarkably good agreement between recorded and predicted ground motions when using a simple fault plane, with the majority of the disagreement attributable to local site effects rather than earthquake source complexity. Similarly, the near-field maximum tsunami amplitude predictions match tide gauge records well. We conclude that even though our models for the Kaikoura earthquake are devoid of rich source complexities, the CMT-driven finite fault is a good enough "average" source and provides useful constraints for rapid forecasting of ground motion and near-field tsunami amplitudes.
Hidden Markov models and neural networks for fault detection in dynamic systems
NASA Technical Reports Server (NTRS)
Smyth, Padhraic
1994-01-01
Neural networks combined with hidden Markov models (HMMs) can provide excellent detection and false-alarm-rate performance in fault detection applications, as shown in this viewgraph presentation. Modified models allow for novelty detection. The key contributions of neural network models are: (1) excellent nonparametric discrimination capability; (2) good estimation of posterior state probabilities, even in high dimensions, which allows them to be embedded within an overall probabilistic model (HMM); and (3) simple implementation compared to other nonparametric models. The neural network/HMM monitoring model is currently being integrated with the new Deep Space Network (DSN) antenna controller software and will be monitoring a new DSN 34-m antenna (DSS-24) on-line by July 1994.
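The hybrid scheme can be sketched as a discriminative classifier feeding an HMM filter: the classifier's posteriors P(state | features) are divided by the state priors to give scaled likelihoods, which the HMM forward recursion smooths over time. The transition matrix, priors, and classifier outputs below are invented numbers for illustration.

```python
# Sketch of a classifier-plus-HMM monitor. Posteriors p(s|x) are rescaled by
# the priors p(s) into scaled likelihoods, then filtered by a forward pass.
# All probabilities below are made-up illustrative values.

def forward_step(belief, posteriors, priors, transition):
    """One HMM filtering update using scaled likelihoods p(s|x)/p(s)."""
    n = len(belief)
    # Predict: propagate the previous belief through the transition model.
    predicted = [sum(belief[i] * transition[i][j] for i in range(n))
                 for j in range(n)]
    # Update: weight by the scaled likelihood, then normalise.
    unnorm = [predicted[j] * posteriors[j] / priors[j] for j in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two states: 0 = normal, 1 = fault; faults are rare but persistent.
transition = [[0.99, 0.01],
              [0.05, 0.95]]
priors = [0.95, 0.05]

belief = priors[:]
for posteriors in [[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]]:  # classifier outputs
    belief = forward_step(belief, posteriors, priors, transition)
# belief[1] now dominates: evidence of a fault has accumulated over time.
```

The temporal smoothing is what suppresses false alarms: a single ambiguous classifier output barely moves the belief, while sustained fault-like outputs drive it toward the fault state.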
Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi
2014-01-01
The bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representations. Based on the bat echolocation mechanism and the cloud model's strengths in representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
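For orientation, here is a sketch of the canonical bat algorithm skeleton that CBA extends; the cloud-model echolocation, Lévy flights, and communication mechanism from the paper are not reproduced, and all parameter values are illustrative.

```python
import random

# Skeleton of the canonical bat algorithm that CBA builds on; the cloud-model
# echolocation and the paper's Lévy-flight/communication refinements are NOT
# reproduced. Minimises the 2-D sphere function. All parameters illustrative.
random.seed(0)

def sphere(x):
    return sum(v * v for v in x)

def bat_algorithm(obj, dim=2, n_bats=20, iters=200,
                  fmin=0.0, fmax=2.0, lo=-5.0, hi=5.0):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    best = min(pos, key=obj)[:]
    for _ in range(iters):
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * random.random()   # pulse frequency
            vel[i] = [v + (b - x) * f                    # pull towards best
                      for v, x, b in zip(vel[i], pos[i], best)]
            cand = [min(max(x + v, lo), hi) for x, v in zip(pos[i], vel[i])]
            if random.random() > 0.5:                    # local walk near best
                cand = [b + 0.01 * random.gauss(0.0, 1.0) for b in best]
            if obj(cand) <= obj(pos[i]):                 # greedy acceptance
                pos[i] = cand
                if obj(cand) < obj(best):
                    best = cand[:]
    return best

best = bat_algorithm(sphere)
```

CBA replaces the fixed local random walk with steps generated from a cloud-model membership function, so the step distribution itself encodes the fuzzy concept "bats approach their prey."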
Ng, Juki; Rogosheske, John; Barker, Juliet; Weisdorf, Daniel; Jacobson, Pamala A
2006-06-01
Renal transplant patients with suboptimal mycophenolic acid (MPA) areas under the curve (AUCs) are at greater risk of acute rejection. In hematopoietic cell transplantation, a low MPA AUC is also associated with a higher incidence of acute graft-versus-host disease. Therefore, a limited sampling model was developed and validated to simultaneously estimate total and unbound MPA AUC0-12 in hematopoietic cell transplantation patients. Intensive pharmacokinetic sampling was performed at steady state between days 3 and 7 posttransplant in 73 adult subjects receiving prophylactic mycophenolate mofetil 1 g every 12 hours orally or intravenously plus cyclosporine. Total and unbound MPA plasma concentrations were measured, and total and unbound AUC0-12 were determined using noncompartmental analysis. Regression analysis was then performed to build IV and PO total and unbound AUC0-12 models from the first 34 subjects. The predictive performance of these models was tested in the next 39 subjects. Trough concentrations poorly estimated observed total and unbound AUC0-12 (r<0.48). A model with 3 concentrations (2-, 4-, and 6-hour post start of infusion) best estimated observed total and unbound AUC0-12 after IV dosing (r>0.99). Oral total and unbound AUC0-12 were more difficult to estimate and required at least 4 concentrations (0-, 1-, 2-, and 6-hour post dose) in the model (r>0.85). The predictive performance of the final models was good. Eighty-three percent of IV and 70% of PO AUC0-12 predictions fell within +/-20% of the observed values without significant bias. Trough MPA concentrations do not accurately describe MPA AUC0-12. Three intravenous (2-, 4-, 6-hour post start of infusion) or 4 oral (0-, 1-, 2-, and 6-hour post dose) MPA plasma concentrations measured over a 12-hour dosing interval will estimate the total and unbound AUC0-12 nearly as well as intensive pharmacokinetic sampling, with good precision and low bias.
This approach simplifies AUC0-12 targeting of MPA post hematopoietic cell transplantation.
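The two estimators compared above can be sketched side by side: noncompartmental AUC0-12 by the linear trapezoidal rule over the full profile, versus a limited-sampling regression of the hypothetical IV form AUC0-12 ≈ b0 + b1·C2 + b2·C4 + b3·C6. The regression coefficients below are placeholders, not the published model's values.

```python
# Noncompartmental AUC0-12 by the linear trapezoidal rule, alongside a
# limited-sampling estimator of the hypothetical IV form
#   AUC0-12 ~ b0 + b1*C2 + b2*C4 + b3*C6
# with placeholder coefficients (not the published model's values).

def auc_trapezoid(times, conc):
    """AUC over the sampled interval by linear trapezoids."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))

def auc_limited_sampling(c2, c4, c6, coef=(1.0, 2.5, 3.0, 4.5)):
    """Estimate AUC0-12 from the 2-, 4- and 6-hour concentrations only."""
    b0, b1, b2, b3 = coef
    return b0 + b1 * c2 + b2 * c4 + b3 * c6
```

The clinical point is that the second function needs only three blood draws, while the first requires the full 12-hour sampling schedule.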
Sexual challenges with aging: integrating the GES approach in an elderly couple.
McCarthy, Barry; Pierpaoli, Christina
2015-01-01
An advantage of sexuality after 60 years of age is the increased need for couple involvement to promote desire, pleasure, eroticism, and satisfaction inherent to the healthy aging process. This case study clinically explores the complex psychobiosocial interactions for understanding, assessing, and treating sexual problems for couples age 60 years and older, emphasizing the Good Enough Sex approach of variable, flexible, and shared sexual pleasure. Aging couples are discouraged from appraising their sexual experiences within the parameters of the pass/fail binary of the traditional individual performance model and are instead encouraged to embrace the evolving elasticity of their sexual experiences. The Good Enough Sex model espouses an approachable and satisfying alternative for the promotion of sexual function and satisfaction throughout the life span, with particular interest in late adulthood sexual health.
Predictive Surface Roughness Model for End Milling of Machinable Glass Ceramic
NASA Astrophysics Data System (ADS)
Mohan Reddy, M.; Gorin, Alexander; Abou-El-Hossein, K. A.
2011-02-01
Machinable glass ceramic is an attractive advanced ceramic material for producing high-accuracy miniaturized components in industries such as aerospace, electronics, biomedical, automotive and environmental communications, owing to its wear resistance, high hardness, high compressive strength, good corrosion resistance and excellent high-temperature properties. Much research in recent years has investigated the performance of different machining operations on advanced ceramics. Micro end-milling is one machining method that meets the demand for micro parts. Selecting proper machining parameters is important to obtain a good surface finish when machining machinable glass ceramic. This paper therefore describes the development of a predictive model for the surface roughness of machinable glass ceramic in terms of speed and feed rate in micro end-milling.
NASA Astrophysics Data System (ADS)
Tripathi, O. P.; Godin-Beekmann, S.; Lefevre, F.; Marchand, M.; Pazmino, A.; Hauchecorne, A.
2005-12-01
Model simulations of ozone loss rates during recent Arctic and Antarctic winters are compared with ozone loss rates observed with the Match technique. The Arctic winters 1994/1995, 1999/2000 and 2002/2003 and the Antarctic winter 2003 were considered. We use the high-resolution chemical transport model MIMOSA-CHIM and the REPROBUS box model to calculate ozone loss rates. Trajectory model calculations show that the ozone loss rates depend on the initialization fields: when chemical fields are initialized with UCAM fields (University of Cambridge SLIMCAT model simulations), the loss rates are underestimated by a factor of two, whereas with UL (University of Leeds) fields the modeled loss rates agree very well with Match loss rates at lower levels. The study shows very good agreement between the MIMOSA-CHIM simulation and Match observations in the 1999/2000 winter at both the 450 and 500 K levels, except for a slight underestimation in March at 500 K; in January the agreement is very good. The same holds for 1994/1995 when the simulated ozone loss rates are considered in view of the ECMWF wind deficiency, assuming that Match observations were not made on isolated trajectories. Sensitivity tests for the Arctic winter 1999/2000, varying the JCl2O2 value, particle number density and heating rates, show that our understanding of particle number density and of the heating rate calculation needs to be improved. The Burkholder JCl2O2 value improved the agreement of MIMOSA-CHIM model results with observations (Tripathi et al., 2005); in the same study, the comparison was also shown to improve when heating rates and number density were changed through NAT particle sedimentation.
Topex Microwave Radiometer thermal control - Post-system-test modifications and on-orbit performance
NASA Technical Reports Server (NTRS)
Lin, Edward I.
1993-01-01
The Topex Microwave Radiometer has had an excellent thermal performance since launch. The instrument, however, went through a hardware modification right before launch to correct for a thermal design inadequacy that was uncovered during the spacecraft thermal vacuum test. This paper reports on how the initially obscure problem was tracked down, and how the thermal models were revised, validated, and utilized to investigate the solution options and guide the hardware modification decisions. Details related to test data interpretation, analytical uncertainties, and model-prediction vs. test-data correlation, are documented. Instrument/spacecraft interface issues, where the problem originated and where in general pitfalls abound, are dealt with specifically. Finally, on-orbit thermal performance data are presented, which exhibit good agreement with flight predictions, and lessons learned are discussed.
NASA Astrophysics Data System (ADS)
Elarusi, Abdulmunaem; Attar, Alaa; Lee, HoSung
2018-02-01
The optimum design of a thermoelectric system for application in car seat climate control has been modeled and its performance evaluated experimentally. The optimum design of the thermoelectric device combining two heat exchangers was obtained by using a newly developed optimization method based on the dimensional technique. Based on the analytical optimum design results, commercial thermoelectric cooler and heat sinks were selected to design and construct the climate control heat pump. This work focuses on testing the system performance in both cooling and heating modes to ensure accurate analytical modeling. Although the analytical performance was calculated using the simple ideal thermoelectric equations with effective thermoelectric material properties, it showed very good agreement with experiment for most operating conditions.
Design and research on discharge performance for aluminum-air battery
NASA Astrophysics Data System (ADS)
Liu, Zu; Zhao, Junhong; Cai, Yanping; Xu, Bin
2017-01-01
As a clean energy source, the aluminum-air battery is studied because it offers high specific energy, silent operation and a low infrared signature. Based on the operating principle of the aluminum-air battery, a novel aluminum-air battery system was designed, composed of an aluminum-air cell and an electrolyte circulation system. A system model was established to analyze the polarization curve, the constant-current discharge performance and the effect of electrolyte concentration on single-cell performance. The experimental results show that the new aluminum-air battery has good discharge performance, which lays a foundation for its application.
McMullen, Heather; Griffiths, Chris; Leber, Werner; Greenhalgh, Trisha
2015-05-31
Complex intervention trials may require health care organisations to implement new service models. In a recent cluster randomised controlled trial, some participating organisations achieved high recruitment, whereas others found it difficult to assimilate the intervention and were low recruiters. We sought to explain this variation and develop a model to inform organisational participation in future complex intervention trials. The trial included 40 general practices in a London borough with high HIV prevalence. The intervention was offering a rapid HIV test as part of the New Patient Health Check. The primary outcome was mean CD4 cell count at diagnosis. The process evaluation consisted of several hundred hours of ethnographic observation, 21 semi-structured interviews and analysis of routine documents (e.g., patient leaflets, clinical protocols) and trial documents (e.g., inclusion criteria, recruitment statistics). Qualitative data were analysed thematically using--and, where necessary, extending--Greenhalgh et al.'s model of diffusion of innovations. Narrative synthesis was used to prepare case studies of four practices representing maximum variety in clinicians' interest in HIV (assessed by level of serological testing prior to the trial) and performance in the trial (high vs. low recruiters). High-recruiting practices were, in general though not invariably, also innovative practices. They were characterised by strong leadership, good managerial relations, readiness for change, a culture of staff training and available staff time ('slack resources'). Their front-line staff believed that patients might benefit from the rapid HIV test ('relative advantage'), were emotionally comfortable administering it ('compatibility'), skilled in performing it ('task issues') and made creative adaptations to embed the test in local working practices ('reinvention'). 
Early experience of a positive HIV test ('observability') appeared to reinforce staff commitment to recruiting more participants. Low-performing practices typically had weaker managerial relations, significant resource constraints, staff discomfort with the test and no positive results early in the trial. An adaptation of the diffusion of innovations model was an effective analytical tool for retrospectively explaining high- and low-performing practices in a complex intervention research trial. Whether the model will work prospectively to predict performance (and hence shape the design of future trials) is unknown. ISRCTN Registry number: ISRCTN63473710. Date assigned: 22 April 2010.
STGSTK- PREDICTING MULTISTAGE AXIAL-FLOW COMPRESSOR PERFORMANCE BY A MEANLINE STAGE-STACKING METHOD
NASA Technical Reports Server (NTRS)
Steinke, R. J.
1994-01-01
The STGSTK computer program was developed for predicting the off-design performance of multistage axial-flow compressors. The axial-flow compressor is widely used in aircraft engines. In addition to its inherent advantage of high mass flow per frontal area, it can exhibit very good aerodynamic performance. However, good aerodynamic performance over an acceptable range of operating conditions is not easily attained. STGSTK provides an analytical tool for the development of new compressor designs. The simplicity of a one-dimensional compressible flow model enables the stage-stacking method used in STGSTK to have excellent convergence properties and short computer run times. Also, the simplicity of the model makes STGSTK a manageable code that eases the incorporation, or modification, of empirical correlations directly linked to test data. Thus, the user can adapt the code to meet varying design needs. STGSTK uses a meanline stage-stacking method to predict off-design performance. Stage and cumulative compressor performance is calculated from representative meanline velocity diagrams located at rotor inlet and outlet meanline radii. STGSTK includes options for the following: 1) non-dimensional stage characteristics may be input directly or calculated from stage design performance input, 2) stage characteristics may be modified for off-design speed and blade reset, and 3) rotor design deviation angle may be modified for off-design flow, speed, and blade setting angle. Many of the code's options use correlations that are normally obtained from experimental data. The STGSTK user may modify these correlations as needed. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 85K of 8 bit bytes. STGSTK was developed in 1982.
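The stage-stacking idea described above can be sketched in a few lines: a per-stage pressure ratio is read off a stage characteristic at the current flow coefficient, and the stage ratios are multiplied to give overall compressor performance. The quadratic characteristic below is invented for illustration, and the sketch omits the temperature-rise stacking and interstage continuity update that STGSTK performs.

```python
# Toy meanline stage stacking: overall compressor pressure ratio is the
# product of per-stage ratios read off stage characteristics. The quadratic
# characteristic is invented; STGSTK builds its characteristics from design
# input or test-data correlations, and also stacks temperature rise and
# updates the flow coefficient stage by stage (omitted here).

def stage_pressure_ratio(phi, phi_design, pr_design):
    """Stage characteristic: pressure ratio falls off quadratically off-design."""
    return pr_design * (1.0 - 0.5 * (phi / phi_design - 1.0) ** 2)

def stack_stages(phi, stages):
    """Multiply the stage pressure ratios to get the overall ratio."""
    overall = 1.0
    for phi_design, pr_design in stages:
        overall *= stage_pressure_ratio(phi, phi_design, pr_design)
    return overall

# (design flow coefficient, design pressure ratio) for three stages
stages = [(0.5, 1.30), (0.5, 1.28), (0.5, 1.25)]
```

At the design flow coefficient each stage delivers its design ratio; off-design, every stage falls off its characteristic and the overall ratio drops, which is the behavior the stage-stacking method is built to predict.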
NASA Astrophysics Data System (ADS)
Wong, David W. C.; Choy, K. L.; Chow, Harry K. H.; Lin, Canhong
2014-06-01
For China, the most rapidly growing economic entity in the world, a new logistics operation called the indirect cross-border supply chain model has recently emerged. The primary idea of this model is to reduce logistics costs by storing goods at bonded warehouses with low storage cost in certain Chinese regions, such as the Pearl River Delta (PRD). This research proposes a performance measurement system (PMS) framework to assess the direct and indirect cross-border supply chain models. The PMS covers four categories, including cost, time, quality and flexibility, in assessing the performance of the direct and indirect models. Furthermore, a survey was conducted to investigate the logistics performance of third-party logistics providers (3PLs) in the PRD region, including Guangzhou, Shenzhen and Hong Kong. The significance of the proposed PMS framework is that it allows 3PLs to accurately pinpoint the weaknesses and strengths of their current operations policies across the four major performance measurement categories, helping them further enhance competitiveness and operational efficiency through better resource allocation in warehousing and transportation.
Meng, X H; Zeng, S X; Shi, Jonathan J; Qi, G Y; Zhang, Z B
2014-12-01
Based on a content analysis of 533 Chinese listed companies, this study examines how corporate environmental performance affects not only the level of detail of a company's environmental disclosures, but also what information is disclosed. The results show that (1) both poor and good performers have more disclosure than the median (i.e., "mixed") performers, which provides empirical evidence to support a nonlinear relationship between corporate environmental performance and environmental disclosure; (2) poor performers disclose more soft information on environmental performance than good performers, and good performers disclose more solid information; and (3) although poor performers increase disclosure after being exposed as environmental violators, they avoid disclosing negative environmental information, such as the violation and the associated penalties. This study provides additional evidence for a nonlinear relationship between environmental performance and disclosure in emerging markets, and suggests environmental disclosure may not be a valid signal to differentiate good performers from poor performers in contemporary China. Copyright © 2014 Elsevier Ltd. All rights reserved.
Evaluation of four methods for estimating leaf area of isolated trees
P.J. Peper; E.G. McPherson
2003-01-01
The accurate modeling of the physiological and functional processes of urban forests requires information on the leaf area of urban tree species. Several non-destructive, indirect leaf area sampling methods have shown good performance for homogenous canopies. These methods have not been evaluated for use in urban settings where trees are typically isolated and...
ERIC Educational Resources Information Center
Chu, Hui-Chun; Chang, Shao-Chen
2014-01-01
Although educational computer games have been recognized as being a promising approach, previous studies have indicated that, without supportive models, students might only show temporary interest during the game-based learning process, and their learning performance is often not as good as expected. Therefore, in this paper, a two-tier test…
Never Good Enough: The Educational Journey of a Vietnamese American Woman
ERIC Educational Resources Information Center
Nguyen, Annie T.
2014-01-01
In this article, author Annie Nguyen describes her personal encounters with the "Model Minority Myth" as a young Vietnamese American in a doctoral program. This myth assumes that all Asian Americans are inherently smart and high achieving, which is problematic when in fact there are many Asian American individuals who perform poorly.…
A Sense of Balance: District Aligns Personalized Learning with School and System Goals
ERIC Educational Resources Information Center
Donsky, Debbie; Witherow, Kathy
2015-01-01
This article addresses the challenge of personalizing learning while also ensuring alignment with system and school improvement plans. Leaders of the York Region District School Board in Ontario knew that what took their high-performing school district from good to great would not take it from great to excellent. The district's early model of…
Stepping Stones: Five Ways to Increase Craftsmanship in the Art Room
ERIC Educational Resources Information Center
Balsley, Jessica
2012-01-01
Art educators consistently strive to coach and model good craftsmanship to their students. Sure, teachers can check to ensure students are understanding the art concepts, test them on the vocabulary or even assess students on their color mixing strategies. If these art standards are performed in a sloppy manner (i.e.: lacking craftsmanship),…
Keshavarzi, Sareh; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Pakfetrat, Maryam
2012-01-01
BACKGROUND. In many studies with longitudinal data, time-dependent covariates can only be measured intermittently (not at all observation times), and this presents difficulties for standard statistical analyses. This situation is common in medical studies, and methods that deal with this challenge would be useful. METHODS. In this study, we applied seemingly unrelated regression (SUR)-based models, with respect to each observation time in longitudinal data with intermittently observed time-dependent covariates, and compared these models with mixed-effect regression models (MRMs) under three classic imputation procedures. Simulation studies were performed to compare the sample-size properties of the estimated coefficients for different modeling choices. RESULTS. In general, the proposed models showed good performance in the presence of intermittently observed time-dependent covariates. However, when we considered only the observed values of the covariate without any imputation, the resulting biases were greater. The performance of the proposed SUR-based models was nearly similar to that of MRMs using classic imputation methods, with approximately equal amounts of bias and MSE. CONCLUSION. The simulation study suggests that the SUR-based models work as efficiently as MRMs in the case of intermittently observed time-dependent covariates. Thus, they can be used as an alternative to MRMs.
McCauley, Peter; Kalachev, Leonid V; Mollicone, Daniel J; Banks, Siobhan; Dinges, David F; Van Dongen, Hans P A
2013-12-01
Recent experimental observations and theoretical advances have indicated that the homeostatic equilibrium for sleep/wake regulation--and thereby sensitivity to neurobehavioral impairment from sleep loss--is modulated by prior sleep/wake history. This phenomenon was predicted by a biomathematical model developed to explain changes in neurobehavioral performance across days in laboratory studies of total sleep deprivation and sustained sleep restriction. The present paper focuses on the dynamics of neurobehavioral performance within days in this biomathematical model of fatigue. Without increasing the number of model parameters, the model was updated by incorporating time-dependence in the amplitude of the circadian modulation of performance. The updated model was calibrated using a large dataset from three laboratory experiments on psychomotor vigilance test (PVT) performance, under conditions of sleep loss and circadian misalignment; and validated using another large dataset from three different laboratory experiments. The time-dependence of circadian amplitude resulted in improved goodness-of-fit in night shift schedules, nap sleep scenarios, and recovery from prior sleep loss. The updated model predicts that the homeostatic equilibrium for sleep/wake regulation--and thus sensitivity to sleep loss--depends not only on the duration but also on the circadian timing of prior sleep. This novel theoretical insight has important implications for predicting operator alertness during work schedules involving circadian misalignment such as night shift work.
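The structure described above, a homeostatic term plus a circadian term whose amplitude depends on prior wakefulness, can be sketched as a toy predictor. Every parameter value here is invented for illustration; these are not the model's calibrated values.

```python
import math

# Toy two-process-style predictor in the spirit of the model above: a
# homeostatic pressure that grows with time awake, plus a circadian term
# whose amplitude itself depends on time awake (the paper's key update).
# All parameter values are invented, not the calibrated values.

def performance_deficit(hours_awake, clock_hour,
                        rise=0.5, amp0=1.0, amp_gain=0.05, peak_hour=18.0):
    homeostatic = rise * hours_awake                  # sleep pressure
    amplitude = amp0 + amp_gain * hours_awake         # time-dependent amplitude
    circadian = amplitude * math.cos(
        2.0 * math.pi * (clock_hour - peak_hour) / 24.0)
    return homeostatic + circadian
```

With a fixed amplitude the circadian swing would be the same after 4 or 20 hours awake; making the amplitude grow with time awake reproduces the larger within-day performance swings seen late in sleep deprivation.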
NASA Astrophysics Data System (ADS)
Xu, Shuang; Gao, Jun; Wang, Linlin; Kan, Kan; Xie, Yu; Shen, Peikang; Li, Li; Shi, Keying
2015-08-01
Establishing heterostructures, as a good strategy to improve gas sensing performance, has been studied extensively. In this research, In2O3-composite SnO2 nanorod (ICTOs) heterostructures have been prepared via electrospinning, followed by calcination. It is found that In2O3 can improve the carrier density and oxygen deficiency of SnO2. In particular, the 3ICTO (Sn : In atom ratio of 25 : 0.3) nanorods with special particle distributions show an excellent sensing response towards different concentrations of NOx at room temperature. The highest sensing response is up to 8.98 for 100 ppm NOx with a fast response time of 4.67 s, which is over 11 times higher than that of pristine SnO2 nanorods at room temperature, and the lowest detection limit is down to 0.1 ppm. More significantly, it presents good stability after 30 days for NOx of low concentration (0.1 ppm and 0.5 ppm). In addition, a rational band structure model combined with the surface depletion model is presented to describe the NOx gas sensing mechanism of 3ICTO. The 3ICTO nanorods may be promising in the application of gas sensors. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr03796d
From global circulation to flood loss: Coupling models across the scales
NASA Astrophysics Data System (ADS)
Felder, Guido; Gomez-Navarro, Juan Jose; Bozhinova, Denica; Zischg, Andreas; Raible, Christoph C.; Ole, Roessler; Martius, Olivia; Weingartner, Rolf
2017-04-01
The prediction and prevention of flood losses require an extensive understanding of the underlying meteorological, hydrological, hydraulic and damage processes. Coupled models help to improve the understanding of such underlying processes and therefore contribute to the understanding of flood risk. Using such a modelling approach to determine potentially flood-affected areas and damages requires a complex coupling between several models operating at different spatial and temporal scales. Although the isolated parts of the single modelling components are well established and commonly used in the literature, a full coupling including a mesoscale meteorological model driven by a global circulation model, a hydrologic model, a hydrodynamic model and a flood impact and loss model has not been reported so far. In the present study, we tackle the application of such a coupled model chain in terms of computational resources, scale effects, and model performance. From a technical point of view, the results show the general applicability of such a coupled model, as well as good model performance. From a practical point of view, such an approach enables the prediction of flood-induced damages, although some future challenges have been identified.
Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L
2016-07-01
Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. The dose-volume-based models (standard) performed equally to those incorporating spatial information. Discrimination was similar between models, but the RFCstandard had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d.=0.09) and 3.9 (s.d.=2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. The RFCstandard model's performance is modest to good but should be improved, and it requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.
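The discrimination metric reported above, area under the receiver operating characteristic curve, can be computed directly from the Mann-Whitney U statistic; a minimal version:

```python
# Minimal ROC AUC via the Mann-Whitney statistic: the probability that a
# randomly chosen positive case is scored above a randomly chosen negative
# case, counting ties as one half.

def roc_auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.71, as reported for the RFC model, means a randomly chosen patient who developed severe mucositis was ranked above a randomly chosen patient who did not about 71% of the time.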
DOE Office of Scientific and Technical Information (OSTI.GOV)
Breuker, M.S.; Braun, J.E.
This paper presents a detailed evaluation of the performance of a statistical, rule-based fault detection and diagnostic (FDD) technique presented by Rossi and Braun (1997). Steady-state and transient tests were performed on a simple rooftop air conditioner over a range of conditions and fault levels. The steady-state data without faults were used to train models that predict outputs for normal operation. The transient data with faults were used to evaluate FDD performance. The effect of a number of design variables on FDD sensitivity for different faults was evaluated, and two prototype systems were specified for more complete evaluation. Good performance was achieved in detecting and diagnosing five faults using only six temperatures (2 input and 4 output) and linear models. The performance improved by about a factor of two when ten measurements (three input and seven output) and higher-order models were used. This approach for evaluating and optimizing the performance of the statistical, rule-based FDD technique could be used as a design and evaluation tool when applying this FDD method to other packaged air-conditioning systems. Furthermore, the approach could also be modified to evaluate the performance of other FDD methods.
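The rule-based step of such an FDD scheme can be sketched as thresholding model residuals and matching the residual sign pattern against a diagnosis table. The threshold and rules below are hypothetical illustrations, not Rossi and Braun's actual rules.

```python
# Hypothetical sketch of the rule-based FDD step: models trained on fault-free
# data predict each output; residuals beyond a threshold trigger a detection,
# and the residual sign pattern is looked up in a diagnosis table. The
# threshold and rules are invented, not Rossi and Braun's.

THRESHOLD = 2.0  # residual threshold, in multiples of the training std dev

# Sign pattern of (evaporator temp, suction temp) residuals -> diagnosis.
RULES = {
    (-1, 1): "low refrigerant charge",
    (1, 1): "condenser fouling",
}

def detect_and_diagnose(residuals_sigma):
    """residuals_sigma: per-output residuals scaled by the training std dev."""
    if all(abs(r) <= THRESHOLD for r in residuals_sigma):
        return ("normal", None)
    pattern = tuple(1 if r > 0 else -1 for r in residuals_sigma)
    return ("fault", RULES.get(pattern, "unknown fault"))
```

Adding output measurements enlarges the sign pattern, which is one way more sensors roughly double diagnostic performance: more faults become distinguishable.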
Challenges in modeling the X-29A flight test performance
NASA Technical Reports Server (NTRS)
Hicks, John W.; Kania, Jan; Pearce, Robert; Mills, Glen
1987-01-01
Presented are methods, instrumentation, and difficulties associated with drag measurement of the X-29A aircraft. The initial performance objective of the X-29A program emphasized drag polar shapes rather than absolute drag levels. Priorities during the flight envelope expansion restricted the evaluation of aircraft performance. Changes in aircraft configuration, uncertainties in angle-of-attack calibration, and limitations in instrumentation complicated the analysis. Limited engine instrumentation with uncertainties in overall in-flight thrust accuracy made it difficult to obtain reliable values of coefficient of parasite drag. The aircraft was incapable of tracking the automatic camber control trim schedule for optimum wing flaperon deflection during typical dynamic performance maneuvers; this has also complicated the drag polar shape modeling. The X-29A was far enough off the schedule that the developed trim drag correction procedure has proven inadequate. However, good drag polar shapes have been developed throughout the flight envelope. Preliminary flight results have compared well with wind tunnel predictions. A more comprehensive analysis must be done to complete performance models. The detailed flight performance program with a calibrated engine will benefit from the experience gained during this preliminary performance phase.
Challenges in modeling the X-29A flight test performance
NASA Technical Reports Server (NTRS)
Hicks, John W.; Kania, Jan; Pearce, Robert; Mills, Glen
1987-01-01
The paper presents the methods, instrumentation, and difficulties associated with drag measurement of the X-29A aircraft. The initial performance objective of the X-29A program emphasized drag polar shapes rather than absolute drag levels. Priorities during the flight envelope expansion restricted the evaluation of aircraft performance. Changes in aircraft configuration, uncertainties in angle-of-attack calibration, and limitations in instrumentation complicated the analysis. Limited engine instrumentation with uncertainties in overall in-flight thrust accuracy made it difficult to obtain reliable values of coefficient of parasite drag. The aircraft was incapable of tracking the automatic camber control trim schedule for optimum wing flaperon deflection during typical dynamic performance maneuvers; this has also complicated the drag polar shape modeling. The X-29A was far enough off the schedule that the developed trim drag correction procedure has proven inadequate. Despite these obstacles, good drag polar shapes have been developed throughout the flight envelope. Preliminary flight results have compared well with wind tunnel predictions. A more comprehensive analysis must be done to complete the performance models. The detailed flight performance program with a calibrated engine will benefit from the experience gained during this preliminary performance phase.
Static and transient performance prediction for CFB boilers using a Bayesian-Gaussian Neural Network
NASA Astrophysics Data System (ADS)
Ye, Haiwen; Ni, Weidou
1997-06-01
A Bayesian-Gaussian Neural Network (BGNN) is put forward in this paper to predict the static and transient performance of Circulating Fluidized Bed (CFB) boilers. The advantages of this network over Back-Propagation Neural Networks (BPNNs), namely easier determination of topology, a simpler and faster training process, and a self-organizing ability, make it more practical for on-line performance prediction of complicated processes. Simulation shows that this network is comparable to BPNNs in predicting the performance of CFB boilers. Good and practical on-line performance predictions are essential for operational guidance and model predictive control of CFB boilers, which are under research by the authors.
A dynamical study of Galactic globular clusters under different relaxation conditions
NASA Astrophysics Data System (ADS)
Zocchi, A.; Bertin, G.; Varri, A. L.
2012-03-01
Aims: We perform a systematic combined photometric and kinematic analysis of a sample of globular clusters under different relaxation conditions, based on their core relaxation time (as listed in available catalogs), by means of two well-known families of spherical stellar dynamical models. Systems characterized by shorter relaxation time scales are expected to be better described by isotropic King models, while less relaxed systems might be interpreted by means of non-truncated, radially-biased anisotropic f(ν) models, originally designed to represent stellar systems produced by a violent relaxation formation process and applied here for the first time to the study of globular clusters. Methods: The comparison between dynamical models and observations is performed by fitting simultaneously surface brightness and velocity dispersion profiles. For each globular cluster, the best-fit model in each family is identified, along with a full error analysis on the relevant parameters. Detailed structural properties and mass-to-light ratios are also explicitly derived. Results: We find that King models usually offer a good representation of the observed photometric profiles, but often lead to less satisfactory fits to the kinematic profiles, independently of the relaxation condition of the systems. For some less relaxed clusters, f(ν) models provide a good description of both observed profiles. Some derived structural characteristics, such as the total mass or the half-mass radius, turn out to be significantly model-dependent. The analysis confirms that, to answer some important dynamical questions that bear on the formation and evolution of globular clusters, it would be highly desirable to acquire larger numbers of accurate kinematic data-points, well distributed over the cluster field. Appendices are available in electronic form at http://www.aanda.org
Material Properties from Air Puff Corneal Deformation by Numerical Simulations on Model Corneas.
Bekesi, Nandor; Dorronsoro, Carlos; de la Hoz, Andrés; Marcos, Susana
2016-01-01
To validate a new method for reconstructing corneal biomechanical properties from air puff corneal deformation images using hydrogel polymer model corneas and porcine corneas. Air puff deformation imaging was performed on model eyes with artificial corneas made out of three different hydrogel materials with three different thicknesses and on porcine eyes, at a constant intraocular pressure of 15 mmHg. The cornea air puff deformation was modeled using finite elements, and hyperelastic material parameters were determined through inverse modeling, minimizing the difference between the simulated and the measured central deformation amplitude and central-peripheral deformation ratio parameters. Uniaxial tensile tests were performed on the model cornea materials as well as on corneal strips, and the results were compared to stress-strain simulations assuming the reconstructed material parameters. The measured and simulated spatial and temporal profiles of the air puff deformation tests were in good agreement (< 7% average discrepancy). The simulated stress-strain curves of the studied hydrogel corneal materials fitted the experimental stress-strain curves from uniaxial extensiometry well, particularly in the 0-0.4 range. Equivalent Young's moduli of the reconstructed material properties from air puff were 0.31, 0.58 and 0.48 MPa for the three polymer materials, respectively, and differed by < 1% from those obtained from extensiometry. The simulations of the same material but different thickness resulted in similar reconstructed material properties. The air-puff reconstructed average equivalent Young's modulus of the porcine corneas was 1.3 MPa, within 18% of that obtained from extensiometry.
Air puff corneal deformation imaging with inverse finite element modeling can retrieve material properties of model hydrogel polymer corneas and real corneas, which are in good correspondence with those obtained from uniaxial extensiometry, suggesting that this is a promising technique to retrieve quantitative corneal biomechanical properties.
Benchmarking hydrological model predictive capability for UK River flows and flood peaks.
NASA Astrophysics Data System (ADS)
Lane, Rosanna; Coxon, Gemma; Freer, Jim; Wagener, Thorsten
2017-04-01
Data and hydrological models are now available for national hydrological analyses. However, hydrological model performance varies between catchments, and lumped, conceptual models are not able to produce adequate simulations everywhere. This study aims to benchmark hydrological model performance for catchments across the United Kingdom within an uncertainty analysis framework. We have applied four hydrological models from the FUSE framework to 1128 catchments across the UK. These models are all lumped models run at a daily timestep, but they differ in model structural architecture and process parameterisations, therefore producing different but equally plausible simulations. We apply FUSE over the 20-year period 1988-2008, within a GLUE Monte Carlo uncertainty analysis framework. Model performance was evaluated for each catchment, model structure and parameter set using standard performance metrics, calculated both for the whole time series and for sub-periods to assess seasonal differences in model performance. The GLUE uncertainty analysis framework was then applied to produce simulated 5th and 95th percentile uncertainty bounds for the daily flow time series, and additionally the annual maximum prediction bounds for each catchment. The results show that model performance varies significantly in space and time depending on catchment characteristics including climate, geology and human impact. We identify regions where models are systematically failing to produce good results, and present reasons why this could be the case. We also identify regions or catchment characteristics where one model performs better than others, and explore which structural component or parameterisation enables certain models to produce better simulations in these catchments. Model predictive capability was assessed for each catchment by examining the ability of the models to produce discharge prediction bounds which successfully bound the observed discharge.
These results improve our understanding of the predictive capability of simple conceptual hydrological models across the UK and help us to identify where further effort is needed to develop modelling approaches to better represent different catchment and climate typologies.
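The GLUE prediction bounds described above are, in essence, likelihood-weighted percentiles of the behavioural simulations at each timestep. A minimal single-timestep sketch, with hypothetical simulated flows and likelihood weights (real applications would rescale a measure such as Nash-Sutcliffe efficiency over many thousands of parameter sets):

```python
def glue_bounds(simulations, likelihoods, lower=0.05, upper=0.95):
    """Likelihood-weighted 5th/95th percentile bounds at one timestep (GLUE-style)."""
    pairs = sorted(zip(simulations, likelihoods))       # sort by simulated value
    total = sum(w for _, w in pairs)
    cum, lo, hi = 0.0, None, None
    for value, w in pairs:
        cum += w / total                                # cumulative likelihood weight
        if lo is None and cum >= lower:
            lo = value
        if hi is None and cum >= upper:
            hi = value
    return lo, hi

# Hypothetical behavioural simulations of flow at one timestep, with weights
sims    = [2.1, 2.4, 2.7, 3.0, 3.6]
weights = [0.1, 0.3, 0.3, 0.2, 0.1]
print(glue_bounds(sims, weights))   # -> (2.1, 3.6)
```

An observation falling inside these bounds counts as "successfully bounded" when assessing predictive capability.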
Theory and performance of plated thermocouples.
NASA Technical Reports Server (NTRS)
Pesko, R. N.; Ash, R. L.; Cupschalk, S. G.; Germain, E. F.
1972-01-01
A theory has been developed to describe the performance of thermocouples which have been formed by electroplating portions of one thermoelectric material with another. The electroplated leg of the thermocouple was modeled as a collection of infinitesimally small homogeneous thermocouples connected in series. Experiments were performed using several combinations of Constantan wire sizes and copper plating thicknesses. A transient method was used to develop the thermoelectric calibrations, and the theory was found to be in quite good agreement with the experiments. In addition, data gathered in a Soviet experiment were also found to be in close agreement with the theory.
Protein single-model quality assessment by feature-based probability density functions.
Cao, Renzhi; Cheng, Jianlin
2016-04-04
Protein quality assessment (QA) plays an important role in protein structure prediction. We developed a novel single-model quality assessment method, Qprob. Qprob calculates the absolute error of each protein feature value against the true quality scores (i.e., GDT-TS scores) of protein structural models, and uses these errors to estimate probability density distributions for quality assessment. Qprob was blindly tested in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM-NOVEL server. The official CASP results show that Qprob ranks as one of the top single-model QA methods. In addition, Qprob contributes to our protein tertiary structure predictor MULTICOM, which was officially ranked 3rd out of 143 predictors. This good performance shows that Qprob is effective at assessing the quality of models of hard targets. These results demonstrate that this new probability-density-based method is effective for protein single-model quality assessment and useful for protein structure prediction. The Qprob web server and software are freely available at: http://calla.rnet.missouri.edu/qprob/.
NASA Astrophysics Data System (ADS)
Hasan, Husna; Radi, Noor Fadhilah Ahmad; Kassim, Suraiya
2012-05-01
Extreme share returns in Malaysia are studied. The monthly, quarterly, half-yearly and yearly maximum returns are fitted to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests are performed to test for stationarity, while the Mann-Kendall (MK) test checks for the presence of a monotonic trend. Maximum Likelihood Estimation (MLE) is used to estimate the parameters, while the L-moments estimate (LMOM) is used to initialize the MLE optimization routine for the stationary model. A likelihood ratio test is performed to determine the best model. Sherman's goodness-of-fit test is used to assess the quality of convergence of the monthly, quarterly, half-yearly and yearly maxima to the GEV distribution. Return levels are then estimated for prediction and planning purposes. The results show that the maximum returns for all selection periods are stationary. The Mann-Kendall test indicates the existence of a trend, so non-stationary models are fitted as well. Model 2, in which the location parameter increases with time, is the best for all selection intervals. Sherman's goodness-of-fit test shows that the monthly, quarterly, half-yearly and yearly maxima converge to the GEV distribution. From the results, it seems reasonable to conclude that the yearly maximum is better for convergence to the GEV distribution, especially if longer records are available. Return level estimates, i.e., the return amounts expected to be exceeded, on average, once every T time periods, start to appear in the confidence interval at T = 50 for the quarterly, half-yearly and yearly maxima.
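Once GEV parameters are estimated, the T-period return level follows from the standard quantile formula z_T = mu + (sigma/xi) * ((-log(1 - 1/T))^(-xi) - 1), with the Gumbel limit for xi near zero. The sketch below uses hypothetical parameter values, not the paper's fitted estimates.

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """Level exceeded on average once every T periods under GEV(mu, sigma, xi)."""
    y = -math.log(1.0 - 1.0 / T)          # reduced variate
    if abs(xi) < 1e-9:                    # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# Hypothetical GEV fit to yearly maximum returns
mu, sigma, xi = 0.08, 0.03, 0.10
print(round(gev_return_level(mu, sigma, xi, 50), 4))   # 50-period return level, ~0.223
```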
Dynamic Modeling, Controls, and Testing for Electrified Aircraft
NASA Technical Reports Server (NTRS)
Connolly, Joseph; Stalcup, Erik
2017-01-01
Electrified aircraft have the potential to provide significant benefits for efficiency and emissions reductions. To assess these potential benefits, modeling tools are needed to provide rapid evaluation of diverse concepts and to ensure safe operability and peak performance over the mission. The modeling challenge for these vehicles is the ability to show significant benefits over the current highly refined aircraft systems. The STARC-ABL (single-aisle turbo-electric aircraft with an aft boundary layer propulsor) is a new test proposal that builds upon previous N3-X team hybrid designs. This presentation describes the STARC-ABL concept, the NASA Electric Aircraft Testbed (NEAT) which will allow testing of the STARC-ABL powertrain, and the related modeling and simulation efforts to date. Modeling and simulation includes a turbofan simulation, Numeric Propulsion System Simulation (NPSS), which has been integrated with NEAT; and a power systems and control model for predicting testbed performance and evaluating control schemes. Model predictions provide good comparisons with testbed data for an NPSS-integrated test of the single-string configuration of NEAT.
A Real-time Breakdown Prediction Method for Urban Expressway On-ramp Bottlenecks
NASA Astrophysics Data System (ADS)
Ye, Yingjun; Qin, Guoyang; Sun, Jian; Liu, Qiyuan
2018-01-01
Breakdown occurrence on expressways is considered to be related to various factors. To investigate the association between breakdowns and these factors, a Bayesian network (BN) model is adopted in this paper. Based on breakdown events identified at 10 urban expressway on-ramps in Shanghai, China, 23 parameters preceding breakdowns are extracted, including dynamic environment conditions aggregated over 5-minute intervals and static geometry features. Data from different time periods are used to predict breakdown. Results indicate that the models using data from 5-10 min prior to breakdown give the best predictions, with prediction accuracies higher than 73%. Moreover, one unified model for all bottlenecks is also built and shows reasonably good prediction performance, with a breakdown classification accuracy of about 75% at best. Additionally, to simplify the model's parameter input, a random forests (RF) model is adopted to identify the key variables. Modeling with the selected 7 parameters, the refined BN model can predict breakdown with adequate accuracy.
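A full Bayesian network is beyond a short snippet, but the probabilistic classification of pre-breakdown conditions can be illustrated with a Gaussian naive Bayes model, which is a BN with a single class node and conditionally independent features. The traffic figures below are invented for illustration, not the Shanghai data.

```python
import math
import statistics

def gauss_pdf(x, mu, sd):
    return math.exp(-((x - mu) ** 2) / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

def train(samples, labels):
    """Per-class priors and per-feature (mean, stdev) for Gaussian naive Bayes."""
    model = {}
    for c in set(labels):
        rows = [s for s, l in zip(samples, labels) if l == c]
        cols = list(zip(*rows))
        model[c] = (len(rows) / len(samples),
                    [(statistics.mean(col), statistics.stdev(col)) for col in cols])
    return model

def p_breakdown(model, x):
    """Posterior probability of the 'breakdown' class for feature vector x."""
    scores = {}
    for c, (prior, params) in model.items():
        s = prior
        for xi, (mu, sd) in zip(x, params):
            s *= gauss_pdf(xi, mu, sd)
        scores[c] = s
    return scores["breakdown"] / sum(scores.values())

# Toy 5-minute aggregates before the observation: [flow (veh/5 min), speed (km/h)]
X = [[300, 70], [320, 65], [310, 68], [420, 38], [450, 35], [430, 40]]
y = ["free", "free", "free", "breakdown", "breakdown", "breakdown"]
model = train(X, y)
print(p_breakdown(model, [440, 36]) > 0.9)   # high-flow, low-speed case -> True
```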
Regime-Based Evaluation of Cloudiness in CMIP5 Models
NASA Technical Reports Server (NTRS)
Jin, Daeho; Oreopoulos, Lazaros; Lee, Dong Min
2016-01-01
The concept of Cloud Regimes (CRs) is used to develop a framework for evaluating the cloudiness of 12 models from phase 5 of the Coupled Model Intercomparison Project (CMIP5). Reference CRs come from existing global International Satellite Cloud Climatology Project (ISCCP) weather states. The evaluation is made possible by the implementation in several CMIP5 models of the ISCCP simulator, generating for each gridcell daily joint histograms of cloud optical thickness and cloud top pressure. Model performance is assessed with several metrics, such as CR global cloud fraction (CF), CR relative frequency of occurrence (RFO), their product (long-term average total cloud amount [TCA]), cross-correlations of CR RFO maps, and a metric of resemblance between model and ISCCP CRs. In terms of CR global RFO, arguably the most fundamental metric, the models perform unsatisfactorily overall, except for CRs representing thick storm clouds. Because model CR CF is internally constrained by our method, RFO discrepancies also yield substantial TCA errors. Our findings support previous studies showing that CMIP5 models underestimate cloudiness. The multi-model mean performs well in matching observed RFO maps for many CRs, but is not the best for this or other metrics. When overall performance across all CRs is assessed, some models, despite their shortcomings, apparently outperform Moderate Resolution Imaging Spectroradiometer (MODIS) cloud observations evaluated against ISCCP as if they were another model output. Lastly, cloud simulation performance is contrasted with each model's equilibrium climate sensitivity (ECS) in order to gain insight into whether good cloud simulation pairs with particular values of this parameter.
Zhou, Xiangrong; Xu, Rui; Hara, Takeshi; Hirano, Yasushi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Kido, Shoji; Fujita, Hiroshi
2014-07-01
The shapes of the inner organs are important information for medical image analysis. Statistical shape modeling provides a way of quantifying and measuring shape variations of the inner organs in different patients. In this study, we developed a universal scheme that can be used for efficiently building statistical shape models of different inner organs. This scheme combines traditional point distribution modeling with a group-wise optimization method based on a measure called minimum description length to provide a practical means of 3D organ shape modeling. In experiments, the proposed scheme was applied to the building of five statistical shape models for the heart, liver, spleen, and right and left kidneys by use of 50 cases of 3D torso CT images. The performance of these models was evaluated by three measures: model compactness, model generalization, and model specificity. The experimental results showed that the constructed shape models have good "compactness" and satisfactory "generalization" performance for different organ shape representations; however, the "specificity" of these models should be improved in the future.
Wu, Jun; Li, Chengbing; Huo, Yueying
2014-01-01
The safety of dangerous goods transport is directly related to the operational safety of dangerous goods transport enterprises. To address the high accident rate and large potential harm in dangerous goods logistics, this paper casts the group decision-making problem, based on the idea of integration and coordination, as a multiagent, multiobjective group decision-making problem; a two-level decision model is established and applied to the safety assessment of dangerous goods transport enterprises. First, dynamic multivalue background and entropy theory are used to build the first-level multiobjective decision model. Second, expert weights are assigned according to the principle of cluster analysis, and relative entropy theory is combined with this to establish a second-level group optimization model based on relative entropy in group decision making; the solution of the model is discussed. Then, after investigation and analysis, a safety evaluation index system for dangerous goods transport enterprises is established. Finally, a case analysis of five dangerous goods transport enterprises in the Inner Mongolia Autonomous Region validates the feasibility and effectiveness of this model for assessing dangerous goods transport enterprises, providing a vital decision-making basis for their recognition. PMID:25477954
Vertical electro-absorption modulator design and its integration in a VCSEL
NASA Astrophysics Data System (ADS)
Marigo-Lombart, L.; Calvez, S.; Arnoult, A.; Thienpont, H.; Almuneau, G.; Panajotov, K.
2018-04-01
Electro-absorption modulators, either embedded in CMOS technology or integrated with a semiconductor laser, are of high interest for many applications such as optical communications, signal processing and 3D imaging. Recently, the integration of a surface-normal electro-absorption modulator into a vertical-cavity surface-emitting laser has been considered. In this paper we implement a simple quantum well electro-absorption model and design and optimize an asymmetric Fabry-Pérot semiconductor modulator while considering all physical properties within figures of merit. We also extend this model to account for the impact of temperature on the different parameters involved in the calculation of the absorption, such as refractive indices and exciton transition broadening. Two types of vertical modulator structures have been fabricated and experimentally characterized by reflectivity and photocurrent measurements demonstrating a very good agreement with our model. Finally, preliminary results of an electro-absorption modulator vertically integrated with a vertical-cavity surface-emitting laser device are presented, showing good modulation performances required for high speed communications.
NASA Technical Reports Server (NTRS)
Smith, James A.
1992-01-01
The inversion of the leaf area index (LAI) canopy parameter from optical spectral reflectance measurements is obtained using a backpropagation artificial neural network trained using input-output pairs generated by a multiple scattering reflectance model. The problem of LAI estimation over sparse canopies (LAI < 1.0) with varying soil reflectance backgrounds is particularly difficult. Standard multiple regression methods applied to canopies within a single homogeneous soil type yield good results but perform unacceptably when applied across soil boundaries, resulting in absolute percentage errors of >1000 percent for low LAI. Minimization methods applied to merit functions constructed from differences between measured reflectances and predicted reflectances using multiple-scattering models are unacceptably sensitive to a good initial guess for the desired parameter. In contrast, the neural network reported generally yields absolute percentage errors of <30 percent when weighting coefficients trained on one soil type were applied to predicted canopy reflectance at a different soil background.
Zumpano, Camila Eugênia; Mendonça, Tânia Maria da Silva; Silva, Carlos Henrique Martins da; Correia, Helena; Arnold, Benjamin; Pinto, Rogério de Melo Costa
2017-01-23
This study aimed to perform the cross-cultural adaptation and validation of the Patient-Reported Outcomes Measurement Information System (PROMIS) Global Health scale in the Portuguese language. The ten Global Health items were cross-culturally adapted by the method proposed in the Functional Assessment of Chronic Illness Therapy (FACIT). The instrument's final version in Portuguese was self-administered by 1,010 participants in Brazil. The scale's precision was verified by floor and ceiling effects analysis, reliability of internal consistency, and test-retest reliability. Exploratory and confirmatory factor analyses were used to assess the construct's validity and instrument's dimensionality. Calibration of the items used the Gradual Response Model proposed by Samejima. Four global items required adjustments after the pretest. Analysis of the psychometric properties showed that the Global Health scale has good reliability, with Cronbach's alpha of 0.83 and intra-class correlation of 0.89. Exploratory and confirmatory factor analyses showed good fit in the previously established two-dimensional model. The Global Physical Health and Global Mental Health scale showed good latent trait coverage according to the Gradual Response Model. The PROMIS Global Health items showed equivalence in Portuguese compared to the original version and satisfactory psychometric properties for application in clinical practice and research in the Brazilian population.
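Cronbach's alpha, used above to quantify internal-consistency reliability, is alpha = k/(k-1) * (1 - sum of item variances / variance of total scores) for k items. A minimal sketch with hypothetical item scores, not the PROMIS data:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns (one list of respondent scores per item)."""
    k = len(items)
    item_vars = sum(statistics.variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]   # total score per respondent
    return (k / (k - 1)) * (1 - item_vars / statistics.variance(totals))

# Hypothetical scores: 3 items, 5 respondents
item1 = [3, 4, 5, 2, 4]
item2 = [3, 5, 4, 2, 5]
item3 = [4, 4, 5, 3, 4]
print(round(cronbach_alpha([item1, item2, item3]), 2))   # -> 0.87
```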
A novel double loop control model design for chemical unstable processes.
Cong, Er-Ding; Hu, Ming-Hui; Tu, Shan-Tung; Xuan, Fu-Zhen; Shao, Hui-He
2014-03-01
In this manuscript, based on the Smith predictor control scheme for unstable industrial processes, an improved double-loop control model is proposed for chemical unstable processes. The inner loop stabilizes the unstable process and transforms the original process into a stable first-order plus pure dead-time process. The outer loop enhances the set-point response, and a disturbance controller is designed to enhance the disturbance response. The improved control system is simple, with exact physical meaning, and its characteristic equation is easy to stabilize. The three controllers in the improved scheme are designed separately; each is easy to design and gives good control performance for its respective closed-loop transfer function. The robust stability of the proposed control scheme is analyzed. Finally, case studies illustrate that the improved method can give better system performance than existing design methods. © 2013 ISA. Published by ISA. All rights reserved.
Experimental evaluation of expendable supersonic nozzle concepts
NASA Technical Reports Server (NTRS)
Baker, V.; Kwon, O.; Vittal, B.; Berrier, B.; Re, R.
1990-01-01
Exhaust nozzles for expendable supersonic turbojet engine missile propulsion systems are required to be simple, short and compact, in addition to having good broad-range thrust-minus-drag performance. A series of convergent-divergent nozzle scale model configurations were designed and wind tunnel tested for a wide range of free stream Mach numbers and nozzle pressure ratios. The models included fixed geometry and simple variable exit area concepts. The experimental and analytical results show that the fixed geometry configurations tested have inferior off-design thrust-minus-drag performance in the transonic Mach range. A simple variable exit area configuration called the Axi-Quad nozzle, combining features of both axisymmetric and two-dimensional convergent-divergent nozzles, performed well over a broad range of operating conditions. Analytical predictions of the flow pattern as well as overall performance of the nozzles, using a fully viscous, compressible CFD code, compared very well with the test data.
Forecasting electricity usage using univariate time series models
NASA Astrophysics Data System (ADS)
Hock-Eam, Lim; Chee-Yin, Yip
2014-12-01
Electricity is one of the most important energy sources. A sufficient supply of electricity is vital to support a country's development and growth. Due to changing socio-economic characteristics, increasing competition, and deregulation of the electricity supply industry, electricity demand forecasting is even more important than before. It is imperative to evaluate and compare the predictive performance of various forecasting methods, as this provides further insight into the weaknesses and strengths of each method. In the literature, there is mixed evidence on the best forecasting methods for electricity demand. This paper compares the predictive performance of univariate time series models for forecasting electricity demand, using monthly data on maximum electricity load in Malaysia from January 2003 to December 2013. Results reveal that the Box-Jenkins method produces the best out-of-sample predictive performance, while the Holt-Winters exponential smoothing method is a good choice for in-sample predictive performance.
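To give a flavor of the methods compared, Holt's linear-trend smoothing (the non-seasonal core of the Holt-Winters method mentioned above) can be written in a few lines. The load series and smoothing constants below are hypothetical, not the Malaysian data.

```python
def holt_forecast(y, alpha=0.8, beta=0.2, horizon=1):
    """Holt's linear-trend exponential smoothing; forecast `horizon` steps ahead."""
    level, trend = y[0], y[1] - y[0]          # initialize from the first two points
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

# Hypothetical monthly maximum-load series with a steady upward trend
load = [100, 103, 106, 109, 112, 115]
print(round(holt_forecast(load, horizon=1), 1))   # perfectly linear data -> 118.0
```

The full Holt-Winters method adds a third smoothing equation for a seasonal component, which matters for monthly load data with annual cycles.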
Kahnert, Michael; Nousiainen, Timo; Lindqvist, Hannakaisa; Ebert, Martin
2012-04-23
Light scattering by light-absorbing carbon (LAC) aggregates encapsulated in sulfate shells is computed by use of the discrete dipole method. Computations are performed for a UV, a visible, and an IR wavelength, different particle sizes, and volume fractions. Reference computations are compared to three classes of simplified model particles that have been proposed for climate modeling purposes. None of these models matches the reference results sufficiently well. Remarkably, the more realistic core-shell geometries fall behind the homogeneous mixture models. An extended model based on a core-shell-shell geometry is proposed and tested. Good agreement is found for total optical cross sections and the asymmetry parameter. © 2012 Optical Society of America
Constant-parameter capture-recapture models
Brownie, C.; Hines, J.E.; Nichols, J.D.
1986-01-01
Jolly (1982, Biometrics 38, 301-321) presented modifications of the Jolly-Seber model for capture-recapture data, which assume constant survival and/or capture rates. Where appropriate, because of the reduced number of parameters, these models lead to more efficient estimators than the Jolly-Seber model. The tests to compare models given by Jolly do not make complete use of the data, and we present here the appropriate modifications, and also indicate how to carry out goodness-of-fit tests which utilize individual capture history information. We also describe analogous models for the case where young and adult animals are tagged. The availability of computer programs to perform the analysis is noted, and examples are given using output from these programs.
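The Jolly-Seber models discussed above require purpose-built software, but the underlying capture-recapture logic is visible in the simple two-sample case. Chapman's bias-corrected version of the Lincoln-Petersen estimator, shown below as a standard illustration (it is not one of the models in the paper), estimates abundance from one marking and one recapture occasion:

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimate.

    n1: animals marked on the first occasion
    n2: animals captured on the second occasion
    m2: marked animals among those n2 recaptures
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical survey: 200 marked, 150 captured later, 30 of them marked
print(round(chapman_estimate(200, 150, 30), 1))   # -> 978.1
```

Multi-occasion models such as Jolly-Seber generalize this idea, adding survival and capture probabilities between occasions.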
Filament winding cylinders. II - Validation of the process model
NASA Technical Reports Server (NTRS)
Calius, Emilio P.; Lee, Soo-Yong; Springer, George S.
1990-01-01
Analytical and experimental studies were performed to validate the model developed by Lee and Springer for simulating the manufacturing process of filament wound composite cylinders. First, results calculated by the Lee-Springer model were compared to results of the Calius-Springer thin cylinder model. Second, temperatures and strains calculated by the Lee-Springer model were compared to data. The data used in these comparisons were generated during the course of this investigation with cylinders made of Hercules IM-6G/HBRF-55 and Fiberite T-300/976 graphite-epoxy tows. Good agreement was found between the calculated and measured stresses and strains, indicating that the model is a useful representation of the winding and curing processes.
García Nieto, Paulino José; González Suárez, Victor Manuel; Álvarez Antón, Juan Carlos; Mayo Bayón, Ricardo; Sirgo Blanco, José Ángel; Díaz Fernández, Ana María
2015-01-01
The aim of this study was to obtain a predictive model able to perform early detection of central segregation severity in continuous cast steel slabs. Segregation in steel cast products is an internal defect that can be very harmful when slabs are rolled in heavy plate mills. In this research work, the central segregation was successfully studied using a data mining methodology based on the multivariate adaptive regression splines (MARS) technique. For this purpose, the most important physical-chemical parameters are considered. The results of the present study are two-fold. First, the significance of each physical-chemical variable on the segregation is presented through the model. Second, a model for forecasting segregation is obtained. Regression with optimal hyperparameters was performed, and coefficients of determination equal to 0.93 for continuity factor estimation and 0.95 for average width were obtained when the MARS technique was applied to the experimental dataset. The agreement between the experimental data and the model confirmed the good performance of the latter.
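The coefficients of determination quoted above can be reproduced with a few lines of code; the following is a minimal sketch with hypothetical observation/prediction pairs, not the authors' MARS pipeline or data:

```python
import numpy as np

def r_squared(y_obs, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_obs - y_pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_obs - np.mean(y_obs)) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical segregation measurements vs. model predictions
obs = [1.2, 0.8, 1.5, 2.0, 1.1]
pred = [1.1, 0.9, 1.4, 1.9, 1.2]
print(round(r_squared(obs, pred), 3))  # prints 0.94
```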
Influence of Wind Model Performance on Wave Forecasts of the Naval Oceanographic Office
NASA Astrophysics Data System (ADS)
Gay, P. S.; Edwards, K. L.
2017-12-01
Significant discrepancies between the Naval Oceanographic Office's significant wave height (SWH) predictions and observations have been noted in some model domains. The goal of this study is to evaluate these discrepancies and identify to what extent inaccuracies in the wind predictions may explain inaccuracies in SWH predictions. A one-year time series of data is evaluated at various locations in Southern California and eastern Florida. Correlations are generally quite good, ranging from 73% at Pendleton to 88% at both Santa Barbara, California, and Cape Canaveral, Florida. Correlations for month-long periods off Southern California drop off significantly in late spring through early autumn - less so off eastern Florida - likely due to weaker local wind seas and generally smaller SWH in addition to the influence of remotely-generated swell, which may not propagate accurately into and through the wave models. The results of this study suggest that it is likely that a change in meteorological and/or oceanographic conditions explains the change in model performance, partially as a result of a seasonal reduction in wind model performance in the summer months.
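The percentage correlations reported above correspond to the Pearson correlation coefficient between predicted and observed SWH; a minimal sketch with hypothetical series, not the study's buoy data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two time series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm ** 2) * np.sum(ym ** 2)))

# Hypothetical hourly SWH (m): wave-model prediction vs. buoy observation
model = [1.0, 1.4, 0.9, 2.1, 1.7, 1.2]
buoy  = [1.1, 1.3, 1.0, 2.0, 1.8, 1.1]
print(f"r = {pearson_r(model, buoy):.2f}")
```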
Shared Problem Models and Crew Decision Making
NASA Technical Reports Server (NTRS)
Orasanu, Judith; Statler, Irving C. (Technical Monitor)
1994-01-01
The importance of crew decision making to aviation safety has been well established through NTSB accident analyses: Crew judgment and decision making have been cited as causes or contributing factors in over half of all accidents in commercial air transport, general aviation, and military aviation. Yet the bulk of research on decision making has not proven helpful in improving the quality of decisions in the cockpit. One reason is that traditional analytic decision models are inappropriate to the dynamic, complex nature of cockpit decision making and do not accurately describe what expert human decision makers do when they make decisions. A new model of dynamic naturalistic decision making is offered that may prove more useful for training or aiding cockpit decision making. Based on analyses of crew performance in full-mission simulation and National Transportation Safety Board accident reports, features that define effective decision strategies in abnormal or emergency situations have been identified. These include accurate situation assessment (including time and risk assessment), appreciation of the complexity of the problem, sensitivity to constraints on the decision, timeliness of the response, and use of adequate information. More effective crews also manage their workload to provide themselves with time and resources to make good decisions. In brief, good decisions are appropriate to the demands of the situation and reflect the crew's metacognitive skill. Effective crew decision making and overall performance are mediated by crew communication. Communication contributes to performance because it assures that all crew members have essential information, but it also regulates and coordinates crew actions and is the medium of collective thinking in response to a problem. This presentation will examine the relation between communication and crew performance. Implications of these findings for crew training will be discussed.
Prognostic indices for early mortality in ischaemic stroke - meta-analysis.
Mattishent, K; Kwok, C S; Mahtani, A; Pelpola, K; Myint, P K; Loke, Y K
2016-01-01
Several models have been developed to predict mortality in ischaemic stroke. We aimed to evaluate systematically the performance of published stroke prognostic scores. We searched MEDLINE and EMBASE in February 2014 for prognostic models (published between 2003 and 2014) used in predicting early mortality (<6 months) after ischaemic stroke. We evaluated discriminant ability of the tools through meta-analysis of the area under the receiver operating characteristic curve (AUROC), or c-statistic. We evaluated the following components of study validity: collection of prognostic variables, neuroimaging, treatment pathways and missing data. We identified 18 articles (involving 163 240 patients) reporting on the performance of prognostic models for mortality in ischaemic stroke, with 15 articles providing AUC for meta-analysis. Most studies were either retrospective, or post hoc analyses of prospectively collected data; all but three reported validation data. The iSCORE had the largest number of validation cohorts (five) within our systematic review and showed good performance in four different countries, pooled AUC 0.84 (95% CI 0.82-0.87). We identified other potentially useful prognostic tools that have yet to be as extensively validated as iSCORE - these include SOAR (2 studies, pooled AUC 0.79, 95% CI 0.78-0.80), GWTG (2 studies, pooled AUC 0.72, 95% CI 0.72-0.72) and PLAN (1 study, pooled AUC 0.85, 95% CI 0.84-0.87). Our meta-analysis has identified and summarized the performance of several prognostic scores with modest to good predictive accuracy for early mortality in ischaemic stroke, with the iSCORE having the broadest evidence base. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
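Pooling study-level AUCs as above is commonly done by inverse-variance weighting; the sketch below uses hypothetical cohort values, not the review's data, and a fixed-effect model (a more rigorous approach would pool on the logit scale and allow for between-study heterogeneity):

```python
import numpy as np

def pool_auc(aucs, ses):
    """Fixed-effect inverse-variance pooling of study-level AUCs.
    Returns the pooled AUC and its 95% confidence interval."""
    aucs, ses = np.asarray(aucs, float), np.asarray(ses, float)
    w = 1.0 / ses ** 2                       # inverse-variance weights
    pooled = np.sum(w * aucs) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Hypothetical validation cohorts of a stroke prognostic score
auc, lo, hi = pool_auc([0.83, 0.86, 0.84], [0.02, 0.03, 0.025])
print(f"pooled AUC {auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```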
NASA Astrophysics Data System (ADS)
Levine, N. M.; Galbraith, D.; Christoffersen, B. J.; Imbuzeiro, H. A.; Restrepo-Coupe, N.; Malhi, Y.; Saleska, S. R.; Costa, M. H.; Phillips, O.; Andrade, A.; Moorcroft, P. R.
2011-12-01
The Amazonian rainforests play a vital role in global water, energy and carbon cycling. The sensitivity of this system to natural and anthropogenic disturbances therefore has important implications for the global climate. Some global models have predicted large-scale forest dieback and the savannization of Amazonia over the next century [Meehl et al., 2007]. While several studies have demonstrated the sensitivity of dynamic global vegetation models to changes in temperature, precipitation, and dry season length [e.g. Galbraith et al., 2010; Good et al., 2011], the ability of these models to accurately reproduce ecosystem dynamics of present-day transitional or low biomass tropical forests has not been demonstrated. A model-data intercomparison was conducted with four state-of-the-art terrestrial ecosystem models to evaluate the ability of these models to accurately represent structure, function, and long-term biomass dynamics over a range of Amazonian ecosystems. Each modeling group conducted a series of simulations for 14 sites including mature forest, transitional forest, savannah, and agricultural/pasture sites. All models were run using standard physical parameters and the same initialization procedure. Model results were compared against forest inventory and dendrometer data in addition to flux tower measurements. While the models compared well against field observations for the mature forest sites, significant differences were observed between predicted and measured ecosystem structure and dynamics for the transitional forest and savannah sites. The length of the dry season and soil sand content were good predictors of model performance. In addition, for the big leaf models, model performance was highest for sites dominated by late successional trees and lowest for sites with predominantly early and mid-successional trees. 
This study provides insight into tropical forest function and sensitivity to environmental conditions that will aid in predictions of the response of the Amazonian rainforest to future anthropogenically induced changes.
Effect of authority figures for pedestrian evacuation at metro stations
NASA Astrophysics Data System (ADS)
Song, Xiao; Zhang, Zenghui; Peng, Gongzhuang; Shi, Guoqiang
2017-01-01
Most of the pedestrian evacuation literature concerns routing algorithms, human intelligence, behavior, and so on. Few works have studied how to fully exploit authority/security figures, who know the environment better simply by being there every day. To evaluate the effect of authority figures (AFs) in complex buildings, this paper investigates the AF-related factors that may influence crowd evacuation, such as the number and locations of AFs, their spreading of directions, their calming effect, and their distribution strategies. Social-force-based modeling and simulation results show that these factors play important roles in evacuation efficiency: fewer AFs with the right guiding strategy can still achieve good evacuation performance. For our case study, Zhichun Avenue station, the conclusion is that deploying four AFs is a good choice: it achieves relatively high evacuation performance while saving cost.
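The social-force formulation underlying such simulations combines a driving term toward a goal with repulsion from neighbours; the sketch below is a minimal Helbing-style illustration with hypothetical parameters, not the paper's AF model (an AF could, for example, be represented by overriding an agent's desired direction):

```python
import numpy as np

def social_force(pos, vel, goal, others, v0=1.34, tau=0.5, A=2.0, B=0.3):
    """Net force on one pedestrian: relaxation toward the desired velocity
    (speed v0 toward the goal) plus exponential repulsion from neighbours."""
    pos, vel, goal = (np.asarray(a, dtype=float) for a in (pos, vel, goal))
    e = (goal - pos) / np.linalg.norm(goal - pos)   # desired direction
    f = (v0 * e - vel) / tau                        # driving (relaxation) term
    for q in others:
        d = pos - np.asarray(q, dtype=float)
        dist = np.linalg.norm(d)
        f += A * np.exp(-dist / B) * d / dist       # repulsion from neighbour
    return f

# Pedestrian at the origin heading to an exit at (10, 0), one neighbour nearby
f = social_force(pos=[0.0, 0.0], vel=[0.0, 0.0], goal=[10.0, 0.0],
                 others=[[1.0, 0.5]])
print(f)
```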
Analysis of Logistics in Support of a Human Lunar Outpost
NASA Technical Reports Server (NTRS)
Cirillo, William; Earle, Kevin; Goodliff, Kandyce; Reeves, J. D.; Andrashko, Mark; Merrill, R. Gabe; Stromgren, Chel
2008-01-01
Strategic level analysis of the integrated behavior of lunar transportation system and lunar surface system architecture options is performed to inform NASA Constellation Program senior management on the benefit, viability, affordability, and robustness of system design choices. This paper presents an overview of the approach used to perform the campaign (strategic) analysis, with an emphasis on the logistics modeling and the impacts of logistics resupply on campaign behavior. An overview of deterministic and probabilistic analysis approaches is provided, with a discussion of the importance of each approach to understanding the integrated system behavior. The logistics required to support lunar surface habitation are analyzed from both 'macro-logistics' and 'micro-logistics' perspectives, where macro-logistics focuses on the delivery of goods to a destination and micro-logistics focuses on local handling of re-supply goods at a destination. An example campaign is provided to tie the theories of campaign analysis to results generation capabilities.
Quadruped Robot Locomotion using a Global Optimization Stochastic Algorithm
NASA Astrophysics Data System (ADS)
Oliveira, Miguel; Santos, Cristina; Costa, Lino; Ferreira, Manuel
2011-09-01
Tuning the parameters of nonlinear dynamical systems so that they produce good results is a relevant problem. This article describes the development of a gait optimization system that allows a fast but stable robot quadruped crawl gait. We combine bio-inspired Central Pattern Generators (CPGs) and Genetic Algorithms (GAs). CPGs are modelled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. The GA finds parameterizations of the CPGs which attain good gaits in terms of speed, vibration, and stability. Moreover, two constraint handling techniques, based on tournament selection and a repairing mechanism, are embedded in the GA to solve the proposed constrained optimization problem and make the search more efficient. The experimental results, performed on a simulated Aibo robot, demonstrate that our approach allows low vibration with a high velocity and a wide stability margin for a quadruped slow crawl gait.
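A GA with tournament selection and a repair mechanism, as described above, can be sketched as follows; the CPG differential equations and the speed/vibration/stability fitness are not reproduced here, so a hypothetical quadratic fitness stands in:

```python
import random

def tournament(fit, rng, k=3):
    """Index of the fittest among k randomly sampled individuals (minimization)."""
    return min(rng.sample(range(len(fit)), k), key=lambda i: fit[i])

def ga(fitness, n_params, bounds, pop_size=30, gens=60, p_mut=0.2, sigma=0.1, seed=0):
    """Minimal real-coded GA with tournament selection and a clipping
    'repair' step that keeps parameters inside their bounds."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(gens):
        fit = [fitness(ind) for ind in pop]
        new_pop = []
        for _ in range(pop_size):
            a = pop[tournament(fit, rng)]
            b = pop[tournament(fit, rng)]
            child = [(x + y) / 2 for x, y in zip(a, b)]          # arithmetic crossover
            child = [min(hi, max(lo, c + rng.gauss(0, sigma)))   # mutation + repair
                     if rng.random() < p_mut else c for c in child]
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=fitness)

# Stand-in fitness: distance of three CPG-like parameters from a hypothetical optimum
best = ga(lambda p: sum((x - 0.5) ** 2 for x in p), n_params=3, bounds=(0.0, 1.0))
print([round(x, 2) for x in best])
```

The clipping step is the simplest form of repair; constraint handling via tournament rules (infeasible individuals always lose) would slot in at the selection stage.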
NASA Astrophysics Data System (ADS)
Paik, Kwang-Jun; Park, Hyung-Gil; Seo, Jongsoo
2013-12-01
Simulations of cavitation flow and hull pressure fluctuation for a marine propeller operating behind a hull using the unsteady Reynolds-Averaged Navier-Stokes equations (RANS) are presented. A full hull body submerged under the free surface is modeled in the computational domain to simulate directly the wake field of the ship at the propeller plane. Simulations are performed in design and ballast draught conditions to study the effect of cavitation number, and two propellers with slightly different geometry are simulated to validate the detectability of the numerical simulation. All simulations are performed using the commercial CFD software FLUENT. Cavitation patterns of the simulations show good agreement with the experimental results carried out in the Samsung CAvitation Tunnel (SCAT). The simulation results for the hull pressure fluctuation induced by a propeller are also compared with the experimental results, showing good agreement in tendency and amplitude, especially for the first blade frequency.
Endoscopic third ventriculostomy in the treatment of childhood hydrocephalus.
Kulkarni, Abhaya V; Drake, James M; Mallucci, Conor L; Sgouros, Spyros; Roth, Jonathan; Constantini, Shlomi
2009-08-01
To develop a model to predict the probability of endoscopic third ventriculostomy (ETV) success in the treatment for hydrocephalus on the basis of a child's individual characteristics. We analyzed 618 ETVs performed consecutively on children at 12 international institutions to identify predictors of ETV success at 6 months. A multivariable logistic regression model was developed on 70% of the dataset (training set) and validated on 30% of the dataset (validation set). In the training set, 305/455 ETVs (67.0%) were successful. The regression model (containing patient age, cause of hydrocephalus, and previous cerebrospinal fluid shunt) demonstrated good fit (Hosmer-Lemeshow, P = .78) and discrimination (C statistic = 0.70). In the validation set, 105/163 ETVs (64.4%) were successful and the model maintained good fit (Hosmer-Lemeshow, P = .45), discrimination (C statistic = 0.68), and calibration (calibration slope = 0.88). A simplified ETV Success Score was devised that closely approximates the predicted probability of ETV success. Children most likely to succeed with ETV can now be accurately identified and spared the long-term complications of CSF shunting.
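The train/validate split and C-statistic workflow described above can be sketched as follows; the data, predictors, and coefficients here are synthetic stand-ins, not the ETV cohort or the published ETV Success Score:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
# Hypothetical predictors: age (years), previous shunt (0/1), cause code (0-3)
X = np.column_stack([rng.uniform(0, 18, n),
                     rng.integers(0, 2, n),
                     rng.integers(0, 4, n)])
# Synthetic outcome: success more likely with older age, less with a prior shunt
logit = -0.5 + 0.15 * X[:, 0] - 0.8 * X[:, 1]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# 70% training set, 30% validation set, as in the study design
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
c_stat = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])  # discrimination
print(f"C statistic = {c_stat:.2f}")
```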
Thermal Modeling and Management of Solid Oxide Fuel Cells Operating with Internally Reformed Methane
NASA Astrophysics Data System (ADS)
Wu, Yiyang; Shi, Yixiang; Cai, Ningsheng; Ni, Meng
2018-06-01
A detailed three-dimensional mechanistic model of a large-scale solid oxide fuel cell (SOFC) unit running on partially pre-reformed methane is developed. The model considers the coupling effects of chemical and electrochemical reactions, mass transport, momentum and heat transfer in the SOFC unit. After model validation, parametric simulations are conducted to investigate how the methane pre-reforming ratio affects the transport and electrochemistry of the SOFC unit. It is found that the methane steam reforming reaction has a "smoothing effect", which can achieve more uniform distributions of gas compositions, current density and temperature across the cell plane. In the case of 1500 W/m2 power density output, adding 20% methane absorbs 50% of internal heat production inside the cell, reduces the maximum temperature difference inside the cell from 70 K to 22 K and reduces the cathode air supply by 75%, compared to the condition of complete pre-reforming of methane. Under specific operating conditions, the pre-reforming ratio of methane has an optimal range for obtaining a good temperature distribution and good cell performance.
Xie, Hai-Yang; Liu, Qian; Li, Jia-Hao; Fan, Liu-Yin; Cao, Cheng-Xi
2013-02-21
A novel moving redox reaction boundary (MRRB) model was developed for studying the electrophoretic behavior of analytes involved in redox reactions, on the principle of the moving reaction boundary (MRB). The traditional potassium permanganate method was used to create the boundary model in agarose gel electrophoresis because of the rapid reaction rate associated with MnO(4)(-) and Fe(2+) ions. An MRB velocity equation was proposed to describe the general functional relationship between the velocity of the moving redox reaction boundary (V(MRRB)) and the concentration of reactant, and can be extrapolated to similar MRB techniques. Parameters affecting the redox reaction boundary were investigated in detail. Under the selected conditions, a good linear relationship between boundary movement distance and time was obtained. The potential application of MRRB in electromigration redox reaction titration was demonstrated at two different concentration levels. The precision of the V(MRRB) was studied and the relative standard deviations were below 8.1%, illustrating the good repeatability achieved in this experiment. The proposed MRRB model enriches the MRB theory and also provides a feasible means of manual control of the redox reaction process in electrophoretic analysis.
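Since boundary movement distance is linear in time under the selected conditions, V(MRRB) is simply the slope of a least-squares fit; a minimal sketch with hypothetical readings, not the paper's measurements:

```python
import numpy as np

def mrb_velocity(times, distances):
    """Slope of a least-squares line through boundary position vs. time,
    i.e. the moving-boundary velocity V(MRRB)."""
    t, d = np.asarray(times, float), np.asarray(distances, float)
    slope, _ = np.polyfit(t, d, 1)   # degree-1 fit: slope and intercept
    return slope

# Hypothetical boundary positions read at 1-minute intervals
t = [0, 1, 2, 3, 4]            # min
d = [0.0, 2.1, 3.9, 6.1, 8.0]  # mm
v = mrb_velocity(t, d)
print(f"V(MRRB) = {v:.2f} mm/min")
```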
Implementation of algebraic stress models in a general 3-D Navier-Stokes method (PAB3D)
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.
1995-01-01
A three-dimensional multiblock Navier-Stokes code, PAB3D, which was developed for propulsion integration and general aerodynamic analysis, has been used extensively by NASA Langley and other organizations to perform both internal (exhaust) and external flow analysis of complex aircraft configurations. This code was designed to solve the simplified Reynolds-Averaged Navier-Stokes equations. A two-equation k-epsilon turbulence model has been used with considerable success, especially for attached flows. Accurately predicting transonic shock wave location and pressure recovery in separated flow regions has been more difficult. Two algebraic Reynolds stress models (ASMs) have recently been implemented in the code that greatly improved its ability to predict these difficult flow conditions. Good agreement with Direct Numerical Simulation (DNS) for a subsonic flat plate was achieved with the ASMs developed by Shih, Zhu, and Lumley and by Gatski and Speziale. Good predictions were also achieved at subsonic and transonic Mach numbers for shock location and trailing-edge boattail pressure recovery on a single-engine afterbody/nozzle model.
Observability and synchronization of neuron models.
Aguirre, Luis A; Portes, Leonardo L; Letellier, Christophe
2017-10-01
Observability is the property that enables recovering the state of a dynamical system from a reduced number of measured variables. In high-dimensional systems, it is therefore important to make sure that the variable recorded to perform the analysis conveys good observability of the system dynamics. The observability of a network of neuron models depends nontrivially on the observability of the node dynamics and on the topology of the network. The aim of this paper is twofold. First, to perform a study of observability using four well-known neuron models by computing three different observability coefficients. This not only clarifies observability properties of the models but also shows the limitations of applicability of each type of coefficients in the context of such models. Second, to study the emergence of phase synchronization in networks composed of neuron models. This is done by performing multivariate singular spectrum analysis which, to the best of the authors' knowledge, has not been used in the context of networks of neuron models. It is shown that it is possible to detect phase synchronization: (i) without having to measure all the state variables, but only one (that provides greatest observability) from each node and (ii) without having to estimate the phase.
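For a linear system, one common observability coefficient is the singular-value ratio of the observability matrix; the sketch below illustrates the idea on a toy two-state system (the coefficients used in the paper for nonlinear neuron models are more involved, so this is only the linear analogue):

```python
import numpy as np

def observability_coefficient(A, C):
    """delta = sigma_min / sigma_max of the observability matrix
    O = [C; CA; ...; CA^(n-1)]. Zero means unobservable; values near 1
    mean the measured variable conveys the state well."""
    A, C = np.atleast_2d(A), np.atleast_2d(C)
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    O = np.vstack(blocks)
    s = np.linalg.svd(O, compute_uv=False)  # singular values, descending
    return s[-1] / s[0]

# Toy two-state linear system; compare measuring x1 vs. x2
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
c1 = observability_coefficient(A, [[1.0, 0.0]])  # measuring x1
c2 = observability_coefficient(A, [[0.0, 1.0]])  # measuring x2
print(c1, c2)
```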
NASA Astrophysics Data System (ADS)
Porto, P.; Cogliandro, V.; Callegari, G.
2018-01-01
In this paper, long-term sediment yield data collected in a small (1.38 ha) Calabrian catchment (W2), reafforested with eucalyptus trees (Eucalyptus occidentalis Engl.), are used to validate the performance of the SEdiment Delivery Distributed (SEDD) model in areas with high erosion rates. As a first step, the SEDD model was calibrated using field data collected in previous field campaigns undertaken during the period 1978-1994. This first phase allowed the model calibration parameter β to be calculated using direct measurements of rainfall, runoff, and sediment output. The model was then validated in its calibrated form for an independent period (2006-2016) for which new measurements of rainfall, runoff, and sediment output are also available. The analysis, carried out at the event and annual scales, showed good agreement between measured and predicted values of sediment yield and suggested that the SEDD model can be seen as an appropriate means of evaluating the erosion risk associated with man-made plantations in marginal areas. Further work is, however, required to test the performance of the SEDD model as a prediction tool in different geomorphic contexts.
Sociopolitical and economic elements to explain the environmental performance of countries.
Almeida, Thiago Alexandre das Neves; García-Sánchez, Isabel-María
2017-01-01
The present research explains environmental performance using an ecological composite index as the dependent variable and focusing on two national dimensions: sociopolitical characteristics and economics. Environmental performance is measured using the Composite Index of Environmental Performance (CIEP) indicator proposed by García-Sánchez et al. (2015). The first model performs a factor analysis to aggregate the variables according to each analyzed dimension. In the second model, the estimation is run using only single variables. Both models are estimated using generalized least squares (GLS) on panel data from 152 countries and 6 years. The results show that sociopolitical factors and international trade have a positive effect on environmental performance. When the variables are analyzed separately, democracy and social policy have a positive effect on environmental performance, while transport, infrastructure, consumption of goods, and tourism have a negative effect. A further observation is that the trade-off between importing and exporting countries overshadows the pollution caused by production. It was also observed that infrastructure has a negative coefficient for developing countries and a positive one for developed countries. The best performances are in the democratic and richer countries, which are located in Europe, while the worst environmental performance is by the nondemocratic and poorest countries, which are on the African continent.
Morris, Ralph E; McNally, Dennis E; Tesche, Thomas W; Tonnesen, Gail; Boylan, James W; Brewer, Patricia
2005-11-01
The Visibility Improvement State and Tribal Association of the Southeast (VISTAS) is one of five Regional Planning Organizations that is charged with the management of haze, visibility, and other regional air quality issues in the United States. The VISTAS Phase I work effort modeled three episodes (January 2002, July 1999, and July 2001) to identify the optimal model configuration(s) to be used for the 2002 annual modeling in Phase II. Using model configurations recommended in the Phase I analysis, 2002 annual meteorological (Mesoscale Meteorological Model [MM5]), emissions (Sparse Matrix Operator Kernel Emissions [SMOKE]), and air quality (Community Multiscale Air Quality [CMAQ]) simulations were performed on a 36-km grid covering the continental United States and a 12-km grid covering the Eastern United States. Model estimates were then compared against observations. This paper presents the results of the preliminary CMAQ model performance evaluation for the initial 2002 annual base case simulation. Model performance is presented for the Eastern United States using speciated fine particle concentration and wet deposition measurements from several monitoring networks. Initial results indicate fairly good performance for sulfate with fractional bias values generally within +/-20%. Nitrate is overestimated in the winter by approximately +50% and underestimated in the summer by more than -100%. Organic carbon exhibits a large summer underestimation bias of approximately -100% with much improved performance seen in the winter with a bias near zero. Performance for elemental carbon is reasonable with fractional bias values within +/- 40%. Other fine particulate (soil) and coarse particulate matter exhibit large (80-150%) overestimation in the winter but improved performance in the summer.
The preliminary 2002 CMAQ runs identified several areas of enhancements to improve model performance, including revised temporal allocation factors for ammonia emissions to improve nitrate performance and addressing missing processes in the secondary organic aerosol module to improve OC performance.
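The fractional bias statistic used throughout the evaluation above can be computed as follows; the concentration pairs are hypothetical, not the VISTAS monitoring data:

```python
import numpy as np

def fractional_bias(model, obs):
    """Mean fractional bias in percent:
    FB = (2/N) * sum((M_i - O_i) / (M_i + O_i)) * 100.
    Bounded between -200% and +200% for positive concentrations."""
    m, o = np.asarray(model, float), np.asarray(obs, float)
    return float(np.mean(2.0 * (m - o) / (m + o)) * 100.0)

# Hypothetical sulfate concentrations (ug/m^3): model vs. monitor
model = [3.1, 4.0, 2.2, 5.5]
obs   = [2.9, 4.4, 2.0, 5.0]
print(f"FB = {fractional_bias(model, obs):+.1f}%")
```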
Validation of OpenFoam for heavy gas dispersion applications.
Mack, A; Spruijt, M P N
2013-11-15
In the present paper, heavy gas dispersion calculations were performed with OpenFoam. For a wind tunnel test case, numerical data was validated with experiments. For a full scale numerical experiment, a code to code comparison was performed with numerical results obtained from Fluent. The validation was performed in a gravity driven environment (slope), where the heavy gas induced the turbulence. For the code to code comparison, a hypothetical heavy gas release into a strongly turbulent atmospheric boundary layer including terrain effects was selected. The investigations were performed for SF6 and CO2 as heavy gases applying the standard k-ɛ turbulence model. A strong interaction of the heavy gas with the turbulence is present, which results in a strong damping of the turbulence and therefore reduced heavy gas mixing. Especially this interaction, based on the buoyancy effects, was studied in order to ensure that the turbulence-buoyancy coupling is the main driver for the reduced mixing and not the global behaviour of the turbulence modelling. For both test cases, comparisons were performed between OpenFoam and Fluent solutions, which were mainly in good agreement with each other. Besides steady-state solutions, the time accuracy was investigated. In the low-turbulence environment (wind tunnel test), the laminar solutions of both codes were in good agreement with each other and with the experimental data; the turbulent solutions of OpenFoam were in much better agreement with the experimental results than the Fluent solutions. Within the strong turbulence environment, both codes showed an excellent comparability. Copyright © 2013 Elsevier B.V. All rights reserved.