NASA Astrophysics Data System (ADS)
Tian, Lizhi; Xiong, Zhenhua; Wu, Jianhua; Ding, Han
2017-05-01
Feedforward-feedback control is widely used in motion control of piezoactuator systems. Due to the phase lag caused by incomplete dynamics compensation, the performance of the composite controller is greatly limited at high frequency. This paper proposes a new rate-dependent model to improve high-frequency tracking performance by reducing the dynamics compensation error. The rate-dependent model is designed as a function of the input and the input variation rate to describe the input-output relationship of the residual system dynamics, which mainly manifests as phase lag over a wide frequency band. The direct inversion of the proposed rate-dependent model is then used to compensate the residual system dynamics. Using the proposed rate-dependent model as the feedforward term, open-loop performance can be improved significantly at medium-to-high frequencies. Combined with the feedback controller, the composite controller provides enhanced closed-loop performance from low to high frequencies. At 1 Hz, the proposed controller presents the same performance as previous methods; at 900 Hz, however, the tracking error is reduced to 30.7% of that of the decoupled approach.
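The idea of inverting a rate-dependent model to build the feedforward term can be sketched with a toy first-order model y = a*u + b*du/dt; this is an illustrative stand-in, not the paper's actual model, and the coefficients are made up:

```python
import numpy as np

def rate_dependent_forward(u, dt, a=1.0, b=0.002):
    """Toy rate-dependent plant: y = a*u + b*du/dt (backward-difference rate)."""
    du = np.diff(u, prepend=u[0]) / dt
    return a * u + b * du

def rate_dependent_inverse(y_des, dt, a=1.0, b=0.002):
    """Direct model inversion used as the feedforward term:
    solve a*u[k] + b*(u[k] - u[k-1])/dt = y_des[k] for u[k]."""
    u = np.zeros_like(y_des)
    u[0] = y_des[0] / a
    for k in range(1, len(y_des)):
        u[k] = (y_des[k] + (b / dt) * u[k - 1]) / (a + b / dt)
    return u

dt = 1e-4
t = np.arange(0.0, 0.02, dt)
y_des = np.sin(2 * np.pi * 100.0 * t)      # 100 Hz reference trajectory
u_ff = rate_dependent_inverse(y_des, dt)   # feedforward command
y = rate_dependent_forward(u_ff, dt)       # toy plant response
```

Discretizing the rate the same way in the model and its inverse makes the inversion exact in this sketch; a real piezoactuator model would add hysteresis and higher-order dynamics on top.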
Dependability and performability analysis
NASA Technical Reports Server (NTRS)
Trivedi, Kishor S.; Ciardo, Gianfranco; Malhotra, Manish; Sahner, Robin A.
1993-01-01
Several practical issues regarding the specification and solution of dependability and performability models are discussed. Model types with and without rewards are compared. Continuous-time Markov chains (CTMCs) are compared with (continuous-time) Markov reward models (MRMs), and generalized stochastic Petri nets (GSPNs) are compared with stochastic reward nets (SRNs). It is shown that reward-based models can lead to more concise model specifications and to the solution of a variety of new measures. With respect to the solution of dependability and performability models, three practical issues are identified - largeness, stiffness, and non-exponentiality - and a variety of approaches to deal with them are discussed, including some of the latest research efforts.
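A minimal Markov reward model illustrates why rewards make performability measures concise: attach a performance level to each CTMC state and take the reward-weighted steady-state probabilities. The rates and reward levels below are illustrative assumptions:

```python
import numpy as np

# Markov reward model sketch: a 2-component system, states indexed by the
# number of working components (2, 1, 0). Rates are illustrative assumptions.
lam, mu = 1e-3, 1e-1          # failure and repair rates (per hour)
Q = np.array([[-2 * lam,        2 * lam,  0.0],
              [      mu, -(mu + lam),     lam],
              [     0.0,             mu,  -mu]])
reward = np.array([1.0, 0.5, 0.0])   # performance level attached to each state

# Steady-state probabilities: solve pi @ Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

performability = reward @ pi   # expected steady-state reward rate
```

The same chain without rewards would need a separate measure definition for each performance question; here, changing the `reward` vector is all that changes.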
Semantic concept-enriched dependence model for medical information retrieval.
Choi, Sungbin; Choi, Jinwook; Yoo, Sooyoung; Kim, Heechun; Lee, Youngho
2014-02-01
In medical information retrieval research, semantic resources have been mostly used by expanding the original query terms or estimating the concept importance weight. However, implicit term-dependency information contained in semantic concept terms has been overlooked or at least underused in most previous studies. In this study, we incorporate a semantic concept-based term-dependence feature into a formal retrieval model to improve its ranking performance. Standardized medical concept terms used by medical professionals were assumed to have implicit dependency within the same concept. We hypothesized that, by elaborately revising the ranking algorithms to favor documents that preserve those implicit dependencies, the ranking performance could be improved. The implicit dependence features are harvested from the original query using MetaMap. These semantic concept-based dependence features were incorporated into a semantic concept-enriched dependence model (SCDM). We designed four different variants of the model, with each variant having distinct characteristics in the feature formulation method. We performed leave-one-out cross validations on both a clinical document corpus (TREC Medical records track) and a medical literature corpus (OHSUMED), which are representative test collections in medical information retrieval research. Our semantic concept-enriched dependence model consistently outperformed other state-of-the-art retrieval methods. Analysis shows that the performance gain has occurred independently of the concept's explicit importance in the query. By capturing implicit knowledge with regard to the query term relationships and incorporating it into a ranking model, we could build a more robust and effective retrieval model, independent of the concept importance. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Khan, F.; Pilz, J.; Spöck, G.
2017-12-01
Spatio-temporal dependence structures play a pivotal role in understanding the meteorological characteristics of a basin or sub-basin. They further affect the hydrological conditions and, if not taken into account properly, will lead to misleading results. In this study we modeled the spatial dependence structure between climate variables, including maximum temperature, minimum temperature and precipitation, in the monsoon-dominated region of Pakistan. Six meteorological stations were considered for temperature and four for precipitation. For modelling the dependence structure between temperature and precipitation at multiple sites, we utilized C-Vine, D-Vine and Student t-copula models. Under the copula models, multivariate mixture normal distributions were used as marginals for temperature and gamma distributions for precipitation. A comparison was made between the C-Vine, D-Vine and Student t-copula by examining observed and simulated spatial dependence structures to choose an appropriate model for the climate data. The results show that all copula models performed well; however, there are subtle differences in their performance. The copula models captured the patterns of spatial dependence structure between climate variables at multiple meteorological sites, although the t-copula showed poor performance in reproducing the dependence structure with respect to magnitude. Important statistics of the observed data were closely approximated, except for the maximum values of temperature and the minimum values of minimum temperature. Probability density functions of the simulated data closely follow those of the observed data for all variables. C- and D-Vines are the better tools for modelling the dependence between variables, although the Student t-copula competes closely for precipitation.
Keywords: Copula model, C-Vine, D-Vine, Spatial dependence structure, Monsoon dominated region of Pakistan, Mixture models, EM algorithm.
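The t-copula construction used here can be sketched in three steps: sample from a multivariate t, push each margin through the univariate t CDF to obtain copula (uniform) samples, then apply gamma quantiles as site-wise marginals. Degrees of freedom, correlation and gamma parameters below are illustrative assumptions, not the fitted values:

```python
import numpy as np
from scipy import stats

# Sketch: dependent precipitation at two sites via a Student t-copula with
# gamma marginals (all parameter values are illustrative assumptions).
rng = np.random.default_rng(0)
nu, rho, n = 5, 0.7, 20000
corr = np.array([[1.0, rho], [rho, 1.0]])

# 1) sample the bivariate t, 2) map to uniforms with the univariate t CDF
#    (these are t-copula samples), 3) apply gamma quantile functions.
z = stats.multivariate_t(loc=[0, 0], shape=corr, df=nu).rvs(n, random_state=rng)
u = stats.t(df=nu).cdf(z)
precip = stats.gamma(a=2.0, scale=5.0).ppf(u)   # site-wise gamma marginals

spearman = stats.spearmanr(precip[:, 0], precip[:, 1]).correlation
```

Rank correlation survives the marginal transforms, which is why copulas separate the dependence structure from the choice of marginals.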
Keshavarzi, Sareh; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Pakfetrat, Maryam
2012-01-01
BACKGROUND. In many studies with longitudinal data, time-dependent covariates can only be measured intermittently (not at all observation times), and this presents difficulties for standard statistical analyses. This situation is common in medical studies, and methods that deal with this challenge would be useful. METHODS. In this study, we applied seemingly unrelated regression (SUR)-based models with respect to each observation time in longitudinal data with intermittently observed time-dependent covariates, and compared these models with mixed-effect regression models (MRMs) under three classic imputation procedures. Simulation studies were performed to compare the sample size properties of the estimated coefficients for different modeling choices. RESULTS. In general, the proposed models performed well in the presence of intermittently observed time-dependent covariates. However, when we considered only the observed values of the covariate without any imputation, the resulting biases were greater. The performance of the proposed SUR-based models was nearly identical to that of MRMs using classic imputation methods, with approximately equal amounts of bias and MSE. CONCLUSION. The simulation study suggests that the SUR-based models work as efficiently as MRMs in the case of intermittently observed time-dependent covariates, and can thus be used as an alternative to MRMs.
Review of Methods for Buildings Energy Performance Modelling
NASA Astrophysics Data System (ADS)
Krstić, Hrvoje; Teni, Mihaela
2017-10-01
Research presented in this paper gives a brief review of methods used for modelling the energy performance of buildings. It also gives a comprehensive review of the advantages and disadvantages of the available methods, as well as of the input parameters used for modelling building energy performance. The European EPBD directive obliges the implementation of an energy certification procedure, which gives insight into building energy performance via existing energy certificate databases. Some of the modelling methods mentioned in this paper were developed using data sets of buildings that have already undergone the energy certification procedure. Such a database is used in this paper; the majority of buildings in it have already undergone some form of partial retrofitting - replacement of windows or installation of thermal insulation - but still have poor energy performance. The case study presented here uses an energy certificate database of residential units in Croatia (over 400 buildings) to determine the dependence between building energy performance and the database variables using statistical dependence tests. Building energy performance in the database is expressed as an energy efficiency rating (from A+ to G), based on the specific annual energy need for heating under referential climatic data [kWh/(m2a)]. Independent variables in the database are the surfaces and volume of the conditioned part of the building, the building shape factor, energy used for heating, CO2 emission, building age, and year of reconstruction. The results give insight into the capabilities of the methods used for modelling building energy performance, together with an analysis of the dependencies between building energy performance as the dependent variable and the independent variables from the database.
The presented results could be used to develop a new predictive model of building energy performance.
NASA Astrophysics Data System (ADS)
Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng
2017-06-01
A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, the RC-lap smoothing efficiency performance is poorer than expected, and existing smoothing models cannot explicitly specify the methods to improve this efficiency. We presented an explicit time-dependent smoothing evaluation model that contained specific smoothing parameters directly derived from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we proposed a strategy to improve the RC-lap smoothing efficiency, which incorporated the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were o
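The Preston equation on which the time-dependent smoothing model is built relates removal rate to polishing pressure and relative tool speed; a sketch with assumed values (not the paper's experimental settings) shows why raising the tool-speed limit directly raises smoothing efficiency:

```python
# Preston-equation sketch: removal depth from pressure, relative speed, and
# dwell time. The coefficient and operating values are assumptions chosen
# only to illustrate the scaling, not measured RC-lap parameters.
def removal_depth(k_p, pressure_pa, speed_m_s, dwell_s):
    """dz/dt = k_p * P * V  ->  depth = k_p * P * V * t for constant P, V."""
    return k_p * pressure_pa * speed_m_s * dwell_s

# Doubling tool speed doubles removal in the same dwell time, so a higher
# limiting tool speed shortens the time needed to smooth MSF errors.
d1 = removal_depth(k_p=1e-13, pressure_pa=5e3, speed_m_s=1.0, dwell_s=60.0)
d2 = removal_depth(k_p=1e-13, pressure_pa=5e3, speed_m_s=2.0, dwell_s=60.0)
```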
ERIC Educational Resources Information Center
Angeli, Charoula; Valanides, Nicos; Polemitou, Eirini; Fraggoulidou, Elena
2014-01-01
The study examined the interaction between field dependence-independence (FD/I) and learning with modeling software and simulations, and their effect on children's performance. Participants were randomly assigned into two groups. Group A first learned with a modeling tool and then with simulations. Group B learned first with simulations and then…
Time-Dependent Testing Evaluation and Modeling for Rubber Stopper Seal Performance.
Zeng, Qingyu; Zhao, Xia
2018-01-01
Sufficient rubber stopper sealing performance throughout the entire sealed product life cycle is essential for maintaining container closure integrity in the parenteral packaging industry. However, prior publications have lacked systematic considerations for the time-dependent influence on sealing performance that results from the viscoelastic characteristics of the rubber stoppers. In this paper, we report results of an effort to study these effects by applying both compression stress relaxation testing and residual seal force testing for time-dependent experimental data collection. These experiments were followed by modeling fit calculations based on the Maxwell-Wiechert theory modified with the Kohlrausch-Williams-Watts stretched exponential function, resulting in a nonlinear, time-dependent sealing force model. By employing both testing evaluations and modeling calculations, an in-depth understanding of the time-dependent effects on rubber stopper sealing force was developed. Both testing and modeling data show good consistency, demonstrating that the sealing force decays exponentially over time and eventually levels off because of the viscoelastic nature of the rubber stoppers. The nonlinearity of stress relaxation derives from the viscoelastic characteristics of the rubber stoppers coupled with the large stopper compression deformation into restrained geometry conditions. The modeling fit with capability to handle actual testing data can be employed as a tool to calculate the compression stress relaxation and residual seal force throughout the entire sealed product life cycle. In addition to being time-dependent, stress relaxation is also experimentally shown to be temperature-dependent. 
The present work provides a new, integrated methodology framework and some fresh insights to the parenteral packaging industry for practically and proactively considering, designing, setting up, controlling, and managing stopper sealing performance throughout the entire sealed product life cycle. LAY ABSTRACT: Historical publications in the parenteral packaging industry have lacked systematic considerations for the time-dependent influence on sealing performance that results from the viscoelastic characteristics of the rubber stoppers. This study applied compression stress relaxation testing and residual seal force testing for time-dependent experimental data collection. These experiments were followed by modeling fit calculations based on the Maxwell-Wiechert theory modified with the Kohlrausch-Williams-Watts stretched exponential function, resulting in a nonlinear, time-dependent sealing force model. Experimental and modeling data show good consistency, demonstrating that sealing force decays exponentially over time and eventually levels off. The nonlinearity of stress relaxation derives from the viscoelastic characteristics of the rubber stoppers coupled with the large stopper compression deformation into restrained geometry conditions. In addition to being time-dependent, stress relaxation is also experimentally shown to be temperature-dependent. The present work provides a new, integrated methodology framework and some fresh insights to the industry for practically and proactively considering, designing, setting up, controlling, and managing stopper sealing performance throughout the entire sealed product life cycle. © PDA, Inc. 2018.
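The Kohlrausch-Williams-Watts (stretched-exponential) relaxation described above has a simple closed form, F(t) = F_inf + (F0 - F_inf) * exp(-(t/tau)^beta); a sketch with assumed (not fitted) parameters reproduces the qualitative behavior of a sealing force that decays and then levels off:

```python
import numpy as np

# KWW stretched-exponential relaxation sketch for residual seal force.
# Parameter values are illustrative assumptions, not fitted stopper data.
def seal_force(t_h, f0=80.0, f_inf=30.0, tau=200.0, beta=0.5):
    """F(t) = F_inf + (F0 - F_inf) * exp(-(t/tau)**beta), t in hours."""
    t_h = np.asarray(t_h, dtype=float)
    return f_inf + (f0 - f_inf) * np.exp(-(t_h / tau) ** beta)

t = np.array([0.0, 10.0, 100.0, 1000.0, 10000.0])
f = seal_force(t)   # monotone decay toward the plateau f_inf
```

The stretching exponent beta < 1 is what distinguishes this from a single Maxwell element: the decay is fast at first and then much slower, matching the reported leveling-off.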
Spatial frequency dependence of target signature for infrared performance modeling
NASA Astrophysics Data System (ADS)
Du Bosq, Todd; Olson, Jeffrey
2011-05-01
The standard model used to describe the performance of infrared imagers is the U.S. Army imaging system target acquisition model, based on the targeting task performance metric. The model is characterized by the resolution and sensitivity of the sensor as well as the contrast and task difficulty of the target set. The contrast of the target is defined as a spatial average contrast. The model treats the contrast of the target set as spatially white, or constant, over the bandlimit of the sensor. Previous experiments have shown that this assumption is valid under normal conditions and typical target sets. However, outside of these conditions, the treatment of target signature can become the limiting factor affecting model performance accuracy. This paper examines target signature more carefully. The spatial frequency dependence of the standard U.S. Army RDECOM CERDEC Night Vision 12 and 8 tracked vehicle target sets is described. The results of human perception experiments are modeled and evaluated using both frequency-dependent and frequency-independent target signature definitions. Finally, the function of task difficulty and its relationship to a target set is discussed.
Dose-dependent model of caffeine effects on human vigilance during total sleep deprivation.
Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Wesensten, Nancy J; Kamimori, Gary H; Balkin, Thomas J; Reifman, Jaques
2014-10-07
Caffeine is the most widely consumed stimulant to counter sleep-loss effects. While the pharmacokinetics of caffeine in the body is well-understood, its alertness-restoring effects are still not well characterized. In fact, mathematical models capable of predicting the effects of varying doses of caffeine on objective measures of vigilance are not available. In this paper, we describe a phenomenological model of the dose-dependent effects of caffeine on psychomotor vigilance task (PVT) performance of sleep-deprived subjects. We used the two-process model of sleep regulation to quantify performance during sleep loss in the absence of caffeine and a dose-dependent multiplier factor derived from the Hill equation to model the effects of single and repeated caffeine doses. We developed and validated the model fits and predictions on PVT lapse (number of reaction times exceeding 500 ms) data from two separate laboratory studies. At the population-average level, the model captured the effects of a range of caffeine doses (50-300 mg), yielding up to a 90% improvement over the two-process model. Individual-specific caffeine models, on average, predicted the effects up to 23% better than population-average caffeine models. The proposed model serves as a useful tool for predicting the dose-dependent effects of caffeine on the PVT performance of sleep-deprived subjects and, therefore, can be used for determining caffeine doses that optimize the timing and duration of peak performance. Published by Elsevier Ltd.
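One plausible shape for the dose-dependent multiplier is a one-compartment caffeine concentration profile feeding a Hill-equation factor that scales the predicted lapses; all parameter values below (ka, ke, c50, n) are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

# Sketch: dose-dependent caffeine multiplier on predicted PVT lapses.
def caffeine_conc(t_h, dose_mg, ka=8.0, ke=0.14):
    """Oral one-compartment concentration profile (arbitrary units)."""
    return dose_mg * ka / (ka - ke) * (np.exp(-ke * t_h) - np.exp(-ka * t_h))

def lapse_multiplier(conc, c50=100.0, n=1.0):
    """Hill-equation factor in (0, 1]: higher concentration -> fewer lapses."""
    return 1.0 / (1.0 + (conc / c50) ** n)

t = np.linspace(0.0, 12.0, 121)
lapses_baseline = 10.0 + 0.5 * t     # stand-in for the two-process model output
lapses_200mg = lapses_baseline * lapse_multiplier(caffeine_conc(t, 200.0))
```

Because the multiplier returns to 1 as the concentration decays, the prediction relaxes back to the two-process baseline, which is the behavior needed to time repeated doses for sustained performance.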
DOT National Transportation Integrated Search
2008-04-01
The objective of this research is to develop alternative time-dependent travel demand models of hurricane evacuation travel and to compare the performance of these models with each other and with the state-of-the-practice models in current use. Speci...
Measures of GCM Performance as Functions of Model Parameters Affecting Clouds and Radiation
NASA Astrophysics Data System (ADS)
Jackson, C.; Mu, Q.; Sen, M.; Stoffa, P.
2002-05-01
This abstract is one of three related presentations at this meeting dealing with several issues surrounding optimal parameter and uncertainty estimation of model predictions of climate. Uncertainty in model predictions of climate depends in part on the uncertainty produced by model approximations or parameterizations of unresolved physics. Evaluating these uncertainties is computationally expensive because one needs to evaluate how arbitrary choices for any given combination of model parameters affect model performance. Because the computational effort grows exponentially with the number of parameters being investigated, it is important to choose parameters carefully. Evaluating whether a parameter is worth investigating depends on two considerations: 1) do reasonable choices of parameter values produce a large range in model response relative to observational uncertainty, and 2) does the model response depend non-linearly on various combinations of model parameters? We have decided to narrow our attention to parameters that affect clouds and radiation, as it is likely that these parameters will dominate uncertainties in model predictions of future climate. We present preliminary results of ~20 to 30 AMIP II-style climate model integrations using NCAR's CCM3.10 that show model performance as functions of individual parameters controlling 1) the critical relative humidity for cloud formation (RHMIN), and 2) the boundary layer critical Richardson number (RICR). We also explore various definitions of model performance that include some or all observational data sources (surface air temperature and pressure, meridional and zonal winds, clouds, long- and short-wave cloud forcings, etc.) and evaluate in a few select cases whether the model's response depends non-linearly on the parameter values we have selected.
NASA Astrophysics Data System (ADS)
Kajiwara, Itsuro; Furuya, Keiichiro; Ishizuka, Shinichi
2018-07-01
Model-based controllers with adaptive design variables are often used to control an object with time-dependent characteristics. However, the controller's performance is influenced by many factors such as modeling accuracy and fluctuations in the object's characteristics. One method to overcome these negative factors is to tune model-based controllers. Herein we propose an online tuning method to maintain control performance for an object that exhibits time-dependent variations. The proposed method employs the poles of the controller as design variables because the poles significantly impact performance. Specifically, we use the simultaneous perturbation stochastic approximation (SPSA) to optimize a model-based controller with multiple design variables. Moreover, a vibration control experiment of an object with time-dependent characteristics as the temperature is varied demonstrates that the proposed method allows adaptive control and stably maintains the closed-loop characteristics.
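SPSA itself is compact: two cost evaluations per iteration estimate the full gradient from one random simultaneous perturbation, regardless of how many design variables (here, controller poles) are tuned. A minimal sketch on a stand-in quadratic cost, with textbook-style gain sequences as assumptions:

```python
import numpy as np

def spsa_minimize(cost, theta, iters=500, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, seed=0):
    """Minimal SPSA loop: Rademacher perturbations, decaying gains."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta, dtype=float).copy()
    for k in range(1, iters + 1):
        ak, ck = a / k**alpha, c / k**gamma
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # Two cost evaluations estimate the whole gradient vector.
        diff = cost(theta + ck * delta) - cost(theta - ck * delta)
        ghat = diff / (2.0 * ck) * (1.0 / delta)
        theta -= ak * ghat
    return theta

target = np.array([-2.0, -5.0])            # "unknown" best pole placement
cost = lambda th: np.sum((th - target) ** 2)
theta_opt = spsa_minimize(cost, theta=[-1.0, -1.0])
```

In the online-tuning setting, `cost` would be replaced by a measured control performance index, which is exactly why a two-evaluation gradient estimate is attractive.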
NASA Astrophysics Data System (ADS)
Šarolić, A.; Živković, Z.; Reilly, J. P.
2016-06-01
The electrostimulation excitation threshold of a nerve depends on temporal and frequency parameters of the stimulus. These dependences were investigated in terms of: (1) strength-duration (SD) curve for a single monophasic rectangular pulse, and (2) frequency dependence of the excitation threshold for a continuous sinusoidal current. Experiments were performed on the single-axon measurement setup based on Lumbricus terrestris having unmyelinated nerve fibers. The simulations were performed using the well-established SENN model for a myelinated nerve. Although the unmyelinated experimental model differs from the myelinated simulation model, both refer to a single axon. Thus we hypothesized that the dependence on temporal and frequency parameters should be very similar. The comparison was made possible by normalizing each set of results to the SD time constant and the rheobase current of each model, yielding the curves that show the temporal and frequency dependencies regardless of the model differences. The results reasonably agree, suggesting that this experimental setup and method of comparison with SENN model can be used for further studies of waveform effect on nerve excitability, including unmyelinated neurons.
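The classical strength-duration relation can be sketched with Lapicque's formula; normalizing pulse width by the SD time constant and current by the rheobase is exactly the collapse used above to compare the unmyelinated experiment with the SENN simulation. Values below are illustrative:

```python
import numpy as np

# Strength-duration sketch using Lapicque's formula (illustrative values).
def lapicque_threshold(pulse_width, rheobase=1.0, tau=1.0):
    """I_th(t) = I_rh / (1 - exp(-t / tau)); width and tau in the same units."""
    t = np.asarray(pulse_width, dtype=float)
    return rheobase / (1.0 - np.exp(-t / tau))

# In normalized units (width / tau, current / rheobase) every such model
# traces the same curve, so different preparations can be overlaid.
widths = np.array([0.1, 0.5, 1.0, 5.0])
thresholds = lapicque_threshold(widths)
```

A quick check of the normalization: at a pulse width of tau*ln(2) the threshold is exactly twice the rheobase, i.e. the chronaxie.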
NASA Astrophysics Data System (ADS)
Chen, Dar-Hsin; Chou, Heng-Chih; Wang, David; Zaabar, Rim
2011-06-01
Most empirical research of the path-dependent, exotic-option credit risk model focuses on developed markets. Taking Taiwan as an example, this study investigates the bankruptcy prediction performance of the path-dependent, barrier option model in the emerging market. We adopt Duan's (1994) [11], (2000) [12] transformed-data maximum likelihood estimation (MLE) method to directly estimate the unobserved model parameters, and compare the predictive ability of the barrier option model to the commonly adopted credit risk model, Merton's model. Our empirical findings show that the barrier option model is more powerful than Merton's model in predicting bankruptcy in the emerging market. Moreover, we find that the barrier option model predicts bankruptcy much better for highly-leveraged firms. Finally, our findings indicate that the prediction accuracy of the credit risk model can be improved by higher asset liquidity and greater financial transparency.
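For reference, the Merton-model default probability that the barrier option model is benchmarked against reduces to a distance-to-default calculation; inputs below are illustrative, and the barrier extension would additionally count paths that hit the default barrier before maturity:

```python
import numpy as np
from scipy.stats import norm

# Merton-model sketch: default probability from asset value V, debt face
# value D, asset volatility sigma, horizon T, and risk-free rate r.
def merton_pd(V, D, sigma, T, r=0.02):
    d2 = (np.log(V / D) + (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return norm.cdf(-d2)        # P(V_T < D) under the risk-neutral drift

pd_low_lev = merton_pd(V=150.0, D=50.0, sigma=0.3, T=1.0)
pd_high_lev = merton_pd(V=150.0, D=120.0, sigma=0.3, T=1.0)
```

The steep rise of the default probability with leverage is consistent with the finding that the barrier option model adds the most predictive power for highly-leveraged firms.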
An Introduction to Markov Modeling: Concepts and Uses
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Lau, Sonie (Technical Monitor)
1998-01-01
Markov modeling is a modeling technique that is widely useful for dependability analysis of complex fault-tolerant systems. It is very flexible in the types of systems and system behavior it can model. It is not, however, the most appropriate modeling technique for every modeling situation. The first task in obtaining a reliability or availability estimate for a system is selecting the modeling technique most appropriate to the situation at hand. A person performing a dependability analysis must confront the question: is Markov modeling most appropriate to the system under consideration, or should another technique be used instead? The need to answer this gives rise to other, more basic questions regarding Markov modeling: what are the capabilities and limitations of Markov modeling as a modeling technique? How does it relate to other modeling techniques? What kinds of system behavior can it model? What kinds of software tools are available for performing dependability analyses with Markov modeling techniques? These questions and others will be addressed in this tutorial.
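The simplest dependability Markov model - one component with constant failure and repair rates - already shows the typical workflow: build the generator matrix, get transient availability from the matrix exponential, and compare with the closed-form steady state. The rates below are illustrative:

```python
import numpy as np
from scipy.linalg import expm

# Two-state availability model: state 0 = up, state 1 = down.
lam, mu = 0.01, 1.0                       # failure and repair rates (per hour)
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])              # CTMC generator matrix

p0 = np.array([1.0, 0.0])                 # start in the up state
A_24h = (p0 @ expm(Q * 24.0))[0]          # transient availability at t = 24 h
A_ss = mu / (lam + mu)                    # closed-form steady-state availability
```

Real fault-tolerant systems expand this to many states (coverage failures, shared repair, and so on), which is where the tool support discussed in the tutorial becomes essential.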
Sun, Fuqiang; Liu, Le; Li, Xiaoyang; Liao, Haitao
2016-01-01
Accelerated degradation testing (ADT) is an efficient technique for evaluating the lifetime of a highly reliable product whose underlying failure process may be traced by the degradation of the product’s performance parameters with time. However, most research on ADT mainly focuses on a single performance parameter. In reality, the performance of a modern product is usually characterized by multiple parameters, and the degradation paths are usually nonlinear. To address such problems, this paper develops a new s-dependent nonlinear ADT model for products with multiple performance parameters using a general Wiener process and copulas. The general Wiener process models the nonlinear ADT data, and the dependency among different degradation measures is analyzed using the copula method. An engineering case study on a tuner’s ADT data is conducted to demonstrate the effectiveness of the proposed method. The results illustrate that the proposed method is quite effective in estimating the lifetime of a product with s-dependent performance parameters. PMID:27509499
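The backbone of such a model can be sketched by simulating two Wiener degradation paths whose increments are coupled through a Gaussian copula; the paper's general Wiener process and copula family are richer, and the parameters below are assumptions:

```python
import numpy as np

# Two s-dependent Wiener degradation paths with copula-coupled increments.
rng = np.random.default_rng(1)
n_steps, dt, rho = 1000, 0.1, 0.8
drift = np.array([0.5, 0.3])              # per-parameter drift rates (assumed)
diffusion = np.array([0.2, 0.1])          # per-parameter diffusion (assumed)

# Correlated standard normals via the copula's correlation matrix.
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
z = rng.standard_normal((n_steps, 2)) @ L.T
increments = drift * dt + diffusion * np.sqrt(dt) * z
paths = np.cumsum(increments, axis=0)     # degradation of the two parameters

corr = np.corrcoef(increments.T)[0, 1]    # recovered dependence level
```

Ignoring the dependence (rho = 0) would understate the probability that both parameters cross their failure thresholds together, which is why the copula term matters for lifetime estimation.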
Bayesian semiparametric estimation of covariate-dependent ROC curves
Rodríguez, Abel; Martínez, Julissa C.
2014-01-01
Receiver operating characteristic (ROC) curves are widely used to measure the discriminating power of medical tests and other classification procedures. In many practical applications, the performance of these procedures can depend on covariates such as age, naturally leading to a collection of curves associated with different covariate levels. This paper develops a Bayesian heteroscedastic semiparametric regression model and applies it to the estimation of covariate-dependent ROC curves. More specifically, our approach uses Gaussian process priors to model the conditional mean and conditional variance of the biomarker of interest for each of the populations under study. The model is illustrated through an application to the evaluation of prostate-specific antigen for the diagnosis of prostate cancer, which contrasts the performance of our model against alternative models. PMID:24174579
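A covariate-dependent ROC can be sketched under a binormal model in which the class-conditional means and variances vary with the covariate; here the paper's Gaussian-process priors are replaced by simple assumed linear trends in age:

```python
import numpy as np
from scipy.stats import norm

# Binormal covariate-dependent ROC sketch (trend coefficients are assumptions).
def roc_curve(age, u):
    mu0, sd0 = 1.0 + 0.01 * age, 1.0          # healthy biomarker distribution
    mu1, sd1 = 2.0 + 0.03 * age, 1.2          # diseased biomarker distribution
    a = (mu1 - mu0) / sd1
    b = sd0 / sd1
    return norm.cdf(a + b * norm.ppf(u))       # ROC(u) = Phi(a + b*Phi^{-1}(u))

def auc(age, n=999):
    """Trapezoidal AUC of the covariate-specific ROC curve."""
    u = np.linspace(1e-4, 1.0 - 1e-4, n)
    r = roc_curve(age, u)
    return float(np.sum((r[1:] + r[:-1]) / 2.0 * np.diff(u)))

auc_40, auc_70 = auc(40), auc(70)   # discrimination at two covariate levels
```

The Gaussian-process version estimates the mean and variance functions nonparametrically instead of assuming linear trends, but the ROC construction from the two conditional distributions is the same.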
NASA Astrophysics Data System (ADS)
Nie, Shida; Zhuang, Ye; Wang, Yong; Guo, Konghui
2018-01-01
The performance of a velocity- and displacement-dependent damper (VDD), inspired by semi-active control, is analyzed. The damping properties of passive, displacement-dependent, and semi-active dampers are first compared. The valve assemblies of the VDD are modelled to give insight into its working principle. The mechanical structure, composed of four valve assemblies, enables the VDD to approach the performance of semi-active dampers. The valve structure parameters are determined by the suggested two-step process. A hydraulic model of the damper is built with AMESim. The simulated F-V curves, similar to those of a semi-active damper, demonstrate that the VDD can achieve comparable performance. The performance of a quarter-vehicle model employing the VDD is analyzed and compared with a semi-active suspension. Simulation results show that the VDD performs as well as a semi-active damper. In addition, no add-on hardware or energy consumption is needed for the VDD to achieve this performance.
McCauley, Peter; Kalachev, Leonid V; Mollicone, Daniel J; Banks, Siobhan; Dinges, David F; Van Dongen, Hans P A
2013-12-01
Recent experimental observations and theoretical advances have indicated that the homeostatic equilibrium for sleep/wake regulation--and thereby sensitivity to neurobehavioral impairment from sleep loss--is modulated by prior sleep/wake history. This phenomenon was predicted by a biomathematical model developed to explain changes in neurobehavioral performance across days in laboratory studies of total sleep deprivation and sustained sleep restriction. The present paper focuses on the dynamics of neurobehavioral performance within days in this biomathematical model of fatigue. Without increasing the number of model parameters, the model was updated by incorporating time-dependence in the amplitude of the circadian modulation of performance. The updated model was calibrated using a large dataset from three laboratory experiments on psychomotor vigilance test (PVT) performance, under conditions of sleep loss and circadian misalignment; and validated using another large dataset from three different laboratory experiments. The time-dependence of circadian amplitude resulted in improved goodness-of-fit in night shift schedules, nap sleep scenarios, and recovery from prior sleep loss. The updated model predicts that the homeostatic equilibrium for sleep/wake regulation--and thus sensitivity to sleep loss--depends not only on the duration but also on the circadian timing of prior sleep. This novel theoretical insight has important implications for predicting operator alertness during work schedules involving circadian misalignment such as night shift work.
Oscillating in synchrony with a metronome: serial dependence, limit cycle dynamics, and modeling.
Torre, Kjerstin; Balasubramaniam, Ramesh; Delignières, Didier
2010-07-01
We analyzed serial dependencies in periods and asynchronies collected during oscillations performed in synchrony with a metronome. Results showed that asynchronies contain 1/f fluctuations, and the series of periods contain antipersistent dependence. The analysis of the phase portrait revealed a specific asymmetry induced by synchronization. We propose a hybrid limit cycle model including a cycle-dependent stiffness parameter provided with fractal properties, and a parametric driving function based on velocity. This model accounts for most experimentally evidenced statistical features, including serial dependence and limit cycle dynamics. We discuss the results and modeling choices within the framework of event-based and emergent timing.
Modeling the Direct and Indirect Determinants of Different Types of Individual Job Performance
2008-06-01
cognitions, and self-regulation). A different model was found to describe the process depending on whether the performance dimension was an element of...performing the behaviors they indicated they intended to perform, and assembled a battery of existing instruments to measure cognitive ability, personality...model came from the task performance dimension. For this dimension, knowledge, skill, cognitive choice aspects of motivation, and self-regulation
A computational approach to compare regression modelling strategies in prediction research.
Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H
2016-08-25
It is often unclear which approach to fit, assess and adjust a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9% to 94.9%, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
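The a priori strategy-comparison idea can be illustrated with a small sketch: two modelling strategies (plain maximum likelihood versus ridge-style shrinkage) are each fitted on a small development set and compared by held-out log-likelihood, in the spirit of the wrapper approach. The data generator, parameter values, and the simple gradient-descent fitter below are illustrative assumptions, not the paper's actual framework.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, l2=0.0, lr=0.1, epochs=500):
    """Fit logistic regression by batch gradient descent; l2 > 0 adds ridge shrinkage."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * p
        gb = 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(b + sum(wj * xj for wj, xj in zip(w, xi))) - yi
            for j in range(p):
                gw[j] += err * xi[j]
            gb += err
        for j in range(p):
            w[j] -= lr * (gw[j] / n + l2 * w[j])  # penalty term shrinks coefficients
        b -= lr * gb / n
    return w, b

def held_out_loglik(w, b, X, y):
    """Validation log-likelihood: the comparison criterion for the strategies."""
    ll = 0.0
    for xi, yi in zip(X, y):
        p1 = sigmoid(b + sum(wj * xj for wj, xj in zip(w, xi)))
        p1 = min(max(p1, 1e-12), 1 - 1e-12)
        ll += yi * math.log(p1) + (1 - yi) * math.log(1 - p1)
    return ll

random.seed(0)

def make_data(n):
    X = [[random.gauss(0, 1) for _ in range(5)] for _ in range(n)]
    # only the first two predictors carry signal; the rest invite overfitting
    y = [1 if sigmoid(1.5 * x[0] - 1.0 * x[1]) > random.random() else 0 for x in X]
    return X, y

X_dev, y_dev = make_data(60)      # small development set
X_val, y_val = make_data(2000)    # large validation set

for name, l2 in [("no shrinkage", 0.0), ("ridge", 0.5)]:
    w, b = fit_logistic(X_dev, y_dev, l2=l2)
    print(name, round(held_out_loglik(w, b, X_val, y_val), 1))
```

Which strategy wins depends on the development data, which is the point of comparing strategies a priori rather than picking one by convention.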
3D inelastic analysis methods for hot section components
NASA Technical Reports Server (NTRS)
Dame, L. T.; Chen, P. C.; Hartle, M. S.; Huang, H. T.
1985-01-01
The objective is to develop analytical tools capable of economically evaluating the cyclic time-dependent plasticity which occurs in hot section engine components in areas of strain concentration resulting from the combination of both mechanical and thermal stresses. Three models were developed. A simple model performs time-dependent inelastic analysis using the power law creep equation. The second model is the classical model of Professors Walter Haisler and David Allen of Texas A&M University. The third model is the unified model of Bodner, Partom, et al. All models were customized for linear variation of loads and temperatures with all material properties and constitutive models being temperature dependent.
McCauley, Peter; Kalachev, Leonid V.; Mollicone, Daniel J.; Banks, Siobhan; Dinges, David F.; Van Dongen, Hans P. A.
2013-01-01
Recent experimental observations and theoretical advances have indicated that the homeostatic equilibrium for sleep/wake regulation—and thereby sensitivity to neurobehavioral impairment from sleep loss—is modulated by prior sleep/wake history. This phenomenon was predicted by a biomathematical model developed to explain changes in neurobehavioral performance across days in laboratory studies of total sleep deprivation and sustained sleep restriction. The present paper focuses on the dynamics of neurobehavioral performance within days in this biomathematical model of fatigue. Without increasing the number of model parameters, the model was updated by incorporating time-dependence in the amplitude of the circadian modulation of performance. The updated model was calibrated using a large dataset from three laboratory experiments on psychomotor vigilance test (PVT) performance, under conditions of sleep loss and circadian misalignment; and validated using another large dataset from three different laboratory experiments. The time-dependence of circadian amplitude resulted in improved goodness-of-fit in night shift schedules, nap sleep scenarios, and recovery from prior sleep loss. The updated model predicts that the homeostatic equilibrium for sleep/wake regulation—and thus sensitivity to sleep loss—depends not only on the duration but also on the circadian timing of prior sleep. This novel theoretical insight has important implications for predicting operator alertness during work schedules involving circadian misalignment such as night shift work. Citation: McCauley P; Kalachev LV; Mollicone DJ; Banks S; Dinges DF; Van Dongen HPA. Dynamic circadian modulation in a biomathematical model for the effects of sleep and sleep loss on waking neurobehavioral performance. SLEEP 2013;36(12):1987-1997. PMID:24293775
Modeling the data management system of Space Station Freedom with DEPEND
NASA Technical Reports Server (NTRS)
Olson, Daniel P.; Iyer, Ravishankar K.; Boyd, Mark A.
1993-01-01
Some of the features and capabilities of the DEPEND simulation-based modeling tool are described. A study of a 1553B local bus subsystem of the Space Station Freedom Data Management System (SSF DMS) is used to illustrate some types of system behavior that can be important to reliability and performance evaluations of this type of spacecraft. A DEPEND model of the subsystem is used to illustrate how these types of system behavior can be modeled, and shows what kinds of engineering and design questions can be answered through the use of these modeling techniques. DEPEND's process-based simulation environment is shown to provide a flexible method for modeling complex interactions between hardware and software elements of a fault-tolerant computing system.
Analysis of Solar Cell Efficiency for Venus Atmosphere and Surface Missions
NASA Technical Reports Server (NTRS)
Landis, Geoffrey A.; Haag, Emily
2013-01-01
A simplified model of solar power in the Venus environment is developed, in which the solar intensity, solar spectrum, and temperature as a function of altitude are applied to a model of photovoltaic performance, incorporating the temperature and intensity dependence of the open-circuit voltage and the temperature dependence of the bandgap and spectral response of the cell. We use this model to estimate the performance of solar cells both at the surface of Venus and for atmospheric probes at altitudes from the surface up to 60 km. The model shows that photovoltaic cells will produce power even at the surface of Venus.
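A toy version of this kind of estimate might combine a linear temperature coefficient for the open-circuit voltage with its logarithmic dependence on illumination intensity. The reference voltage (a high-bandgap, GaInP-like placeholder), temperature coefficient, and the Venus-like condition pairs below are illustrative assumptions, not the parameters of the NASA model.

```python
import math

K_B_OVER_Q = 8.617e-5  # Boltzmann constant over electron charge, V/K

def open_circuit_voltage(T_kelvin, intensity, *,
                         voc_ref=1.40, t_ref=298.15, g_ref=1366.0,
                         dvoc_dt=-2.0e-3, ideality=1.0):
    """Crude Voc estimate: a linear temperature coefficient plus the
    logarithmic dependence of Voc on illumination intensity.
    voc_ref and dvoc_dt are placeholder values for a high-bandgap cell."""
    v = voc_ref + dvoc_dt * (T_kelvin - t_ref)
    v += ideality * K_B_OVER_Q * T_kelvin * math.log(intensity / g_ref)
    return max(v, 0.0)

# Venus-like illustration (placeholder numbers, not mission values):
# hot, dim conditions near the surface vs. cooler, brighter at altitude.
for label, T, G in [("surface", 735.0, 30.0), ("55 km", 300.0, 600.0)]:
    print(label, round(open_circuit_voltage(T, G), 3))
```

Even with the heavy temperature and intensity penalties at the surface, a sufficiently high-bandgap cell retains a positive Voc in this sketch, which is qualitatively the abstract's conclusion.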
Gobas, Frank A P C; Lai, Hao-Feng; Mackay, Donald; Padilla, Lauren E; Goetz, Andy; Jackson, Scott H
2018-10-15
A time-dependent environmental fate and food-web bioaccumulation model is developed to improve the evaluation of the behaviour of non-ionic hydrophobic organic pesticides in farm ponds. The performance of the model was tested by simulating the behaviour of 3 hydrophobic organic pesticides, i.e., metaflumizone (CAS Number: 139968-49-3), kresoxim-methyl (CAS Number: 144167-04-4) and pyraclostrobin (CAS Number: 175013-18-0), in microcosm studies and a Bluegill bioconcentration study for metaflumizone. In general, model-calculated concentrations of the pesticides were in reasonable agreement with the observed concentrations. Also, calculated bioaccumulation metrics were in good agreement with observed values. The model's application to simulate concentrations of organic pesticides in water, sediment and biota of farm ponds after episodic pesticide applications is illustrated. It is further shown that the time-dependent model has substantially better accuracy in simulating the concentrations of pesticides in farm ponds resulting from episodic pesticide application than corresponding steady-state models. The time-dependent model is particularly useful in describing the behaviour of highly hydrophobic pesticides that have a potential to biomagnify in aquatic food-webs. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Dehghan Banadaki, Arash
Predicting the ultimate performance of asphalt concrete under realistic loading conditions is key to developing better-performing materials, designing long-lasting pavements, and performing reliable lifecycle analysis for pavements. The fatigue performance of asphalt concrete depends on the mechanical properties of the constituent materials, namely asphalt binder and aggregate. This link between performance and mechanical properties is extremely complex, and experimental techniques are often used to characterize the performance of hot mix asphalt. However, given the seemingly countless number of mixture designs and loading conditions, it is simply not economical to understand and characterize the material behavior solely by experimentation. It is well known that analytical and computational modeling methods can be combined with experimental techniques to reduce the costs associated with understanding and characterizing the mechanical behavior of the constituent materials. This study aims to develop a multiscale micromechanical lattice-based model to predict cracking in asphalt concrete using component material properties. The proposed algorithm, while capturing different phenomena at different scales, also minimizes the need for laboratory experiments. The developed methodology builds on a previously developed lattice model and the viscoelastic continuum damage model to link the component material properties to the mixture fatigue performance. The resulting lattice model is applied to predict the dynamic modulus mastercurves at different scales. A framework for capturing the so-called structuralization effects is introduced that significantly improves the accuracy of the modulus prediction. Furthermore, air voids are added to the model to help capture this important micromechanical feature, which affects the fatigue performance of asphalt concrete as well as the modulus value.
The effects of rate dependency are captured by implementing the viscoelastic fracture criterion. In the end, an efficient cyclic loading framework is developed to evaluate the damage accumulation in the material that is caused by long-sustained cyclic loads.
Specifying and Refining a Measurement Model for a Computer-Based Interactive Assessment
ERIC Educational Resources Information Center
Levy, Roy; Mislevy, Robert J.
2004-01-01
The challenges of modeling students' performance in computer-based interactive assessments include accounting for multiple aspects of knowledge and skill that arise in different situations and the conditional dependencies among multiple aspects of performance. This article describes a Bayesian approach to modeling and estimating cognitive models…
Deriving the polarization behavior of many-layer mirror coatings
NASA Astrophysics Data System (ADS)
White, Amanda J.; Harrington, David M.; Sueoka, Stacey R.
2018-06-01
End-to-end models of astronomical instrument performance are becoming commonplace to demonstrate feasibility and guarantee performance at large observatories. Astronomical techniques like adaptive optics and high contrast imaging have made great strides towards making detailed performance predictions, however, for polarimetric techniques, fundamental tools for predicting performance do not exist. One big missing piece is predicting the wavelength and field of view dependence of a many-mirror articulated optical system particularly with complex protected metal coatings. Predicting polarization performance of instruments requires combining metrology of mirror coatings, tools to create mirror coating models, and optical modeling software for polarized beam propagation. The inability to predict instrument induced polarization or to define polarization performance expectations has far reaching implications for upcoming major observatories, such as the Daniel K. Inouye Solar Telescope (DKIST), that aim to take polarization measurements at unprecedented sensitivity and resolution. Here we present a method for modelling the wavelength-dependent refractive index of an optic using Berreman calculus - a mathematical formalism that describes how an electromagnetic field propagates through a birefringent medium. From Berreman calculus, we can better predict the Mueller matrix, diattenuation, and retardance of arbitrary thicknesses of amorphous many-layer coatings as well as stacks of birefringent crystals from laboratory measurements. This will allow for the wavelength-dependent refractive index to be accurately determined and the polarization behavior to be derived for a given optic.
Measured Visual Motion Sensitivity at Fixed Contrast in the Periphery and Far Periphery
2017-08-01
group Soldier performance. Soldier performance depends on visual detection of enemy personnel and materiel. Vision modeling in IWARS is neither...a highly time-critical and order-dependent activity, these unrealistic characterizations of target detection time and order severely limit the...recognize that MVTs should depend on target contrast, so we selected a target design different from that used in the Monaco et al. (2007) study. Based
Lu, Zeqin; Jhoja, Jaspreet; Klein, Jackson; Wang, Xu; Liu, Amy; Flueckiger, Jonas; Pond, James; Chrostowski, Lukas
2017-05-01
This work develops an enhanced Monte Carlo (MC) simulation methodology to predict the impacts of layout-dependent correlated manufacturing variations on the performance of photonic integrated circuits (PICs). First, to enable such performance prediction, we demonstrate a simple method with sub-nanometer accuracy to characterize photonics manufacturing variations, where the width and height for a fabricated waveguide can be extracted from the spectral response of a racetrack resonator. By measuring the spectral responses for a large number of identical resonators spread over a wafer, statistical results for the variations of waveguide width and height can be obtained. Second, we develop models for the layout-dependent enhanced MC simulation. Our models use netlist extraction to transfer physical layouts into circuit simulators. Spatially correlated physical variations across the PICs are simulated on a discrete grid and are mapped to each circuit component, so that the performance for each component can be updated according to its obtained variations, and therefore, circuit simulations take the correlated variations between components into account. The simulation flow and theoretical models for our layout-dependent enhanced MC simulation are detailed in this paper. As examples, several ring-resonator filter circuits are studied using the developed enhanced MC simulation, and statistical results from the simulations can predict both common-mode and differential-mode variations of the circuit performance.
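A stripped-down sketch of the layout-dependent correlated MC idea: width errors for two ring resonators are drawn from a spatially correlated Gaussian variation (correlation decaying with separation), mapped to first-order resonance shifts, and the differential-mode mismatch is estimated. The correlation length, width sigma, and effective-index sensitivity are assumed illustrative values, not the paper's measured wafer statistics.

```python
import math
import random

random.seed(1)

CORR_LENGTH_UM = 200.0   # assumed spatial correlation length of width variation
SIGMA_WIDTH_NM = 2.0     # assumed 1-sigma waveguide width variation
DNEFF_PER_NM = 2.0e-3    # assumed effective-index sensitivity to width (per nm)
LAMBDA_NM = 1550.0
GROUP_INDEX = 4.2

def correlated_width_errors(distance_um):
    """Draw width errors (nm) for two components whose variations are
    spatially correlated with correlation exp(-d / L)."""
    rho = math.exp(-distance_um / CORR_LENGTH_UM)
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
    return SIGMA_WIDTH_NM * z1, SIGMA_WIDTH_NM * z2

def resonance_shift_nm(dwidth_nm):
    # first-order shift of a ring resonance from an effective-index change
    return LAMBDA_NM * DNEFF_PER_NM * dwidth_nm / GROUP_INDEX

def mc_mismatch(distance_um, trials=20000):
    """Std. dev. of the differential resonance shift between two rings."""
    diffs = []
    for _ in range(trials):
        d1, d2 = correlated_width_errors(distance_um)
        diffs.append(resonance_shift_nm(d1) - resonance_shift_nm(d2))
    mean = sum(diffs) / trials
    return math.sqrt(sum((x - mean) ** 2 for x in diffs) / trials)

# Nearby rings track each other (common mode); distant rings diverge.
print("mismatch at   10 um:", round(mc_mismatch(10.0), 3), "nm")
print("mismatch at 1000 um:", round(mc_mismatch(1000.0), 3), "nm")
```

The distance dependence is the point: ignoring spatial correlation would predict the same mismatch for both separations and overestimate the differential error of closely placed components.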
NASA Astrophysics Data System (ADS)
Wei, Kai; Wang, Feng; Wang, Ping; Liu, Zi-xuan; Zhang, Pan
2017-03-01
The soft under-baseplate pad of the WJ-8 rail fastener frequently used in China's high-speed railways was taken as the study subject, and a laboratory test was performed to measure its temperature- and frequency-dependent dynamic performance at 0.3 Hz and at -60°C to 20°C with intervals of 2.5°C. Its higher frequency-dependent results at different temperatures were then further predicted based on time-temperature superposition (TTS) and the Williams-Landel-Ferry (WLF) formula. The fractional derivative Kelvin-Voigt (FDKV) model was used to represent the temperature- and frequency-dependent dynamic properties of the tested rail pad. By means of the FDKV model for rail pads and vehicle-track coupled dynamic theory, high-speed vehicle-track coupled vibrations due to the temperature- and frequency-dependent dynamic properties of rail pads were investigated. Finally, the high-speed vehicle-track coupled vibration responses were discussed by further incorporating the measured frequency-dependent dynamic performance of the vehicle's rubber primary suspension. It is found that the storage stiffness and loss factor of the tested rail pad are sensitive to low temperatures or high frequencies. The proposed FDKV model for the frequency-dependent storage stiffness and loss factor of the tested rail pad can basically meet the required fitting precision, especially at ordinary temperatures. The numerical simulation results indicate that the vertical vibration levels of high-speed vehicle-track coupled systems calculated with the FDKV model for rail pads in the time domain are higher than those calculated with the ordinary Kelvin-Voigt (KV) model for rail pads.
Additionally, the temperature- and frequency-dependent dynamic properties of the tested rail pads would alter the vertical vibration acceleration levels (VALs) of the car body and bogie in 1/3-octave bands above 31.5 Hz, and in particular enlarge the vertical VALs of the wheelset and rail in 1/3-octave bands of 31.5-100 Hz and above 315 Hz, which are the dominant frequency ranges of ground vibration acceleration and of rolling noise (or bridge noise) caused by high-speed railways, respectively. Since the fractional derivative order of the adopted rubber primary suspension, unlike that of the tested rail pad, is very close to 1, its frequency-dependent dynamic performance has little effect on high-speed vehicle-track coupled vibration responses.
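The two ingredients named above, the WLF shift factor and the FDKV complex stiffness, can be written down compactly. The sketch below uses the common "universal" WLF constants and illustrative FDKV parameters (k0, tau, alpha), not the values fitted to the WJ-8 pad.

```python
import math

# "Universal" WLF constants; a study like this fits pad-specific values.
C1, C2 = 17.44, 51.6

def wlf_log_shift(T, T_ref):
    """log10 of the TTS shift factor a_T (valid for T not far below T_ref):
    log10(a_T) = -C1 (T - T_ref) / (C2 + T - T_ref)."""
    return -C1 * (T - T_ref) / (C2 + T - T_ref)

def fdkv_stiffness(omega, k0=6.0e7, tau=1.0e-4, alpha=0.6):
    """Complex dynamic stiffness of a fractional-derivative Kelvin-Voigt
    element, K*(w) = k0 * (1 + (i w tau)**alpha). Parameters here are
    illustrative, not the fitted WJ-8 pad values."""
    return k0 * (1.0 + (1j * omega * tau) ** alpha)

# Shift factors: lower temperature maps to higher reduced frequency.
for T in (0.0, 10.0, 20.0):
    print(f"T={T:5.1f} C  log10(a_T) = {wlf_log_shift(T, 20.0):6.2f}")

# Storage stiffness and loss factor both grow with frequency for alpha < 1,
# matching the qualitative stiffening the abstract reports.
for f in (1.0, 100.0, 1000.0):
    K = fdkv_stiffness(2 * math.pi * f)
    print(f"f={f:7.1f} Hz  storage={K.real:.3e} N/m  loss factor={K.imag / K.real:.4f}")
```

A single fractional parameter alpha gives the weak power-law frequency dependence that an ordinary KV dashpot (alpha = 1) cannot reproduce, which is why the two models predict different vibration levels.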
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana L. Kelly
Typical engineering systems in applications with high failure consequences such as nuclear reactor plants often employ redundancy and diversity of equipment in an effort to lower the probability of failure and therefore risk. However, it has long been recognized that dependencies exist in these redundant and diverse systems. Some dependencies, such as common sources of electrical power, are typically captured in the logic structure of the risk model. Others, usually referred to as intercomponent dependencies, are treated implicitly by introducing one or more statistical parameters into the model. Such common-cause failure models have limitations in a simulation environment. In addition, substantial subjectivity is associated with parameter estimation for these models. This paper describes an approach in which system performance is simulated by drawing samples from the joint distributions of dependent variables. The approach relies on the notion of a copula distribution, a notion which has been employed by the actuarial community for ten years or more, but which has seen only limited application in technological risk assessment. The paper also illustrates how equipment failure data can be used in a Bayesian framework to estimate the parameter values in the copula model. This approach avoids much of the subjectivity required to estimate parameters in traditional common-cause failure models. Simulation examples are presented for failures in time. The open-source software package R is used to perform the simulations. The open-source software package WinBUGS is used to perform the Bayesian inference via Markov chain Monte Carlo sampling.
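The copula-based sampling approach can be sketched in a few lines: a Gaussian copula induces dependence between two exponential failure times, and a Monte Carlo loop estimates the probability that a redundant pair fails together. The failure rates, correlation, and mission time below are illustrative assumptions; the Bayesian parameter estimation performed in WinBUGS is not reproduced here.

```python
import math
import random

random.seed(42)

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def dependent_failure_times(rate_a, rate_b, rho):
    """Sample one pair of failure times for two redundant components whose
    dependence comes from a Gaussian copula with correlation rho;
    marginals are exponential with the given failure rates."""
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
    u1, u2 = std_normal_cdf(z1), std_normal_cdf(z2)
    # inverse exponential CDF maps the correlated uniforms to failure times
    t1 = -math.log(1.0 - u1) / rate_a
    t2 = -math.log(1.0 - u2) / rate_b
    return t1, t2

def p_both_fail_by(t, rate_a, rate_b, rho, trials=20000):
    """MC estimate that both components of a redundant pair fail before t."""
    hits = sum(1 for _ in range(trials)
               if max(dependent_failure_times(rate_a, rate_b, rho)) <= t)
    return hits / trials

# Assuming independence understates the joint failure probability when rho > 0.
print("rho=0.0:", p_both_fail_by(1000.0, 1e-4, 1e-4, 0.0))
print("rho=0.8:", p_both_fail_by(1000.0, 1e-4, 1e-4, 0.8))
```

This is the qualitative lesson of intercomponent dependency modeling: the marginal reliability of each component is unchanged, but positive dependence concentrates failures together and inflates the probability that redundancy is defeated.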
Models and techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1978-01-01
Progress in the development of system models and techniques for the formulation and evaluation of aircraft computer system effectiveness is reported. Topics covered include: analysis of functional dependence; a prototype software package, METAPHOR, developed to aid the evaluation of performability; and a comprehensive performability modeling and evaluation exercise involving the SIFT computer.
Specifying and Refining a Measurement Model for a Simulation-Based Assessment. CSE Report 619.
ERIC Educational Resources Information Center
Levy, Roy; Mislevy, Robert J.
2004-01-01
The challenges of modeling students' performance in simulation-based assessments include accounting for multiple aspects of knowledge and skill that arise in different situations and the conditional dependencies among multiple aspects of performance in a complex assessment. This paper describes a Bayesian approach to modeling and estimating…
Monti, S.; Cooper, G. F.
1998-01-01
We present a new Bayesian classifier for computer-aided diagnosis. The new classifier builds upon the naive-Bayes classifier, and models the dependencies among patient findings in an attempt to improve its performance, both in terms of classification accuracy and in terms of calibration of the estimated probabilities. This work finds motivation in the argument that highly calibrated probabilities are necessary for the clinician to be able to rely on the model's recommendations. Experimental results are presented, supporting the conclusion that modeling the dependencies among findings improves calibration. PMID:9929288
Low-energy fusion dynamics of weakly bound nuclei: A time dependent perspective
NASA Astrophysics Data System (ADS)
Diaz-Torres, A.; Boselli, M.
2016-05-01
Recent dynamical fusion models for weakly bound nuclei at low incident energies, based on a time-dependent perspective, are briefly presented. The main features of both the PLATYPUS model and a new quantum approach are highlighted. In contrast to existing time-dependent quantum models, the present quantum approach separates the complete and incomplete fusion from the total fusion. Calculations performed within a toy model for 6Li + 209Bi at near-barrier energies show that converged excitation functions for total, complete and incomplete fusion can be determined with the time-dependent wavepacket dynamics.
Using hybrid method to evaluate the green performance in uncertainty.
Tseng, Ming-Lang; Lan, Lawrence W; Wang, Ray; Chiu, Anthony; Cheng, Hui-Ping
2011-04-01
Green performance measurement is vital for enterprises making continuous improvements to maintain sustainable competitive advantages. Evaluating green performance, however, is a challenging task due to the complex dependences among aspects and criteria, together with the linguistic vagueness of some qualitative information and quantitative data. To deal with this issue, this study proposes a novel approach to evaluate the dependence aspects and criteria of a firm's green performance. The rationale of the proposed approach, namely the green network balanced scorecard, is to use the balanced scorecard to combine fuzzy set theory with analytical network process (ANP) and importance-performance analysis (IPA) methods, wherein fuzzy set theory accounts for the linguistic vagueness of qualitative criteria and ANP converts the relations among the dependence aspects and criteria into an intelligible structural model used in IPA. For the empirical case study, four dependence aspects and 34 green performance criteria for PCB firms in Taiwan were evaluated. The managerial implications are discussed.
Defining a Cancer Dependency Map.
Tsherniak, Aviad; Vazquez, Francisca; Montgomery, Phil G; Weir, Barbara A; Kryukov, Gregory; Cowley, Glenn S; Gill, Stanley; Harrington, William F; Pantel, Sasha; Krill-Burger, John M; Meyers, Robin M; Ali, Levi; Goodale, Amy; Lee, Yenarae; Jiang, Guozhi; Hsiao, Jessica; Gerath, William F J; Howell, Sara; Merkel, Erin; Ghandi, Mahmoud; Garraway, Levi A; Root, David E; Golub, Todd R; Boehm, Jesse S; Hahn, William C
2017-07-27
Most human epithelial tumors harbor numerous alterations, making it difficult to predict which genes are required for tumor survival. To systematically identify cancer dependencies, we analyzed 501 genome-scale loss-of-function screens performed in diverse human cancer cell lines. We developed DEMETER, an analytical framework that segregates on- from off-target effects of RNAi. 769 genes were differentially required in subsets of these cell lines at a threshold of six SDs from the mean. We found predictive models for 426 dependencies (55%) by nonlinear regression modeling considering 66,646 molecular features. Many dependencies fall into a limited number of classes, and unexpectedly, in 82% of models, the top biomarkers were expression based. We demonstrated the basis behind one such predictive model linking hypermethylation of the UBB ubiquitin gene to a dependency on UBC. Together, these observations provide a foundation for a cancer dependency map that facilitates the prioritization of therapeutic targets. Copyright © 2017 Elsevier Inc. All rights reserved.
Citizen Schools' Partner-Dependent Expanded Learning Model
ERIC Educational Resources Information Center
Schwarz, Eric; McCann, Emily
2011-01-01
In 2005, the Clarence Edwards middle school in Boston was failing. It was one of the lowest-performing schools in the city and on the verge of closure. Today, the school is thriving as one of Boston's highest-performing middle schools. The catalyst for this dramatic turnaround was the implementation of a new, partner-dependent expanded learning…
NASA Astrophysics Data System (ADS)
Nowak, W.; Schöniger, A.; Wöhling, T.; Illman, W. A.
2016-12-01
Model-based decision support requires justifiable models with good predictive capabilities. This, in turn, calls for a fine adjustment between predictive accuracy (small systematic model bias that can be achieved with rather complex models), and predictive precision (small predictive uncertainties that can be achieved with simpler models with fewer parameters). The implied complexity/simplicity trade-off depends on the availability of informative data for calibration. If not available, additional data collection can be planned through optimal experimental design. We present a model justifiability analysis that can compare models of vastly different complexity. It rests on Bayesian model averaging (BMA) to investigate the complexity/performance trade-off dependent on data availability. Then, we disentangle the complexity component from the performance component. We achieve this by replacing actually observed data by realizations of synthetic data predicted by the models. This results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum model complexity that can be justified by the available (or planned) amount and type of data. As a side product, the matrix quantifies model (dis-)similarity. We apply this analysis to aquifer characterization via hydraulic tomography, comparing four models with a vastly different number of parameters (from a homogeneous model to geostatistical random fields). As a testing scenario, we consider hydraulic tomography data. Using subsets of these data, we determine model justifiability as a function of data set size. The test case shows that geostatistical parameterization requires a substantial amount of hydraulic tomography data to be justified, while a zonation-based model can be justified with more limited data set sizes. The actual model performance (as opposed to model justifiability), however, depends strongly on the quality of prior geological information.
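The "model confusion matrix" idea (replace observed data with synthetic data generated by each model, then record which model each synthetic data set selects) can be sketched with two toy models, a constant mean versus a linear trend, compared by BIC with known noise level. Everything here, including BIC as the selection score, is an illustrative stand-in for the Bayesian model averaging used in the abstract.

```python
import math
import random

random.seed(7)
XS = [i / 9.0 for i in range(10)]  # fixed design points
SIGMA = 1.0                        # known noise standard deviation

def simulate(model):
    """Synthetic data from a 'true' model: 0 = constant mean, 1 = linear trend."""
    slope = 0.0 if model == 0 else 2.0
    return [slope * x + random.gauss(0.0, SIGMA) for x in XS]

def max_loglik(model, ys):
    """Maximized Gaussian log-likelihood of each candidate model."""
    n = len(ys)
    if model == 0:
        mu = sum(ys) / n
        resid = [y - mu for y in ys]
    else:  # ordinary least-squares line
        mx, my = sum(XS) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in XS)
        sxy = sum((x - mx) * (y - my) for x, y in zip(XS, ys))
        b = sxy / sxx
        a = my - b * mx
        resid = [y - (a + b * x) for x, y in zip(XS, ys)]
    rss = sum(r * r for r in resid)
    return -0.5 * n * math.log(2 * math.pi * SIGMA ** 2) - rss / (2 * SIGMA ** 2)

def bic(model, ys):
    k = 1 if model == 0 else 2  # complexity penalty: number of parameters
    return k * math.log(len(XS)) - 2 * max_loglik(model, ys)

# Confusion matrix: rows = data-generating model, cols = model selected by BIC.
confusion = [[0, 0], [0, 0]]
for true_model in (0, 1):
    for _ in range(500):
        ys = simulate(true_model)
        picked = min((0, 1), key=lambda m: bic(m, ys))
        confusion[true_model][picked] += 1
for row in confusion:
    print(row)
```

A strongly diagonal matrix means the data set size can distinguish the models, so the more complex one is justifiable; heavy off-diagonal counts signal that the available data cannot tell the models apart.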
NASA Technical Reports Server (NTRS)
Xiao, Yegao; Bhat, Ishwara; Abedin, M. Nurul
2005-01-01
InP/InGaAs avalanche photodiodes (APDs) are widely utilized in optical receivers for modern long-haul, high bit-rate optical fiber communication systems. The separate absorption, grading, charge, and multiplication (SAGCM) structure is an important design consideration for APDs with high performance characteristics. Time-domain modeling techniques have been previously developed to provide better understanding and to optimize design, saving time and cost in APD research and development. In this work, the dependence of performance on multiplication layer thickness has been investigated by time-domain modeling. These performance characteristics include breakdown field and breakdown voltage, multiplication gain, excess noise factor, and frequency response and bandwidth. The simulations are performed for various multiplication layer thicknesses with the areal charge-sheet density fixed at certain values while all other structural and material parameters are kept unchanged. The frequency response is obtained from the impulse response by fast Fourier transformation. The modeling results are presented and discussed, and design considerations, especially for high-speed operation at 10 Gbit/s, are further analyzed.
The performance of discrete models of low Reynolds number swimmers.
Wang, Qixuan; Othmer, Hans G
2015-12-01
Swimming by shape changes at low Reynolds number is widely used in biology, and understanding how the performance of movement depends on the geometric pattern of shape changes is important both for understanding the swimming of microorganisms and for designing low Reynolds number swimming models. The simplest models of shape changes are those that comprise a series of linked spheres that can change their separation and/or their size. Herein we compare the performance of three models in which these modes are used in different ways.
Dynamics of a Hogg-Huberman Model with Time Dependent Reevaluation Rates
NASA Astrophysics Data System (ADS)
Tanaka, Toshijiro; Kurihara, Tetsuya; Inoue, Masayoshi
2006-05-01
The dynamical behavior of the Hogg-Huberman model with time-dependent reevaluation rates is studied. The time dependence of the reevaluation rate, at which agents using one of the resources decide to reconsider their resource choice, is obtained in terms of the states of the system. It is seen that the change in the fraction of agents using one resource is suppressed to be smaller than in the case of a fixed reevaluation rate, and that chaos control in the system with time-dependent reevaluation rates can be performed by the system itself.
A joint frailty-copula model between tumour progression and death for meta-analysis.
Emura, Takeshi; Nakatochi, Masahiro; Murotani, Kenta; Rondeau, Virginie
2017-12-01
Dependent censoring often arises in biomedical studies when time to tumour progression (e.g., relapse of cancer) is censored by an informative terminal event (e.g., death). For meta-analysis combining existing studies, a joint survival model between tumour progression and death has been considered under semicompeting risks, which induces dependence through the study-specific frailty. Our paper here utilizes copulas to generalize the joint frailty model by introducing an additional source of dependence arising from intra-subject association between tumour progression and death. The practical value of the new model is particularly evident for meta-analyses in which only a few covariates are consistently measured across studies and hence there exists residual dependence. The covariate effects are formulated through the Cox proportional hazards model, and the baseline hazards are nonparametrically modeled on a basis of splines. The estimator is then obtained by maximizing a penalized log-likelihood function. We also show that the present methodologies are easily modified for the competing risks or recurrent event data, and are generalized to accommodate left-truncation. Simulations are performed to examine the performance of the proposed estimator. The method is applied to a meta-analysis for assessing a recently suggested biomarker CXCL12 for survival in ovarian cancer patients. We implement our proposed methods in the R joint.Cox package.
Risk of dependence associated with health, social support, and lifestyle
Alcañiz, Manuela; Brugulat, Pilar; Guillén, Montserrat; Medina-Bustos, Antonia; Mompart-Penina, Anna; Solé-Auró, Aïda
2015-01-01
OBJECTIVE To analyze the prevalence of individuals at risk of dependence and its associated factors. METHODS The study was based on data from the Catalan Health Survey (Spain), conducted in 2010 and 2011. Logistic regression models from a random sample of 3,842 individuals aged ≥ 15 years were used to classify individuals according to the state of their personal autonomy. Predictive models were proposed to identify indicators that helped distinguish dependent individuals from those at risk of dependence. Variables on health status, social support, and lifestyles were considered. RESULTS We found that 18.6% of the population presented a risk of dependence, especially after age 65. Compared with this group, individuals who reported dependence (11.0%) had difficulties performing activities of daily living and had to receive support to perform them. Habits such as smoking, excessive alcohol consumption, and being sedentary were associated with a higher probability of dependence, particularly for women. CONCLUSIONS Difficulties in carrying out activities of daily living precede the onset of dependence. Preserving personal autonomy and function without receiving support appears to be a preventive factor. Adopting an active and healthy lifestyle helps reduce the risk of dependence. PMID:26018786
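The kind of logistic model described above can be sketched in a few lines of Python. The covariates, coefficients, and cohort below are invented for illustration (the survey's actual variables and estimates are not reproduced):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=500):
    """Fit P(at risk of dependence | age, sedentary) by batch gradient descent."""
    w = [0.0, 0.0]  # coefficients for (scaled age, sedentary indicator)
    b = 0.0
    n = len(xs)
    for _ in range(epochs):
        gw0 = gw1 = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w[0] * x[0] + w[1] * x[1] + b) - y
            gw0 += err * x[0]
            gw1 += err * x[1]
            gb += err
        w = [w[0] - lr * gw0 / n, w[1] - lr * gw1 / n]
        b -= lr * gb / n
    return w, b

# Synthetic cohort: risk rises with age and with a sedentary lifestyle.
rng = random.Random(0)
xs, ys = [], []
for _ in range(500):
    age = rng.uniform(15, 95)
    sedentary = rng.choice([0, 1])
    true_logit = 0.08 * (age - 65) + 1.0 * sedentary - 0.5
    xs.append(((age - 55) / 20.0, sedentary))
    ys.append(1 if rng.random() < sigmoid(true_logit) else 0)

w, b = fit_logistic(xs, ys)
old_sed = sigmoid(w[0] * ((85 - 55) / 20.0) + w[1] * 1 + b)   # 85-year-old, sedentary
young_act = sigmoid(w[0] * ((25 - 55) / 20.0) + w[1] * 0 + b)  # 25-year-old, active
print(round(old_sed, 2), round(young_act, 2))
```

The fitted model recovers the built-in pattern: the predicted risk for the older, sedentary profile exceeds that for the younger, active one.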
Geospace Environment Modeling 2008-2009 Challenge: Ground Magnetic Field Perturbations
NASA Technical Reports Server (NTRS)
Pulkkinen, A.; Kuznetsova, M.; Ridley, A.; Raeder, J.; Vapirev, A.; Weimer, D.; Weigel, R. S.; Wiltberger, M.; Millward, G.; Rastatter, L.;
2011-01-01
Acquiring quantitative metrics-based knowledge about the performance of various space physics modeling approaches is central for the space weather community. Quantification of the performance helps the users of the modeling products to better understand the capabilities of the models and to choose the approach that best suits their specific needs. Further, metrics-based analyses are important for addressing the differences between various modeling approaches and for measuring and guiding the progress in the field. In this paper, the metrics-based results of the ground magnetic field perturbation part of the Geospace Environment Modeling 2008-2009 Challenge are reported. Predictions made by 14 different models, including an ensemble model, are compared to geomagnetic observatory recordings from 12 different northern hemispheric locations. Five different metrics are used to quantify the model performances for four storm events. It is shown that the ranking of the models is strongly dependent on the type of metric used to evaluate the model performance. None of the models rank near or at the top systematically for all used metrics. Consequently, one cannot pick the absolute "winner": the choice of the best model depends on the characteristics of the signal one is interested in. Model performances also vary from event to event. This is particularly clear for root-mean-square difference and utility metric-based analyses. Further, analyses indicate that for some of the models, increasing the global magnetohydrodynamic model spatial resolution and the inclusion of the ring current dynamics improve the models' capability to generate more realistic ground magnetic field fluctuations.
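The metric dependence of model ranking is easy to reproduce in a toy setting. The sketch below (invented signals, not the Challenge data) scores two hypothetical "models" against an observed series: the quiet, low-amplitude model wins on RMS difference, while the in-phase but overshooting model wins on correlation:

```python
import math

def rmse(a, b):
    """Root-mean-square difference between two equal-length series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def corr(a, b):
    """Pearson correlation coefficient."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

t = [2 * math.pi * k / 1000 for k in range(1000)]
obs = [math.sin(x) for x in t]                 # "observed" perturbation
model_a = [0.05 * math.sin(3 * x) for x in t]  # quiet model: tiny, wrong frequency
model_b = [2.2 * math.sin(x) for x in t]       # active model: in phase, amplitude overshoot

# Metric 1 (RMS difference) prefers the quiet model;
# metric 2 (correlation) prefers the active one.
print(rmse(obs, model_a) < rmse(obs, model_b))
print(corr(obs, model_b) > corr(obs, model_a))
```

This is the same qualitative effect the Challenge reports: no single "winner" exists independent of the metric.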
Morgan, R; Gallagher, M
2012-01-01
In this paper we extend a previously proposed randomized landscape generator in combination with a comparative experimental methodology to study the behavior of continuous metaheuristic optimization algorithms. In particular, we generate two-dimensional landscapes with parameterized, linear ridge structure, and perform pairwise comparisons of algorithms to gain insight into what kind of problems are easy and difficult for one algorithm instance relative to another. We apply this methodology to investigate the specific issue of explicit dependency modeling in simple continuous estimation of distribution algorithms. Experimental results reveal specific examples of landscapes (with certain identifiable features) where dependency modeling is useful, harmful, or has little impact on mean algorithm performance. Heat maps are used to compare algorithm performance over a large number of landscape instances and algorithm trials. Finally, we perform a meta-search in the landscape parameter space to find landscapes which maximize the performance between algorithms. The results are related to some previous intuition about the behavior of these algorithms, but at the same time lead to new insights into the relationship between dependency modeling in EDAs and the structure of the problem landscape. The landscape generator and overall methodology are quite general and extendable and can be used to examine specific features of other algorithms.
The Performance of Local Dependence Measures with Psychological Data
ERIC Educational Resources Information Center
Houts, Carrie R.; Edwards, Michael C.
2013-01-01
The violation of the assumption of local independence when applying item response theory (IRT) models has been shown to have a negative impact on all estimates obtained from the given model. Numerous indices and statistics have been proposed to aid analysts in the detection of local dependence (LD). A Monte Carlo study was conducted to evaluate…
Matsubara, Takamitsu; Morimoto, Jun
2013-08-01
In this study, we propose a multiuser myoelectric interface that can easily adapt to novel users. When a user performs different motions (e.g., grasping and pinching), different electromyography (EMG) signals are measured. When different users perform the same motion (e.g., grasping), different EMG signals are also measured. Therefore, designing a myoelectric interface that can be used by multiple users to perform multiple motions is difficult. To cope with this problem, we propose for EMG signals a bilinear model that is composed of two linear factors: 1) user-dependent and 2) motion-dependent. By decomposing the EMG signals into these two factors, the extracted motion-dependent factors can be used as user-independent features. We can construct a motion classifier on the extracted feature space to develop the multiuser interface. For novel users, the proposed adaptation method estimates the user-dependent factor through only a few interactions. The bilinear EMG model with the estimated user-dependent factor can extract the user-independent features from the novel user data. We applied our proposed method to a recognition task of five hand gestures for robotic hand control using four-channel EMG signals measured from subject forearms. Our method resulted in 73% accuracy, which was statistically significantly different from the accuracy of standard nonmultiuser interfaces, as the result of a two-sample t-test at a significance level of 1%.
Panzer, Stefan; Kennedy, Deanna; Wang, Chaoyi; Shea, Charles H
2018-02-01
An experiment was conducted to determine whether the performance and learning of a multi-frequency (1:2) coordination pattern between the limbs are enhanced when a model is provided prior to each acquisition trial. Research has indicated very effective performance of a wide variety of bimanual coordination tasks when Lissajous plots with goal templates are provided, but this research has also found that participants become dependent on this information and perform quite poorly when it is withdrawn. The present experiment was designed to test three forms of modeling (Lissajous with template, Lissajous without template, and limb model), but in each situation the model was presented prior to practice and was not available during the performance of the task. This was done to decrease dependency on the model and increase the development of an internal reference of correctness that could be applied on test trials. A control condition was also collected, in which a metronome was used to guide the movement. Following less than 7 min of practice, participants in the three modeling conditions performed the first test block very effectively; however, performance in the control condition was quite poor. Note that Test 1 was performed under the same conditions as used during acquisition. Test 2 was conducted with no augmented information provided prior to or during the performance of the task. Only participants in the limb model condition were able to maintain performance on Test 2. The findings suggest that a very simple intuitive display can provide the necessary information to form an effective internal representation of the coordination pattern, which can be used to guide performance when the augmented display is withdrawn.
Real-time individualization of the unified model of performance.
Liu, Jianbo; Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Balkin, Thomas J; Reifman, Jaques
2017-12-01
Existing mathematical models for predicting neurobehavioural performance are not suited for mobile computing platforms because they cannot adapt model parameters automatically in real time to reflect individual differences in the effects of sleep loss. We used an extended Kalman filter to develop a computationally efficient algorithm that continually adapts the parameters of the recently developed Unified Model of Performance (UMP) to an individual. The algorithm accomplishes this in real time as new performance data for the individual become available. We assessed the algorithm's performance by simulating real-time model individualization for 18 subjects subjected to 64 h of total sleep deprivation (TSD) and 7 days of chronic sleep restriction (CSR) with 3 h of time in bed per night, using psychomotor vigilance task (PVT) data collected every 2 h during wakefulness. This UMP individualization process produced parameter estimates that progressively approached the solution produced by a post-hoc fitting of model parameters using all data. The minimum number of PVT measurements needed to individualize the model parameters depended upon the type of sleep-loss challenge, with ~30 required for TSD and ~70 for CSR. However, model individualization depended upon the overall duration of data collection, yielding increasingly accurate model parameters with greater number of days. Interestingly, reducing the PVT sampling frequency by a factor of two did not notably hamper model individualization. The proposed algorithm facilitates real-time learning of an individual's trait-like responses to sleep loss and enables the development of individualized performance prediction models for use in a mobile computing platform. © 2017 European Sleep Research Society.
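The real-time individualization idea can be reduced to a one-parameter sketch: a scalar Kalman filter that tracks a hypothetical individual sensitivity parameter from noisy performance measurements. This is an illustrative linear simplification, not the UMP or the published extended Kalman filter:

```python
import random

def kalman_step(est, var, meas, meas_var):
    """One scalar Kalman update: blend the prior estimate with a new measurement."""
    gain = var / (var + meas_var)
    est = est + gain * (meas - est)
    var = (1.0 - gain) * var
    return est, var

rng = random.Random(42)
true_sensitivity = 2.5      # hypothetical individual trait
est, var = 1.0, 4.0         # population prior: mean 1.0, large uncertainty
meas_var = 0.5 ** 2         # assumed measurement noise of each "PVT-like" bout

for _ in range(30):         # one noisy measurement per test bout
    meas = true_sensitivity + rng.gauss(0.0, 0.5)
    est, var = kalman_step(est, var, meas, meas_var)

print(round(est, 2), round(var, 4))
```

Each new measurement shrinks the estimate's variance, so the population-prior guess is progressively replaced by an individualized one, mirroring how the algorithm above approaches the post-hoc fit as data accumulate.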
ERIC Educational Resources Information Center
Angeli, Charoula; Valanides, Nicos
2013-01-01
The present study investigated the problem-solving performance of 101 university students and their interactions with a computer modeling tool in order to solve a complex problem. Based on their performance on the hidden figures test, students were assigned to three groups of field-dependent (FD), field-mixed (FM), and field-independent (FI)…
NASA Astrophysics Data System (ADS)
Sixdenier, Fabien; Yade, Ousseynou; Martin, Christian; Bréard, Arnaud; Vollaire, Christian
2018-05-01
Electromagnetic interference (EMI) filters design is a rather difficult task where engineers have to choose adequate magnetic materials, design the magnetic circuit and choose the size and number of turns. The final design must achieve the attenuation requirements (constraints) and has to be as compact as possible (goal). Alternating current (AC) analysis is a powerful tool to predict global impedance or attenuation of any filter. However, AC analysis are generally performed without taking into account the frequency-dependent complex permeability behaviour of soft magnetic materials. That's why, we developed two frequency-dependent complex permeability models able to be included into SPICE models. After an identification process, the performances of each model are compared to measurements made on a realistic EMI filter prototype in common mode (CM) and differential mode (DM) to see the benefit of the approach. Simulation results are in good agreement with the measured ones especially in the middle frequency range.
Wang, Wen-Cheng; Cho, Wen-Chien; Chen, Yin-Jen
2014-01-01
It is estimated that mainland Chinese tourists travelling to Taiwan can bring annual revenues of 400 billion NTD to the Taiwan economy. Thus, how the Taiwanese Government formulates relevant measures to satisfy both sides is a matter of great concern. Taiwan must improve the facilities and service quality of its tourism industry so as to attract more mainland tourists. This paper conducted a questionnaire survey of mainland tourists and used grey relational analysis in grey mathematics to analyze the satisfaction performance of all satisfaction question items. The first eight satisfaction items were used as independent variables, and the overall satisfaction performance was used as a dependent variable for quantile regression model analysis to discuss the relationship between the dependent variable under different quantiles and the independent variables. Finally, this study further discussed the predictive accuracy of the least mean regression model and each quantile regression model, as a reference for research personnel. The analysis results showed that other variables could also affect the overall satisfaction performance of mainland tourists, in addition to occupation and age. The overall predictive accuracy of quantile regression model Q0.25 was higher than that of the other three models. PMID:24574916
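Quantile regression rests on the pinball (check) loss: minimizing it over a constant yields the sample quantile, which is the simplest way to see what a Q0.25 or Q0.75 model targets. A minimal sketch with invented satisfaction scores (not the survey data):

```python
def pinball_loss(tau, y, pred):
    """Check loss: asymmetric penalty minimized at the tau-quantile."""
    total = 0.0
    for yi in y:
        d = yi - pred
        total += tau * d if d >= 0 else (tau - 1.0) * d
    return total

def best_constant(tau, y, grid):
    """Constant prediction minimizing the pinball loss over a candidate grid."""
    return min(grid, key=lambda c: pinball_loss(tau, y, c))

# Toy "overall satisfaction" scores, sorted for readability.
scores = [3.1, 3.4, 3.8, 4.0, 4.1, 4.3, 4.5, 4.6, 4.8, 5.0]
grid = [i / 100.0 for i in range(300, 501)]   # candidate constants 3.00 .. 5.00

q25 = best_constant(0.25, scores, grid)
q75 = best_constant(0.75, scores, grid)
print(q25, q75)
```

The minimizers land on the 3rd and 8th order statistics (3.8 and 4.6), i.e., the empirical 0.25- and 0.75-quantiles; a full quantile regression replaces the constant with a linear function of the covariates.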
NASA Astrophysics Data System (ADS)
Pandey, Harsh; Underhill, Patrick T.
2015-11-01
The electrophoretic mobility of molecules such as λ -DNA depends on the conformation of the molecule. It has been shown that electrohydrodynamic interactions between parts of the molecule lead to a mobility that depends on conformation and can explain some experimental observations. We have developed a new coarse-grained model that incorporates these changes of mobility into a bead-spring chain model. Brownian dynamics simulations have been performed using this model. The model reproduces the cross-stream migration that occurs in capillary electrophoresis when pressure-driven flow is applied parallel or antiparallel to the electric field. The model also reproduces the change of mobility when the molecule is stretched significantly in an extensional field. We find that the conformation-dependent mobility can lead to a new type of unraveling of the molecule in strong fields. This occurs when different parts of the molecule have different mobilities and the electric field is large.
Modelling Pollutant Dispersion in a Street Network
NASA Astrophysics Data System (ADS)
Salem, N. Ben; Garbero, V.; Salizzoni, P.; Lamaison, G.; Soulhac, L.
2015-04-01
This study constitutes a further step in the analysis of the performances of a street network model to simulate atmospheric pollutant dispersion in urban areas. The model, named SIRANE, is based on the decomposition of the urban atmosphere into two sub-domains: the urban boundary layer, whose dynamics is assumed to be well established, and the urban canopy, represented as a series of interconnected boxes. Parametric laws govern the mass exchanges between the boxes under the assumption that the pollutant dispersion within the canopy can be fully simulated by modelling three main bulk transfer phenomena: channelling along street axes, transfers at street intersections, and vertical exchange between street canyons and the overlying atmosphere. Here, we aim to evaluate the reliability of the parametrizations adopted to simulate these phenomena, by focusing on their possible dependence on the external wind direction. To this end, we test the model against concentration measurements within an idealized urban district whose geometrical layout closely matches the street network represented in SIRANE. The analysis is performed for an urban array with a fixed geometry and a varying wind incidence angle. The results show that the model provides generally good results with the reference parametrizations adopted in SIRANE and that its performances are quite robust for a wide range of the model parameters. This proves the reliability of the street network approach in simulating pollutant dispersion in densely built city districts. The results also show that the model performances may be improved by considering a dependence of the wind fluctuations at street intersections and of the vertical exchange velocity on the direction of the incident wind. This opens the way for further investigations to clarify the dependence of these parameters on wind direction and street aspect ratios.
Na, Hyuntae; Song, Guang
2015-07-01
In a recent work we developed a method for deriving accurate simplified models that capture the essentials of conventional all-atom NMA and identified two best simplified models: ssNMA and eANM, both of which have a significantly higher correlation with NMA in mean square fluctuation calculations than existing elastic network models such as ANM and ANMr2, a variant of ANM that uses the inverse of the squared separation distances as spring constants. Here, we examine closely how the performance of these elastic network models depends on various factors, namely, the presence of hydrogen atoms in the model, the quality of input structures, and the effect of crystal packing. The study reveals the strengths and limitations of these models. Our results indicate that ssNMA and eANM are the best fine-grained elastic network models but their performance is sensitive to the quality of input structures. When the quality of input structures is poor, ANMr2 is a good alternative for computing mean-square fluctuations while ANM model is a good alternative for obtaining normal modes. © 2015 Wiley Periodicals, Inc.
Design of Malaria Diagnostic Criteria for the Sysmex XE-2100 Hematology Analyzer
Campuzano-Zuluaga, Germán; Álvarez-Sánchez, Gonzalo; Escobar-Gallo, Gloria Elcy; Valencia-Zuluaga, Luz Marina; Ríos-Orrego, Alexandra Marcela; Pabón-Vidal, Adriana; Miranda-Arboleda, Andrés Felipe; Blair-Trujillo, Silvia; Campuzano-Maya, Germán
2010-01-01
Thick film, the standard diagnostic procedure for malaria, is not always ordered promptly. A failsafe diagnostic strategy using an XE-2100 analyzer is proposed, and for this strategy, malaria diagnostic models for the XE-2100 were developed and tested for accuracy. Two hundred eighty-one samples were distributed into Plasmodium vivax, P. falciparum, and acute febrile syndrome groups for model construction. Model validation was performed using 60% of malaria cases and a composite control group of samples from AFS and healthy participants from endemic and non-endemic regions. For P. vivax, two observer-dependent models (accuracy = 95.3–96.9%), one non–observer-dependent model using built-in variables (accuracy = 94.7%), and one non–observer-dependent model using new and built-in variables (accuracy = 96.8%) were developed. For P. falciparum, two non–observer-dependent models (accuracies = 85% and 89%) were developed. These models could be used by health personnel or be integrated as a malaria alarm for the XE-2100 to prompt early malaria microscopic diagnosis. PMID:20207864
NASA Technical Reports Server (NTRS)
Haisler, W. E.
1983-01-01
An uncoupled constitutive model for predicting the transient response of thermal and rate dependent, inelastic material behavior was developed. The uncoupled model assumes that there is a temperature below which the total strain consists essentially of elastic and rate insensitive inelastic strains only. Above this temperature, the rate dependent inelastic strain (creep) dominates. The rate insensitive inelastic strain component is modelled in an incremental form with a yield function, flow rule and hardening law. Revisions to the hardening rule permit the model to predict temperature-dependent kinematic-isotropic hardening behavior, cyclic saturation, asymmetric stress-strain response upon stress reversal, and a variable Bauschinger effect. The rate dependent inelastic strain component is modelled using a rate equation in terms of back stress, drag stress and exponent n as functions of temperature and strain. A sequence of hysteresis loops and relaxation tests is utilized to define the rate dependent inelastic strain rate. Evaluation of the model has been performed by comparison with experiments involving various thermal and mechanical load histories on 5086 aluminum alloy, 304 stainless steel and Hastelloy X.
A method for diagnosing time dependent faults using model-based reasoning systems
NASA Technical Reports Server (NTRS)
Goodrich, Charles H.
1995-01-01
This paper explores techniques to apply model-based reasoning to equipment and systems which exhibit dynamic behavior (that which changes as a function of time). The model-based system of interest is KATE-C (Knowledge-based Autonomous Test Engineer), which is a C++ based system designed to perform monitoring and diagnosis of Space Shuttle electro-mechanical systems. Methods of model-based monitoring and diagnosis are well known and have been thoroughly explored by others. A short example is given which illustrates the principle of model-based reasoning and reveals some limitations of static, non-time-dependent simulation. This example is then extended to demonstrate representation of time-dependent behavior and testing of fault hypotheses in that environment.
Further Investigations of Gravity Modeling on Surface-Interacting Vehicle Simulations
NASA Technical Reports Server (NTRS)
Madden, Michael M.
2009-01-01
A vehicle simulation is "surface-interacting" if the state of the vehicle (position, velocity, and acceleration) relative to the surface is important. Surface-interacting simulations perform ascent, entry, descent, landing, surface travel, or atmospheric flight. The dynamics of surface-interacting simulations are influenced by the modeling of gravity. Gravity is the sum of gravitation and the centrifugal acceleration due to the world's rotation. Both components are functions of position relative to the world's center, and that position for a given set of geodetic coordinates (latitude, longitude, and altitude) depends on the world model (world shape and dynamics). Thus, gravity fidelity depends on the fidelities of the gravitation model and the world model and on the interaction of the gravitation and world model. A surface-interacting simulation cannot treat the gravitation separately from the world model. This paper examines the actual performance of different pairs of world and gravitation models (or direct gravity models) on the travel of a subsonic civil transport in level flight under various starting conditions.
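The gravity decomposition described above can be made concrete under a spherical-world approximation. The constants below are Earth-like values chosen for illustration, not the paper's world or gravitation models:

```python
import math

G0 = 9.820           # surface gravitation magnitude, m/s^2 (spherical approximation)
OMEGA = 7.292115e-5  # world rotation rate, rad/s
R = 6.371e6          # world radius, m

def effective_gravity(lat_deg):
    """Gravity magnitude = gravitation minus the radial centrifugal component.

    On a sphere the centrifugal acceleration at latitude phi has magnitude
    omega^2 * R * cos(phi); only its component along the radial direction
    (a further factor of cos(phi)) reduces the measured gravity.
    """
    phi = math.radians(lat_deg)
    centrifugal_radial = OMEGA ** 2 * R * math.cos(phi) ** 2
    return G0 - centrifugal_radial

print(round(effective_gravity(0.0), 4))   # equator: maximum reduction
print(round(effective_gravity(90.0), 4))  # pole: no centrifugal reduction
```

The centrifugal reduction vanishes at the poles and is largest at the equator, about ω²R ≈ 0.034 m/s², which is why gravity and world (shape/rotation) models cannot be chosen independently.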
Gravity Modeling Effects on Surface-Interacting Vehicles in Supersonic Flight
NASA Technical Reports Server (NTRS)
Madden, Michael M.
2010-01-01
A vehicle simulation is "surface-interacting" if the state of the vehicle (position, velocity, and acceleration) relative to the surface is important. Surface-interacting simulations perform ascent, entry, descent, landing, surface travel, or atmospheric flight. The dynamics of surface-interacting simulations are influenced by the modeling of gravity. Gravity is the sum of gravitation and the centrifugal acceleration due to the world's rotation. Both components are functions of position relative to the world's center, and that position for a given set of geodetic coordinates (latitude, longitude, and altitude) depends on the world model (world shape and dynamics). Thus, gravity fidelity depends on the fidelities of the gravitation model and the world model and on the interaction of these two models. A surface-interacting simulation cannot treat gravitation separately from the world model. This paper examines the actual performance of different pairs of world and gravitation models (or direct gravity models) on the travel of a supersonic aircraft in level flight under various starting conditions.
The Comparative Performance of Conditional Independence Indices
ERIC Educational Resources Information Center
Kim, Doyoung; De Ayala, R. J.; Ferdous, Abdullah A.; Nering, Michael L.
2011-01-01
To realize the benefits of item response theory (IRT), one must have model-data fit. One facet of a model-data fit investigation involves assessing the tenability of the conditional item independence (CII) assumption. In this Monte Carlo study, the comparative performance of 10 indices for identifying conditional item dependence is assessed. The…
NASA Astrophysics Data System (ADS)
Menshikh, V.; Samorokovskiy, A.; Avsentev, O.
2018-03-01
A mathematical model is proposed for optimizing the allocation of resources to reduce the time required for management decisions, together with algorithms for solving the general resource-allocation problem. The optimization problem of choosing resources in organizational systems so as to reduce the total execution time of a job is solved. This is a complex three-level combinatorial problem, whose solution requires solving several specific subproblems: estimating the duration of each action as a function of the number of performers in the group that performs it; estimating the total execution time of all actions as a function of the quantitative composition of the groups of performers; and finding a distribution of the available pool of performers among groups that minimizes the total execution time of all actions. In addition, algorithms for solving the general resource-allocation problem are proposed.
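The third subproblem, distributing a fixed pool of performers among groups to minimize total execution time, admits a simple greedy sketch if one assumes each action's duration is its workload divided by its group size (an assumption for illustration; the paper's duration model is more general):

```python
def allocate(work, total_performers):
    """Greedily assign performers, one at a time, to the currently slowest action.

    Assumes duration = work / performers (perfect speedup) and that
    total_performers >= number of actions, so every action gets at least one.
    """
    staff = [1] * len(work)
    for _ in range(total_performers - len(work)):
        durations = [w / s for w, s in zip(work, staff)]
        staff[durations.index(max(durations))] += 1
    makespan = max(w / s for w, s in zip(work, staff))
    return staff, makespan

work = [12.0, 6.0, 3.0]            # effort of each action (arbitrary units)
staff, makespan = allocate(work, 7)
print(staff, makespan)
```

For the toy workloads above, the greedy rule equalizes the three durations, which is the hallmark of a makespan-minimizing allocation under the perfect-speedup assumption.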
An in-depth review of photovoltaic system performance models
NASA Technical Reports Server (NTRS)
Smith, J. H.; Reiter, L. R.
1984-01-01
The features, strong points and shortcomings of 10 numerical models commonly applied to assessing photovoltaic performance are discussed. The models range in capabilities from first-order approximations to full circuit level descriptions. Account is taken, at times, of the cell and module characteristics, the orientation and geometry, array-level factors, the power-conditioning equipment, the overall plant performance, operation and maintenance (O&M) effects, and site-specific factors. Areas of improvement and/or necessary extensions are identified for several of the models. Although the simplicity of a model was found not necessarily to affect the accuracy of the data generated, the choice of any one model was dependent on the application.
Jones, C Jessie; Rutledge, Dana N; Aquino, Jordan
2010-07-01
The purposes of this study were to determine whether people with and without fibromyalgia (FM) age 50 yr and above showed differences in physical performance and perceived functional ability and to determine whether age, gender, depression, and physical activity level altered the impact of FM status on these factors. Dependent variables included perceived function and 6 performance measures (multidimensional balance, aerobic endurance, overall functional mobility, lower body strength, and gait velocity-normal or fast). Independent (predictor) variables were FM status, age, gender, depression, and physical activity level. Results indicated significant differences between adults with and without FM on all physical-performance measures and perceived function. Linear-regression models showed that the contribution of significant predictors was in expected directions. All regression models were significant, accounting for 16-65% of variance in the dependent variables.
NASA Astrophysics Data System (ADS)
Nacif el Alaoui, Reda
Mechanical structure-property relations have been quantified for AISI 4140 steel under different strain rates and temperatures. The structure-property relations were used to calibrate a microstructure-based internal state variable plasticity-damage model for monotonic tension, compression and torsion plasticity, as well as damage evolution. Strong stress-state and temperature dependences were observed for the AISI 4140 steel. Tension tests on three different notched Bridgman specimens were undertaken to study the damage-triaxiality dependence for model validation purposes. Fracture surface analysis was performed using scanning electron microscopy (SEM) to quantify the void nucleation and void sizes in the different specimens. The stress-strain behavior exhibited a fairly large applied stress-state dependence (tension, compression, and torsion), a moderate temperature dependence, and a relatively small strain rate dependence.
Term Dependence: Truncating the Bahadur Lazarsfeld Expansion.
ERIC Educational Resources Information Center
Losee, Robert M., Jr.
1994-01-01
Studies the performance of probabilistic information retrieval systems using differing statistical dependence assumptions when estimating the probabilities inherent in the retrieval model. Experimental results using the Bahadur Lazarsfeld expansion on the Cystic Fibrosis database are discussed that suggest that incorporating term dependence…
Performance of distributed multiscale simulations
Borgdorff, J.; Ben Belgacem, M.; Bona-Casas, C.; Fazendeiro, L.; Groen, D.; Hoenen, O.; Mizeranschi, A.; Suter, J. L.; Coster, D.; Coveney, P. V.; Dubitzky, W.; Hoekstra, A. G.; Strand, P.; Chopard, B.
2014-01-01
Multiscale simulations model phenomena across natural scales using monolithic or component-based code, running on local or distributed resources. In this work, we investigate the performance of distributed multiscale computing of component-based models, guided by six multiscale applications with different characteristics and from several disciplines. Three modes of distributed multiscale computing are identified: supplementing local dependencies with large-scale resources, load distribution over multiple resources, and load balancing of small- and large-scale resources. We find that the first mode has the apparent benefit of increasing simulation speed, and the second mode can increase simulation speed if local resources are limited. Depending on resource reservation and model coupling topology, the third mode may result in a reduction of resource consumption. PMID:24982258
An EOQ model for Weibull distribution deterioration with time-dependent cubic demand and backlogging
NASA Astrophysics Data System (ADS)
Santhi, G.; Karthikeyan, K.
2017-11-01
In this article we introduce an economic order quantity model with Weibull deterioration and a time-dependent cubic demand rate, where the holding cost is a linear function of time. Shortages are allowed in the inventory system and are partially or fully backlogged. The objective of this model is to minimize the total inventory cost by finding the optimal order quantity and cycle length. The proposed model is illustrated by numerical examples, and a sensitivity analysis is performed to study the effect of changes in parameters on the optimum solutions.
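The optimization this abstract describes, choosing the cycle length that minimizes average cost, can be sketched numerically. The sketch below is not the paper's formulation: all parameter values are invented, shortages are ignored, and the inventory level is recovered by backward Euler integration of dI/dt = -D(t) - theta(t)I(t) with I(T) = 0.

```python
import numpy as np

# Invented parameters, for illustration only (not the paper's calibration)
A = 100.0                          # ordering cost per cycle
h0, h1 = 0.5, 0.1                  # linear holding cost h(t) = h0 + h1*t
cd = 2.0                           # cost per deteriorated unit
alpha, beta = 0.05, 2.0            # Weibull rate theta(t) = alpha*beta*t**(beta-1), beta >= 1
a, b, c, d = 50.0, 3.0, 1.0, 0.5   # cubic demand D(t) = a + b*t + c*t**2 + d*t**3

def demand(t):
    return a + b * t + c * t**2 + d * t**3

def theta(t):
    return alpha * beta * t ** (beta - 1.0)

def trapezoid(y, dt):
    return float(np.sum(0.5 * (y[:-1] + y[1:])) * dt)

def cost_per_unit_time(T, n=2000):
    """Average cycle cost for the no-shortage case. Inventory obeys
    dI/dt = -D(t) - theta(t)*I(t) with I(T) = 0; integrate backward for I(t)."""
    ts = np.linspace(T, 0.0, n + 1)            # backward in time
    dt = T / n
    I = np.zeros(n + 1)
    for k in range(n):
        t = ts[k]
        I[k + 1] = I[k] + dt * (demand(t) + theta(t) * I[k])
    I, ts = I[::-1], ts[::-1]                  # reorder to forward time
    Q = I[0]                                   # order quantity = initial inventory
    holding = trapezoid((h0 + h1 * ts) * I, dt)
    deteriorated = Q - trapezoid(demand(ts), dt)   # units lost to deterioration
    return (A + holding + cd * deteriorated) / T

Ts = np.linspace(0.2, 3.0, 57)
costs = [cost_per_unit_time(T) for T in Ts]
T_opt = float(Ts[int(np.argmin(costs))])
```

A coarse grid search stands in for a proper optimizer here; in practice one would refine around `T_opt` or use a scalar minimizer.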
Modulation transfer function cascade model for a sampled IR imaging system.
de Luca, L; Cardone, G
1991-05-01
The performance of the infrared scanning radiometer (IRSR) is strongly stressed in convective heat transfer applications where high spatial frequencies in the signal that describes the thermal image are present. The need to characterize more deeply the system spatial resolution has led to the formulation of a cascade model for the evaluation of the actual modulation transfer function of a sampled IR imaging system. The model can yield both the aliasing band and the averaged modulation response for a general sampling subsystem. For a line scan imaging system, which is the case of a typical IRSR, a rule of thumb that states whether the combined sampling-imaging system is either imaging-dependent or sampling-dependent is proposed. The model is tested by comparing it with other noncascade models as well as by ad hoc measurements performed on a commercial digitized IRSR.
PERFORMANCE AND ANALYSIS OF AQUIFER TESTS WITH IMPLICATIONS FOR CONTAMINANT TRANSPORT MODELING
The scale-dependence of dispersivity values used in contaminant transport models to estimate the spreading of contaminant plumes by hydrodynamic dispersion processes was investigated and found to be an artifact of conventional modeling approaches (especially, vertically averaged ...
NASA Astrophysics Data System (ADS)
Savitri, D.
2018-01-01
This article discusses a predator-prey model with anti-predator behavior in the intermediate predator, using ratio-dependent functional responses. Dynamical analysis of the model includes determination of equilibrium points, stability, and simulation. Three kinds of equilibrium points are discussed, namely the prey-extinction point, the intermediate-predator-extinction point, and the predator-extinction point, which exist under certain conditions. It can be shown that the results of the numerical simulations are in accordance with the analytical results.
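A minimal numerical illustration of a ratio-dependent system can be simulated as follows. This is a two-species sketch with invented coefficients; the paper's model adds an intermediate predator with anti-predator behavior on top of a structure like this.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Invented coefficients, for illustration only
r, K = 1.0, 10.0        # prey growth rate and carrying capacity
c, m = 0.8, 1.0         # capture rate and ratio half-saturation constant
d, e = 0.3, 0.5         # predator death rate and conversion efficiency

def rhs(t, z):
    x, y = z
    # ratio-dependent functional response: per-predator intake depends on x/y
    intake = c * x / (x + m * y)
    return [r * x * (1.0 - x / K) - intake * y,
            y * (e * intake - d)]

sol = solve_ivp(rhs, (0.0, 200.0), [5.0, 2.0], max_step=0.1)
x_end, y_end = sol.y[:, -1]
```

For these coefficients the interior equilibrium works out analytically to (x*, y*) = (8, 8/3), since e*c*x/(x+y) = d forces y = x/3, and the trajectory settles there; both extinction equilibria are avoided.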
1993-02-01
3.1.2. Modeling of Environment; 3.1.3. Ray Tracing and Radiosity; 3.2. Reflectivity Review. ... SIG modeling is dependent on proper treatment of its effects. 3.1.3 Ray Tracing and Radiosity: Prior to reviewing reflectivity, a brief look is made of methods of applying complex theoretical energy propagation algorithms. Two such methods are ray tracing and radiosity (Goral, et al., 1984). Ray tracing is a
Kumar, A.; Kalnaus, Sergiy; Simunovic, Srdjan; ...
2016-09-12
We performed finite element simulations of spherical indentation of Li-ion pouch cells. Our model fully resolves the different layers in the cell. The results of the layer-resolved models were compared to models available in the literature that treat the cell as an equivalent homogenized continuum material. Simulations were carried out for different sizes of the spherical indenter. Here, we show that calibration of a failure criterion for the cell in the homogenized model depends on the indenter size, whereas in the layer-resolved model, such dependency is greatly diminished.
(abstract) Simple Spreadsheet Thermal Models for Cryogenic Applications
NASA Technical Reports Server (NTRS)
Nash, A. E.
1994-01-01
Self-consistent circuit-analog thermal models that can be run in commercial spreadsheet programs on personal computers have been created to calculate the cooldown and steady-state performance of cryogen-cooled Dewars. The models include temperature-dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the Cryogenic Telescope Test Facility (CTTF). The facility will be on line in early 1995 for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm as well as a comparison of the model predictions and actual performance of this facility will be presented.
Simple Spreadsheet Thermal Models for Cryogenic Applications
NASA Technical Reports Server (NTRS)
Nash, Alfred
1995-01-01
Self-consistent circuit-analog thermal models that can be run in commercial spreadsheet programs on personal computers have been created to calculate the cooldown and steady-state performance of cryogen-cooled Dewars. The models include temperature-dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the SIRTF Telescope Test Facility (STTF). The facility has been brought on line for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm as well as a comparison between the models' predictions and actual performance of this facility will be presented.
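The circuit-analog idea in these two records amounts to a node equation that a spreadsheet evaluates row by row. A one-node sketch with invented Dewar parameters, a temperature-dependent conductance, and radiation to the cryogen bath (an explicit-Euler recursion, one "spreadsheet row" per time step):

```python
import numpy as np

# One-node circuit-analog cooldown model; all Dewar parameters are invented
sigma_sb = 5.670e-8        # Stefan-Boltzmann constant [W m^-2 K^-4]
T_sink = 77.0              # cryogen bath temperature [K]
C = 500.0                  # lumped heat capacity [J/K], held constant here
eps_A = 0.05 * 0.5         # emissivity x radiating area [m^2]

def G_cond(T):
    """Temperature-dependent conductance [W/K]; linear-in-T assumption."""
    return 0.02 * (T / 300.0)

dt = 10.0                  # time step [s]
T = 300.0
history = [T]
for _ in range(20000):     # roughly 56 hours of cooldown
    q = G_cond(T) * (T - T_sink) + sigma_sb * eps_A * (T**4 - T_sink**4)
    T -= dt * q / C        # heat flows to the bath, node temperature drops
    history.append(T)

T_final = history[-1]
```

The node temperature decays monotonically toward the bath temperature; a real Dewar model would chain several such nodes and use temperature-dependent heat capacity as well.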
Striving for success or addiction? Exercise dependence among elite Australian athletes.
McNamara, Justin; McCabe, Marita P
2012-01-01
Exercise dependence is a condition that involves a preoccupation and involvement with training and exercise, and has serious health and performance consequences for athletes. We examined the validity of a biopsychosocial model to explain the development and maintenance of exercise dependence among elite Australian athletes. Participants were 234 elite Australian athletes recruited from institutes and academies of sport. Thirty-four percent of elite athletes were classified as having exercise dependence based on high scores on the measure of exercise dependence. These athletes had a higher body mass index, and more extreme and maladaptive exercise beliefs compared to non-dependent athletes. They also reported higher pressure from coaches and teammates, and lower social support, compared to athletes who were not exercise dependent. These results support the utility of a biopsychosocial model of exercise dependence in understanding the aetiology of exercise dependence among elite athletes. Limitations of the study and future research directions are highlighted.
NASA Astrophysics Data System (ADS)
Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.
2018-05-01
Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. 
Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed hydrology. However, a thorough validation and a comparison with other methods are recommended before using the JBC method, since it may perform worse than the IBC method for some cases due to bias nonstationarity of climate model outputs.
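For contrast with the joint method, the univariate (IBC-style) step that treats each variable independently can be sketched as empirical quantile mapping. The data below are synthetic stand-ins, not climate model output, and this sketch deliberately leaves the precipitation-temperature correlation uncorrected, which is exactly the limitation the paper's JBC method addresses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: "model" precipitation is biased relative to "observations"
obs = rng.gamma(shape=2.0, scale=3.0, size=5000)
mod = rng.gamma(shape=2.0, scale=4.5, size=5000)

def quantile_map(x, model_ref, obs_ref):
    """Empirical quantile mapping: replace each model value by the observed
    value at the same empirical quantile (the univariate, IBC-style step)."""
    ranks = np.searchsorted(np.sort(model_ref), x, side="right") / len(model_ref)
    return np.quantile(obs_ref, np.clip(ranks, 0.0, 1.0))

corrected = quantile_map(mod, mod, obs)
bias_before = abs(mod.mean() - obs.mean())
bias_after = abs(corrected.mean() - obs.mean())
```

The mapping repairs the marginal distribution; a joint correction would additionally act on the copula linking the two variables.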
Aerodynamic Parameters of High Performance Aircraft Estimated from Wind Tunnel and Flight Test Data
NASA Technical Reports Server (NTRS)
Klein, Vladislav; Murphy, Patrick C.
1998-01-01
A concept of system identification applied to high performance aircraft is introduced, followed by a discussion of the identification methodology. Special emphasis is given to model postulation using time-invariant and time-dependent aerodynamic parameters, model structure determination, and parameter estimation using ordinary least squares and mixed estimation methods. At the same time, problems of data collinearity detection and its assessment are discussed. These parts of the methodology are demonstrated in examples using flight data of the X-29A and X-31A aircraft. In the third example, wind tunnel oscillatory data of the F-16XL model are used. A strong dependence of these data on frequency led to the development of models with unsteady aerodynamic terms in the form of indicial functions. The paper is completed by concluding remarks.
Comparison of algorithms to generate event times conditional on time-dependent covariates.
Sylvestre, Marie-Pierre; Abrahamowicz, Michal
2008-06-30
The Cox proportional hazards model with time-dependent covariates (TDC) is now a part of the standard statistical analysis toolbox in medical research. As new methods involving more complex modeling of time-dependent variables are developed, simulations could often be used to systematically assess the performance of these models. Yet, generating event times conditional on TDC requires well-designed and efficient algorithms. We compare two classes of such algorithms: permutational algorithms (PAs) and algorithms based on a binomial model. We also propose a modification of the PA to incorporate a rejection sampler. We performed a simulation study to assess the accuracy, stability, and speed of these algorithms in several scenarios. Both classes of algorithms generated data sets that, once analyzed, provided virtually unbiased estimates with comparable variances. In terms of computational efficiency, the PA with the rejection sampler reduced the time necessary to generate data by more than 50 per cent relative to alternative methods. The PAs also allowed more flexibility in the specification of the marginal distributions of event times and required less calibration.
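The binomial-model class of algorithms mentioned above reduces to a simple discrete-time scheme: at each time step, a subject experiences the event with a probability driven by the current covariate value. A sketch with an invented binary time-dependent covariate and invented hazard parameters (not the paper's simulation design):

```python
import numpy as np

rng = np.random.default_rng(42)

def generate_event_times(n, t_max, beta, base_hazard, rng):
    """Binomial-model generation: at each discrete step the event occurs with
    probability 1 - exp(-h0 * exp(beta * x_t)), where x_t is a binary TDC
    that switches on at a random (subject-specific) time."""
    times = np.empty(n, dtype=int)
    events = np.empty(n, dtype=int)
    for i in range(n):
        switch = rng.integers(1, t_max + 1)    # covariate turns on at this step
        times[i], events[i] = t_max, 0         # censored unless an event occurs
        for t in range(1, t_max + 1):
            x_t = 1.0 if t >= switch else 0.0
            p = 1.0 - np.exp(-base_hazard * np.exp(beta * x_t))
            if rng.random() < p:
                times[i], events[i] = t, 1
                break
    return times, events

times, events = generate_event_times(2000, 100, beta=1.0, base_hazard=0.01, rng=rng)
```

Fitting a time-dependent Cox model to such data should recover `beta` approximately; the permutational algorithms the paper compares take a different route, assigning covariate trajectories to pre-generated event times.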
Identification of phreatophytic groundwater dependent ecosystems using geospatial technologies
NASA Astrophysics Data System (ADS)
Perez Hoyos, Isabel Cristina
The protection of groundwater dependent ecosystems (GDEs) is increasingly being recognized as an essential aspect for the sustainable management and allocation of water resources. Ecosystem services are crucial for human well-being and for a variety of flora and fauna. However, the conservation of GDEs is only possible if knowledge about their location and extent is available. Several studies have focused on the identification of GDEs at specific locations using ground-based measurements. However, recent progress in technologies such as remote sensing and their integration with geographic information systems (GIS) has provided alternative ways to map GDEs at much larger spatial extents. This study is concerned with the discovery of patterns in geospatial data sets using data mining techniques for mapping phreatophytic GDEs in the United States at 1 km spatial resolution. A methodology to identify the probability of an ecosystem to be groundwater dependent is developed. Probabilities are obtained by modeling the relationship between the known locations of GDEs and main factors influencing groundwater dependency, namely water table depth (WTD) and aridity index (AI). A methodology is proposed to predict WTD at 1 km spatial resolution using relevant geospatial data sets calibrated with WTD observations. An ensemble learning algorithm called random forest (RF) is used in order to model the distribution of groundwater in three study areas: Nevada, California, and Washington, as well as in the entire United States. RF regression performance is compared with a single regression tree (RT). The comparison is based on contrasting training error, true prediction error, and variable importance estimates of both methods. Additionally, remote sensing variables are omitted from the process of fitting the RF model to the data to evaluate the deterioration in the model performance when these variables are not used as an input. 
Research results suggest that although the prediction accuracy of a single RT is reduced in comparison with RFs, single trees can still be used to understand the interactions that might be taking place between predictor variables and the response variable. Regarding RF, there is a great potential in using the power of an ensemble of trees for prediction of WTD. The superior capability of RF to accurately map water table position in Nevada, California, and Washington demonstrate that this technique can be applied at scales larger than regional levels. It is also shown that the removal of remote sensing variables from the RF training process degrades the performance of the model. Using the predicted WTD, the probability of an ecosystem to be groundwater dependent (GDE probability) is estimated at 1 km spatial resolution. The modeling technique is evaluated in the state of Nevada, USA to develop a systematic approach for the identification of GDEs and it is then applied in the United States. The modeling approach selected for the development of the GDE probability map results from a comparison of the performance of classification trees (CT) and classification forests (CF). Predictive performance evaluation for the selection of the most accurate model is achieved using a threshold independent technique, and the prediction accuracy of both models is assessed in greater detail using threshold-dependent measures. The resulting GDE probability map can potentially be used for the definition of conservation areas since it can be translated into a binary classification map with two classes: GDE and NON-GDE. These maps are created by selecting a probability threshold. It is demonstrated that the choice of this threshold has dramatic effects on deterministic model performance measures.
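The RF-versus-single-RT comparison described above can be reproduced in miniature with scikit-learn. The predictors and response below are invented stand-ins for the study's terrain and climate variables, not its actual data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic stand-ins for predictors of water table depth (WTD)
n = 3000
X = rng.uniform(0, 1, size=(n, 5))   # e.g. elevation, slope, aridity, NDVI, precip
wtd = (30 * X[:, 0]                  # dominant "elevation-like" effect
       + 10 * np.sin(6 * X[:, 1])    # nonlinear term a single split handles poorly
       + 8 * X[:, 2] * X[:, 3]       # interaction term
       + rng.normal(0, 2, n))        # observation noise

X_tr, X_te, y_tr, y_te = train_test_split(X, wtd, test_size=0.3, random_state=0)

tree = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)          # single RT
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

r2_tree = tree.score(X_te, y_te)     # unpruned tree overfits the noise
r2_forest = forest.score(X_te, y_te)
importances = forest.feature_importances_
```

As in the study, the ensemble gives better true prediction error while the single tree remains easier to inspect, and `feature_importances_` plays the role of the variable importance estimates discussed above.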
Agarwal, Rahul; Chen, Zhe; Kloosterman, Fabian; Wilson, Matthew A; Sarma, Sridevi V
2016-07-01
Pyramidal neurons recorded from the rat hippocampus and entorhinal cortex, such as place and grid cells, have diverse receptive fields, which are either unimodal or multimodal. Spiking activity from these cells encodes information about the spatial position of a freely foraging rat. At fine timescales, a neuron's spike activity also depends significantly on its own spike history. However, due to limitations of current parametric modeling approaches, it remains a challenge to estimate complex, multimodal neuronal receptive fields while incorporating spike history dependence. Furthermore, efforts to decode the rat's trajectory in one- or two-dimensional space from hippocampal ensemble spiking activity have mainly focused on spike history-independent neuronal encoding models. In this letter, we address these two important issues by extending a recently introduced nonparametric neural encoding framework that allows modeling both complex spatial receptive fields and spike history dependencies. Using this extended nonparametric approach, we develop novel algorithms for decoding a rat's trajectory based on recordings of hippocampal place cells and entorhinal grid cells. Results show that both encoding and decoding models derived from our new method performed significantly better than state-of-the-art encoding and decoding models on 6 minutes of test data. In addition, our model's performance remains invariant to the apparent modality of the neuron's receptive field.
Premium analysis for copula model: A case study for Malaysian motor insurance claims
NASA Astrophysics Data System (ADS)
Resti, Yulia; Ismail, Noriszura; Jaaman, Saiful Hafizah
2014-06-01
This study performs premium analysis for copula models with regression marginals. For illustration purposes, the copula models are fitted to the Malaysian motor insurance claims data. In this study, we consider copula models from the Archimedean and Elliptical families, and marginal distributions from Gamma and Inverse Gaussian regression models. The simulated results from the independent model, obtained by fitting regression models separately to each claim category, and the dependent model, obtained by fitting copula models to all claim categories, are compared. The results show that the dependent model using the Frank copula is the best model, since the risk premiums estimated under it most closely approximate the actual claims experience relative to the other copula models.
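The dependent-model construction (a copula over regression marginals) can be sketched by sampling a Frank copula via conditional inversion and pushing the uniforms through Gamma marginals. The copula parameter and the Gamma marginals below are illustrative choices, not values fitted to the Malaysian data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
theta = 5.0                     # Frank copula parameter (positive dependence), assumed

def frank_sample(n, theta, rng):
    """Sample (u, v) from a Frank copula by conditional inversion:
    solve dC/du = t for v given u, with t ~ Uniform(0, 1)."""
    u = rng.uniform(size=n)
    t = rng.uniform(size=n)
    a = np.exp(-theta * u)
    D = np.exp(-theta) - 1.0
    b = t * D / (a - t * (a - 1.0))
    v = -np.log1p(b) / theta
    return u, v

u, v = frank_sample(20000, theta, rng)

# Dependent claim severities for two hypothetical claim categories, with
# plain Gamma marginals standing in for the fitted regression marginals
claims_1 = stats.gamma.ppf(u, a=2.0, scale=500.0)
claims_2 = stats.gamma.ppf(v, a=3.0, scale=300.0)

rho = np.corrcoef(claims_1, claims_2)[0, 1]
premium_dependent = np.mean(claims_1 + claims_2)   # pure premium under dependence
```

In the independent model the same marginals would be sampled with independent uniforms; the pure premium of the sum is unchanged, but risk measures on the tail of `claims_1 + claims_2` differ, which is where the copula choice matters.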
Hug, T; Maurer, M
2012-01-01
Distributed (decentralized) wastewater treatment can, in many situations, be a valuable alternative to a centralized sewer network and wastewater treatment plant. However, it is critical for its acceptance whether the same overall treatment performance can be achieved without on-site staff, and whether its performance can be measured. In this paper we argue and illustrate that the system performance depends not only on the design performance and reliability of the individual treatment units, but also significantly on the monitoring scheme, i.e. on the reliability of the process information. For this purpose, we present a simple model of a fleet of identical treatment units. Thereby, their performance depends on four stochastic variables: the reliability of the treatment unit, the response time for the repair of failed units, the reliability of on-line sensors, and the frequency of routine inspections. The simulated scenarios show a significant difference between the true performance and the observations by the sensors and inspections. The results also illustrate the trade-off between investing in reactor and sensor technology and in human interventions in order to achieve a certain target performance. Modeling can quantify such effects and thereby support the identification of requirements for the centralized monitoring of distributed treatment units. The model approach is generic and can be extended and applied to various distributed wastewater treatment technologies and contexts.
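The paper's point that observed performance can diverge from true performance is easy to reproduce with a small Monte Carlo model of such a fleet; all rates below are invented, and the four stochastic variables correspond to failure rate, repair delay, sensor detection probability, and inspection frequency:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical fleet: units fail at random; failures are noticed either by an
# imperfect on-line sensor or at periodic routine inspections
n_units, n_days = 200, 365
p_fail = 0.01              # daily failure probability per working unit
p_detect = 0.7             # daily probability the sensor flags a failed unit
repair_delay = 2           # days from detection to completed repair
inspection_every = 30      # routine inspections catch every failure

state = np.ones(n_units, dtype=bool)     # True = treating properly
timer = np.zeros(n_units, dtype=int)     # countdown to repair completion
true_up, observed_up = [], []

for day in range(n_days):
    state[timer == 1] = True             # repairs scheduled earlier finish today
    timer[timer > 0] -= 1
    fails = state & (rng.random(n_units) < p_fail)
    state[fails] = False
    undetected = ~state & (timer == 0)
    noticed = undetected & ((rng.random(n_units) < p_detect)
                            | (day % inspection_every == 0))
    timer[noticed] = repair_delay
    true_up.append(state.mean())
    # the operator only "sees" failures that have been detected
    observed_up.append(1.0 - (timer > 0).mean())

true_perf = float(np.mean(true_up))
observed_perf = float(np.mean(observed_up))
```

Because undetected failed units still look healthy, the observed availability systematically overestimates the true one; raising `p_detect` or shortening `inspection_every` shrinks the gap at a cost, which is the trade-off the paper quantifies.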
Chen, Ran; Zhang, Yuntao; Sahneh, Faryad Darabi; Scoglio, Caterina M; Wohlleben, Wendel; Haase, Andrea; Monteiro-Riviere, Nancy A; Riviere, Jim E
2014-09-23
Quantitative characterization of nanoparticle interactions with their surrounding environment is vital for safe nanotechnological development and standardization. A recent quantitative measure, the biological surface adsorption index (BSAI), has demonstrated promising applications in nanomaterial surface characterization and biological/environmental prediction. This paper further advances the approach beyond the application of five descriptors in the original BSAI to address the concentration dependence of the descriptors, enabling better prediction of the adsorption profile and more accurate categorization of nanomaterials based on their surface properties. Statistical analysis on the obtained adsorption data was performed based on three different models: the original BSAI, a concentration-dependent polynomial model, and an infinite dilution model. These advancements in BSAI modeling showed a promising development in the application of quantitative predictive modeling in biological applications, nanomedicine, and environmental safety assessment of nanomaterials.
Marshall, Leon; Carvalheiro, Luísa G; Aguirre-Gutiérrez, Jesús; Bos, Merijn; de Groot, G Arjen; Kleijn, David; Potts, Simon G; Reemer, Menno; Roberts, Stuart; Scheper, Jeroen; Biesmeijer, Jacobus C
2015-10-01
Species distribution models (SDM) are increasingly used to understand the factors that regulate variation in biodiversity patterns and to help plan conservation strategies. However, these models are rarely validated with independently collected data and it is unclear whether SDM performance is maintained across distinct habitats and for species with different functional traits. Highly mobile species, such as bees, can be particularly challenging to model. Here, we use independent sets of occurrence data collected systematically in several agricultural habitats to test how the predictive performance of SDMs for wild bee species depends on species traits, habitat type, and sampling technique. We used a species distribution modeling approach parametrized for the Netherlands, with presence records from 1990 to 2010 for 193 Dutch wild bees. For each species, we built a Maxent model based on 13 climate and landscape variables. We tested the predictive performance of the SDMs with independent datasets collected from orchards and arable fields across the Netherlands from 2010 to 2013, using transect surveys or pan traps. Model predictive performance depended on species traits and habitat type. Occurrence of bee species specialized in habitat and diet was better predicted than generalist bees. Predictions of habitat suitability were also more precise for habitats that are temporally more stable (orchards) than for habitats that suffer regular alterations (arable), particularly for small, solitary bees. As a conservation tool, SDMs are best suited to modeling rarer, specialist species than more generalist and will work best in long-term stable habitats. The variability of complex, short-term habitats is difficult to capture in such models and historical land use generally has low thematic resolution. 
To improve SDMs' usefulness, models require explanatory variables and collection data that include detailed landscape characteristics, for example, variability of crops and flower availability. Additionally, testing SDMs with field surveys should involve multiple collection techniques.
Prior Design for Dependent Dirichlet Processes: An Application to Marathon Modeling
Pradier, Melanie F.; Ruiz, Francisco J. R.; Perez-Cruz, Fernando
2016-01-01
This paper presents a novel application of Bayesian nonparametrics (BNP) for marathon data modeling. We make use of two well-known BNP priors, the single-p dependent Dirichlet process and the hierarchical Dirichlet process, in order to address two different problems. First, we study the impact of age, gender and environment on the runners’ performance. We derive a fair grading method that allows direct comparison of runners regardless of their age and gender. Unlike current grading systems, our approach is based not only on top world records, but on the performances of all runners. The presented methodology for comparison of densities can be adopted in many other applications straightforwardly, providing an interesting perspective to build dependent Dirichlet processes. Second, we analyze the running patterns of the marathoners in time, obtaining information that can be valuable for training purposes. We also show that these running patterns can be used to predict finishing time given intermediate interval measurements. We apply our models to New York City, Boston and London marathons. PMID:26821155
Study on individual stochastic model of GNSS observations for precise kinematic applications
NASA Astrophysics Data System (ADS)
Próchniewicz, Dominik; Szpunar, Ryszard
2015-04-01
The proper definition of the mathematical positioning model, which comprises functional and stochastic models, is a prerequisite for obtaining the optimal estimation of unknown parameters. Especially important in this definition is realistic modelling of the stochastic properties of the observations, which are more receiver-dependent and time-varying than the deterministic relationships. This is particularly true for precise kinematic applications, which are characterized by weakened model strength. In this case, an incorrect or simplified definition of the stochastic model limits the performance of ambiguity resolution and the accuracy of position estimation. In this study we investigate methods of describing the measurement noise of GNSS observations and its impact on deriving a precise kinematic positioning model. In particular, stochastic modelling of individual components of the variance-covariance matrix of observation noise, performed using observations from a very short baseline and a laboratory GNSS signal generator, is analyzed. Experimental test results indicate that utilizing an individual stochastic model of observations, including elevation dependency and cross-correlation, instead of assuming that raw measurements are independent with the same variance, improves the performance of ambiguity resolution as well as rover positioning accuracy. This shows that the proposed stochastic assessment method could be an important part of a complex calibration procedure for GNSS equipment.
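A common empirical form of the elevation-dependent part of such a stochastic model is sigma(E)^2 = a^2 + b^2 / sin(E)^2, where low-elevation observations get larger variance and hence smaller weight. The coefficients below are illustrative, not the calibrated values from the study:

```python
import numpy as np

# Illustrative coefficients for an elevation-dependent noise model [m]
a, b = 0.003, 0.003

def sigma(elev_deg):
    """Observation noise as a function of satellite elevation angle."""
    e = np.radians(elev_deg)
    return np.sqrt(a**2 + b**2 / np.sin(e)**2)

elevations = np.array([10.0, 30.0, 60.0, 90.0])
sigmas = sigma(elevations)

# Diagonal weight matrix for the no-cross-correlation case; an individual
# stochastic model of the kind studied here would add off-diagonal terms
W = np.diag(1.0 / sigmas**2)
```

In a least-squares or Kalman-filter positioning solution, `W` (or its full variance-covariance generalization with cross-correlations) enters directly as the observation weight matrix.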
Collecting the chemical structures and data for necessary QSAR modeling is facilitated by available public databases and open data. However, QSAR model performance is dependent on the quality of data and modeling methodology used. This study developed robust QSAR models for physi...
Smeers, Inge; Decorte, Ronny; Van de Voorde, Wim; Bekaert, Bram
2018-05-01
DNA methylation is a promising biomarker for forensic age prediction. A challenge that has emerged in recent studies is the fact that prediction errors become larger with increasing age due to interindividual differences in epigenetic ageing rates. This phenomenon of non-constant variance or heteroscedasticity violates an assumption of the often used method of ordinary least squares (OLS) regression. The aim of this study was to evaluate alternative statistical methods that do take heteroscedasticity into account in order to provide more accurate, age-dependent prediction intervals. A weighted least squares (WLS) regression is proposed as well as a quantile regression model. Their performances were compared against an OLS regression model based on the same dataset. Both models provided age-dependent prediction intervals which account for the increasing variance with age, but WLS regression performed better in terms of success rate in the current dataset. However, quantile regression might be a preferred method when dealing with a variance that is not only non-constant, but also not normally distributed. Ultimately the choice of which model to use should depend on the observed characteristics of the data. Copyright © 2018 Elsevier B.V. All rights reserved.
Capturing tensile size-dependency in polymer nanofiber elasticity.
Yuan, Bo; Wang, Jun; Han, Ray P S
2015-02-01
As the name implies, tensile size-dependency refers to the size-dependent response under uniaxial tension. It differs markedly from bending size-dependency in terms of the onset and magnitude of the size-dependent response; the former begins earlier but rises to a smaller value than the latter. Experimentally, tensile size-dependent behavior is much harder to capture than its bending counterpart. This is also true in the computational effort; bending size-dependency models are more prevalent and well-developed. Indeed, many have questioned the existence of tensile size-dependency. However, recent experiments seem to support the existence of this phenomenon. Current strain gradient elasticity theories can accurately predict bending size-dependency but are unable to track tensile size-dependency. To rectify this deficiency, a higher-order strain gradient elasticity model is constructed by including the second gradient of the strain into the deformation energy. Tensile experiments involving 10 wt% polycaprolactone nanofibers are performed to calibrate and verify our model. The results reveal that for the selected nanofibers, size-dependency begins when their diameters reduce to 600 nm and below. Further, their characteristic length-scale parameter is found to be 1095.8 nm. Copyright © 2014 Elsevier Ltd. All rights reserved.
Performance analysis of wideband data and television channels. [space shuttle communications
NASA Technical Reports Server (NTRS)
Geist, J. M.
1975-01-01
Several aspects of space shuttle communications are discussed, including the return link (shuttle-to-ground) relayed through a satellite repeater (TDRS). The repeater exhibits nonlinear amplification and an amplitude-dependent phase shift. Models were developed for various link configurations, and computer simulation programs based on these models are described. Certain analytical results on system performance were also obtained. For the system parameters assumed, the results indicate approximately 1 dB degradation relative to a link employing a linear repeater. While this degradation is dependent upon the repeater, filter bandwidths, and modulation parameters used, the programs can accommodate changes to any of these quantities. Thus the programs can be applied to determine the performance with any given set of parameters, or used as an aid in link design.
NASA Astrophysics Data System (ADS)
Bellier, Joseph; Bontron, Guillaume; Zin, Isabella
2017-12-01
Meteorological ensemble forecasts are nowadays widely used as input of hydrological models for probabilistic streamflow forecasting. These forcings are frequently biased and have to be statistically postprocessed, using most of the time univariate techniques that apply independently to individual locations, lead times and weather variables. Postprocessed ensemble forecasts therefore need to be reordered so as to reconstruct suitable multivariate dependence structures. The Schaake shuffle and ensemble copula coupling are the two most popular methods for this purpose. This paper proposes two adaptations of them that make use of meteorological analogues for reconstructing spatiotemporal dependence structures of precipitation forecasts. Performances of the original and adapted techniques are compared through a multistep verification experiment using real forecasts from the European Centre for Medium-Range Weather Forecasts. This experiment evaluates not only multivariate precipitation forecasts but also the corresponding streamflow forecasts that derive from hydrological modeling. Results show that the relative performances of the different reordering methods vary depending on the verification step. In particular, the standard Schaake shuffle is found to perform poorly when evaluated on streamflow. This emphasizes the crucial role of the precipitation spatiotemporal dependence structure in hydrological ensemble forecasting.
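The standard Schaake shuffle evaluated in the paper reorders each margin of the postprocessed ensemble so that its rank structure matches a historical template. A compact sketch with synthetic data (the template plays the role of historical observations; dimensions and the covariance are invented):

```python
import numpy as np

rng = np.random.default_rng(5)

def schaake_shuffle(ensemble, template):
    """Reorder each margin (column) of a postprocessed ensemble so that its
    rank structure matches a historical template of the same shape."""
    out = np.empty_like(ensemble)
    for j in range(ensemble.shape[1]):
        ranks = np.argsort(np.argsort(template[:, j]))  # rank of each template row
        out[:, j] = np.sort(ensemble[:, j])[ranks]
    return out

# 20 members x 3 sites; template values are spatially correlated
cov = np.array([[1.0, 0.8, 0.6],
                [0.8, 1.0, 0.7],
                [0.6, 0.7, 1.0]])
template = rng.multivariate_normal(np.zeros(3), cov, size=20)
raw = rng.normal(size=(20, 3))      # postprocessed members, no dependence yet

shuffled = schaake_shuffle(raw, template)
```

The marginals are untouched (only the ordering across members changes), while the rank correlation between sites is inherited from the template; the paper's analog-based adaptations amount to choosing that template from meteorologically similar past dates rather than arbitrary ones.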
Smart Swarms of Bacteria-Inspired Agents with Performance Adaptable Interactions
Shklarsh, Adi; Ariel, Gil; Schneidman, Elad; Ben-Jacob, Eshel
2011-01-01
Collective navigation and swarming have been studied in animal groups, such as fish schools, bird flocks, bacteria, and slime molds. Computer modeling has shown that collective behavior of simple agents can result from simple interactions between the agents, which include short range repulsion, intermediate range alignment, and long range attraction. Here we study collective navigation of bacteria-inspired smart agents in complex terrains, with adaptive interactions that depend on performance. More specifically, each agent adjusts its interactions with the other agents according to its local environment – by decreasing the peers' influence while navigating in a beneficial direction, and increasing it otherwise. We show that inclusion of such performance dependent adaptable interactions significantly improves the collective swarming performance, leading to highly efficient navigation, especially in complex terrains. Notably, to afford such adaptable interactions, each modeled agent requires only simple computational capabilities with short-term memory, which can easily be implemented in simple swarming robots. PMID:21980274
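The performance-adaptable interaction rule can be illustrated with a minimal Vicsek-style update in which each agent blends its own gradient-climbing direction with the mean heading of its neighbours, and lowers the peers' weight when it is already navigating in a beneficial direction. All names, gains, and thresholds below are illustrative assumptions, not the authors' exact model:

```python
import numpy as np

def update_headings(pos, heading, local_grad, w_peer, radius=1.0,
                    gain=0.5, w_min=0.1, w_max=1.0):
    """One step of a swarm with performance-adaptable interactions.
    Each agent blends its own chemotactic direction (local_grad) with
    the mean heading of neighbours within `radius`, weighted by w_peer.
    Agents already climbing the gradient reduce w_peer; others keep or
    increase it."""
    n = len(pos)
    new_heading = np.empty_like(heading)
    new_w = np.empty_like(w_peer)
    for i in range(n):
        neigh = [j for j in range(n)
                 if j != i and np.linalg.norm(pos[j] - pos[i]) < radius]
        peer_dir = np.mean(heading[neigh], axis=0) if neigh else heading[i]
        own_dir = local_grad[i]
        # performance: positive when the current heading points up the gradient
        perf = float(np.dot(heading[i], own_dir))
        new_w[i] = np.clip(w_peer[i] - gain * perf, w_min, w_max)
        d = (1 - new_w[i]) * own_dir + new_w[i] * peer_dir
        norm = np.linalg.norm(d)
        new_heading[i] = d / norm if norm > 0 else heading[i]
    return new_heading, new_w
```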
On the performance of satellite precipitation products in riverine flood modeling: A review
NASA Astrophysics Data System (ADS)
Maggioni, Viviana; Massari, Christian
2018-03-01
This work is meant to summarize lessons learned on using satellite precipitation products for riverine flood modeling and to propose future directions in this field of research. Firstly, the most common satellite precipitation products (SPPs) of the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) eras are reviewed. Secondly, we discuss the main errors and uncertainty sources in these datasets that have the potential to affect streamflow and runoff model simulations. Thirdly, past studies that focused on using SPPs for predicting streamflow and runoff are analyzed. As the impact of floods depends not only on the characteristics of the flood itself, but also on the characteristics of the region (population density, land use, geophysical and climatic factors), a regional analysis is required to assess the performance of hydrologic models in monitoring and predicting floods. The performance of SPP-forced hydrological models was shown to depend largely on several factors, including precipitation type, seasonality, hydrological model formulation, and topography. Across several basins around the world, the bias in SPPs was recognized as a major issue, and bias correction methods of different complexity were shown to significantly reduce streamflow errors. Model re-calibration was also raised as a viable option to improve SPP-forced streamflow simulations, but caution is necessary when recalibrating models with SPPs, which may result in unrealistic parameter values. From a general standpoint, there is significant potential for using satellite observations in flood forecasting, but the performance of SPPs in hydrological modeling is still inadequate for operational purposes.
SHER: a colored petri net based random mobility model for wireless communications.
Khan, Naeem Akhtar; Ahmad, Farooq; Khan, Sher Afzal
2015-01-01
In wireless network research, simulation is the most important technique for investigating and validating a network's behavior. Wireless networks typically consist of mobile hosts; therefore, the degree of validation is influenced by the underlying mobility model, and synthetic models are implemented in simulators because real-life traces are not widely available. In wireless communications, mobility is an integral part, and the key role of a mobility model is to mimic real-life traveling patterns. The performance of routing protocols and mobility management strategies, e.g., paging, registration, and handoff, is highly dependent on the selected mobility model. In this paper, we devise and evaluate Show Home and Exclusive Regions (SHER), a novel two-dimensional (2-D) Colored Petri net (CPN) based formal random mobility model, which exhibits the sociological behavior of a user. The model captures hotspots where a user frequently visits and spends time. Our solution eliminates six key issues of random mobility models, i.e., sudden stops, memoryless movements, border effect, temporal dependency of velocity, pause time dependency, and speed decay, in a single model. The proposed model is able to predict the future location of a mobile user and ultimately improves the performance of wireless communication networks. The model follows a uniform nodal distribution and is a mini simulator which exhibits interesting mobility patterns. The model is also helpful to those who are not familiar with formal modeling, as users can extract meaningful information with a single mouse-click. It is noteworthy that capturing dynamic mobility patterns through CPN is the most challenging and demanding part of the presented research. Statistical and reachability analysis techniques are presented to elucidate and validate the performance of our proposed mobility model.
The state space methods allow us to algorithmically derive the system behavior and rectify the errors of our proposed model.
Primary Blast Injury Criteria for Animal/Human TBI Models using Field Validated Shock Tubes
2017-09-01
...differential pathological response, which depends on the local tissue composition; the response to insult depends upon the cell type and region...Neuroinflammation: a single blast induces a cell-type-dependent increase in NADPH oxidase isoforms. We have performed characterization of the spatial variations and...uniformly distribute and affect the whole brain. However, pathophysiological outcomes (e.g., NOX changes) in response to bTBI depend on the differential
ERIC Educational Resources Information Center
Gordovil-Merino, Amalia; Guardia-Olmos, Joan; Pero-Cebollero, Maribel
2012-01-01
In this paper, we used simulations to compare the performance of classical and Bayesian estimations in logistic regression models using small samples. In the performed simulations, conditions were varied, including the type of relationship between independent and dependent variable values (i.e., unrelated and related values), the type of variable…
Model Selection in Systems Biology Depends on Experimental Design
Silk, Daniel; Kirk, Paul D. W.; Barnes, Chris P.; Toni, Tina; Stumpf, Michael P. H.
2014-01-01
Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis. PMID:24922483
Antunes, Gabriela; Faria da Silva, Samuel F; Simoes de Souza, Fabio M
2018-06-01
Mirror neurons fire action potentials both when the agent performs a certain behavior and when it watches someone performing a similar action. Here, we present an original mirror neuron model based on spike-timing-dependent plasticity (STDP) between two morpho-electrical models of neocortical pyramidal neurons. Both neurons fired spontaneously at a basal rate following a Poisson distribution, and the STDP between them was modeled by the triplet algorithm. Our simulation results demonstrated that STDP is sufficient for the rise of mirror neuron function between the pairs of neocortical neurons. This is a proof of concept that pairs of neocortical neurons associating sensory inputs to motor outputs could operate like mirror neurons. In addition, we used the mirror neuron model to investigate whether channelopathies associated with autism spectrum disorder could impair the modeled mirror function. Our simulation results showed that impaired hyperpolarization-activated cationic currents (Ih) affected the mirror function between the pairs of neocortical neurons coupled by STDP.
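The core dependence of STDP on relative spike timing can be sketched with the classic pair-based rule; note this is a simplification of the triplet algorithm the study actually used, which adds terms depending on spike triplets, and the amplitudes and time constants below are illustrative:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for a spike-time difference
    dt = t_post - t_pre (ms): potentiation when the presynaptic spike
    precedes the postsynaptic one, depression otherwise."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)    # pre before post
    return -a_minus * math.exp(dt / tau_minus)      # post before pre
```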
A local structure model for network analysis
Casleton, Emily; Nordman, Daniel; Kaiser, Mark
2017-04-01
The statistical analysis of networks is a popular research topic with ever widening applications. Exponential random graph models (ERGMs), which specify a model through interpretable, global network features, are common for this purpose. In this study we introduce a new class of models for network analysis, called local structure graph models (LSGMs). In contrast to an ERGM, a LSGM specifies a network model through local features and allows for an interpretable and controllable local dependence structure. In particular, LSGMs are formulated by a set of full conditional distributions for each network edge, e.g., the probability of edge presence/absence, depending on neighborhoods of other edges. Additional model features are introduced to aid in specification and to help alleviate a common issue (occurring also with ERGMs) of model degeneracy. Finally, the proposed models are demonstrated on a network of tornadoes in Arkansas where a LSGM is shown to perform significantly better than a model without local dependence.
Skeletal muscle tensile strain dependence: hyperviscoelastic nonlinearity
Wheatley, Benjamin B; Morrow, Duane A; Odegard, Gregory M; Kaufman, Kenton R; Donahue, Tammy L Haut
2015-01-01
Introduction: Computational modeling of skeletal muscle requires characterization at the tissue level. While most skeletal muscle studies focus on hyperelasticity, the goal of this study was to examine and model the nonlinear behavior of both time-independent and time-dependent properties of skeletal muscle as a function of strain. Materials and Methods: Nine tibialis anterior muscles from New Zealand White rabbits were subjected to five consecutive stress relaxation cycles of roughly 3% strain. Individual relaxation steps were fit with a three-term linear Prony series. Prony series coefficients and the relaxation ratio were assessed for strain dependence using a general linear statistical model. A fully nonlinear constitutive model was employed to capture the strain dependence of both the viscoelastic and instantaneous components. Results: Instantaneous modulus (p<0.0005) and mid-range relaxation (p<0.0005) increased significantly with strain level, while relaxation at longer time periods decreased with strain (p<0.0005). Time constants and overall relaxation ratio did not change with strain level (p>0.1). Additionally, the fully nonlinear hyperviscoelastic constitutive model provided an excellent fit to experimental data, while other models which included linear components failed to capture muscle function as accurately. Conclusions: Material properties of skeletal muscle are strain-dependent at the tissue level. This strain dependence can be included in computational models of skeletal muscle performance with a fully nonlinear hyperviscoelastic model. PMID:26409235
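Fitting a relaxation step with a three-term Prony series can be sketched as a linear least-squares problem once the time constants are fixed (a common simplification; the study's actual fitting procedure and time constants may differ, and the data below are synthetic, not the rabbit measurements):

```python
import numpy as np

def fit_prony3(t, g_norm, taus=(0.5, 5.0, 50.0)):
    """Fit a normalized three-term Prony relaxation series
        G(t)/G0 = g_inf + sum_i g_i * exp(-t / tau_i)
    by linear least squares with fixed time constants taus.
    Returns (g_inf, g1, g2, g3)."""
    A = np.column_stack([np.ones_like(t)] +
                        [np.exp(-t / tau) for tau in taus])
    coef, *_ = np.linalg.lstsq(A, g_norm, rcond=None)
    return coef

# synthetic normalized relaxation curve with known coefficients
t = np.linspace(0.0, 100.0, 200)
g_norm = 0.4 + 0.3 * np.exp(-t / 0.5) + 0.2 * np.exp(-t / 5.0) \
             + 0.1 * np.exp(-t / 50.0)
g_inf, g1, g2, g3 = fit_prony3(t, g_norm)
relaxation_ratio = g_inf   # long-time plateau relative to peak stress
```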
Forster, Jeri E.; MaWhinney, Samantha; Ball, Erika L.; Fairclough, Diane
2011-01-01
Dropout is common in longitudinal clinical trials and when the probability of dropout depends on unobserved outcomes even after conditioning on available data, it is considered missing not at random and therefore nonignorable. To address this problem, mixture models can be used to account for the relationship between a longitudinal outcome and dropout. We propose a Natural Spline Varying-coefficient mixture model (NSV), which is a straightforward extension of the parametric Conditional Linear Model (CLM). We assume that the outcome follows a varying-coefficient model conditional on a continuous dropout distribution. Natural cubic B-splines are used to allow the regression coefficients to semiparametrically depend on dropout and inference is therefore more robust. Additionally, this method is computationally stable and relatively simple to implement. We conduct simulation studies to evaluate performance and compare methodologies in settings where the longitudinal trajectories are linear and dropout time is observed for all individuals. Performance is assessed under conditions where model assumptions are both met and violated. In addition, we compare the NSV to the CLM and a standard random-effects model using an HIV/AIDS clinical trial with probable nonignorable dropout. The simulation studies suggest that the NSV is an improvement over the CLM when dropout has a nonlinear dependence on the outcome. PMID:22101223
NON-HOMOGENEOUS POISSON PROCESS MODEL FOR GENETIC CROSSOVER INTERFERENCE.
Leu, Szu-Yun; Sen, Pranab K
2014-01-01
Genetic crossover interference is usually modeled with a stationary renewal process to construct the genetic map. We propose two non-homogeneous, dependent Poisson process models applied to the known physical map. The crossover process is assumed to start from an origin and to occur sequentially along the chromosome. The increment rate depends on the position of the markers and the number of crossover events occurring between the origin and the markers. We show how to obtain parameter estimates for the process and use simulation studies and real Drosophila data to examine the performance of the proposed models.
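A non-homogeneous Poisson process with a position-dependent rate can be simulated by Lewis-Shedler thinning; this sketch illustrates only the position dependence, not the paper's additional dependence on prior crossover counts, and the rate function is an illustrative assumption:

```python
import random

def nhpp_thinning(rate, t_max, rate_max, seed=0):
    """Simulate event positions of a non-homogeneous Poisson process on
    [0, t_max] via Lewis-Shedler thinning: draw candidates from a
    homogeneous process with intensity rate_max, accept each with
    probability rate(t)/rate_max.  Here t is position along the
    chromosome and rate(t) the local crossover intensity."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)       # next candidate position
        if t > t_max:
            return events
        if rng.random() < rate(t) / rate_max:
            events.append(t)

# increasing crossover intensity along a 100-unit chromosome (illustrative)
events = nhpp_thinning(lambda t: 0.05 + 0.001 * t, t_max=100.0, rate_max=0.2)
```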
NASA Astrophysics Data System (ADS)
Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.
2013-12-01
This presentation will focus on two barriers to progress in the hydrological modeling community, and on research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists, caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of a computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain-specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers depend on an entire distribution, possibly involving multiple compilers and special instructions that vary with the environment of the target machine. To solve these problems we have developed hashdist, a stateless package management tool, and a resulting portable, open source scientific software distribution.
A finite nonlinear hyper-viscoelastic model for soft biological tissues.
Panda, Satish Kumar; Buist, Martin Lindsay
2018-03-01
Soft tissues exhibit highly nonlinear, rate- and time-dependent stress-strain behaviour. Strain and strain rate dependencies are often modelled using a hyperelastic model and a discrete (standard linear solid) or continuous-spectrum (quasi-linear) viscoelastic model, respectively. However, these models are unable to properly capture the material's characteristics, because hyperelastic models are unsuited for time-dependent events, whereas the common viscoelastic models are insufficient for the nonlinear and finite-strain viscoelastic tissue responses. Convolution-integral-based models can demonstrate a finite viscoelastic response; however, their derivations are not consistent with the laws of thermodynamics. The aim of this work was to develop a three-dimensional finite hyper-viscoelastic model for soft tissues using a thermodynamically consistent approach. In addition, a nonlinear function, dependent on strain and strain rate, was adopted to capture the nonlinear variation of viscosity during a loading process. To demonstrate the efficacy and versatility of this approach, the model was used to recreate experimental results obtained on different types of soft tissues. In all cases, the simulation results matched the experimental data well (R² ≥ 0.99). Copyright © 2018 Elsevier Ltd. All rights reserved.
Nonparametric predictive inference for combining diagnostic tests with parametric copula
NASA Astrophysics Data System (ADS)
Muhammad, Noryanti; Coolen, F. P. A.; Coolen-Maturi, T.
2017-09-01
Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine and health care. The Receiver Operating Characteristic (ROC) curve is a popular statistical tool for describing the performance of diagnostic tests, and the area under the ROC curve (AUC) is often used as a measure of a test's overall performance. In this paper, we are interested in developing strategies for combining test results in order to increase diagnostic accuracy. We introduce nonparametric predictive inference (NPI) for combining two diagnostic test results while accounting for their dependence structure using a parametric copula. NPI is a frequentist statistical framework for inference on a future observation based on past data observations; it uses lower and upper probabilities to quantify uncertainty and is based on only a few modelling assumptions. A copula is a joint distribution function whose marginals are all uniformly distributed; it can be used to model the dependence separately from the marginal distributions. In this research, we estimate the copula density using a parametric method, namely maximum likelihood estimation (MLE). We investigate the performance of the proposed method on data sets from the literature and discuss how it performs for different families of copulas. Finally, we briefly outline related challenges and opportunities for future research.
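The AUC measure and the benefit of combining two test results can be illustrated with the empirical (Mann-Whitney) estimator on synthetic Gaussian scores; this naive sum-combination is only illustrative and is not the NPI-with-copula method of the paper:

```python
import random

def auc(healthy_scores, diseased_scores):
    """Empirical AUC: the probability that a randomly chosen diseased
    score exceeds a randomly chosen healthy one (ties count 1/2)."""
    wins = sum(1.0 if d > h else 0.5 if d == h else 0.0
               for d in diseased_scores for h in healthy_scores)
    return wins / (len(diseased_scores) * len(healthy_scores))

rng = random.Random(1)
# two test scores per subject (synthetic, illustrative data)
healthy = [(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(500)]
diseased = [(rng.gauss(1.0, 1.0), rng.gauss(1.0, 1.0)) for _ in range(500)]

auc_test1 = auc([h[0] for h in healthy], [d[0] for d in diseased])
auc_combined = auc([h[0] + h[1] for h in healthy],
                   [d[0] + d[1] for d in diseased])
```

Because the two (independent, here) tests carry complementary information, the combined score separates the groups better than either test alone; modeling the dependence between tests, as the paper does with a copula, matters when the scores are correlated.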
Parallel Processing in Face Perception
ERIC Educational Resources Information Center
Martens, Ulla; Leuthold, Hartmut; Schweinberger, Stefan R.
2010-01-01
The authors examined face perception models with regard to the functional and temporal organization of facial identity and expression analysis. Participants performed a manual 2-choice go/no-go task to classify faces, where response hand depended on facial familiarity (famous vs. unfamiliar) and response execution depended on facial expression…
Building and Verifying a Predictive Model of Interruption Resumption
2012-03-01
field, the vocal module speaks, the motor module moves the body, and the configural and manipulative modules perform spatial processing [14]–[16...person cannot remember themselves. As described earlier, the model depends critically upon the basic properties of declarative memories. When a...success because the model's ability to retrieve an episodic code depends critically on the amount of time spent on the interruption. Also recall that
DEPEND - A design environment for prediction and evaluation of system dependability
NASA Technical Reports Server (NTRS)
Goswami, Kumar K.; Iyer, Ravishankar K.
1990-01-01
The development of DEPEND, an integrated simulation environment for the design and dependability analysis of fault-tolerant systems, is described. DEPEND models both hardware and software components at a functional level, and allows automatic failure injection to assess system performance and reliability. It relieves the user of the work needed to inject failures, maintain statistics, and output reports. The automatic failure injection scheme is geared toward evaluating a system under high stress (workload) conditions. The failures that are injected can affect both hardware and software components. To illustrate the capability of the simulator, a distributed system which employs a prediction-based, dynamic load-balancing heuristic is evaluated. Experiments were conducted to determine the impact of failures on system performance and to identify the failures to which the system is especially susceptible.
Better Modeling of Electrostatic Discharge in an Insulator
NASA Technical Reports Server (NTRS)
Pekov, Mihail
2010-01-01
An improved mathematical model of the time dependence of the buildup or decay of electric charge in a high-resistivity (nominally insulating) material has been developed. The model is intended primarily for use in extracting the DC electrical resistivity of such a material from voltage-vs.-current measurements performed repeatedly on a sample of the material over a time comparable to the longest characteristic times (typically of the order of months) that govern the evolution of relevant properties of the material. This model is an alternative to a prior simplistic macroscopic model that yields results differing from the results of the time-dependent measurements by two to three orders of magnitude.
Cognitive Style and Educational Performance. The Case of Public Schools in Bogotá, Colombia
ERIC Educational Resources Information Center
Hederich-Martínez, Christian; Camargo-Uribe, Angela
2016-01-01
This study analyses the relationships among educational performance, field dependence-independence cognitive style and factors traditionally associated with performance and style, to build a comprehensive model of factors associated with the levels of education performance of students in Bogotá. A total of 3003 students, of grades 8 and 10, from…
Modeling the irradiance and temperature dependence of photovoltaic modules in PVsyst
Sauer, Kenneth J.; Roessler, Thomas; Hansen, Clifford W.
2014-11-10
In order to reliably simulate the energy yield of photovoltaic (PV) systems, it is necessary to have an accurate model of how the PV modules perform with respect to irradiance and cell temperature. Building on previous work that addresses the irradiance dependence, two approaches to fit the temperature dependence of module power in PVsyst have been developed and are applied here to recent multi-irradiance and multi-temperature data for a standard Yingli Solar PV module type. The results demonstrate that it is possible to match the measured irradiance and temperature dependence of PV modules in PVsyst. As a result, improvements in energy yield prediction using the optimized models relative to the PVsyst standard model are considered significant for decisions about project financing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Xingshu; Alam, Muhammad Ashraful; Raguse, John
2015-10-15
In this paper, we develop a physics-based compact model for copper indium gallium diselenide (CIGS) and cadmium telluride (CdTe) heterojunction solar cells that attributes the failure of superposition to voltage-dependent carrier collection in the absorber layer, and interprets light-enhanced reverse breakdown as a consequence of tunneling-assisted Poole-Frenkel conduction. The temperature dependence of the model is validated against both simulation and experimental data for the entire range of bias conditions. The model can be used to characterize device parameters, optimize new designs, and, most importantly, predict the performance and reliability of solar panels, including the effects of self-heating and reverse breakdown due to partial-shading degradation.
A new technique for thermodynamic engine modeling
NASA Astrophysics Data System (ADS)
Matthews, R. D.; Peters, J. E.; Beckel, S. A.; Shizhi, M.
1983-12-01
Reference is made to the equations given by Matthews (1983) for piston engine performance, which show that this performance depends on four fundamental engine efficiencies (combustion, thermodynamic cycle or indicated thermal, volumetric, and mechanical) as well as on engine operation and design parameters. This set of equations is seen to suggest a different technique for engine modeling; that is, that each efficiency should be modeled individually and the efficiency submodels then combined to obtain an overall engine model. A simple method for predicting the combustion efficiency of piston engines is therefore required. Various methods are proposed here and compared with experimental results. These combustion efficiency models are then combined with various models for the volumetric, mechanical, and indicated thermal efficiencies to yield three different engine models of varying degrees of sophistication. Comparisons are then made of the predictions of the resulting engine models with experimental data. It is found that combustion efficiency is almost independent of load, speed, and compression ratio and is not strongly dependent on fuel type, at least so long as the hydrogen-to-carbon ratio is reasonably close to that for isooctane.
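The decomposition of engine performance into a product of component efficiencies can be sketched as follows; the symbols and values are illustrative, not Matthews' exact equation set, which also involves volumetric efficiency and operating/design parameters:

```python
def brake_power(m_dot_fuel, lhv, eta_comb, eta_therm, eta_mech):
    """Brake power as a product of component efficiencies:
    P_b = m_dot_fuel * LHV * eta_combustion * eta_thermal * eta_mechanical.
    Each efficiency can be modeled by its own submodel and the submodels
    combined, as the modeling technique above suggests."""
    return m_dot_fuel * lhv * eta_comb * eta_therm * eta_mech

# e.g. 2 g/s of isooctane-like fuel (LHV ~ 44 MJ/kg), plausible efficiencies
p_brake = brake_power(0.002, 44e6, eta_comb=0.97, eta_therm=0.38, eta_mech=0.9)
```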
NASA Technical Reports Server (NTRS)
Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)
2001-01-01
Flow simulations using the time-dependent Navier-Stokes equations remain a challenge for several reasons. Principal among them are the difficulty of accurately modeling complex flows and the time needed to perform the computations. A parametric study of such complex problems is not considered practical due to the large cost associated with computing many time-dependent solutions. The computation time for each solution must be reduced in order to make a parametric study possible. With a successful reduction of computation time, the issues of accuracy and the appropriateness of turbulence models will become more tractable.
Modeling climate change impacts on water trading.
Luo, Bin; Maqsood, Imran; Gong, Yazhen
2010-04-01
This paper presents a new method of evaluating the impacts of climate change on the long-term performance of water trading programs, through designing an indicator to measure the mean periodic water volume that can be released by trading through a water-use system. The indicator is computed with a stochastic optimization model which can reflect the random uncertainty of water availability. The developed method was demonstrated in the Swift Current Creek watershed of Prairie Canada under two future scenarios simulated by a Canadian Regional Climate Model, in which total water availabilities under future scenarios were estimated using a monthly water balance model. Frequency analysis was performed to obtain the best probability distributions for both observed and simulated water quantity data. Results from the case study indicate that the performance of a trading system is highly scenario-dependent in a future climate, with trading effectiveness ranging from highly promising to undesirable across future scenarios. Trading effectiveness also largely depends on trading costs, with high costs resulting in failure of the trading program. (c) 2010 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, J.; Xue, X.
A comprehensive 3D CFD model is developed for a bi-electrode supported cell (BSC) SOFC. The model includes complicated transport phenomena of mass/heat transfer, charge (electron and ion) migration, and electrochemical reaction. The uniqueness of the modeling study is that functionally graded porous electrode properties are taken into account, including not only linear but also nonlinear porosity distributions. Extensive numerical analysis is performed to elucidate the effects of both porous microstructure distributions and operating conditions on cell performance. Results indicate that cell performance is strongly dependent on both operating conditions and the porous microstructure distributions of the electrodes. Using the proposed fuel/gas feeding design, a uniform hydrogen distribution within the porous anode is achieved; the oxygen distribution within the cathode depends on porous microstructure distributions as well as pressure loss conditions. Simulation results show that a fairly uniform temperature distribution can be obtained with the proposed fuel/gas feeding design. The modeling results can be employed to guide the experimental design of BSC tests and provide pre-experimental analysis, thereby circumventing the high cost associated with trial-and-error experimental design and setup.
Stimulus Dependence of Correlated Variability across Cortical Areas
Cohen, Marlene R.
2016-01-01
The way that correlated trial-to-trial variability between pairs of neurons in the same brain area (termed spike count or noise correlation, rSC) depends on stimulus or task conditions can constrain models of cortical circuits and of the computations performed by networks of neurons (Cohen and Kohn, 2011). In visual cortex, rSC tends not to depend on stimulus properties (Kohn and Smith, 2005; Huang and Lisberger, 2009) but does depend on cognitive factors like visual attention (Cohen and Maunsell, 2009; Mitchell et al., 2009). However, neurons across visual areas respond to any visual stimulus or contribute to any perceptual decision, and the way that information from multiple areas is combined to guide perception is unknown. To gain insight into these issues, we recorded simultaneously from neurons in two areas of visual cortex (primary visual cortex, V1, and the middle temporal area, MT) while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. Correlations across, but not within, areas depend on stimulus direction and the presence of a second stimulus, and attention has opposite effects on correlations within and across areas. This observed pattern of cross-area correlations is predicted by a normalization model where MT units sum V1 inputs that are passed through a divisive nonlinearity. Together, our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. SIGNIFICANCE STATEMENT Correlations in the responses of pairs of neurons within the same cortical area have been a subject of growing interest in systems neuroscience. However, correlated variability between different cortical areas is likely just as important. 
We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. The observed pattern of cross-area correlations was predicted by a simple normalization model. Our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. PMID:27413163
cellGPU: Massively parallel simulations of dynamic vertex models
NASA Astrophysics Data System (ADS)
Sussman, Daniel M.
2017-10-01
Vertex models represent confluent tissue by polygonal or polyhedral tilings of space, with the individual cells interacting via force laws that depend on both the geometry of the cells and the topology of the tessellation. This dependence on the connectivity of the cellular network introduces several complications to performing molecular-dynamics-like simulations of vertex models, and in particular makes parallelizing the simulations difficult. cellGPU addresses this difficulty and lays the foundation for massively parallelized, GPU-based simulations of these models. This article discusses its implementation for a pair of two-dimensional models, and compares the typical performance that can be expected between running cellGPU entirely on the CPU versus its performance when running on a range of commercial and server-grade graphics cards. By implementing the calculation of topological changes and forces on cells in a highly parallelizable fashion, cellGPU enables researchers to simulate time- and length-scales previously inaccessible via existing single-threaded CPU implementations.
Program Files doi: http://dx.doi.org/10.17632/6j2cj29t3r.1
Licensing provisions: MIT
Programming language: CUDA/C++
Nature of problem: Simulations of off-lattice "vertex models" of cells, in which the interaction forces depend on both the geometry and the topology of the cellular aggregate.
Solution method: Highly parallelized GPU-accelerated dynamical simulations in which the force calculations and the topological features can be handled on either the CPU or GPU.
Additional comments: The code is hosted at https://gitlab.com/dmsussman/cellGPU, with documentation additionally maintained at http://dmsussman.gitlab.io/cellGPUdocumentation
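As an illustration of the energy functional that 2D vertex models typically minimize, the following sketch evaluates the standard quadratic area-and-perimeter form for polygonal cells. The specific constants and the exact functional form are illustrative assumptions here; cellGPU's actual force laws may differ in detail.

```python
import numpy as np

def polygon_area_perimeter(verts):
    """Shoelace area and perimeter of a polygon given as an (n, 2) vertex array."""
    x, y = verts[:, 0], verts[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    area = 0.5 * abs(np.sum(x * yn - xn * y))
    perim = np.sum(np.hypot(xn - x, yn - y))
    return area, perim

def vertex_model_energy(cells, k_a=1.0, a0=1.0, k_p=0.1, p0=3.8):
    """Sum of quadratic area and perimeter penalties over all cells."""
    total = 0.0
    for verts in cells:
        a, p = polygon_area_perimeter(np.asarray(verts, dtype=float))
        total += k_a * (a - a0) ** 2 + k_p * (p - p0) ** 2
    return total

# A unit-square cell has area 1 (no area penalty) and perimeter 4,
# so only the perimeter term contributes: 0.1 * (4 - 3.8)**2
cell = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(vertex_model_energy([cell]))
```

Forces on each vertex follow as (minus) the gradient of this energy with respect to vertex positions, which is the computation cellGPU parallelizes alongside the topological rearrangements.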
Habeeb, Christine M; Eklund, Robert C; Coffee, Pete
2017-06-01
This study explored person-related sources of variance in athletes' efficacy beliefs and performances when performing in pairs with distinguishable roles differing in partner dependence. College cheerleaders (n = 102) performed their role in repeated performance trials of two low- and two high-difficulty paired-stunt tasks with three different partners. Data were obtained on self-, other-, and collective efficacy beliefs and subjective performances, and objective performance assessments were obtained from digital recordings. Using the social relations model framework, total variance in each belief/assessment was partitioned, for each role, into numerical components of person-related variance relative to the self, the other, and the collective. Variance component by performance role by task-difficulty repeated-measures analyses of variance revealed that the largest person-related variance component differed by athlete role and increased in size in high-difficulty tasks. Results suggest that the extent to which an athlete's performance depends on a partner relates to the extent to which the partner is a source of self-, other-, and collective efficacy beliefs.
Toselli, Italo; Korotkova, Olga
2015-06-01
We generalize a recently introduced model for a nonclassical turbulent spatial power spectrum involving anisotropy along two mutually orthogonal axes transverse to the direction of beam propagation by including two scale-dependent weighting factors for these directions. Such a turbulence model may be pertinent to atmospheric fluctuations in the refractive index in stratified regions well above the boundary layer, and may be employed for air-to-air communication channels. Restricting ourselves to an unpolarized, coherent Gaussian beam and a weak turbulence regime, we examine the effects of such a turbulence type on OOK FSO link performance through results on scintillation flux, probability of fade, SNR, and BERs.
NASA Astrophysics Data System (ADS)
Warwick, C. N.; Venkateshvaran, D.; Sirringhaus, H.
2015-09-01
We present measurements of the Seebeck coefficient in two high mobility organic small molecules, 2,7-dioctyl[1]benzothieno[3,2-b][1]benzothiophene (C8-BTBT) and 2,9-didecyl-dinaphtho[2,3-b:2',3'-f]thieno[3,2-b]thiophene (C10-DNTT). The measurements are performed in a field effect transistor structure with high field effect mobilities of approximately 3 cm2/V s. This allows us to observe both the charge concentration and temperature dependence of the Seebeck coefficient. We find a strong logarithmic dependence upon charge concentration and a temperature dependence within the measurement uncertainty. Despite performing the measurements on highly polycrystalline evaporated films, we see an agreement in the Seebeck coefficient with modelled values from Shi et al. [Chem. Mater. 26, 2669 (2014)] at high charge concentrations. We attribute deviations from the model at lower charge concentrations to charge trapping.
Hydrologic modeling strategy for the Islamic Republic of Mauritania, Africa
Friedel, Michael J.
2008-01-01
The government of Mauritania is interested in how to maintain hydrologic balance to ensure a long-term stable water supply for minerals-related, domestic, and other purposes. Because of the many complicating and competing natural and anthropogenic factors, hydrologists will perform quantitative analysis with specific objectives and relevant computer models in mind. Whereas various computer models are available for studying water-resource priorities, the success of these models to provide reliable predictions largely depends on adequacy of the model-calibration process. Predictive analysis helps us evaluate the accuracy and uncertainty associated with simulated dependent variables of our calibrated model. In this report, the hydrologic modeling process is reviewed and a strategy summarized for future Mauritanian hydrologic modeling studies.
Simulation-based performance analysis of EC-Earth 3.2.0 using Dimemas
NASA Astrophysics Data System (ADS)
Yepes Arbós, Xavier; César Acosta Cobos, Mario; Serradell Maronda, Kim; Sanchez Lorente, Alicia; Doblas Reyes, Francisco Javier
2017-04-01
Earth System Models (ESMs) are complex applications executed in supercomputing facilities due to their high demand for computing resources. However, not all of these models use resources efficiently, and their energy efficiency can be well below an acceptable minimum. One example is EC-Earth, a global coupled climate model which integrates different component models to simulate the Earth system. The two main components used in this analysis are IFS as the atmospheric model and NEMO as the ocean model, both coupled via the OASIS3-MCT coupler. Preliminary results showed that EC-Earth does not have good computational performance. For example, the model using the T255L91 grid with 512 MPI processes for IFS and the ORCA1L75 grid with 128 MPI processes for NEMO achieves a speedup of only 40.3, meaning that 81.2% of the resources are wasted. A performance analysis is therefore necessary to find the bottlenecks of the model and determine the most appropriate optimization techniques. Traces of the model collected with profiling tools such as Extrae, Paraver and Dimemas allow us to simulate the model's behaviour on a configurable parallel platform and extrapolate the impact of hardware changes on the performance of EC-Earth. In this document we propose a state-of-the-art procedure that makes it possible to evaluate the different characteristics of climate models in a very efficient way. Accordingly, we examine the performance of EC-Earth in three scenarios: an ideal machine, model sensitivity, and the limiting model due to coupling. By simulating these scenarios, we found that each model has different characteristics. With the ideal machine, we identified several sources of inefficiency: about 20.59% of the execution time is communication, and there are workload imbalances produced by data dependences both between IFS and NEMO and within each model.
In addition, in the model sensitivity simulations, we characterized the types of messages and detected data dependencies. In IFS, we observed that latency affects the coupling between models due to a large number of small communications, whereas bandwidth affects another region of the code with a few big messages. In NEMO, results show that the simulated latencies and bandwidths only slightly affect its execution time; however, NEMO has inefficiently resolved data dependencies and workload imbalances. The last simulation, performed to detect the slower model in the coupling, revealed that IFS is slower than NEMO. Moreover, there is not enough bandwidth to transfer all the data in IFS, whereas in NEMO there is almost no contention. This study is useful to improve the computational efficiency of the model, adapt it to support ultra-high resolution (UHR) experiments and future exascale supercomputers, and help code developers design new, more machine-independent algorithms.
Spatiotemporal Variation in Distance Dependent Animal Movement Contacts: One Size Doesn’t Fit All
Brommesson, Peter; Wennergren, Uno; Lindström, Tom
2016-01-01
The structure of contacts that mediate transmission has a pronounced effect on the outbreak dynamics of infectious disease, and simulation models are powerful tools to inform policy decisions. Most simulation models of livestock disease spread rely to some degree on predictions of animal movement between holdings. Typically, movements are more common between nearby farms than between those located far away from each other. Here, we assessed spatiotemporal variation in such distance dependence of animal movement contacts from an epidemiological perspective. We evaluated and compared nine statistical models, applied to Swedish movement data from 2008. The models differed in at what level (if at all) they accounted for regional and/or seasonal heterogeneities in the distance dependence of the contacts. Using a kernel approach to describe how the probability of contacts between farms changes with distance, we developed a hierarchical Bayesian framework and estimated parameters by using Markov Chain Monte Carlo techniques. We evaluated models by three different approaches to model selection. First, we used the Deviance Information Criterion to evaluate their performance relative to each other. Second, we estimated the log predictive posterior distribution, which was also used to evaluate their relative performance. Third, we performed posterior predictive checks by simulating movements with each of the parameterized models and evaluated their ability to recapture relevant summary statistics. Independent of selection criteria, we found that accounting for regional heterogeneity improved model accuracy. We also found that accounting for seasonal heterogeneity was beneficial, in terms of model accuracy, according to two of the three methods used for model selection. Our results have important implications for livestock disease spread models where movement is an important risk factor for between-farm transmission.
We argue that modelers should refrain from using methods to simulate animal movements that assume the same pattern across all regions and seasons without explicitly testing for spatiotemporal variation. PMID:27760155
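The kernel approach described above can be sketched as follows: the probability of a movement contact between two holdings decays with distance, and destinations are sampled in proportion to the kernel. The specific kernel form and all parameter values below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def contact_kernel(d, d0, alpha):
    """Relative contact probability at distance d (generalized decay kernel)."""
    return 1.0 / (1.0 + (d / d0) ** alpha)

def sample_destination(distances, d0, alpha, rng):
    """Pick a destination holding with probability proportional to the kernel."""
    w = contact_kernel(np.asarray(distances, dtype=float), d0, alpha)
    p = w / w.sum()
    return rng.choice(len(w), p=p)

rng = np.random.default_rng(0)
distances = [1.0, 10.0, 100.0]  # km from the source farm to three candidates
draws = [sample_destination(distances, d0=5.0, alpha=2.0, rng=rng)
         for _ in range(10_000)]
counts = np.bincount(draws, minlength=3)
print(counts)  # nearby holdings are chosen far more often than distant ones
```

Regional or seasonal heterogeneity, as tested in the paper, would correspond to letting d0 and alpha vary by region and/or season rather than being global constants.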
Interdisciplinary Research: Performance and Policy Issues.
ERIC Educational Resources Information Center
Rossini, Frederick A.; Porter, Alan L.
1981-01-01
Successful interdisciplinary research performance, it is suggested, depends on such structural and process factors as leadership, team characteristics, study bounding, iteration, communication patterns, and epistemological factors. Appropriate frameworks for socially organizing the development of knowledge such as common group learning, modeling,…
Spatial-temporal modeling of malware propagation in networks.
Chen, Zesheng; Ji, Chuanyi
2005-09-01
Network security is an important task of network management. One threat to network security is the propagation of malware (malicious software). One type of malware, known as topologically scanning malware, spreads based on topology information. The focus of this work is on modeling the spread of topological malware, which is important for understanding its potential damage and for developing countermeasures to protect the network infrastructure. Our model is motivated by probabilistic graphs, which have been widely investigated in machine learning. We first use a graphical representation to abstract the propagation of malware that employs different scanning methods. We then use a spatial-temporal random process to describe the statistical dependence of malware propagation in arbitrary topologies. As the spatial dependence is particularly difficult to characterize, the problem becomes how to use simple (i.e., biased) models to approximate the spatially dependent process. In particular, we propose the independent model and the Markov model as simple approximations. We conduct both theoretical analysis and extensive simulations on large networks, using both real measurements and synthesized topologies, to test the performance of the proposed models. Our results show that the independent model can capture temporal dependence and detailed topology information and thus outperforms the previous models, whereas the Markov model incorporates a certain spatial dependence and thus achieves a greater accuracy in characterizing both transient and equilibrium behaviors of malware propagation.
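A minimal sketch of the independent-model approximation on an arbitrary topology: each node tracks its probability of being infected, and neighbor states are treated as independent when computing the chance of escaping infection. The topology and infection rate below are toy assumptions; the paper's full spatial-temporal process is more detailed.

```python
import numpy as np

def independent_model_step(p, adj, beta):
    """One discrete-time step of the independent-approximation dynamics.

    p[i] is the probability that node i is infected. Under the independence
    assumption, node i escapes infection from all neighbors with probability
    prod_j (1 - beta * adj[i, j] * p[j]); once infected, a node stays infected.
    """
    escape = np.prod(1.0 - beta * adj * p[None, :], axis=1)
    return 1.0 - (1.0 - p) * escape

# Toy 4-node ring topology, infection seeded at node 0
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
p = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(5):
    p = independent_model_step(p, adj, beta=0.3)
print(p.round(3))  # infection probability spreads outward along the ring
```

The Markov model mentioned in the abstract refines this by conditioning on the state of one neighbor rather than assuming full independence.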
NASA Astrophysics Data System (ADS)
Koran, John J., Jr.; Koran, Mary Lou
In a study designed to explore the effects of teacher anxiety and modeling on acquisition of a science teaching skill and concomitant student performance, 69 preservice secondary teachers and 295 eighth grade students were randomly assigned to microteaching sessions. Prior to microteaching, teachers were given an anxiety test, then randomly assigned to one of three treatments: a transcript model, a protocol model, or a control condition. Subsequently, the performance of both teachers and students was assessed using written and behavioral measures. Analysis of variance indicated that subjects in the two modeling treatments significantly exceeded the performance of control group subjects on all measures of the dependent variable, with the protocol model being generally superior to the transcript model. The differential effects of the modeling treatments were further reflected in student performance. Regression analysis of aptitude-treatment interactions indicated that teacher anxiety scores interacted significantly with instructional treatments, with high-anxiety teachers performing best in the protocol modeling treatment. Again, this interaction was reflected in student performance, where students taught by highly anxious teachers performed significantly better when their teachers had received the protocol model. These results were discussed in terms of teacher concerns and a memory model of the effects of anxiety on performance.
Glackin, Brendan; Wall, Julie A.; McGinnity, Thomas M.; Maguire, Liam P.; McDaid, Liam J.
2010-01-01
Sound localization can be defined as the ability to identify the position of an input sound source and is considered a powerful aspect of mammalian perception. For low frequency sounds, i.e., in the range 270 Hz–1.5 kHz, the mammalian auditory pathway achieves this by extracting the Interaural Time Difference between sound signals being received by the left and right ear. This processing is performed in a region of the brain known as the Medial Superior Olive (MSO). This paper presents a Spiking Neural Network (SNN) based model of the MSO. The network model is trained using the Spike Timing Dependent Plasticity learning rule with experimentally observed Head Related Transfer Function data from an adult domestic cat. The results presented demonstrate how the proposed SNN model is able to perform sound localization with an accuracy of 91.82% when an error tolerance of ±10° is used. For angular resolutions down to 2.5°, it will be demonstrated how software based simulations of the model incur significant computation times. The paper thus also addresses preliminary implementation on a Field Programmable Gate Array based hardware platform to accelerate system performance. PMID:20802855
Air pollution simulations critically depend on the quality of the underlying meteorology. In phase 2 of the Air Quality Model Evaluation International Initiative (AQMEII-2), thirteen modeling groups from Europe and four groups from North America operating eight different regional...
The collection of chemical structures and associated experimental data for QSAR modeling is facilitated by the increasing number and size of public databases. However, the performance of QSAR models highly depends on the quality of the data used and the modeling methodology. The ...
NASA Astrophysics Data System (ADS)
Piotrowski, Adam P.; Napiorkowski, Jaroslaw J.
2018-06-01
A number of physical or data-driven models have been proposed to evaluate stream water temperatures based on hydrological and meteorological observations. However, physical models require a large amount of information that is frequently unavailable, while data-based models ignore the physical processes. Recently the air2stream model has been proposed as an intermediate alternative that is based on physical heat budget processes, but it is so simplified that the model may be applied like data-driven ones. However, the price for simplicity is the need to calibrate eight parameters that, although they have some physical meaning, cannot be measured or evaluated a priori. As a result, the applicability and performance of the air2stream model for a particular stream rely on the efficiency of the calibration method. The original air2stream model uses an inefficient 20-year-old approach called Particle Swarm Optimization with inertia weight. This study aims at finding an effective and robust calibration method for the air2stream model. Twelve different optimization algorithms are examined on six different streams from the northern USA (states of Washington, Oregon and New York), Poland and Switzerland, located in high mountain, hilly and lowland areas. It is found that the performance of the air2stream model depends significantly on the calibration method. Two algorithms lead to the best results for each considered stream. The air2stream model, calibrated with the chosen optimization methods, performs favorably against classical stream water temperature models. The MATLAB code of the air2stream model and the chosen calibration procedure (CoBiDE) are available as Supplementary Material on the Journal of Hydrology web page.
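The calibration task can be sketched as follows: choose parameters of a simplified stream-temperature model to minimize the error against observations, using a global optimizer. The four-parameter model below is a hypothetical stand-in (the real air2stream model has eight parameters and a heat-budget ODE form not reproduced here), and SciPy's differential evolution stands in for the twelve algorithms tested, which include CoBiDE.

```python
import numpy as np
from scipy.optimize import differential_evolution

def toy_stream_temp(params, t, t_air):
    """Hypothetical 4-parameter stand-in: linear air-temperature term plus seasonality."""
    a1, a2, a3, a4 = params
    return a1 + a2 * t_air + a3 * np.sin(2 * np.pi * t / 365.0 + a4)

def rmse(params, t, t_air, t_obs):
    """Calibration objective: root-mean-square error against observations."""
    return np.sqrt(np.mean((toy_stream_temp(params, t, t_air) - t_obs) ** 2))

# Synthetic "observations" generated from known parameters plus noise
rng = np.random.default_rng(1)
t = np.arange(365.0)
t_air = 10 + 12 * np.sin(2 * np.pi * t / 365.0) + rng.normal(0, 1, t.size)
true_params = (2.0, 0.8, 1.5, 0.3)
t_obs = toy_stream_temp(true_params, t, t_air) + rng.normal(0, 0.2, t.size)

bounds = [(-5, 5), (0, 2), (0, 5), (-np.pi, np.pi)]
result = differential_evolution(rmse, bounds, args=(t, t_air, t_obs), seed=2)
print(result.x.round(2), round(result.fun, 3))  # recovered parameters, final RMSE
```

The study's point is precisely that the choice of optimizer in this loop, not the model structure, can dominate the quality of the calibrated fit.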
NASA Astrophysics Data System (ADS)
Utecht, Manuel; Klamroth, Tillmann
2018-07-01
Hot localised charge carriers on the Si(111)-7×7 surface are modelled by small charged clusters. Such resonances induce non-local desorption, i.e. more than 10 nm away from the injection site, of chlorobenzene in scanning tunnelling microscope experiments. We recently used such a cluster model to characterise resonance localisation and vibrational activation for positive and negative resonances. In this work, we investigate to what extent the model depends on details of the cluster used or of the quantum chemistry methods, and try to identify the smallest possible cluster suitable for a description of the neutral surface and the ion resonances. Furthermore, a detailed analysis for different chemisorption orientations is performed. While some properties, such as estimates of the resonance energy or absolute values for atomic changes, show such a dependence, the main findings are very robust with respect to changes in the model and/or the chemisorption geometry.
Manual control of yaw motion with combined visual and vestibular cues
NASA Technical Reports Server (NTRS)
Zacharias, G. L.; Young, L. R.
1977-01-01
Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation was modelled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A correction to the frequency responses is provided by a separate measurement of manual control performance in an analogous visual pursuit nulling task. The resulting dual-input describing function for motion perception dependence on combined cue presentation supports the complementary model, in which vestibular cues dominate sensation at frequencies above 0.05 Hz. The describing function model is extended by the proposal of a non-linear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
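The complementary visual/vestibular model can be sketched with first-order low- and high-pass channels that sum to unity and cross over at 0.05 Hz, the frequency above which the abstract says vestibular cues dominate. The first-order form is an assumption for illustration; the paper's measured describing functions are richer.

```python
import numpy as np

F_CROSS = 0.05                      # crossover frequency in Hz (from the abstract)
W_C = 2 * np.pi * F_CROSS

def visual_channel(f):
    """Low-pass visual pathway: w_c / (s + w_c), evaluated at s = j*2*pi*f."""
    s = 1j * 2 * np.pi * np.asarray(f, dtype=float)
    return W_C / (s + W_C)

def vestibular_channel(f):
    """High-pass vestibular pathway: s / (s + w_c)."""
    s = 1j * 2 * np.pi * np.asarray(f, dtype=float)
    return s / (s + W_C)

f = np.array([0.01, 0.05, 0.5])
print(np.abs(vestibular_channel(f)).round(3))   # vestibular gain grows with frequency
print(np.allclose(visual_channel(f) + vestibular_channel(f), 1.0))  # complementary: sum is 1
```

At any frequency the two channels sum exactly to one, so the combined estimate passes self-rotation velocity without distortion while each cue dominates in its own band.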
Cummins, Megan A; Dalal, Pavan J; Bugana, Marco; Severi, Stefano; Sobie, Eric A
2014-03-01
Reverse rate dependence is a problematic property of antiarrhythmic drugs that prolong the cardiac action potential (AP). The prolongation caused by reverse rate dependent agents is greater at slow heart rates, resulting in both reduced arrhythmia suppression at fast rates and increased arrhythmia risk at slow rates. The opposite property, forward rate dependence, would theoretically overcome these parallel problems, yet forward rate dependent (FRD) antiarrhythmics remain elusive. Moreover, there is evidence that reverse rate dependence is an intrinsic property of perturbations to the AP. We have addressed the possibility of forward rate dependence by performing a comprehensive analysis of 13 ventricular myocyte models. By simulating populations of myocytes with varying properties and analyzing population results statistically, we simultaneously predicted the rate-dependent effects of changes in multiple model parameters. An average of 40 parameters were tested in each model, and effects on AP duration were assessed at slow (0.2 Hz) and fast (2 Hz) rates. The analysis identified a variety of FRD ionic current perturbations and generated specific predictions regarding their mechanisms. For instance, an increase in L-type calcium current is FRD when this is accompanied by indirect, rate-dependent changes in slow delayed rectifier potassium current. A comparison of predictions across models identified inward rectifier potassium current and the sodium-potassium pump as the two targets most likely to produce FRD AP prolongation. Finally, a statistical analysis of results from the 13 models demonstrated that models displaying minimal rate-dependent changes in AP shape have little capacity for FRD perturbations, whereas models with large shape changes have considerable FRD potential. This can explain differences between species and between ventricular cell types. 
Overall, this study provides new insights, both specific and general, into the determinants of AP duration rate dependence, and illustrates a strategy for the design of potentially beneficial antiarrhythmic drugs.
Mortimer, Duncan; Segal, Leonie
2005-01-01
To compare the performance of competing and complementary interventions for prevention or treatment of problem drinking and alcohol dependence. To provide an example of how health maximising decision-makers might use performance measures such as cost per quality adjusted life year (QALY) league tables to formulate an optimal package of interventions for problem drinking and alcohol dependence. A time-dependent state-transition model was used to estimate QALYs gained per person for each intervention as compared to usual care in the relevant target population. Cost per QALY estimates for each of the interventions fall below any putative funding threshold for developed economies. Interventions for problem drinkers appear to offer better value than interventions targeted at those with a history of severe physical dependence. Formularies such as Australia's Medicare should include a comprehensive package of interventions for problem drinking and alcohol dependence.
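A time-dependent state-transition (Markov cohort) QALY calculation of the kind described can be sketched as follows. The states, transition probabilities, utilities, and discount rate below are hypothetical illustrations, not the paper's estimates.

```python
import numpy as np

# States: 0 = problem drinking, 1 = low-risk drinking, 2 = dead (absorbing)
UTILITY = np.array([0.75, 0.90, 0.0])   # hypothetical QALY weights per state
DISCOUNT = 0.05                          # annual discount rate

def cohort_qalys(transition, start, cycles):
    """Discounted QALYs per person for a cohort run through a Markov model."""
    dist = np.asarray(start, dtype=float)
    total = 0.0
    for k in range(cycles):
        total += (dist @ UTILITY) / (1 + DISCOUNT) ** k  # QALYs accrued this cycle
        dist = dist @ transition                          # advance state occupancy
    return total

usual_care = np.array([[0.90, 0.05, 0.05],
                       [0.20, 0.75, 0.05],
                       [0.00, 0.00, 1.00]])
intervention = np.array([[0.70, 0.25, 0.05],   # more transitions to low-risk drinking
                         [0.10, 0.85, 0.05],
                         [0.00, 0.00, 1.00]])

start = [1.0, 0.0, 0.0]  # everyone begins as a problem drinker
gain = cohort_qalys(intervention, start, 20) - cohort_qalys(usual_care, start, 20)
print(round(gain, 3))  # incremental QALYs per person over 20 annual cycles
```

Dividing an intervention's incremental cost by this incremental QALY gain yields the cost-per-QALY figure used in the league tables the abstract describes.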
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaur, Amandeep; Deepshikha; Vinayak, Karan Singh
2016-07-15
We performed a theoretical investigation of different mass-asymmetric reactions to assess the direct impact of the density-dependent part of the symmetry energy on multifragmentation. The simulations are performed for a specific set of reactions having the same system mass and N/Z content, using the isospin-dependent quantum molecular dynamics model to estimate the quantitative dependence of fragment production on the mass-asymmetry factor (τ) for various symmetry energy forms. The dynamics associated with different mass-asymmetric reactions is explored and the direct role of the symmetry energy is checked. A comparison with the experimental data (asymmetric reaction) is also presented for different equations of state (symmetry energy forms).
NASA Astrophysics Data System (ADS)
Rose, Sean D.; Roth, Jacob; Zimmerman, Cole; Reiser, Ingrid; Sidky, Emil Y.; Pan, Xiaochuan
2018-03-01
In this work we investigate an efficient implementation of a region-of-interest (ROI) based Hotelling observer (HO) in the context of parameter optimization for detection of a rod signal at two orientations in linear iterative image reconstruction for digital breast tomosynthesis (DBT). Our preliminary results suggest that ROI-HO performance trends may be efficiently estimated by modeling only the 2D plane perpendicular to the detector and containing the X-ray source trajectory. In addition, the ROI-HO is seen to exhibit orientation-dependent trends in detectability as a function of the regularization strength employed in reconstruction. To further investigate the ROI-HO performance in larger 3D system models, we present and validate an iterative methodology for calculating the ROI-HO. Lastly, we present a real-data study investigating the correspondence between ROI-HO performance trends and signal conspicuity. Conspicuity of signals in real data reconstructions is seen to track well with trends in ROI-HO detectability. In particular, we observe orientation-dependent conspicuity matching the orientation-dependent detectability of the ROI-HO.
Navarro, Albert; Casanovas, Georgina; Alvarado, Sergio; Moriña, David
Researchers in public health are often interested in examining the effect of several exposures on the incidence of a recurrent event. The aim of the present study is to assess how well common-baseline hazard models estimate the effect of multiple exposures on the hazard of presenting an episode of a recurrent event, in the presence of event dependence and when the history of prior episodes is unknown or is not taken into account. Through a comprehensive simulation study, using specific-baseline hazard models as the reference, we evaluate the performance of common-baseline hazard models by means of several criteria: bias, mean squared error, coverage, mean confidence-interval length, and compliance with the assumption of proportional hazards. Results indicate that the bias worsens as event dependence increases, leading to a considerable overestimation of the exposure effect; coverage levels and compliance with the proportional hazards assumption are low or extremely low, worsening with increasing event dependence, effects to be estimated, and sample sizes. Common-baseline hazard models cannot be recommended when we analyse recurrent events in the presence of event dependence. It is important to have access to each subject's history of prior episodes, as this permits better estimation of the exposure effects. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
Segment-based acoustic models for continuous speech recognition
NASA Astrophysics Data System (ADS)
Ostendorf, Mari; Rohlicek, J. R.
1993-07-01
This research aims to develop new and more accurate stochastic models for speaker-independent continuous speech recognition, by extending previous work in segment-based modeling and by introducing a new hierarchical approach to representing intra-utterance statistical dependencies. These techniques, which are more costly than traditional approaches because of the large search space associated with higher order models, are made feasible through rescoring a set of HMM-generated N-best sentence hypotheses. We expect these different modeling techniques to result in improved recognition performance over that achieved by current systems, which handle only frame-based observations and assume that these observations are independent given an underlying state sequence. In the fourth quarter of the project, we have completed the following: (1) ported our recognition system to the Wall Street Journal task, a standard task in the ARPA community; (2) developed an initial dependency-tree model of intra-utterance observation correlation; and (3) implemented baseline language model estimation software. Our initial results on the Wall Street Journal task are quite good and represent significantly improved performance over most HMM systems reporting on the Nov. 1992 5k vocabulary test set.
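The N-best rescoring strategy described above reduces, at its core, to re-ranking a short hypothesis list under a combined score. The hypotheses, log-probabilities, and weights below are invented; in the paper the acoustic term would come from the segment model rather than an HMM.

```python
# Hypothetical N-best list: (hypothesis, acoustic log-prob, LM log-prob)
nbest = [
    ("sales fell in the third quarter", -120.4, -18.2),
    ("sails fell in the third quarter", -119.8, -24.9),
    ("sales fell in a third quarter",   -121.0, -19.5),
]

def rescore(hyps, lm_weight=8.0, word_penalty=-0.5):
    """Pick the hypothesis maximizing a weighted combination of
    acoustic score, language-model score, and a length penalty."""
    def score(h):
        text, acoustic, lm = h
        return acoustic + lm_weight * lm + word_penalty * len(text.split())
    return max(hyps, key=score)

best, _, _ = rescore(nbest)
```

Because only a fixed short list is rescored, even expensive higher-order models stay tractable, which is the point made above.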
Framework for assessing key variable dependencies in loose-abrasive grinding and polishing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, J.S.; Aikens, D.M.; Brown, N.J.
1995-12-01
This memo describes a framework for identifying all key variables that determine the figuring performance of loose-abrasive lapping and polishing machines. This framework is intended as a tool for prioritizing R&D issues, assessing the completeness of process models and experimental data, and for providing a mechanism to identify any assumptions in analytical models or experimental procedures. Future plans for preparing analytical models or performing experiments can refer to this framework in establishing the context of the work.
The Effect of Realistic Versus Imaginary Aggressive Models on Children's Interpersonal Play.
ERIC Educational Resources Information Center
Stone, Robert D.; Hapkiewicz, Walter G.
It was the purpose of this study to assess the effects of films on children, using a measure of interpersonal aggression. It was anticipated that modeling effects would depend simultaneously upon the degree of realism of the model's performance (on a reality-fantasy dimension) and the similarity between the observer's task and the model's…
Neural network submodel as an abstraction tool: relating network performance to combat outcome
NASA Astrophysics Data System (ADS)
Jablunovsky, Greg; Dorman, Clark; Yaworsky, Paul S.
2000-06-01
Simulation of Command and Control (C2) networks has historically emphasized individual system performance with little architectural context or credible linkage to `bottom-line' measures of combat outcomes. Renewed interest in modeling C2 effects and relationships stems from emerging network intensive operational concepts. This demands improved methods to span the analytical hierarchy between C2 system performance models and theater-level models. Neural network technology offers a modeling approach that can abstract the essential behavior of higher resolution C2 models within a campaign simulation. The proposed methodology uses off-line learning of the relationships between network state and campaign-impacting performance of a complex C2 architecture and then approximation of that performance as a time-varying parameter in an aggregated simulation. Ultimately, this abstraction tool offers an increased fidelity of C2 system simulation that captures dynamic network dependencies within a campaign context.
Practical Techniques for Modeling Gas Turbine Engine Performance
NASA Technical Reports Server (NTRS)
Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.
2016-01-01
The cost and risk associated with the design and operation of gas turbine engine systems has led to an increasing dependence on mathematical models. In this paper, the fundamentals of engine simulation will be reviewed, an example performance analysis will be performed, and relationships useful for engine control system development will be highlighted. The focus will be on thermodynamic modeling utilizing techniques common in industry, such as: the Brayton cycle, component performance maps, map scaling, and design point criteria generation. In general, these topics will be viewed from the standpoint of an example turbojet engine model; however, demonstrated concepts may be adapted to other gas turbine systems, such as gas generators, marine engines, or high bypass aircraft engines. The purpose of this paper is to provide an example of gas turbine model generation and system performance analysis for educational uses, such as curriculum creation or student reference.
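As a minimal instance of the thermodynamic cycle analysis described, the cold-air-standard Brayton model ties ideal thermal efficiency to the compressor pressure ratio alone; real engine models refine this with component maps, map scaling, and losses.

```python
def brayton_efficiency(pressure_ratio, gamma=1.4):
    """Ideal (cold-air-standard) Brayton-cycle thermal efficiency:
    eta = 1 - r_p**((1 - gamma) / gamma)."""
    return 1.0 - pressure_ratio ** ((1.0 - gamma) / gamma)

# A pressure ratio of 10 gives roughly 48% ideal efficiency
eta = brayton_efficiency(10.0)
```

The monotonic rise of `eta` with pressure ratio is one of the design-point relationships an engine performance analysis starts from.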
Braeye, Toon; Verheagen, Jan; Mignon, Annick; Flipse, Wim; Pierard, Denis; Huygen, Kris; Schirvel, Carole; Hens, Niel
2016-01-01
Introduction Surveillance networks are often neither exhaustive nor completely complementary. In such situations, capture-recapture methods can be used for incidence estimation. The choice of estimator and their robustness with respect to the homogeneity and independence assumptions are however not well documented. Methods We investigated the performance of five different capture-recapture estimators in a simulation study. Eight different scenarios were used to detect and combine case-information. The scenarios increasingly violated assumptions of independence of samples and homogeneity of detection probabilities. Belgian datasets on invasive pneumococcal disease (IPD) and pertussis provided motivating examples. Results No estimator was unbiased in all scenarios. Performance of the parametric estimators depended on how much of the dependency and heterogeneity were correctly modelled. Model building was limited by parameter estimability, availability of additional information (e.g. covariates) and the possibilities inherent to the method. In the most complex scenario, methods that allowed for detection probabilities conditional on previous detections estimated the total population size within a 20–30% error-range. Parametric estimators remained stable if individual data sources lost up to 50% of their data. The investigated non-parametric methods were more susceptible to data loss and their performance was linked to the dependence between samples; overestimating in scenarios with little dependence, underestimating in others. Issues with parameter estimability made it impossible to model all suggested relations between samples for the IPD and pertussis datasets. For IPD, the estimates for the Belgian incidence for cases aged 50 years and older ranged from 44 to 58/100,000 in 2010. The estimates for pertussis (all ages, Belgium, 2014) ranged from 24.2 to 30.8/100,000.
Conclusion We encourage the use of capture-recapture methods, but epidemiologists should preferably include datasets for which the underlying dependency structure is not too complex, a priori investigate this structure, compensate for it within the model and interpret the results with the remaining unmodelled heterogeneity in mind. PMID:27529167
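For the two-sample case, a standard parametric starting point is Chapman's estimator. The counts below are invented, not the Belgian IPD or pertussis data, and the estimator inherits exactly the independence and homogeneity assumptions the study shows to be fragile.

```python
def chapman(n1, n2, m):
    """Chapman's nearly unbiased two-source population size estimator.
    n1, n2: cases seen by each surveillance source; m: seen by both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Illustrative: two sources detect 150 and 120 cases, 60 appear in both
N_hat = chapman(150, 120, 60)
```

Dependence between sources shows up as an inflated or deflated overlap `m`, which is precisely why the abstract urges investigating the dependency structure before trusting such an estimate.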
Transforming RNA-Seq data to improve the performance of prognostic gene signatures.
Zwiener, Isabella; Frisch, Barbara; Binder, Harald
2014-01-01
Gene expression measurements have successfully been used for building prognostic signatures, i.e., for identifying a short list of important genes that can predict patient outcome. Mostly microarray measurements have been considered, and there is little advice available for building multivariable risk prediction models from RNA-Seq data. We specifically consider penalized regression techniques, such as the lasso and componentwise boosting, which can simultaneously consider all measurements and provide both multivariable regression models for prediction and automated variable selection. However, they might be affected by the typical skewness, mean-variance dependency or extreme values of RNA-Seq covariates and therefore could benefit from transformations of the latter. In an analytical part, we highlight preferential selection of covariates with large variances, which is problematic due to the mean-variance dependency of RNA-Seq data. In a simulation study, we compare different transformations of RNA-Seq data for potentially improving detection of important genes. Specifically, we consider standardization, the log transformation, a variance-stabilizing transformation, the Box-Cox transformation, and rank-based transformations. In addition, the prediction performance for real data from patients with kidney cancer and acute myeloid leukemia is considered. We show that signature size, identification performance, and prediction performance critically depend on the choice of a suitable transformation. Rank-based transformations perform well in all scenarios and can even outperform complex variance-stabilizing approaches. Generally, the results illustrate that the distribution and potential transformations of RNA-Seq data need to be considered as a critical step when building risk prediction models by penalized regression techniques.
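The mean-variance dependency and its removal by a rank-based transform can be demonstrated on simulated counts; the negative binomial settings below are illustrative, not the paper's simulation design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# 200 genes x 50 samples of RNA-Seq-like counts; for a negative binomial,
# variance grows roughly quadratically with the mean
means = rng.uniform(5.0, 500.0, size=200)
counts = rng.negative_binomial(n=5, p=5.0 / (5.0 + means[:, None]),
                               size=(200, 50))

def rank_transform(x):
    """Per-gene rank-based inverse-normal transform."""
    ranks = stats.rankdata(x, axis=1)
    return stats.norm.ppf(ranks / (x.shape[1] + 1))

log_x = np.log2(counts + 1)      # a common alternative transform
rank_x = rank_transform(counts)  # every gene ends up on the same scale
```

After the rank transform all genes share essentially the same variance, so a penalized regression no longer preferentially selects high-variance, high-mean genes.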
Analysis of EDZ Development of Columnar Jointed Rock Mass in the Baihetan Diversion Tunnel
NASA Astrophysics Data System (ADS)
Hao, Xian-Jie; Feng, Xia-Ting; Yang, Cheng-Xiang; Jiang, Quan; Li, Shao-Jun
2016-04-01
Due to the time dependency of crack propagation, columnar jointed rock masses exhibit marked time-dependent behaviour. In this study, in situ measurements, scanning electron microscopy (SEM), a back-analysis method and numerical simulations are used to study the time-dependent development of the excavation damaged zone (EDZ) around underground diversion tunnels in a columnar jointed rock mass. Through in situ measurements of crack propagation and EDZ development, their extent is seen to have increased over time, even after the advancing face has passed. Similar to creep behaviour, the time-dependent EDZ development curve also consists of three stages: a deceleration stage, a stabilization stage, and an acceleration stage. A corresponding constitutive model of columnar jointed rock mass considering time-dependent behaviour is proposed. The time-dependent degradation coefficients of the roughness coefficient and residual friction angle in the Barton-Bandis strength criterion are taken into account. An intelligent back-analysis method is adopted to obtain the unknown time-dependent degradation coefficients for the proposed constitutive model. The numerical modelling results are in good agreement with the measured EDZ. In addition, the failure pattern simulated by this time-dependent constitutive model is consistent with that observed by SEM and in situ observation, indicating that the model can accurately simulate the failure pattern and time-dependent EDZ development of columnar joints. Moreover, the effects of the support system provided and the in situ stress on the time-dependent coefficients are studied. Finally, a long-term stability analysis of diversion tunnels excavated in columnar jointed rock masses is performed.
Impaired spatial processing in a mouse model of fragile X syndrome.
Ghilan, Mohamed; Bettio, Luis E B; Noonan, Athena; Brocardo, Patricia S; Gil-Mohapel, Joana; Christie, Brian R
2018-05-17
Fragile X syndrome (FXS) is the most common form of inherited intellectual impairment. The Fmr1 -/y mouse model has been previously shown to have deficits in context discrimination tasks but not in the elevated plus-maze. To further characterize this FXS mouse model and determine whether hippocampal-mediated behaviours are affected in these mice, dentate gyrus (DG)-dependent spatial processing and Cornu ammonis 1 (CA1)-dependent temporal order discrimination tasks were evaluated. In agreement with previous findings of long-term potentiation deficits in the DG of this transgenic model of FXS, the results reported here demonstrate that Fmr1 -/y mice perform poorly in the DG-dependent metric change spatial processing task. However, Fmr1 -/y mice did not present deficits in the CA1-dependent temporal order discrimination task, and were able to remember the order in which objects were presented to them to the same extent as their wild-type littermate controls. These data suggest that the previously reported subregional-specific differences in hippocampal synaptic plasticity observed in the Fmr1 -/y mouse model may manifest as selective behavioural deficits in hippocampal-dependent tasks. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
Babcock, Chad; Finley, Andrew O.; Bradford, John B.; Kolka, Randall K.; Birdsey, Richard A.; Ryan, Michael G.
2015-01-01
Many studies and production inventory systems have shown the utility of coupling covariates derived from Light Detection and Ranging (LiDAR) data with forest variables measured on georeferenced inventory plots through regression models. The objective of this study was to propose and assess the use of a Bayesian hierarchical modeling framework that accommodates both residual spatial dependence and non-stationarity of model covariates through the introduction of spatial random effects. We explored this objective using four forest inventory datasets that are part of the North American Carbon Program, each comprising point-referenced measures of above-ground forest biomass and discrete LiDAR. For each dataset, we considered at least five regression model specifications of varying complexity. Models were assessed based on goodness of fit criteria and predictive performance using a 10-fold cross-validation procedure. Results showed that the addition of spatial random effects to the regression model intercept improved fit and predictive performance in the presence of substantial residual spatial dependence. Additionally, in some cases, allowing either some or all regression slope parameters to vary spatially, via the addition of spatial random effects, further improved model fit and predictive performance. In other instances, models showed improved fit but decreased predictive performance—indicating over-fitting and underscoring the need for cross-validation to assess predictive ability. The proposed Bayesian modeling framework provided access to pixel-level posterior predictive distributions that were useful for uncertainty mapping, diagnosing spatial extrapolation issues, revealing missing model covariates, and discovering locally significant parameters.
NASA Astrophysics Data System (ADS)
Aleksandrov, A. S.; Dolgih, G. V.; Kalinin, A. L.
2017-11-01
It is established that under repeated loads the process of plastic deformation in soils and discrete materials is hereditary. To model plastic deformation mathematically, the authors applied an integral equation whose solution yields power and logarithmic dependences connecting plastic deformation with the number of repeated loads, the parameters of the material, and the components of the stress tensor in the principal axes. It is shown that these dependences generalize a number of models proposed earlier in Russia and abroad. Based on the analysis of experimental data obtained during material testing in dynamic triaxial-compression devices at different values of the stress deviator, the coefficients in the proposed deformation models are determined. The authors also determined the application domains of the logarithmic and power-law dependences.
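The power dependence linking plastic strain to the number of load repetitions can be calibrated by ordinary least squares in log-log space. The coefficients and the noise-free synthetic data below are invented for illustration.

```python
import numpy as np

# Synthetic triaxial data following the power form eps_p = a * N**b
N = np.array([1e1, 1e2, 1e3, 1e4, 1e5])  # number of load repetitions
a_true, b_true = 0.12, 0.18
eps_p = a_true * N ** b_true

# log(eps_p) = log(a) + b*log(N): a straight line in log space
b_fit, log_a_fit = np.polyfit(np.log(N), np.log(eps_p), 1)
a_fit = float(np.exp(log_a_fit))
```

With real, noisy triaxial data the same fit gives least-squares estimates of the model coefficients at each stress-deviator level.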
Schwab, Joshua; Gruber, Susan; Blaser, Nello; Schomaker, Michael; van der Laan, Mark
2015-01-01
This paper describes a targeted maximum likelihood estimator (TMLE) for the parameters of longitudinal static and dynamic marginal structural models. We consider a longitudinal data structure consisting of baseline covariates, time-dependent intervention nodes, intermediate time-dependent covariates, and a possibly time-dependent outcome. The intervention nodes at each time point can include a binary treatment as well as a right-censoring indicator. Given a class of dynamic or static interventions, a marginal structural model is used to model the mean of the intervention-specific counterfactual outcome as a function of the intervention, time point, and possibly a subset of baseline covariates. Because the true shape of this function is rarely known, the marginal structural model is used as a working model. The causal quantity of interest is defined as the projection of the true function onto this working model. Iterated conditional expectation double robust estimators for marginal structural model parameters were previously proposed by Robins (2000, 2002) and Bang and Robins (2005). Here we build on this work and present a pooled TMLE for the parameters of marginal structural working models. We compare this pooled estimator to a stratified TMLE (Schnitzer et al. 2014) that is based on estimating the intervention-specific mean separately for each intervention of interest. The performance of the pooled TMLE is compared to the performance of the stratified TMLE and the performance of inverse probability weighted (IPW) estimators using simulations. Concepts are illustrated using an example in which the aim is to estimate the causal effect of delayed switch following immunological failure of first line antiretroviral therapy among HIV-infected patients. Data from the International Epidemiological Databases to Evaluate AIDS, Southern Africa are analyzed to investigate this question using both TML and IPW estimators. 
Our results demonstrate practical advantages of the pooled TMLE over an IPW estimator for working marginal structural models for survival, as well as cases in which the pooled TMLE is superior to its stratified counterpart. PMID:25909047
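As a drastically simplified, point-treatment illustration of the IPW idea the paper benchmarks against (the actual estimators are longitudinal, with right-censoring), weighting by the inverse propensity removes confounding that biases the naive contrast. All numbers below are simulated, and the propensity score is treated as known.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
L = rng.normal(size=n)                      # baseline confounder
pA = 1.0 / (1.0 + np.exp(-0.8 * L))         # treatment probability given L
A = rng.binomial(1, pA)                     # treatment node
Y = 1.0 * A + 2.0 * L + rng.normal(size=n)  # true causal effect of A is 1.0

# Naive contrast is confounded by L
naive = Y[A == 1].mean() - Y[A == 0].mean()

# Inverse probability weighted (Hajek-style) contrast
w1, w0 = A / pA, (1 - A) / (1 - pA)
ipw = (w1 * Y).sum() / w1.sum() - (w0 * Y).sum() / w0.sum()
```

The IPW contrast recovers the true effect while the naive contrast does not; TMLE-type estimators add a targeting step that also uses outcome regressions, improving efficiency and robustness.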
NASA Astrophysics Data System (ADS)
Yousefvand, Hossein Reza
2017-07-01
In this paper, a self-consistent numerical approach to study the temperature- and bias-dependent characteristics of mid-infrared (mid-IR) quantum cascade lasers (QCLs) is presented that integrates a number of quantum mechanical models. The field-dependent laser parameters, including the nonradiative scattering times, the detuning and energy levels, the escape activation energy, the backfilling excitation energy and the dipole moment of the optical transition, are calculated for a wide range of applied electric fields by a self-consistent solution of the Schrödinger-Poisson equations. A detailed analysis of the performance of the obtained structure is carried out within a self-consistent solution of the subband population rate equations coupled with carrier coherent transport equations through sequential resonant tunneling, taking into account the temperature and bias dependency of the relevant parameters. Furthermore, the heat transfer equation is included in order to calculate the carrier temperature inside the active region levels. This leads to a compact predictive model for analyzing the temperature- and electric-field-dependent characteristics of mid-IR QCLs, such as the light-current (L-I), electric field-current (F-I) and core temperature-electric field (T-F) curves. For a typical mid-IR QCL, good agreement was found between the simulated temperature-dependent L-I characteristic and experimental data, which confirms the validity of the model. It is found that the main characteristics of the device, such as output power and turn-on delay time, are degraded by the interplay between temperature and Stark effects.
ERIC Educational Resources Information Center
Angeli, Charoula
2013-01-01
An investigation was carried out to examine the effects of cognitive style on learners' performance and interaction during complex problem solving with a computer modeling tool. One hundred and nineteen undergraduates volunteered to participate in the study. Participants were first administered a test, and based on their test scores they were…
NASA Astrophysics Data System (ADS)
Anikin, A. S.
2018-06-01
Conditional statistical characteristics of the phase difference are considered as functions of the ratio of the instantaneous output-signal amplitudes of spatially separated, weakly directional antennas, within the normal field model for paths with radio-wave scattering. The dependences obtained are related to the physical processes on the radio-wave propagation path. The normal-model parameters are established at which the statistical characteristics of the phase difference depend on the ratio of the instantaneous amplitudes and hence can be used to measure the phase difference. Using Shannon's formula, the amount of information about the phase difference of the signals contained in the ratio of their amplitudes is calculated as a function of the parameters of the normal field model. Approaches are suggested to reduce the bias of the phase difference measured on paths with radio-wave scattering. A comparison with the results of computer simulation by the Monte Carlo method is performed.
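Quantifying, via Shannon's formula, how much information one observable carries about another can be illustrated with a histogram (plug-in) mutual information estimate. For checkability the pair below is jointly Gaussian, where I(X;Y) = -0.5*ln(1 - rho^2) is known in closed form; it is a stand-in for the (phase difference, amplitude ratio) pair, not the normal field model itself.

```python
import numpy as np

rng = np.random.default_rng(3)
rho, n = 0.6, 200_000
# Correlated Gaussian pair standing in for (phase difference, amplitude ratio)
x = rng.normal(size=n)
y = rho * x + np.sqrt(1.0 - rho ** 2) * rng.normal(size=n)

def mutual_information(x, y, bins=40):
    """Plug-in estimate of I(X;Y) in nats from a 2D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

mi_est = mutual_information(x, y)
mi_true = -0.5 * np.log(1.0 - rho ** 2)  # about 0.223 nats
```

A near-zero estimate would indicate the amplitude ratio is useless for phase-difference measurement; a clearly positive value supports using it, as argued above.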
Maximum Entropy Principle for Transportation
NASA Astrophysics Data System (ADS)
Bilich, F.; DaSilva, R.
2008-11-01
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
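A minimal doubly constrained trip-distribution computation in this spirit: seed a matrix with a deterrence factor playing the role of the dependence coefficients, then balance it to the origin and destination totals by iterative proportional fitting. All totals and costs below are invented.

```python
import numpy as np

origins = np.array([400.0, 600.0])        # trips produced at each origin
dests = np.array([300.0, 500.0, 200.0])   # trips attracted to each destination
cost = np.array([[1.0, 2.0, 3.0],
                 [2.0, 1.0, 2.0]])        # travel impedance i -> j
T = np.exp(-0.5 * cost)                   # seed: deterrence factor

# Iterative proportional fitting: alternately match row and column totals
for _ in range(100):
    T *= (origins / T.sum(axis=1))[:, None]
    T *= dests / T.sum(axis=0)
```

In the paper's dependence formulation the constraint information is absorbed into coefficients instead; the balanced matrix above is the standard constrained solution such a formulation must reproduce.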
Ji, Shiqi; Zheng, Sheng; Wang, Fei; ...
2017-07-06
The temperature-dependent characteristics of the third-generation 10-kV/20-A SiC MOSFET, including the static characteristics and switching performance, are investigated in this paper. The steady-state characteristics, including saturation current, output characteristics, antiparallel diode, and parasitic capacitance, are tested. A double pulse test platform is constructed, including a circuit breaker and a gate drive with >10-kV insulation, as well as a hotplate under the device under test for temperature-dependent characterization during switching transients. The switching performance is tested under various load currents and gate resistances at a 7-kV dc-link voltage from 25 °C to 125 °C and compared with previous 10-kV MOSFETs. A simple behavioral model with its parameter extraction method is proposed to predict the temperature-dependent characteristics of the 10-kV SiC MOSFET. The switching speed limitations, including the reverse recovery of the SiC MOSFET's body diode, overvoltage caused by stray inductance, crosstalk, heat sink, and electromagnetic interference to the control, are discussed based on simulations and experimental results.
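A behavioral model of the kind mentioned can be as simple as closed-form temperature scalings of a few device parameters. The functional forms and every coefficient below are illustrative assumptions, not values extracted from the 10-kV/20-A device.

```python
def rds_on(T_c, r25=0.35, alpha=1.6):
    """Hypothetical power-law scaling of on-resistance (ohms) with
    absolute junction temperature, normalized at 25 C."""
    return r25 * ((T_c + 273.15) / 298.15) ** alpha

def v_th(T_c, v25=6.0, k=-0.008):
    """Hypothetical linear threshold-voltage drift (V) with temperature."""
    return v25 + k * (T_c - 25.0)

# Conduction loss at a 20-A load rises with junction temperature
loss_25 = 20.0 ** 2 * rds_on(25.0)
loss_125 = 20.0 ** 2 * rds_on(125.0)
```

Parameter extraction then amounts to fitting `r25`, `alpha`, `v25`, and `k` to the measured 25-125 °C characterization sweeps.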
Laszlo, Sarah; Plaut, David C
2012-03-01
The Parallel Distributed Processing (PDP) framework has significant potential for producing models of cognitive tasks that approximate how the brain performs the same tasks. To date, however, there has been relatively little contact between PDP modeling and data from cognitive neuroscience. In an attempt to advance the relationship between explicit, computational models and physiological data collected during the performance of cognitive tasks, we developed a PDP model of visual word recognition which simulates key results from the ERP reading literature, while simultaneously being able to successfully perform lexical decision-a benchmark task for reading models. Simulations reveal that the model's success depends on the implementation of several neurally plausible features in its architecture which are sufficiently domain-general to be relevant to cognitive modeling more generally. Copyright © 2011 Elsevier Inc. All rights reserved.
Modeling size effects on the transformation behavior of shape memory alloy micropillars
NASA Astrophysics Data System (ADS)
Peraza Hernandez, Edwin A.; Lagoudas, Dimitris C.
2015-07-01
The size dependence of the thermomechanical response of shape memory alloys (SMAs) at the micro and nano-scales has gained increasing attention in the engineering community due to existing and potential uses of SMAs as solid-state actuators and components for energy dissipation in small scale devices. Particularly, their recent uses in microelectromechanical systems (MEMS) have made SMAs attractive options as active materials in small scale devices. One factor limiting further application, however, is the inability to effectively and efficiently model the observed size dependence of the SMA behavior for engineering applications. Therefore, in this work, a constitutive model for the size-dependent behavior of SMAs is proposed. Experimental observations are used to motivate the extension of an existing thermomechanical constitutive model for SMAs to account for the scale effects. It is proposed that such effects can be captured via characteristic length dependent material parameters in a power-law manner. The size dependence of the transformation behavior of NiFeGa micropillars is investigated in detail and used as model prediction cases. The constitutive model is implemented in a finite element framework and used to simulate and predict the response of SMA micropillars with different sizes. The results show a good agreement with experimental data. A parametric study performed using the calibrated model shows that the influence of micropillar aspect ratio and taper angle on the compression response is significantly smaller than that of the micropillar average diameter. It is concluded that the model is able to capture the size dependent transformation response of the SMA micropillars. In addition, the simplicity of the calibration and implementation of the proposed model make it practical for the design and numerical analysis of small scale SMA components that exhibit size dependent responses.
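The power-law, characteristic-length-dependent material parameters proposed in the paper can be sketched as follows. The functional form mirrors the description above, but `sigma_bulk`, `d0`, and the exponent are invented rather than the calibrated NiFeGa values.

```python
import numpy as np

def critical_stress(d_um, sigma_bulk=300.0, d0=2.0, m=0.8):
    """Transformation stress (MPa) growing as the micropillar diameter
    shrinks: sigma(d) = sigma_bulk * (1 + (d0/d)**m)."""
    return sigma_bulk * (1.0 + (d0 / d_um) ** m)

diameters = np.array([0.5, 1.0, 2.0, 5.0, 10.0])  # average diameters (um)
stresses = critical_stress(diameters)  # smaller pillars transform at higher stress
```

The bulk value is recovered in the large-diameter limit, consistent with the finding that average diameter, not aspect ratio or taper, dominates the size effect.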
Modeling Interval Temporal Dependencies for Complex Activities Understanding
2013-10-11
U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211. Subject terms: human activity modeling ... computer vision applications: human activity recognition and facial activity recognition. The results demonstrate the superior performance of the
Advances In High Temperature (Viscoelastoplastic) Material Modeling for Thermal Structural Analysis
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Saleeb, Atef F.
2005-01-01
High temperature applications demand high performance materials: 1) complex thermomechanical loading; 2) complex material response requiring time-dependent/hereditary models (viscoelastic/viscoplastic); and 3) comprehensive characterization (tensile, creep, relaxation) for a variety of material systems.
A Robust Geometric Model for Argument Classification
NASA Astrophysics Data System (ADS)
Giannone, Cristina; Croce, Danilo; Basili, Roberto; de Cao, Diego
Argument classification is the task of assigning semantic roles to syntactic structures in natural language sentences. Supervised learning techniques for frame semantics have recently been shown to benefit from rich sets of syntactic features. However, argument classification is also highly dependent on the semantics of the involved lexical items. Empirical studies have shown that domain dependence of lexical information causes large performance drops in out-of-domain tests. In this paper, a distributional approach is proposed to improve the robustness of the learning model against out-of-domain lexical phenomena.
The increasing number and size of public databases is facilitating the collection of chemical structures and associated experimental data for QSAR modeling. However, the performance of QSAR models is highly dependent not only on the modeling methodology, but also on the quality o...
Fault tree models for fault tolerant hypercube multiprocessors
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Tuazon, Jezus O.
1991-01-01
Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.
Movement ecology: size-specific behavioral response of an invasive snail to food availability.
Snider, Sunny B; Gilliam, James F
2008-07-01
Immigration, emigration, migration, and redistribution describe processes that involve movement of individuals. These movements are an essential part of contemporary ecological models, and understanding how movement is affected by biotic and abiotic factors is important for effectively modeling ecological processes that depend on movement. We asked how phenotypic heterogeneity (body size) and environmental heterogeneity (food resource level) affect the movement behavior of an aquatic snail (Tarebia granifera), and whether including these phenotypic and environmental effects improves advection-diffusion models of movement. We postulated various elaborations of the basic advection diffusion model as a priori working hypotheses. To test our hypotheses we measured individual snail movements in experimental streams at high- and low-food resource treatments. Using these experimental movement data, we examined the dependency of model selection on resource level and body size using Akaike's Information Criterion (AIC). At low resources, large individuals moved faster than small individuals, producing a platykurtic movement distribution; including size dependency in the model improved model performance. In stark contrast, at high resources, individuals moved upstream together as a wave, and body size differences largely disappeared. The model selection exercise indicated that population heterogeneity is best described by the advection component of movement for this species, because the top-ranked model included size dependency in advection, but not diffusion. Also, all probable models included resource dependency. Thus population and environmental heterogeneities both influence individual movement behaviors and the population-level distribution kernels, and their interaction may drive variation in movement behaviors in terms of both advection rates and diffusion rates. 
A behaviorally informed modeling framework will integrate the sentient response of individuals in terms of movement and enhance our ability to accurately model ecological processes that depend on animal movement.
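Model comparison of the kind described above can be sketched with Akaike's Information Criterion, which trades fit quality against parameter count. The log-likelihoods below are hypothetical, not values from the study:

```python
def aic(log_likelihood, n_params):
    # Akaike's Information Criterion: AIC = 2k - 2*ln(L); lower is better.
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits of two candidate movement models:
basic = aic(-120.4, 2)      # basic advection-diffusion model
size_adv = aic(-112.1, 3)   # adds size dependency in the advection term
best = "size-dependent advection" if size_adv < basic else "basic"
```

Here the extra parameter is justified: the improvement in log-likelihood outweighs the AIC penalty of 2 per parameter.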
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandt, Riley E.; Mangan, Niall M.; Li, Jian V.
2016-11-21
In novel photovoltaic absorbers, it is often difficult to assess the root causes of low open-circuit voltages, which may be due to bulk recombination or sub-optimal contacts. In the present work, we discuss the role of temperature- and illumination-dependent device electrical measurements in quantifying and distinguishing these performance losses - in particular, for determining bounds on interface recombination velocities, band alignment, and minority carrier lifetime. We assess the accuracy of this approach by direct comparison to photoelectron spectroscopy. Then, we demonstrate how more computationally intensive model parameter fitting approaches can draw more insights from this broad measurement space. We apply this measurement and modeling approach to high-performance III-V and thin-film chalcogenide devices.
Human and Organizational Effectiveness: A Total Spectrum Model.
1983-09-01
performance, commitment, and satisfaction; a phenomenon first detected during the well-known Hawthorne studies (Roethlisberger and Dickson 1939). Several ... typically concentrated on explaining one of three general types of behavioral criteria: (a) performance, (b) job or need satisfaction, and (c) ... situation, and that directly influences dependent or criterion variables such as performance, satisfaction, effectiveness, and morale. Of particular
Worst error performance of continuous Kalman filters. [for deep space navigation and maneuvers
NASA Technical Reports Server (NTRS)
Nishimura, T.
1975-01-01
The worst error performance of estimation filters for continuous systems is investigated in this paper. This pathological performance study, which assumes no dynamical model (such as a Markov process) for the perturbations beyond a bound on their amplitude, yields practical and dependable criteria for establishing navigation and maneuver strategy in deep space missions.
NASA Astrophysics Data System (ADS)
Jiang, L.; Luo, Y.; Yan, Y.; Hararuk, O.
2013-12-01
Mitigation of global change will depend on reliable projections of future conditions. As the major tools to predict future climate, Earth System Models (ESMs) used in the Coupled Model Intercomparison Project Phase 5 (CMIP5) for the IPCC Fifth Assessment Report have incorporated carbon cycle components, which account for the important fluxes of carbon between the ocean, atmosphere, and terrestrial biosphere carbon reservoirs, and are therefore expected to provide more detailed and more certain projections. However, ESMs are never perfect, and evaluating them can help to identify uncertainties in prediction and set priorities for model development. In this study, we benchmarked the carbon in live vegetation in terrestrial ecosystems simulated by 19 ESMs from CMIP5 against an observationally estimated data set of the global vegetation carbon pool, 'Olson's Major World Ecosystem Complexes Ranked by Carbon in Live Vegetation: An Updated Database Using the GLC2000 Land Cover Product' by Gibbs (2006). Our aim is to evaluate the ability of ESMs to reproduce the global vegetation carbon pool at different scales and to identify possible causes of the biases. We found that the performance of CMIP5 ESMs is strongly scale-dependent. CESM1-BGC, CESM1-CAM5, CESM1-FASTCHEM, CESM1-WACCM, NorESM1-M, and NorESM1-ME (which share the same model structure) produce global sums very similar to the observational data but usually perform poorly at the grid-cell and biome scales. In contrast, MIROC-ESM and MIROC-ESM-CHEM simulate best at the grid-cell and biome scales but show larger differences in global sums than the others. Our results will help improve CMIP5 ESMs for more reliable predictions.
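The scale dependence discussed above can be made concrete with a toy example: a model can reproduce the global sum exactly while failing badly cell by cell. The numbers here are invented for illustration only:

```python
import numpy as np

obs = np.array([10.0, 2.0, 8.0, 4.0])  # "observed" vegetation carbon per cell
mod = np.array([4.0, 8.0, 2.0, 10.0])  # model: identical total, wrong pattern

global_bias = float(mod.sum() - obs.sum())             # 0.0 at the global scale
grid_rmse = float(np.sqrt(np.mean((mod - obs) ** 2)))  # large at the grid scale
```

A benchmark that reports only the global sum would score this model perfectly; the grid-scale RMSE reveals that the spatial pattern is entirely wrong.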
Antonelli, Cristian; Mecozzi, Antonio; Shtaif, Mark; Winzer, Peter J
2015-02-09
Mode-dependent loss (MDL) is a major factor limiting the achievable information rate in multiple-input multiple-output space-division multiplexed systems. In this paper we show that its impact on system performance, which we quantify in terms of the capacity reduction relative to a reference MDL-free system, may depend strongly on the operation of the inline optical amplifiers. This dependency is particularly strong in low mode-count systems. In addition, we discuss ways in which the signal-to-noise ratio of the MDL-free reference system can be defined and quantify the differences in the predicted capacity loss. Finally, we stress the importance of correctly accounting for the effect of MDL on the accumulation of amplification noise.
Scattering of Acoustic Waves from Ocean Boundaries
2015-09-30
of buried mines and improve SONAR performance in shallow water. OBJECTIVES: 1) Determination of the correct physical model of acoustic propagation ... acoustic parameters in the ocean. APPROACH: 1) Finite element modeling for range-dependent waveguides: finite element modeling is applied to a ... roughness measurements for reverberation modeling. GLISTEN data provide insight into the role of biology in acoustic propagation and scattering
Experimental and theoretical characterization of an AC electroosmotic micromixer.
Sasaki, Naoki; Kitamori, Takehiko; Kim, Haeng-Boo
2010-01-01
We have reported on a novel microfluidic mixer based on AC electroosmosis. To elucidate the mixer characteristics, we performed detailed measurements of mixing under various experimental conditions including applied voltage, frequency and solution viscosity. The results are discussed through comparison with results obtained from a theoretical model of AC electroosmosis. As predicted from the theoretical model, we found that a larger voltage (approximately 20 V(p-p)) led to more rapid mixing, while the dependence of the mixing on frequency (1-5 kHz) was insignificant under the present experimental conditions. Furthermore, the dependence of the mixing on viscosity was successfully explained by the theoretical model, and the applicability of the mixer in viscous solution (2.83 mPa s) was confirmed experimentally. By using these results, it is possible to estimate the mixing performance under given conditions. These estimations can provide guidelines for using the mixer in microfluidic chemical analysis.
Tuckwell, H C; Hanslik, T; Valleron, A J; Flahault, A
2003-06-01
A mathematical model is described which determines the impact of a vaccination schedule on the time course of a certain class of diseases. The data are the demographic variables and parameters and age-dependent non-fatal and fatal case rates. Given the age- and time-dependent rates of vaccination, including coverage and corresponding efficacies, various schedules may be distinguished either by the absolute numbers of cases and deaths avoided or by the numbers of cases and deaths avoided per dose of vaccine. The model was applied to meningococcal serogroup C disease in France. The outcomes of six different vaccination schedules were examined. In absolute terms, a schedule in which all individuals aged between 2 and 20 years were vaccinated performed best; however, this schedule and one in which only 1-year-olds were vaccinated performed equally well, and best, in terms of cases prevented per dose, though not in lives saved per dose.
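The two ranking criteria used above (absolute cases avoided versus cases avoided per dose) can be sketched as follows; all numbers are illustrative placeholders, not results from the French study:

```python
# Hypothetical schedule outcomes (illustrative numbers only).
schedules = {
    "ages 2-20":   {"cases_avoided": 900, "doses": 15_000_000},
    "1-year-olds": {"cases_avoided": 45,  "doses": 750_000},
}

# Per-dose efficiency vs. absolute impact can rank schedules differently.
per_dose = {k: v["cases_avoided"] / v["doses"] for k, v in schedules.items()}
best_absolute = max(schedules, key=lambda k: schedules[k]["cases_avoided"])
```

In this toy setup the broad schedule dominates in absolute terms while both schedules tie on cases avoided per dose, mirroring the kind of trade-off the abstract describes.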
Temperature dependence of standard model CP violation.
Brauner, Tomáš; Taanila, Olli; Tranberg, Anders; Vuorinen, Aleksi
2012-01-27
We analyze the temperature dependence of CP violation effects in the standard model by determining the effective action of its bosonic fields, obtained after integrating out the fermions from the theory and performing a covariant gradient expansion. We find nonvanishing CP violating terms starting at the sixth order of the expansion, albeit only in the C-odd-P-even sector, with coefficients that depend on quark masses, Cabibbo-Kobayashi-Maskawa matrix elements, temperature and the magnitude of the Higgs field. The CP violating effects are observed to decrease rapidly with temperature, which has important implications for the generation of a matter-antimatter asymmetry in the early Universe. Our results suggest that the cold electroweak baryogenesis scenario may be viable within the standard model, provided the electroweak transition temperature is at most of order 1 GeV.
Cross-Dependency Inference in Multi-Layered Networks: A Collaborative Filtering Perspective.
Chen, Chen; Tong, Hanghang; Xie, Lei; Ying, Lei; He, Qing
2017-08-01
The increasingly connected world has catalyzed the fusion of networks from different domains, which facilitates the emergence of a new network model: multi-layered networks. Examples of such network systems include critical infrastructure networks, biological systems, organization-level collaborations, cross-platform e-commerce, and so forth. One crucial structure that distinguishes multi-layered networks from other network models is the cross-layer dependency, which describes the associations between the nodes from different layers. Needless to say, the cross-layer dependency in the network plays an essential role in many data mining applications like system robustness analysis and complex network control. However, it remains a daunting task to know the exact dependency relationships due to noise, limited accessibility, and so forth. In this article, we tackle the cross-layer dependency inference problem by modeling it as a collective collaborative filtering problem. Based on this idea, we propose an effective algorithm Fascinate that can reveal unobserved dependencies with linear complexity. Moreover, we derive Fascinate-ZERO, an online variant of Fascinate that can respond to a newly added node timely by checking its neighborhood dependencies. We perform extensive evaluations on real datasets to substantiate the superiority of our proposed approaches.
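Casting dependency inference as collaborative filtering typically means low-rank matrix completion over the observed dependencies. The following is a generic gradient-descent sketch of that idea, not the Fascinate algorithm itself; the toy matrix, rank, step size, and regularization are all assumptions for illustration:

```python
import numpy as np

# Toy cross-layer dependency matrix: rows are layer-A nodes, columns are
# layer-B nodes; mask marks which entries were actually observed.
D = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
mask = np.array([[1, 1, 1],
                 [1, 1, 0],    # D[1, 2] is unobserved: infer it
                 [1, 1, 1]], dtype=bool)

rng = np.random.default_rng(0)
r, lr, lam = 2, 0.1, 0.01
F = 0.1 * rng.standard_normal((3, r))  # latent factors, layer-A nodes
G = 0.1 * rng.standard_normal((3, r))  # latent factors, layer-B nodes

for _ in range(2000):  # gradient descent on observed entries only
    E = (F @ G.T - D) * mask
    F, G = F - lr * (E @ G + lam * F), G - lr * (E.T @ F + lam * G)

# Node 1 shares its observed pattern with node 0, so the inferred
# dependency pred[1, 2] lands near the corresponding value pred[0, 2].
pred = F @ G.T
```

The unobserved entry is filled in from the latent structure: because rows 0 and 1 agree on everything observed, the factorization gives them nearly identical factors.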
Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks.
Chande, Ruchi D; Hargraves, Rosalyn Hobson; Ortiz-Robinson, Norma; Wayne, Jennifer S
2017-01-01
Computational models are useful tools to study the biomechanics of human joints. Their predictive performance is heavily dependent on bony anatomy and soft tissue properties. Imaging data provides anatomical requirements while approximate tissue properties are implemented from literature data, when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter per mean square error, both networks provided reasonable stiffness predictions for implementation into the computational model.
Reinforcement Learning of Two-Joint Virtual Arm Reaching in a Computer Model of Sensorimotor Cortex
Neymotin, Samuel A.; Chadderdon, George L.; Kerr, Cliff C.; Francis, Joseph T.; Lytton, William W.
2014-01-01
Neocortical mechanisms of learning sensorimotor control involve a complex series of interactions at multiple levels, from synaptic mechanisms to cellular dynamics to network connectomics. We developed a model of sensory and motor neocortex consisting of 704 spiking model neurons. Sensory and motor populations included excitatory cells and two types of interneurons. Neurons were interconnected with AMPA/NMDA and GABAA synapses. We trained our model using spike-timing-dependent reinforcement learning to control a two-joint virtual arm to reach to a fixed target. For each of 125 trained networks, we used 200 training sessions, each involving 15 s reaches to the target from 16 starting positions. Learning altered network dynamics, with enhancements to neuronal synchrony and behaviorally relevant information flow between neurons. After learning, networks demonstrated retention of behaviorally relevant memories by using proprioceptive information to perform reach-to-target from multiple starting positions. Networks dynamically controlled which joint rotations to use to reach a target, depending on current arm position. Learning-dependent network reorganization was evident in both sensory and motor populations: learned synaptic weights showed target-specific patterning optimized for particular reach movements. Our model embodies an integrative hypothesis of sensorimotor cortical learning that could be used to interpret future electrophysiological data recorded in vivo from sensorimotor learning experiments. We used our model to make the following predictions: learning enhances synchrony in neuronal populations and behaviorally relevant information flow across neuronal populations, enhanced sensory processing aids task-relevant motor performance and the relative ease of a particular movement in vivo depends on the amount of sensory information required to complete the movement. PMID:24047323
NASA Astrophysics Data System (ADS)
Shim, J. S.; Rastätter, L.; Kuznetsova, M.; Bilitza, D.; Codrescu, M.; Coster, A. J.; Emery, B. A.; Fedrizzi, M.; Förster, M.; Fuller-Rowell, T. J.; Gardner, L. C.; Goncharenko, L.; Huba, J.; McDonald, S. E.; Mannucci, A. J.; Namgaladze, A. A.; Pi, X.; Prokhorov, B. E.; Ridley, A. J.; Scherliess, L.; Schunk, R. W.; Sojka, J. J.; Zhu, L.
2017-10-01
In order to assess current modeling capability of reproducing storm impacts on total electron content (TEC), we considered quantities such as TEC, TEC changes compared to quiet time values, and the maximum value of the TEC and TEC changes during a storm. We compared the quantities obtained from ionospheric models against ground-based GPS TEC measurements during the 2006 AGU storm event (14-15 December 2006) in the selected eight longitude sectors. We used 15 simulations obtained from eight ionospheric models, including empirical, physics-based, coupled ionosphere-thermosphere, and data assimilation models. To quantitatively evaluate performance of the models in TEC prediction during the storm, we calculated skill scores such as RMS error, Normalized RMS error (NRMSE), ratio of the modeled to observed maximum increase (Yield), and the difference between the modeled peak time and observed peak time. Furthermore, to investigate latitudinal dependence of the performance of the models, the skill scores were calculated for five latitude regions. Our study shows that RMSE of TEC and TEC changes of the model simulations ranges from about 3 TECU (total electron content unit; 1 TECU = 10^16 el m^-2) in high latitudes to about 13 TECU in low latitudes, which is larger than the latitudinally averaged GPS TEC error of about 2 TECU. Most model simulations predict TEC better than TEC changes in terms of NRMSE and the difference in peak time, while the opposite holds true in terms of Yield. Model performance strongly depends on the quantities considered, the type of metrics used, and the latitude considered.
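The skill scores named above can be sketched as follows. The synthetic TEC curves and the choice of normalizing NRMSE by the mean observed value are assumptions for illustration; the study's exact normalization may differ:

```python
import numpy as np

def skill_scores(obs, mod, times):
    """RMSE, normalized RMSE, Yield (modeled/observed maximum), and the
    modeled-minus-observed peak-time difference."""
    rmse = float(np.sqrt(np.mean((mod - obs) ** 2)))
    nrmse = rmse / float(np.mean(obs))              # normalization assumed
    yld = float(mod.max() / obs.max())
    dt_peak = float(times[np.argmax(mod)] - times[np.argmax(obs)])
    return rmse, nrmse, yld, dt_peak

t = np.arange(0.0, 24.0, 1.0)                  # hours
obs = 10 + 8 * np.exp(-((t - 12) ** 2) / 8)    # synthetic storm-time TEC (TECU)
mod = 10 + 6 * np.exp(-((t - 14) ** 2) / 8)    # model peaks later and lower
rmse, nrmse, yld, dt = skill_scores(obs, mod, t)
```

In this synthetic case the model under-predicts the peak (Yield < 1) and lags the observed peak by two hours, the two failure modes the metrics are designed to separate.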
Rating knowledge sharing in cross-domain collaborative filtering.
Li, Bin; Zhu, Xingquan; Li, Ruijiang; Zhang, Chengqi
2015-05-01
Cross-domain collaborative filtering (CF) aims to share common rating knowledge across multiple related CF domains to boost the CF performance. In this paper, we view CF domains as a 2-D site-time coordinate system, on which multiple related domains, such as similar recommender sites or successive time-slices, can share group-level rating patterns. We propose a unified framework for cross-domain CF over the site-time coordinate system by sharing group-level rating patterns and imposing user/item dependence across domains. A generative model, say ratings over site-time (ROST), which can generate and predict ratings for multiple related CF domains, is developed as the basic model for the framework. We further introduce cross-domain user/item dependence into ROST and extend it to two real-world cross-domain CF scenarios: 1) ROST (sites) for alleviating rating sparsity in the target domain, where multiple similar sites are viewed as related CF domains and some items in the target domain depend on their correspondences in the related ones; and 2) ROST (time) for modeling user-interest drift over time, where a series of time-slices are viewed as related CF domains and a user at current time-slice depends on herself in the previous time-slice. All these ROST models are instances of the proposed unified framework. The experimental results show that ROST (sites) can effectively alleviate the sparsity problem to improve rating prediction performance and ROST (time) can clearly track and visualize user-interest drift over time.
Watts, Alain; Gritton, Howard J; Sweigart, Jamie; Poe, Gina R
2012-09-26
Rapid eye movement (REM) sleep enhances hippocampus-dependent associative memory, but REM deprivation has little impact on striatum-dependent procedural learning. Antidepressant medications are known to inhibit REM sleep, but it is not well understood if antidepressant treatments impact learning and memory. We explored antidepressant REM suppression effects on learning by training animals daily on a spatial task under familiar and novel conditions, followed by training on a procedural memory task. Daily treatment with the antidepressant and norepinephrine reuptake inhibitor desipramine (DMI) strongly suppressed REM sleep in rats for several hours, as has been described in humans. We also found that DMI treatment reduced the spindle-rich transition-to-REM sleep state (TR), which has not been previously reported. DMI REM suppression gradually weakened performance on a once familiar hippocampus-dependent maze (reconsolidation error). DMI also impaired learning of the novel maze (consolidation error). Unexpectedly, learning of novel reward positions and memory of familiar positions were equally and oppositely correlated with amounts of TR sleep. Conversely, DMI treatment enhanced performance on a separate striatum-dependent, procedural T-maze task that was positively correlated with the amounts of slow-wave sleep (SWS). Our results suggest that learning strategy switches in patients taking REM sleep-suppressing antidepressants might serve to offset sleep-dependent hippocampal impairments to partially preserve performance. State-performance correlations support a model wherein reconsolidation of hippocampus-dependent familiar memories occurs during REM sleep, novel information is incorporated and consolidated during TR, and dorsal striatum-dependent procedural learning is augmented during SWS.
How Nurses Decide to Ambulate Hospitalized Older Adults: Development of a Conceptual Model
ERIC Educational Resources Information Center
Doherty-King, Barbara; Bowers, Barbara
2011-01-01
Adults over the age of 65 years account for 60% of all hospital admissions and experience consequential negative outcomes directly related to hospitalization. Negative outcomes include falls, delirium, loss in ability to perform basic activities of daily living, and new walking dependence. New walking dependence, defined as the loss in ability to…
Chu, Haitao; Zhou, Yijie; Cole, Stephen R.; Ibrahim, Joseph G.
2010-01-01
To evaluate the probabilities of a disease state, ideally all subjects in a study should be diagnosed by a definitive diagnostic or gold standard test. However, since definitive diagnostic tests are often invasive and expensive, it is generally unethical to apply them to subjects whose screening tests are negative. In this article, we consider latent class models for screening studies with two imperfect binary diagnostic tests and a definitive categorical disease status measured only for those with at least one positive screening test. Specifically, we discuss one conditionally independent and three homogeneous conditionally dependent latent class models and assess the impact of misspecification of the dependence structure on the estimation of disease category probabilities using frequentist and Bayesian approaches. Interestingly, the three homogeneous dependent models can provide identical goodness-of-fit but substantively different estimates for a given study. However, the parametric form of the assumed dependence structure itself is not "testable" from the data, and thus the dependence structure modeling considered here can only be viewed as a sensitivity analysis concerning a more complicated non-identifiable model potentially involving a heterogeneous dependence structure. Furthermore, we discuss Bayesian model averaging together with its limitations as an alternative way to partially address this particularly challenging problem. The methods are applied to two cancer screening studies, and simulations are conducted to evaluate the performance of these methods. In summary, further research is needed to reduce the impact of model misspecification on the estimation of disease prevalence in such settings. PMID:20191614
Benoit, Julia S; Chan, Wenyaw; Doody, Rachelle S
2015-01-01
Parameter dependency within data sets in simulation studies is common, especially in models such as continuous-time Markov chains (CTMCs). Additionally, the literature lacks a comprehensive examination of estimation performance for the likelihood-based general multi-state CTMC. Among studies attempting to assess the estimation, none have accounted for dependency among parameter estimates. The purpose of this research is twofold: 1) to develop a multivariate approach for assessing accuracy and precision in simulation studies, and 2) to add to the literature a comprehensive examination of the estimation of a general 3-state CTMC model. Simulation studies are conducted to analyze longitudinal data with a trinomial outcome using a CTMC with and without covariates. Measures of performance, including bias, component-wise coverage probabilities, and joint coverage probabilities, are calculated. An application is presented using Alzheimer's disease caregiver stress levels. Comparisons of joint and component-wise parameter estimates yield conflicting inferential results in simulations from models with and without covariates. In conclusion, caution should be taken when conducting simulation studies aiming to assess performance, and the choice of inference should properly reflect the purpose of the simulation.
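A general 3-state CTMC of the kind examined above can be simulated directly from its generator matrix (exponential holding times, then a jump proportional to the off-diagonal rates). This is a minimal sketch with an illustrative generator, not rates calibrated to the caregiver-stress application:

```python
import numpy as np

# Generator matrix of an illustrative 3-state CTMC (e.g., low/medium/high
# stress): off-diagonal entries are transition rates, rows sum to zero.
Q = np.array([[-0.30,  0.20,  0.10],
              [ 0.15, -0.25,  0.10],
              [ 0.05,  0.05, -0.10]])

def simulate_ctmc(Q, state, t_end, rng):
    """Sample one trajectory as a list of (time, state) pairs."""
    t, path = 0.0, [(0.0, state)]
    while True:
        t += rng.exponential(1.0 / -Q[state, state])  # holding time
        if t >= t_end:
            return path
        rates = Q[state].clip(min=0.0)                # jump rates out of state
        state = int(rng.choice(len(Q), p=rates / rates.sum()))
        path.append((t, state))

rng = np.random.default_rng(42)
path = simulate_ctmc(Q, state=0, t_end=100.0, rng=rng)
```

Simulating many such trajectories and re-fitting the rates by maximum likelihood is the basic loop behind the kind of coverage-probability study the abstract describes.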
Detecting changes in dynamic and complex acoustic environments
Boubenec, Yves; Lawlor, Jennifer; Górska, Urszula; Shamma, Shihab; Englitz, Bernhard
2017-01-01
Natural sounds, such as wind or rain, are characterized by the statistical occurrence of their constituents. Despite their complexity, listeners readily detect changes in these contexts. Here we address the neural basis of statistical decision-making using a combination of psychophysics, EEG and modelling. In a texture-based, change-detection paradigm, human performance and reaction times improved with longer pre-change exposure, consistent with improved estimation of baseline statistics. Change-locked and decision-related EEG responses were found in a centro-parietal scalp location, whose slope depended on change size, consistent with sensory evidence accumulation. The potential's amplitude scaled with the duration of pre-change exposure, suggesting a time-dependent decision threshold. Auditory cortex-related potentials showed no response to the change. A dual-timescale statistical estimation model accounted for subjects' performance. Furthermore, a decision-augmented auditory cortex model accounted for performance and reaction times, suggesting that the primary cortical representation requires little post-processing to enable change-detection in complex acoustic environments. DOI: http://dx.doi.org/10.7554/eLife.24910.001 PMID:28262095
Learning non-local dependencies.
Kuhn, Gustav; Dienes, Zoltán
2008-01-01
This paper addresses the nature of the temporary storage buffer used in implicit or statistical learning. Kuhn and Dienes [Kuhn, G., and Dienes, Z. (2005). Implicit learning of nonlocal musical rules: implicitly learning more than chunks. Journal of Experimental Psychology-Learning Memory and Cognition, 31(6) 1417-1432] showed that people could implicitly learn a musical rule that was solely based on non-local dependencies. These results seriously challenge models of implicit learning that assume knowledge merely takes the form of linking adjacent elements (chunking). We compare two models that use a buffer to allow learning of long-distance dependencies: the Simple Recurrent Network (SRN) and the memory buffer model. We argue that these models - as models of the mind - should not be evaluated simply by fitting them to human data but by determining the characteristic behaviour of each model. Simulations showed for the first time that the SRN could rapidly learn non-local dependencies. However, the characteristic performance of the memory buffer model rather than the SRN more closely matched how people came to like different musical structures. We conclude that the SRN is more powerful than previous demonstrations have shown, but its flexible learned buffer does not explain people's implicit learning (at least, the affective learning of musical structures) as well as fixed memory buffer models do.
Time-Dependent Traveling Wave Tube Model for Intersymbol Interference Investigations
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Andro, Monty; Downey, Alan (Technical Monitor)
2001-01-01
For the first time, a computational model has been used to provide a direct description of the effects of the traveling wave tube (TWT) on modulated digital signals. The TWT model comprehensively takes into account the effects of frequency dependent AM/AM and AM/PM conversion, gain and phase ripple; drive-induced oscillations; harmonic generation; intermodulation products; and backward waves. Thus, signal integrity can be investigated in the presence of these sources of potential distortion as a function of the physical geometry of the high power amplifier and the operational digital signal. This method promises superior predictive fidelity compared to methods using TWT models based on swept-amplitude and/or swept-frequency data. The fully three-dimensional (3D), time-dependent, TWT interaction model using the electromagnetic code MAFIA is presented. This model is used to investigate assumptions made in TWT black-box models used in communication system level simulations. In addition, digital signal performance, including intersymbol interference (ISI), is compared using direct data input into the MAFIA model and using the system level analysis tool, SPW.
Intersymbol Interference Investigations Using a 3D Time-Dependent Traveling Wave Tube Model
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Andro, Monty; Downey, Alan (Technical Monitor)
2001-01-01
For the first time, a physics based computational model has been used to provide a direct description of the effects of the TWT (Traveling Wave Tube) on modulated digital signals. The TWT model comprehensively takes into account the effects of frequency dependent AM/AM and AM/PM conversion; gain and phase ripple; drive-induced oscillations; harmonic generation; intermodulation products; and backward waves. Thus, signal integrity can be investigated in the presence of these sources of potential distortion as a function of the physical geometry of the high power amplifier and the operational digital signal. This method promises superior predictive fidelity compared to methods using TWT models based on swept amplitude and/or swept frequency data. The fully three-dimensional (3D), time-dependent, TWT interaction model using the electromagnetic code MAFIA is presented. This model is used to investigate assumptions made in TWT black box models used in communication system level simulations. In addition, digital signal performance, including intersymbol interference (ISI), is compared using direct data input into the MAFIA model and using the system level analysis tool, SPW (Signal Processing Worksystem).
Simulating Effects of High Angle of Attack on Turbofan Engine Performance
NASA Technical Reports Server (NTRS)
Liu, Yuan; Claus, Russell W.; Litt, Jonathan S.; Guo, Ten-Huei
2013-01-01
A method of investigating the effects of high angle of attack (AOA) flight on turbofan engine performance is presented. The methodology involves combining a suite of diverse simulation tools. Three-dimensional, steady-state computational fluid dynamics (CFD) software is used to model the change in performance of a commercial aircraft-type inlet and fan geometry due to various levels of AOA. Parallel compressor theory is then applied to assimilate the CFD data with a zero-dimensional, nonlinear, dynamic turbofan engine model. The combined model shows that high AOA operation degrades fan performance and, thus, negatively impacts compressor stability margins and engine thrust. In addition, the engine response to high AOA conditions is shown to be highly dependent upon the type of control system employed.
Determination of the Parameter Sets for the Best Performance of IPS-driven ENLIL Model
NASA Astrophysics Data System (ADS)
Yun, Jongyeon; Choi, Kyu-Cheol; Yi, Jonghyuk; Kim, Jaehun; Odstrcil, Dusan
2016-12-01
The interplanetary scintillation-driven (IPS-driven) ENLIL model was jointly developed by the University of California, San Diego (UCSD) and the National Aeronautics and Space Administration/Goddard Space Flight Center (NASA/GSFC). The model has been in operation at the Korean Space Weather Center (KSWC) since 2014. The IPS-driven ENLIL model has a variety of ambient solar wind parameters, and the results of the model depend on the combination of these parameters. We have conducted research to determine the best combination of parameters to improve the performance of the IPS-driven ENLIL model. The model results for 1,440 combinations of input parameters were compared with Advanced Composition Explorer (ACE) observation data. In this way, the top 10 parameter sets showing the best performance were determined. Finally, the characteristics of these parameter sets were analyzed and the application of the results to the IPS-driven ENLIL model was discussed.
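The sweep-and-rank procedure described above is, in essence, a grid search: run the model for every parameter combination, score each run against the observations, and keep the best scorers. A minimal sketch of that loop (the toy model, grids, and "observations" below are invented stand-ins, not the actual ENLIL inputs or ACE data):

```python
import itertools
import math

def rmse(pred, obs):
    """Root-mean-square error between predicted and observed series."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# Hypothetical stand-in for the solar wind model: maps a parameter set
# to a predicted time series (the real model is IPS-driven ENLIL).
def toy_model(a, b, c, times):
    return [a + b * t + c * t * t for t in times]

times = [0.0, 1.0, 2.0, 3.0]
observed = [1.0, 2.1, 5.2, 10.3]   # stand-in for ACE observation data

# Sweep all combinations of the (hypothetical) parameter grids.
grid_a = [0.5, 1.0, 1.5]
grid_b = [0.0, 0.1, 0.2]
grid_c = [0.8, 1.0, 1.2]
results = []
for a, b, c in itertools.product(grid_a, grid_b, grid_c):
    err = rmse(toy_model(a, b, c, times), observed)
    results.append(((a, b, c), err))

# Rank by error and keep the best performers (the paper keeps the top 10).
top = sorted(results, key=lambda r: r[1])[:10]
best_params, best_err = top[0]
```

With real model runs the scoring step is the expensive part; the ranking logic itself stays this simple.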
The Long-Term Performance of Small-Cell Batteries Without Cell-Balancing Electronics
NASA Technical Reports Server (NTRS)
Pearson, C.; Thwaite, C.; Curzon, D.; Rao, G.
2006-01-01
Tests conducted approximately 8 years ago showed that Sony HC cells do not imbalance. AEA developed a theory (ESPC 2002): (a) self-discharge (SD) decreases with state-of-charge (SOC); (b) cells diverge to a state of dynamic equilibrium; (c) the equilibrium spread depends on cell SD uniformity. The balancing model was verified against test data. Short-term measurement of SD in Sony cells is difficult, and the very small values obtained depend on the technique used. Long-term evidence supports lower SD at low SOC. Battery testing is the best proof of performance, typically via mission-specific tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urniezius, Renaldas
2011-03-14
The principle of Maximum relative Entropy optimization was analyzed for dead reckoning localization of a rigid body when observation data from two attached accelerometers were collected. Model constraints were derived from the relationships between the sensors. The experiment's results confirmed that the noise on each accelerometer axis can be successfully filtered by exploiting the dependency between channels and the dependency between time series data. Dependency between channels was used for the a priori calculation, and the a posteriori distribution was derived utilizing the dependency between time series data. Data from the autocalibration experiment were revisited, removing the initial assumption that the instantaneous rotation axis of the rigid body was known. Performance results confirmed that such an approach could be used for online dead reckoning localization.
Deep Recurrent Neural Networks for Human Activity Recognition
Murad, Abdulmajid
2017-01-01
Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on various benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs. PMID:29113103
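The contrast the abstract draws between fixed-width convolution kernels and recurrent state can be illustrated with a single LSTM cell: the cell state is carried forward step by step, so dependencies can span a sequence of any length and no fixed input window is needed. A minimal NumPy forward-pass sketch with random, untrained weights (dimensions and initialization are illustrative only, not the paper's architectures):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

hidden, features = 4, 3
# One weight matrix per gate: input (i), forget (f), cell (g), output (o).
W = {g: rng.standard_normal((hidden, features + hidden)) * 0.1
     for g in "ifgo"}
b = {g: np.zeros(hidden) for g in "ifgo"}

def lstm_step(x, h, c):
    """One LSTM time step: gates mix the new input with the carried state."""
    z = np.concatenate([x, h])
    i = sigmoid(W["i"] @ z + b["i"])
    f = sigmoid(W["f"] @ z + b["f"])
    g = np.tanh(W["g"] @ z + b["g"])
    o = sigmoid(W["o"] @ z + b["o"])
    c = f * c + i * g          # cell state carries long-range memory
    h = o * np.tanh(c)
    return h, c

# Variable-length input sequence: the loop runs for however many samples exist.
h = np.zeros(hidden)
c = np.zeros(hidden)
for x in rng.standard_normal((25, features)):
    h, c = lstm_step(x, h, c)
```

In a trained recognizer the final (or pooled) hidden state would feed a classifier; here the point is only that the recurrence, not a kernel width, sets the reachable dependency range.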
Deep Recurrent Neural Networks for Human Activity Recognition.
Murad, Abdulmajid; Pyun, Jae-Young
2017-11-06
Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on various benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.
Berthet, Pierre; Hellgren-Kotaleski, Jeanette; Lansner, Anders
2012-01-01
Several studies have shown a strong involvement of the basal ganglia (BG) in action selection and dopamine dependent learning. The dopaminergic signal to striatum, the input stage of the BG, has been commonly described as coding a reward prediction error (RPE), i.e., the difference between the predicted and actual reward. The RPE has been hypothesized to be critical in the modulation of the synaptic plasticity in cortico-striatal synapses in the direct and indirect pathway. We developed an abstract computational model of the BG, with a dual pathway structure functionally corresponding to the direct and indirect pathways, and compared its behavior to biological data as well as other reinforcement learning models. The computations in our model are inspired by Bayesian inference, and the synaptic plasticity changes depend on a three factor Hebbian–Bayesian learning rule based on co-activation of pre- and post-synaptic units and on the value of the RPE. The model builds on a modified Actor-Critic architecture and implements the direct (Go) and the indirect (NoGo) pathway, as well as the reward prediction (RP) system, acting in a complementary fashion. We investigated the performance of the model system when different configurations of the Go, NoGo, and RP system were utilized, e.g., using only the Go, NoGo, or RP system, or combinations of those. Learning performance was investigated in several types of learning paradigms, such as learning-relearning, successive learning, stochastic learning, reversal learning and a two-choice task. The RPE and the activity of the model during learning were similar to monkey electrophysiological and behavioral data. Our results, however, show that there is not a unique best way to configure this BG model to handle well all the learning paradigms tested. We thus suggest that an agent might dynamically configure its action selection mode, possibly depending on task characteristics and also on how much time is available. PMID:23060764
Maximum entropy principle for transportation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bilich, F.; Da Silva, R.
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
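The standard constrained formulation that the dependence formulation is shown equivalent to can be sketched compactly: the entropy-maximizing trip distribution is a deterrence seed matrix exp(-beta * cost) balanced by iterative proportional fitting until row sums match trip origins and column sums match destinations. All numbers below are invented for illustration:

```python
import math

# Hypothetical inputs: trip origins O_i, destinations D_j, cost matrix c_ij.
O = [100.0, 200.0]
D = [150.0, 150.0]
cost = [[1.0, 2.0],
        [2.0, 1.0]]
beta = 0.5

# Seed matrix from the deterrence function exp(-beta * cost).
T = [[math.exp(-beta * cost[i][j]) for j in range(2)] for i in range(2)]

# Iterative proportional fitting: alternately scale rows to match O
# and columns to match D until both sets of constraints hold.
for _ in range(200):
    for i in range(2):
        s = sum(T[i])
        for j in range(2):
            T[i][j] *= O[i] / s
    for j in range(2):
        s = sum(T[i][j] for i in range(2))
        for i in range(2):
            T[i][j] *= D[j] / s

row_totals = [sum(T[i]) for i in range(2)]
col_totals = [sum(T[i][j] for i in range(2)) for j in range(2)]
```

The dependence formulation in the abstract folds these constraints into regression-estimated coefficients instead of enforcing them by balancing.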
NASA Technical Reports Server (NTRS)
Phenneger, M. C.; Singhal, S. P.; Lee, T. H.; Stengle, T. H.
1985-01-01
The work performed by the Attitude Determination and Control Section at the National Aeronautics and Space Administration/Goddard Space Flight Center in analyzing and evaluating the performance of infrared horizon sensors is presented. The results of studies performed during the 1960s are reviewed; several models for generating the Earth's infrared radiance profiles are presented; and the Horizon Radiance Modeling Utility, the software used to model the horizon sensor optics and electronics processing to compute radiance-dependent attitude errors, is briefly discussed. Also provided is mission experience from 12 spaceflight missions spanning the period from 1973 to 1984 and using a variety of horizon sensing hardware. Recommendations are presented for future directions for the infrared horizon sensing technology.
Development of a Higher Fidelity Model for the Cascade Distillation Subsystem (CDS)
NASA Technical Reports Server (NTRS)
Perry, Bruce; Anderson, Molly
2014-01-01
Significant improvements have been made to the ACM model of the CDS, enabling accurate predictions of dynamic operations with fewer assumptions. The model has been utilized to predict how CDS performance would be impacted by changing operating parameters, revealing performance trade-offs and possibilities for improvement. CDS efficiency is driven by the THP coefficient of performance, which in turn is dependent on heat transfer within the system. Based on the remaining limitations of the simulation, priorities for further model development include: relaxing the assumption of total condensation; incorporating dynamic simulation capability for the buildup of dissolved inert gasses in condensers; examining CDS operation with more complex feeds; and extending heat transfer analysis to all surfaces.
A theoretical model of speed-dependent steering torque for rolling tyres
NASA Astrophysics Data System (ADS)
Wei, Yintao; Oertel, Christian; Liu, Yahui; Li, Xuebing
2016-04-01
It is well known that the tyre steering torque is highly dependent on the tyre rolling speed. In the limiting case, i.e. the parking manoeuvre, the steering torque approaches its maximum. With increasing tyre speed, the steering torque decreases rapidly. Accurate modelling of the speed-dependent behaviour of the tyre steering torque is a key factor in calibrating the electric power steering (EPS) system and tuning the handling performance of vehicles. However, no satisfactory theoretical model can be found in the existing literature to explain this phenomenon. This paper proposes a new theoretical framework to model this important tyre behaviour, which includes three key factors: (1) tyre three-dimensional transient rolling kinematics with turn-slip; (2) dynamical force and moment generation; and (3) the mixed Lagrange-Euler method for contact deformation solving. A nonlinear finite-element code has been developed to implement the proposed approach. It can be found that the main mechanism for the speed-dependent steering torque is due to turn-slip-related kinematics. This paper provides a theory to explain the complex mechanism of the tyre steering torque generation, which helps to understand the speed-dependent tyre steering torque, tyre road feeling and EPS calibration.
Yap, Melvin J; Balota, David A; Cortese, Michael J; Watson, Jason M
2006-12-01
This article evaluates 2 competing models that address the decision-making processes mediating word recognition and lexical decision performance: a hybrid 2-stage model of lexical decision performance and a random-walk model. In 2 experiments, nonword type and word frequency were manipulated across 2 contrasts (pseudohomophone-legal nonword and legal-illegal nonword). When nonwords became more wordlike (i.e., BRNTA vs. BRANT vs. BRANE), response latencies to nonwords were slowed and the word frequency effect increased. More important, distributional analyses revealed that the Nonword Type × Word Frequency interaction was modulated by different components of the response time distribution, depending on the specific nonword contrast. A single-process random-walk model was able to account for this particular set of findings more successfully than the hybrid 2-stage model. (c) 2006 APA, all rights reserved.
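A single-process random-walk account of lexical decision can be sketched by accumulating noisy evidence toward "word" and "nonword" boundaries; a larger drift rate (a stand-in for higher word frequency) yields faster decisions. Parameter values below are illustrative, not fitted to the article's data:

```python
import random

def random_walk_trial(drift, threshold=10.0, step_sd=1.0, rng=None):
    """Accumulate noisy evidence until either boundary is crossed.

    Returns (response, latency): 'word' for the upper boundary,
    'nonword' for the lower one.
    """
    rng = rng or random
    x, t = 0.0, 0
    while abs(x) < threshold:
        x += drift + rng.gauss(0.0, step_sd)
        t += 1
    return ('word' if x > 0 else 'nonword'), t

rng = random.Random(42)
# Higher drift ~ higher word frequency: decisions should come faster.
fast = [random_walk_trial(0.8, rng=rng)[1] for _ in range(500)]
slow = [random_walk_trial(0.2, rng=rng)[1] for _ in range(500)]
mean_fast = sum(fast) / len(fast)
mean_slow = sum(slow) / len(slow)
```

Distributional predictions (the shift and skew of the latency distributions that the article's analyses target) fall out of the same simulation by histogramming `fast` and `slow`.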
NASA Astrophysics Data System (ADS)
Minissale, Marco; Pardanaud, Cedric; Bisson, Régis; Gallais, Laurent
2017-11-01
The knowledge of optical properties of tungsten at high temperatures is of crucial importance in fields such as nuclear fusion and aerospace applications. The optical properties of tungsten are well known at room temperature, but little has been done at temperatures between 300 K and 1000 K in the visible and near-infrared domains. Here, we investigate the temperature dependence of tungsten reflectivity from ambient to high temperatures (up to 1000 K) in the 500-1050 nm spectral range, a region where interband transitions make a strong contribution. Experimental measurements, performed via a spectroscopic system coupled with laser remote heating, show that tungsten's reflectivity increases with temperature and wavelength. We have described these dependences through a Fresnel and two Lorentz-Drude models. The Fresnel model accurately reproduces the experimental curve at a given temperature, but it is able to simulate the temperature dependency of reflectivity only thanks to an ad hoc choice of temperature formulae for the refractive indexes. Thus, a less empirical approach, based on Lorentz-Drude models, is preferred to describe the interaction of light and charge carriers in the solid. The first Lorentz-Drude model, which includes a temperature dependency on intraband transitions, fits experimental results only qualitatively. The second Lorentz-Drude model includes in addition a temperature dependency on interband transitions. It is able to reproduce the experimental results quantitatively, highlighting a non-trivial dependence of interband transitions as a function of temperature. Finally, we use these temperature-dependent Lorentz-Drude models to evaluate the total emissivity of tungsten from 300 K to 3500 K, and we compare our experimental and theoretical findings with previous results.
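The Drude ingredient of the models above can be made concrete: a free-electron permittivity gives the complex refractive index, and the normal-incidence Fresnel formula then gives the reflectivity. This is a textbook single-term sketch, not the paper's fitted Lorentz-Drude model, and the parameter values are illustrative rather than tungsten's:

```python
import cmath

def drude_reflectivity(wavelength_nm, plasma_eV, gamma_eV):
    """Normal-incidence reflectivity from a single Drude term.

    eps(w) = 1 - wp^2 / (w^2 + i*Gamma*w), n = sqrt(eps),
    R = |(n - 1) / (n + 1)|^2, with energies in eV.
    """
    hbar_c = 1239.84193                  # eV*nm, so E[eV] = hbar_c / lambda[nm]
    w = hbar_c / wavelength_nm           # photon energy in eV
    eps = 1.0 - plasma_eV**2 / (w * (w + 1j * gamma_eV))
    n = cmath.sqrt(eps)
    return abs((n - 1) / (n + 1)) ** 2

# Free-electron-like behaviour: high reflectivity below the plasma
# energy, a sharp drop above it.
r_ir = drude_reflectivity(1050.0, plasma_eV=8.0, gamma_eV=0.1)
r_uv = drude_reflectivity(100.0, plasma_eV=8.0, gamma_eV=0.1)
```

The paper's models add Lorentz oscillator terms for the interband transitions and make the parameters temperature dependent; the Fresnel step at the end is unchanged.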
Gupta, Shikha; Basant, Nikita; Mohan, Dinesh; Singh, Kunwar P
2016-07-01
The persistence and the removal of organic chemicals from the atmosphere are largely determined by their reactions with the OH radical and O3. Experimental determinations of the kinetic rate constants of OH and O3 with a large number of chemicals are tedious and resource intensive and development of computational approaches has widely been advocated. Recently, ensemble machine learning (EML) methods have emerged as unbiased tools to establish relationship between independent and dependent variables having a nonlinear dependence. In this study, EML-based, temperature-dependent quantitative structure-reactivity relationship (QSRR) models have been developed for predicting the kinetic rate constants for OH (kOH) and O3 (kO3) reactions with diverse chemicals. Structural diversity of chemicals was evaluated using a Tanimoto similarity index. The generalization and prediction abilities of the constructed models were established through rigorous internal and external validation performed employing statistical checks. In test data, the EML QSRR models yielded correlation (R²) of ≥0.91 between the measured and the predicted reactivities. The applicability domains of the constructed models were determined using methods based on descriptors range, Euclidean distance, leverage, and standardization approaches. The prediction accuracies for the higher reactivity compounds were relatively better than those of the low reactivity compounds. Proposed EML QSRR models performed well and outperformed the previous reports. The proposed QSRR models can make predictions of rate constants at different temperatures. The proposed models can be useful tools in predicting the reactivities of chemicals towards OH radical and O3 in the atmosphere.
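Of the applicability-domain methods listed, the leverage approach is the easiest to make concrete: a query compound with leverage h = x (X^T X)^(-1) x^T above the customary warning threshold h* = 3(p+1)/n is flagged as outside the domain. A sketch with made-up descriptor data (not the study's descriptors):

```python
import numpy as np

# Hypothetical training descriptors: n compounds x p descriptors.
X_train = np.array([[1.0, 2.0],
                    [2.0, 1.5],
                    [3.0, 3.5],
                    [4.0, 3.0],
                    [2.5, 2.5]])
n, p = X_train.shape

# Add the intercept column, as in ordinary least squares.
Xd = np.hstack([np.ones((n, 1)), X_train])
H_inv = np.linalg.inv(Xd.T @ Xd)

def leverage(x):
    """Leverage of a query point relative to the training design matrix."""
    xd = np.concatenate([[1.0], x])
    return float(xd @ H_inv @ xd)

h_star = 3.0 * (p + 1) / n                           # warning threshold
inside = leverage(np.array([2.5, 2.4])) <= h_star    # near the training data
outside = leverage(np.array([15.0, -4.0])) > h_star  # far outside it
```

Predictions for flagged queries are extrapolations and should be reported with lower confidence, which matches how applicability domains are used in QSRR practice.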
A prediction model for lift-fan simulator performance. M.S. Thesis - Cleveland State Univ.
NASA Technical Reports Server (NTRS)
Yuska, J. A.
1972-01-01
The performance characteristics of a model VTOL lift-fan simulator installed in a two-dimensional wing are presented. The lift-fan simulator consisted of a 15-inch diameter fan driven by a turbine contained in the fan hub. The performance of the lift-fan simulator was measured in two ways: (1) the calculated momentum thrust of the fan and turbine (total thrust loading), and (2) the axial-force measured on a load cell force balance (axial-force loading). Tests were conducted over a wide range of crossflow velocities, corrected tip speeds, and wing angle of attack. A prediction modeling technique was developed to help in analyzing the performance characteristics of lift-fan simulators. A multiple linear regression analysis technique is presented which calculates prediction model equations for the dependent variables.
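The multiple linear regression step can be sketched as ordinary least squares with an intercept column; the synthetic predictors below merely stand in for the report's independent variables (crossflow velocity, corrected tip speed, angle of attack) and the coefficients are invented for the check:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(40, 3))   # stand-in test conditions
true_coef = np.array([2.0, -1.0, 0.5])
y = 1.5 + X @ true_coef                   # noise-free "measurements"

# Ordinary least squares with an intercept column.
Xd = np.hstack([np.ones((len(X), 1)), X])
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)

predicted = Xd @ beta
```

With real loading data the fit is not exact, and the residuals indicate how well the chosen regressors explain the dependent variable (total thrust loading or axial-force loading).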
NASA Astrophysics Data System (ADS)
Saksala, Timo
2016-10-01
This paper deals with numerical modelling of rock fracture under dynamic loading. To this end, a combined continuum damage-embedded discontinuity model is applied in finite element modelling of crack propagation in rock. In this model, the strong loading rate sensitivity of rock is captured by the rate-dependent continuum scalar damage model that controls the pre-peak nonlinear hardening part of rock behaviour. The post-peak exponential softening part of the rock behaviour is governed by the embedded displacement discontinuity model describing the mode I, mode II and mixed mode fracture of rock. Rock heterogeneity is incorporated in the present approach by random description of the rock mineral texture based on the Voronoi tessellation. The model performance is demonstrated in numerical examples where the uniaxial tension and compression tests on rock are simulated. Finally, the dynamic three-point bending test of a semicircular disc is simulated in order to show that the model correctly predicts the strain rate-dependent tensile strengths as well as the failure modes of rock in this test. Special emphasis is laid on modelling the loading rate sensitivity of tensile strength of Laurentian granite.
NASA Technical Reports Server (NTRS)
Miller, T. L.
1986-01-01
Numerical modeling has been performed of the fluid dynamics in a prototypical physical vapor transport crystal growing situation. Cases with and without gravity have been computed. Dependence of the flows upon the dimensionless parameters aspect ratio and Peclet, Rayleigh, and Schmidt numbers is demonstrated to a greater extent than in previous works. Most notably, it is shown that the effects of thermally-induced buoyant convection upon the mass flux on the growth interface crucially depend upon the temperature boundary conditions on the sidewall (e.g., whether adiabatic or of a fixed profile, and in the latter case the results depend upon the shape of the profile assumed).
Space-dependent perfusion coefficient estimation in a 2D bioheat transfer problem
NASA Astrophysics Data System (ADS)
Bazán, Fermín S. V.; Bedin, Luciano; Borges, Leonardo S.
2017-05-01
In this work, a method for estimating the space-dependent perfusion coefficient parameter in a 2D bioheat transfer model is presented. In the method, the bioheat transfer model is transformed into a time-dependent semidiscrete system of ordinary differential equations involving perfusion coefficient values as parameters, and the estimation problem is solved through a nonlinear least squares technique. In particular, the bioheat problem is solved by the method of lines based on a highly accurate pseudospectral approach, and perfusion coefficient values are estimated by the regularized Gauss-Newton method coupled with a proper regularization parameter. The performance of the method on several test problems is illustrated numerically.
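The regularized Gauss-Newton update used for the coefficient estimation can be shown on a one-parameter toy problem: the same step, (J^T J + lambda I) dp = J^T r, recovers a decay rate from noise-free synthetic data. The actual bioheat problem estimates many perfusion values from a semidiscrete PDE system; everything below is a deliberately simplified stand-in:

```python
import numpy as np

# Toy model: y(t) = exp(-k t); recover k from synthetic observations.
t = np.linspace(0.0, 2.0, 20)
k_true = 1.3
y_obs = np.exp(-k_true * t)      # noise-free synthetic data

k, lam = 0.2, 1e-3               # initial guess, Tikhonov parameter
for _ in range(50):
    y = np.exp(-k * t)
    r = y_obs - y                # residual
    J = -t * y                   # sensitivity dy/dk
    # Regularized Gauss-Newton step in one dimension:
    dk = (J @ r) / (J @ J + lam)
    k += dk
```

In the full problem J is a Jacobian over all perfusion parameters and lam is chosen by a regularization-parameter rule; the structure of the iteration is the same.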
Calabro, Finnegan J.; Beardsley, Scott A.; Vaina, Lucia M.
2012-01-01
Estimation of time-to-arrival for moving objects is critical to obstacle interception and avoidance, as well as to timing actions such as reaching and grasping moving objects. The source of motion information that conveys arrival time varies with the trajectory of the object raising the question of whether multiple context-dependent mechanisms are involved in this computation. To address this question we conducted a series of psychophysical studies to measure observers’ performance on time-to-arrival estimation when object trajectory was specified by angular motion (“gap closure” trajectories in the frontoparallel plane), looming (colliding trajectories, TTC) or both (passage courses, TTP). We measured performance of time-to-arrival judgments in the presence of irrelevant motion, in which a perpendicular motion vector was added to the object trajectory. Data were compared to models of expected performance based on the use of different components of optical information. Our results demonstrate that for gap closure, performance depended only on the angular motion, whereas for TTC and TTP, both angular and looming motion affected performance. This dissociation of inputs suggests that gap closures are mediated by a separate mechanism than that used for the detection of time-to-collision and time-to-passage. We show that existing models of TTC and TTP estimation make systematic errors in predicting subject performance, and suggest that a model which weights motion cues by their relative time-to-arrival provides a better account of performance. PMID:22056519
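The looming cue discussed above is conventionally summarized by tau: for an approaching object, the ratio of the optical angle to its rate of expansion equals distance over speed, i.e. the time to contact. A small numeric check under the small-angle approximation (the values are arbitrary, not the experiments' stimuli):

```python
# Time-to-contact from looming: an object of physical size S at distance Z,
# approaching at speed v, subtends theta ~ S / Z, and
# tau = theta / (d theta / dt) = Z / v.
def tau_from_looming(theta, theta_dot):
    return theta / theta_dot

S, Z, v = 0.5, 20.0, 4.0        # metres, metres, metres per second
theta = S / Z                   # small-angle approximation
theta_dot = S * v / Z**2        # d/dt (S / Z(t)) with Z(t) = Z - v*t, at t = 0
tau = tau_from_looming(theta, theta_dot)
```

Note that tau requires neither S, Z, nor v individually, which is why looming alone can convey arrival time; gap-closure trajectories instead expose angular motion, hence the dissociation the study reports.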
Consolidating the effects of waking and sleep on motor-sequence learning.
Brawn, Timothy P; Fenn, Kimberly M; Nusbaum, Howard C; Margoliash, Daniel
2010-10-20
Sleep is widely believed to play a critical role in memory consolidation. Sleep-dependent consolidation has been studied extensively in humans using an explicit motor-sequence learning paradigm. In this task, performance has been reported to remain stable across wakefulness and improve significantly after sleep, making motor-sequence learning the definitive example of sleep-dependent enhancement. Recent work, however, has shown that enhancement disappears when the task is modified to reduce task-related inhibition that develops over a training session, thus questioning whether sleep actively consolidates motor learning. Here we use the same motor-sequence task to demonstrate sleep-dependent consolidation for motor-sequence learning and explain the discrepancies in results across studies. We show that when training begins in the morning, motor-sequence performance deteriorates across wakefulness and recovers after sleep, whereas performance remains stable across both sleep and subsequent waking with evening training. This pattern of results challenges an influential model of memory consolidation defined by a time-dependent stabilization phase and a sleep-dependent enhancement phase. Moreover, the present results support a new account of the behavioral effects of waking and sleep on explicit motor-sequence learning that is consistent across a wide range of tasks. These observations indicate that current theories of memory consolidation that have been formulated to explain sleep-dependent performance enhancements are insufficient to explain the range of behavioral changes associated with sleep.
Vukić, Dajana V; Vukić, Vladimir R; Milanović, Spasenija D; Ilicić, Mirela D; Kanurić, Katarina G
2018-06-01
Three different fermented dairy products obtained with conventional and non-conventional starter cultures were investigated in this paper. Textural and rheological characteristics as well as chemical composition during 21 days of storage were analysed, and subsequent data processing was performed by principal component analysis. The analysis of the samples' flow behaviour was focused on their time-dependent properties. The parameters of the Power law model describing the flow behaviour of the samples depended on the starter culture used and the day of storage. The Power law model was applied successfully to describe the flow of the fermented milk, which had the characteristics of shear-thinning, non-Newtonian fluid behaviour.
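The Power law (Ostwald-de Waele) model, tau = K * gamma_dot^n, is linear in log-log coordinates, so the consistency index K and flow index n can be recovered by simple regression on log-transformed data. A sketch on synthetic shear-thinning data (n < 1), not the article's measurements:

```python
import math

# Synthetic Power law data: tau = K * gamma_dot**n with n < 1 (shear thinning).
K_true, n_true = 8.0, 0.4
shear_rates = [0.5, 1.0, 2.0, 5.0, 10.0, 50.0]
stresses = [K_true * g ** n_true for g in shear_rates]

# log tau = log K + n * log gamma_dot: fit a line by least squares.
xs = [math.log(g) for g in shear_rates]
ys = [math.log(s) for s in stresses]
m = len(xs)
x_bar = sum(xs) / m
y_bar = sum(ys) / m
sxx = sum((x - x_bar) ** 2 for x in xs)
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
slope = sxy / sxx
intercept = y_bar - slope * x_bar
n_fit, K_fit = slope, math.exp(intercept)
```

On measured rheometer data the fit is approximate, and n_fit < 1 is the quantitative signature of the shear-thinning behaviour the article reports.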
Empirical analysis and modeling of manual turnpike tollbooths in China
NASA Astrophysics Data System (ADS)
Zhang, Hao
2017-03-01
To deal with the low level of service satisfaction at tollbooths of many turnpikes in China, we conduct an empirical study and use a queueing model to investigate performance measures. In this paper, we collect archived data from six tollbooths of a turnpike in China. Empirical analysis of the vehicles' time-dependent arrival process and the collectors' time-dependent service times is conducted. It shows that the vehicle arrival process follows a non-homogeneous Poisson process while the collector service time follows a log-normal distribution. Further, we model the process of collecting tolls at tollbooths as a MAP/PH/1/FCFS queue for mathematical tractability and present some numerical examples.
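The empirical findings above also suggest a direct simulation sketch: generate non-homogeneous Poisson arrivals by thinning (the Lewis-Shedler algorithm) and serve them first-come-first-served with log-normal service times. The intensity function and service parameters below are invented for illustration; the paper's analytical model is the MAP/PH/1/FCFS queue:

```python
import math
import random

rng = random.Random(7)

def nhpp_arrivals(rate_fn, rate_max, horizon, rng):
    """Thinning algorithm for a non-homogeneous Poisson process whose
    intensity rate_fn is bounded above by rate_max on [0, horizon]."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_max)
        if t > horizon:
            return times
        if rng.random() < rate_fn(t) / rate_max:
            times.append(t)

# Hypothetical peaked arrival intensity (vehicles per minute over an hour).
rate = lambda t: 2.0 + 1.5 * math.sin(math.pi * t / 60.0)
arrivals = nhpp_arrivals(rate, 3.5, 60.0, rng)

# Single-server FCFS queue with log-normal service times.
mu, sigma = math.log(0.25), 0.3     # illustrative service-time parameters
free_at, waits = 0.0, []
for a in arrivals:
    start = max(a, free_at)
    waits.append(start - a)
    free_at = start + rng.lognormvariate(mu, sigma)

mean_wait = sum(waits) / len(waits)
```

Such a simulation complements the analytical queue: it handles arbitrary intensity shapes at the cost of replication for confidence intervals.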
Conditional responding is impaired in chronic alcoholics.
Hildebrandt, Helmut; Brokate, B; Hoffmann, E; Kröger, B; Eling, P
2006-07-01
Bechara (2003) describes a model for disturbances in executive functions related to addiction. This model involves deficits in decision-making and in suppressing pre-potent representations or response patterns. We tested this model in 29 individuals with long-term heavy alcohol dependency and compared their performance with that of 20 control subjects. Only individuals without memory impairment, with normal intelligence and normal visual response times were included. We examined word fluency, object alternation, spatial stimulus-response incompatibility, extra-dimensional shift learning and decision-making using the Gambling task. We subtracted the performance in a control condition from that of the executive condition, in order to focus specifically on the executive component of each task. Only the object alternation and incompatibility tasks revealed significant differences between the group of alcoholics and the control group. Moreover, response times in the object alternation task correlated with duration of alcohol dependency. The results do not argue in favor of a specific deficit in decision-making or in shifting between relevant representations. We conclude that long-term alcohol abuse leads to an impairment in conditional responding, provided the response depends on former reactions or the inhibition of pre-potent response patterns.
Geometric dependence of the parasitic components and thermal properties of HEMTs
NASA Astrophysics Data System (ADS)
Vun, Peter V.; Parker, Anthony E.; Mahon, Simon J.; Fattorini, Anthony
2007-12-01
For integrated circuit design up to 50 GHz and beyond, accurate models of the transistor access structures and intrinsic structures are necessary for prediction of circuit performance. The circuit design process relies on optimising transistor geometry parameters such as unit gate width, number of gates, number of vias and gate-to-gate spacing. So the relationship between electrical and thermal parasitic components in transistor access structures, and transistor geometry is important to understand when developing models for transistors of differing geometries. Current approaches to describing the geometric dependence of models are limited to empirical methods which only describe a finite set of geometries and only include unit gate width and number of gates as variables. A better understanding of the geometric dependence is seen as a way to provide scalable models that remain accurate for continuous variation of all geometric parameters. Understanding the distribution of parasitic elements between the manifold, the terminal fingers, and the reference plane discontinuities is an issue identified as important in this regard. Examination of dc characteristics and thermal images indicates that gate-to-gate thermal coupling and increased thermal conductance at the gate ends affect the device total thermal conductance. Consequently, a distributed thermal model is proposed which accounts for these effects. This work is seen as a starting point for developing comprehensive scalable models that will allow RF circuit designers to optimise circuit performance parameters such as total die area, maximum output power, power-added efficiency (PAE) and channel temperature/lifetime.
Huang, Anna S.; Klein, Daniel N.; Leung, Hoi-Chung
2015-01-01
Spatial working memory is a central cognitive process that matures through adolescence in conjunction with major changes in brain function and anatomy. Here we focused on late childhood and early adolescence to more closely examine the neural correlates of performance variability during this important transition period. Using a modified spatial 1-back task with two memory load conditions in an fMRI study, we examined the relationship between load-dependent neural responses and task performance in a sample of 39 youth aged 9–12 years. Our data revealed that between-subject differences in task performance were predicted by load-dependent deactivation in default network regions, including the ventral anterior cingulate cortex (vACC) and posterior cingulate cortex (PCC). Although load-dependent increases in activation in prefrontal and posterior parietal regions were only weakly correlated with performance, increased prefrontal-parietal coupling was associated with better performance. Furthermore, behavioral measures of executive function from as early as age 3 predicted current load-dependent deactivation in vACC and PCC. These findings suggest that both task-positive and task-negative brain activation during spatial working memory contributed to successful task performance in late childhood/early adolescence. This may serve as a good model for studying executive control deficits in developmental disorders.
Markovian prediction of future values for food grains in the economic survey
NASA Astrophysics Data System (ADS)
Sathish, S.; Khadar Babu, S. K.
2017-11-01
Nowadays, prediction and forecasting play a vital role in research. For prediction, regression is useful for estimating future and current values of a production process. In this paper, we assume that food grain production exhibits Markov chain dependency and time homogeneity, and we evaluate its predictive performance using a daily Markov chain model. Finally, the Markov process prediction gives better performance compared with the regression model.
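As an illustration of the Markov chain dependency assumed above, the following sketch estimates a transition matrix from a discretized production series and predicts the most likely next state. The state coding and data are hypothetical, not taken from the paper.

```python
import numpy as np

def transition_matrix(states, n_states):
    """Estimate a row-stochastic transition matrix from an observed state sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # Normalize each row; rows with no observations fall back to a uniform distribution.
    return np.where(rows > 0, counts / np.where(rows == 0, 1, rows), 1.0 / n_states)

def predict_next(states, n_states):
    """Predict the most likely next state given the last observed state."""
    P = transition_matrix(states, n_states)
    return int(np.argmax(P[states[-1]]))

# Toy example: annual production levels coded as 0=low, 1=medium, 2=high.
history = [0, 1, 1, 2, 1, 1, 2, 1, 1]
print(predict_next(history, 3))  # from state 1, transitions back to 1 dominate
```

Regression-based prediction extrapolates a fitted trend, whereas this approach only conditions on the current state, which is what the time-homogeneity assumption buys.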
Yun, Lifen; Wang, Xifu; Fan, Hongqiang; Li, Xiaopeng
2017-01-01
This paper proposes a reliable facility location design model under imperfect information with site-dependent disruptions; i.e., each facility is subject to a unique disruption probability that varies across the space. In the imperfect information contexts, customers adopt a realistic “trial-and-error” strategy to visit facilities; i.e., they visit a number of pre-assigned facilities sequentially until they arrive at the first operational facility or give up looking for the service. This proposed model aims to balance initial facility investment and expected long-term operational cost by finding the optimal facility locations. A nonlinear integer programming model is proposed to describe this problem. We apply a linearization technique to reduce the difficulty of solving the proposed model. A number of problem instances are studied to illustrate the performance of the proposed model. The results indicate that our proposed model can reveal a number of interesting insights into the facility location design with site-dependent disruptions, including the benefit of backup facilities and system robustness against variation of the loss-of-service penalty.
A flexible cure rate model with dependent censoring and a known cure threshold.
Bernhardt, Paul W
2016-11-10
We propose a flexible cure rate model that accommodates different censoring distributions for the cured and uncured groups and also allows for some individuals to be observed as cured when their survival time exceeds a known threshold. We model the survival times for the uncured group using an accelerated failure time model with errors distributed according to the seminonparametric distribution, potentially truncated at a known threshold. We suggest a straightforward extension of the usual expectation-maximization algorithm approach for obtaining estimates in cure rate models to accommodate the cure threshold and dependent censoring. We additionally suggest a likelihood ratio test for testing for the presence of dependent censoring in the proposed cure rate model. We show through numerical studies that our model has desirable properties and leads to approximately unbiased parameter estimates in a variety of scenarios. To demonstrate how our method performs in practice, we analyze data from a bone marrow transplantation study and a liver transplant study. Copyright © 2016 John Wiley & Sons, Ltd.
Physical activity classification with dynamic discriminative methods.
Ray, Evan L; Sasaki, Jeffer E; Freedson, Patty S; Staudenmayer, John
2018-06-19
A person's physical activity has important health implications, so it is important to be able to measure aspects of physical activity objectively. One approach to doing that is to use data from an accelerometer to classify physical activity according to activity type (e.g., lying down, sitting, standing, or walking) or intensity (e.g., sedentary, light, moderate, or vigorous). This can be formulated as a labeled classification problem, where the model relates a feature vector summarizing the accelerometer signal in a window of time to the activity type or intensity in that window. These data exhibit two key characteristics: (1) the activity classes in different time windows are not independent, and (2) the accelerometer features have moderately high dimension and follow complex distributions. Through a simulation study and applications to three datasets, we demonstrate that a model's classification performance is related to how it addresses these aspects of the data. Dynamic methods that account for temporal dependence achieve better performance than static methods that do not. Generative methods that explicitly model the distribution of the accelerometer signal features do not perform as well as methods that take a discriminative approach to establishing the relationship between the accelerometer signal and the activity class. Specifically, Conditional Random Fields consistently have better performance than commonly employed methods that ignore temporal dependence or attempt to model the accelerometer features.
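A minimal sketch of the temporal-dependence idea: Viterbi decoding over per-window label scores with a transition score that rewards label persistence. The scores below are toy numbers, not the paper's fitted Conditional Random Field.

```python
import numpy as np

def viterbi(emissions, transitions):
    """Most likely label sequence for a linear chain.

    emissions: (T, K) per-window label scores (e.g. from a discriminative classifier)
    transitions: (K, K) score for moving from label i to label j
    """
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # cand[i, j]: best score ending in label i at t-1, then label j at t.
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = np.argmax(cand, axis=0)
        score = np.max(cand, axis=0)
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Persistence prior: staying in the same activity is rewarded.
trans = np.array([[1.0, 0.0], [0.0, 1.0]])
# The noisy middle window slightly favors label 1 on its own.
emis = np.array([[2.0, 0.0], [0.0, 0.5], [2.0, 0.0]])
print(viterbi(emis, trans))  # → [0, 0, 0]
```

The middle window, decoded statically, would flip to label 1; the transition scores let the neighboring windows override it, which is the advantage of dynamic over static classification.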
Ding, Junjie; Wang, Yi; Lin, Weiwei; Wang, Changlian; Zhao, Limei; Li, Xingang; Zhao, Zhigang; Miao, Liyan; Jiao, Zheng
2015-03-01
Valproic acid (VPA) follows a non-linear pharmacokinetic profile in terms of protein-binding saturation. The total daily dose regarding VPA clearance is a simple power function, which may partially explain the non-linearity of the pharmacokinetic profile; however, it may be confounded by the therapeutic drug monitoring effect. The aim of this study was to develop a population pharmacokinetic model for VPA based on protein-binding saturation in pediatric patients with epilepsy. A total of 1,107 VPA serum trough concentrations at steady state were collected from 902 epileptic pediatric patients aged from 3 weeks to 14 years at three hospitals. The population pharmacokinetic model was developed using NONMEM(®) software. The ability of three candidate models (the simple power exponent model, the dose-dependent maximum effect [DDE] model, and the protein-binding model) to describe the non-linear pharmacokinetic profile of VPA was investigated, and potential covariates were screened using a stepwise approach. Bootstrap, normalized prediction distribution errors and external evaluations from two independent studies were performed to determine the stability and predictive performance of the candidate models. The age-dependent exponent model described the effects of body weight and age on the clearance well. Co-medication with carbamazepine was identified as a significant covariate. The DDE model best fitted the aim of this study, although there were no obvious differences in the predictive performances. The condition number was less than 500, and the precision of the parameter estimates was less than 30 %, indicating stability and validity of the final model. The DDE model successfully described the non-linear pharmacokinetics of VPA. Furthermore, the proposed population pharmacokinetic model of VPA can be used to design rational dosage regimens to achieve desirable serum concentrations.
USDA-ARS?s Scientific Manuscript database
Spectral scattering is useful for nondestructive sensing of fruit firmness. Prediction models, however, are typically built using multivariate statistical methods such as partial least squares regression (PLSR), whose performance generally depends on the characteristics of the data. The aim of this ...
Green-ampt infiltration parameters in riparian buffers
L.M. Stahr; D.E. Eisenhauer; M.J. Helmers; Mike G. Dosskey; T.G. Franti
2004-01-01
Riparian buffers can improve surface water quality by filtering contaminants from runoff before they enter streams. Infiltration is an important process in riparian buffers. Computer models are often used to assess the performance of riparian buffers. Accurate prediction of infiltration by these models is dependent upon accurate estimates of infiltration parameters....
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dehart, Mark; Mausolff, Zander; Goluoglu, Sedat
This report summarizes university research activities performed in support of TREAT modeling and simulation research. It is a compilation of annual research reports from four universities: University of Florida, Texas A&M University, Massachusetts Institute of Technology and Oregon State University. The general research topics are, respectively, (1) 3-D time-dependent transport with TDKENO/KENO-VI, (2) implementation of the Improved Quasi-Static method in Rattlesnake/MOOSE for time-dependent radiation transport approximations, (3) improved treatment of neutron physics representations within TREAT using OpenMC, and (4) steady state modeling of the minimum critical core of the Transient Reactor Test Facility (TREAT).
Acceleration techniques for dependability simulation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Barnette, James David
1995-01-01
As computer systems increase in complexity, the need to project system performance from the earliest design and development stages increases. We have to employ simulation for detailed dependability studies of large systems. However, as the complexity of the simulation model increases, the time required to obtain statistically significant results also increases. This paper discusses an approach that is application independent and can be readily applied to any process-based simulation model. Topics include background on classical discrete event simulation and techniques for random variate generation and statistics gathering to support simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Young-Ho; Lazauskas, Rimantas; Park, Tae-Sun
M1 properties, comprising magnetic moments and radiative capture of thermal neutron observables, are studied in two- and three-nucleon systems. We use meson exchange current derived up to N³LO using heavy baryon chiral perturbation theory à la Weinberg. Calculations have been performed for several qualitatively different realistic nuclear Hamiltonians, which permits us to analyze model dependence of our results. Our results are found to be strongly correlated with the effective range parameters such as binding energies and the scattering lengths. Taking into account such correlations, the results are in good agreement with the experimental data with small model dependence.
Random walkers with extreme value memory: modelling the peak-end rule
NASA Astrophysics Data System (ADS)
Harris, Rosemary J.
2015-05-01
Motivated by the psychological literature on the ‘peak-end rule’ for remembered experience, we perform an analysis within a random walk framework of a discrete choice model where agents’ future choices depend on the peak memory of their past experiences. In particular, we use this approach to investigate whether increased noise/disruption always leads to more switching between decisions. Here extreme value theory illuminates different classes of dynamics indicating that the long-time behaviour is dependent on the scale used for reflection; this could have implications, for example, in questionnaire design.
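The peak-memory mechanism can be caricatured in a few lines: an agent repeatedly chooses between two options by a noisy comparison of the peak payoffs it remembers for each, and we count decision switches as noise varies. All payoff distributions and parameter values here are illustrative assumptions, not the paper's model.

```python
import random

def simulate(steps, noise, seed=0):
    """Count switches for an agent whose choices follow the peak remembered payoff.

    Each step the agent compares peak memories perturbed by Gaussian noise,
    then experiences a (hypothetical) uniform payoff that may update the peak.
    """
    rng = random.Random(seed)
    peak = [0.0, 0.0]          # peak (maximum) experience remembered per option
    choice, switches = 0, 0
    for _ in range(steps):
        scores = [peak[i] + rng.gauss(0, noise) for i in (0, 1)]
        new = 0 if scores[0] >= scores[1] else 1
        if new != choice:
            switches += 1
        choice = new
        payoff = rng.random()  # experienced outcome for the chosen option
        peak[choice] = max(peak[choice], payoff)
    return switches

for noise in (0.1, 0.5, 2.0):
    print(noise, simulate(1000, noise))
```

Running the loop over a range of noise levels is the numerical analogue of the question posed above: whether more noise always produces more switching between decisions.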
An Experiment on Repetitive Pulse Operation of Microwave Rocket
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oda, Yasuhisa; Shibata, Teppei; Komurasaki, Kimiya
2008-04-28
Microwave Rocket was operated with repetitive pulses. The microwave rocket model with a forced-breathing system was used. The pressure history in the thruster was measured and the thrust impulse was deduced. As a result, the impulse decreased at the second pulse, and the impulses at later pulses were constant. The dependence of the thrust performance on the partial filling rate of the thruster was compared to the thrust generation model based on the shock wave driven by microwave plasma. The experimental results showed good agreement with the predicted dependency.
NASA Astrophysics Data System (ADS)
Tomiwa, K. G.
2017-09-01
The search for new physics in the H → γγ + MET channel relies on how well the missing transverse energy is reconstructed. The MET algorithm used by the ATLAS experiment in turn uses input objects such as photons and jets, which depend on the reconstruction of the primary vertex. This document presents the performance of di-photon vertex reconstruction algorithms (the hardest-vertex method and the Neural Network method). Comparing these algorithms on the nominal Standard Model sample and the Beyond the Standard Model sample, we find that the Neural Network method of primary vertex selection performs better overall than the hardest-vertex method.
NASA Technical Reports Server (NTRS)
Orme, John S.
1995-01-01
The performance seeking control (PSC) algorithm optimizes total propulsion system performance. This adaptive, model-based optimization algorithm has been successfully flight demonstrated on two engines with differing levels of degradation. Models of the engine, nozzle, and inlet produce reliable, accurate estimates of engine performance. However, because of an observability problem, component levels of degradation cannot be accurately determined. Depending on engine-specific operating characteristics, PSC achieves various levels of performance improvement. For example, engines with more deterioration typically operate at higher turbine temperatures than less deteriorated engines. Thus, when the PSC maximum thrust mode is applied, there will be less temperature margin available to be traded for increased thrust.
On-line estimation of error covariance parameters for atmospheric data assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick P.
1995-01-01
A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters.
These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.
Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.
Huang, Yan; Wang, Wei; Wang, Liang
2018-04-01
Super-resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often shows high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN named bidirectional recurrent convolutional network for efficient multi-frame SR. Different from vanilla RNNs, (1) the commonly used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduce the number of network parameters and model the temporal dependency at a finer level, i.e., patch-based rather than frame-based; and (2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With this powerful temporal dependency modeling, our model can super-resolve videos with complex motions and achieve good performance.
Interpreting incremental value of markers added to risk prediction models.
Pencina, Michael J; D'Agostino, Ralph B; Pencina, Karol M; Janssens, A Cecile J W; Greenland, Philip
2012-09-15
The discrimination of a risk prediction model measures that model's ability to distinguish between subjects with and without events. The area under the receiver operating characteristic curve (AUC) is a popular measure of discrimination. However, the AUC has recently been criticized for its insensitivity in model comparisons in which the baseline model has performed well. Thus, 2 other measures have been proposed to capture improvement in discrimination for nested models: the integrated discrimination improvement and the continuous net reclassification improvement. In the present study, the authors use mathematical relations and numerical simulations to quantify the improvement in discrimination offered by candidate markers of different strengths as measured by their effect sizes. They demonstrate that the increase in the AUC depends on the strength of the baseline model, which is true to a lesser degree for the integrated discrimination improvement. On the other hand, the continuous net reclassification improvement depends only on the effect size of the candidate variable and its correlation with other predictors. These measures are illustrated using the Framingham model for incident atrial fibrillation. The authors conclude that the increase in the AUC, integrated discrimination improvement, and net reclassification improvement offer complementary information and thus recommend reporting all 3 alongside measures characterizing the performance of the final model.
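The three discrimination measures compared above can be computed directly from predicted risks. The following is a minimal numpy sketch using standard textbook formulas and invented toy data, not the authors' code.

```python
import numpy as np

def auc(p, y):
    """Probability that a random event risk exceeds a random non-event risk (ties count half)."""
    pos, neg = p[y == 1], p[y == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def idi(p_old, p_new, y):
    """Integrated discrimination improvement: gain in mean risk separation."""
    d_new = p_new[y == 1].mean() - p_new[y == 0].mean()
    d_old = p_old[y == 1].mean() - p_old[y == 0].mean()
    return d_new - d_old

def continuous_nri(p_old, p_new, y):
    """Continuous NRI: net proportion of events moved up plus non-events moved down."""
    up = np.sign(p_new - p_old)
    return up[y == 1].mean() - up[y == 0].mean()

# Toy cohort: two events, two non-events, with risks from a baseline and an updated model.
y = np.array([1, 1, 0, 0])
p_old = np.array([0.6, 0.5, 0.5, 0.4])
p_new = np.array([0.7, 0.6, 0.4, 0.3])
print(auc(p_new, y), round(idi(p_old, p_new, y), 2), continuous_nri(p_old, p_new, y))
```

Note how the continuous NRI depends only on the direction of risk changes, while the AUC and IDI depend on their magnitudes relative to the baseline separation, which is the complementarity the authors describe.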
Observational uncertainty and regional climate model evaluation: A pan-European perspective
NASA Astrophysics Data System (ADS)
Kotlarski, Sven; Szabó, Péter; Herrera, Sixto; Räty, Olle; Keuler, Klaus; Soares, Pedro M.; Cardoso, Rita M.; Bosshard, Thomas; Pagé, Christian; Boberg, Fredrik; Gutiérrez, José M.; Jaczewski, Adam; Kreienkamp, Frank; Liniger, Mark. A.; Lussana, Cristian; Szepszo, Gabriella
2017-04-01
Local and regional climate change assessments based on downscaling methods crucially depend on the existence of accurate and reliable observational reference data. In dynamical downscaling via regional climate models (RCMs), observational data can influence model development itself and, later on, model evaluation, parameter calibration and added value assessment. In empirical-statistical downscaling, observations serve as predictand data and directly influence model calibration with corresponding effects on downscaled climate change projections. Focusing on the evaluation of RCMs, we here analyze the influence of uncertainties in observational reference data on evaluation results in a well-defined performance assessment framework and on a European scale. For this purpose we employ three different gridded observational reference grids, namely (1) the well-established EOBS dataset, (2) the recently developed EURO4M-MESAN regional re-analysis, and (3) several national high-resolution and quality-controlled gridded datasets that recently became available. In terms of climate models, five reanalysis-driven experiments carried out by five different RCMs within the EURO-CORDEX framework are used. Two variables (temperature and precipitation) and a range of evaluation metrics that reflect different aspects of RCM performance are considered. We furthermore include an illustrative model ranking exercise and relate observational spread to RCM spread. The results obtained indicate a varying influence of observational uncertainty on model evaluation depending on the variable, the season, the region and the specific performance metric considered. Over most parts of the continent, the influence of the choice of the reference dataset for temperature is rather small for seasonal mean values and inter-annual variability. Here, model uncertainty (as measured by the spread between the five RCM simulations considered) is typically much larger than reference data uncertainty.
For parameters of the daily temperature distribution and for the spatial pattern correlation, however, important dependencies on the reference dataset can arise. The related evaluation uncertainties can be as large or even larger than model uncertainty. For precipitation the influence of observational uncertainty is, in general, larger than for temperature. It often dominates model uncertainty especially for the evaluation of the wet day frequency, the spatial correlation and the shape and location of the distribution of daily values. But even the evaluation of large-scale seasonal mean values can be considerably affected by the choice of the reference. When employing a simple and illustrative model ranking scheme on these results it is found that RCM ranking in many cases depends on the reference dataset employed.
EEJ and EIA variations during modeling substorms with different onset moments
NASA Astrophysics Data System (ADS)
Klimenko, V. V.; Klimenko, M. V.
2015-11-01
This paper presents simulations of four model substorms with different onset moments at 00:00 UT, 06:00 UT, 12:00 UT, and 18:00 UT for spring equinoctial conditions at solar activity minimum. Such an investigation provides an opportunity to examine the longitudinal dependence of the ionospheric response to geomagnetic substorms. Model runs were performed using a modified Global Self-consistent Model of the Thermosphere, Ionosphere and Protonosphere (GSM TIP). We analyzed GSM TIP simulated global distributions of foF2, the low-latitude electric field and ionospheric currents at the geomagnetic equator, and their disturbances for substorms at different UT moments. We considered in more detail the variations in the equatorial ionization anomaly (EIA), equatorial electrojet (EEJ) and counter equatorial electrojet (CEJ) during substorms. It is shown that: (1) the effects in the EIA, EEJ and CEJ strongly depend on the substorm onset moment; (2) disturbances in the equatorial zonal current density during a substorm have a significant longitudinal dependence; (3) the observed controversy on the equatorial ionospheric electric field signature of substorms can depend on the substorm onset moments, i.e., on the longitudinal variability in the parameters of the thermosphere-ionosphere system.
NASA Astrophysics Data System (ADS)
Keivani, M.; Abadian, N.; Koochi, A.; Mokhtari, J.; Abadyan, M.
2016-10-01
It has been well established that the physical performance of nanodevices can be affected by their microstructure. Herein, a two-degree-of-freedom model based on the modified couple stress theory is developed to incorporate the impact of microstructure on the torsion/bending coupled instability of rotational nanoscanners. The effect of microstructure dependency on the instability parameters is determined as a function of the microstructure parameter, bending/torsion coupling ratio, van der Waals force parameter and geometrical dimensions. It is found that the bending/torsion coupling substantially affects the stability behavior of the scanners, especially those with long rotational beam elements. The impact of microstructure on the instability voltage of the nanoscanner depends on the coupling ratio and the dominance of the bending mode over the torsion mode. This effect is more pronounced for higher values of the coupling ratio. Depending on the geometry and material characteristics, the presented model is able to simulate both the hardening behavior (due to microstructure) and the softening behavior (due to torsion/bending coupling) of the nanoscanners.
Modeling Local Item Dependence Due to Common Test Format with a Multidimensional Rasch Model
ERIC Educational Resources Information Center
Baghaei, Purya; Aryadoust, Vahid
2015-01-01
Research shows that test method can exert a significant impact on test takers' performance and thereby contaminate test scores. We argue that common test method can exert the same effect as common stimuli and violate the conditional independence assumption of item response theory models because, in general, subsets of items which have a shared…
Viscoelastic modeling of deformation and gravity changes induced by pressurized magmatic sources
NASA Astrophysics Data System (ADS)
Currenti, Gilda
2018-05-01
Gravity and height changes, which reflect magma accumulation in subsurface chambers, are evaluated using analytical and numerical models in order to investigate their relationships and temporal evolutions. The analysis focuses mainly on the exploration of the time-dependent response of gravity and height changes to the pressurization of ellipsoidal magmatic chambers in viscoelastic media. Firstly, the validation of the numerical Finite Element results is performed by comparison with analytical solutions, which are devised for a simple spherical source embedded in a homogeneous viscoelastic half-space medium. Then, the effect of several model parameters on time-dependent height and gravity changes is investigated thanks to the flexibility of the numerical method in handling complex configurations. Both homogeneous and viscoelastic shell models reveal significantly different amplitudes in the ratio between gravity and height changes depending on geometry factors and medium rheology. The results show that these factors also influence the relaxation characteristic times of the investigated geophysical changes. Overall, these temporal patterns are compatible with time-dependent height and gravity changes observed on Etna volcano during the 1994-1997 inflation period. By modeling the viscoelastic response of a pressurized prolate magmatic source, a general agreement between computed and observed geophysical variations is achieved.
NASA Astrophysics Data System (ADS)
van Daal-Rombouts, Petra; Sun, Siao; Langeveld, Jeroen; Bertrand-Krajewski, Jean-Luc; Clemens, François
2016-07-01
Optimisation or real time control (RTC) studies in wastewater systems increasingly require rapid simulations of sewer systems in extensive catchments. To reduce the simulation time, calibrated simplified models are applied, with performance generally judged by the goodness of fit of the calibration. In this research, the performance of three simplified models and a full hydrodynamic (FH) model for two catchments is compared based on the correct determination of CSO event occurrences and of the total discharged volumes to the surface water. Simplified model M1 consists of a rainfall runoff outflow (RRO) model only. M2 combines the RRO model with a static reservoir model for the sewer behaviour. M3 comprises the RRO model and a dynamic reservoir model. The dynamic reservoir characteristics were derived from FH model simulations. It was found that M2 and M3 are able to describe the sewer behaviour of the catchments, contrary to M1. The preferred model structure depends on the quality of the information (geometrical database and monitoring data) available for the design and calibration of the model. Finally, calibrated simplified models are shown to be preferable to uncalibrated FH models when performing optimisation or RTC studies.
Reliability-based management of buried pipelines considering external corrosion defects
NASA Astrophysics Data System (ADS)
Miran, Seyedeh Azadeh
Corrosion is one of the main deteriorating mechanisms that degrade energy pipeline integrity, due to transferring corrosive fluid or gas and interacting with a corrosive environment. Corrosion defects are usually detected by periodical inspections using in-line inspection (ILI) methods. In order to ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed study is applied to a cathodically protected buried steel pipeline located in Mexico. First, a time-dependent power-law formulation is adopted to probabilistically characterize the growth of the maximum depth and length of the external corrosion defects. Dependency between defect depth and length is considered in the model development, and the generation of corrosion defects over time is characterized by a homogeneous Poisson process. The growth models' unknown parameters are evaluated based on the ILI data through the Bayesian updating method with the Markov Chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and have the ability to consider defects newly generated since the last inspection. Results of this part of the study show that both depth and length growth models can predict damage quantities reasonably well, and a strong correlation between defect depth and length is found. Next, time-dependent system failure probabilities are evaluated using the developed corrosion growth models considering prevailing uncertainties, where three failure modes, namely small leak, large leak and rupture, are considered. Performance of the pipeline is evaluated through the failure probability per km (each km is referred to as a sub-system), where each sub-system is considered as a series system of detected and newly generated defects within that sub-system.
Sensitivity analysis is also performed to determine which parameters in the growth models most strongly affect the reliability of the studied pipeline. The reliability analysis results suggest that newly generated defects should be considered in calculating the failure probability, especially for predicting the long-term performance of the pipeline, and that the impact of statistical uncertainty in the model parameters is significant and should be included in the reliability analysis. Finally, with the evaluated time-dependent failure probabilities, a life-cycle cost analysis is conducted to determine the optimal inspection interval of the studied pipeline. The expected total life-cycle cost consists of the construction cost and the expected costs of inspections, repair, and failure. A repair is conducted when the failure probability of any described failure mode exceeds a pre-defined probability threshold after an inspection. Moreover, this study investigates the impact of repair threshold values and of the unit costs of inspection and failure on the expected total life-cycle cost and the optimal inspection interval through a parametric study. The analysis suggests that a smaller inspection interval leads to higher inspection costs but can lower the failure cost, and that the repair cost is less significant than the inspection and failure costs.
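The power-law growth model with Bayesian updating described above can be sketched as a random-walk Metropolis sampler. The prior, likelihood form, step size, and parameter values below are illustrative assumptions, not the paper's fitted values:

```python
import math
import random

def depth(a, b, t):
    """Power-law growth of maximum defect depth: d(t) = a * t^b."""
    return a * t ** b

def log_likelihood(a, b, times, obs, sigma=0.05):
    """Gaussian measurement-error likelihood around the power-law mean."""
    return -sum((d - depth(a, b, t)) ** 2 for t, d in zip(times, obs)) / (2 * sigma ** 2)

def metropolis(times, obs, n_iter=5000, step=0.02, seed=1):
    """Random-walk Metropolis sampler for the growth parameters (a, b)."""
    random.seed(seed)
    a, b = 0.1, 0.5                      # arbitrary starting point
    ll = log_likelihood(a, b, times, obs)
    samples = []
    for _ in range(n_iter):
        a_p = abs(a + random.gauss(0, step))   # reflect to keep a, b > 0
        b_p = abs(b + random.gauss(0, step))
        ll_p = log_likelihood(a_p, b_p, times, obs)
        if random.random() < math.exp(min(0.0, ll_p - ll)):  # accept step
            a, b, ll = a_p, b_p, ll_p
        samples.append((a, b))
    return samples

# Synthetic inspection depths (mm) generated from a = 0.2, b = 0.8
times = [2.0, 5.0, 8.0, 11.0]
obs = [depth(0.2, 0.8, t) for t in times]
samples = metropolis(times, obs)
a_hat = sum(s[0] for s in samples[2000:]) / len(samples[2000:])
b_hat = sum(s[1] for s in samples[2000:]) / len(samples[2000:])
```

With noiseless synthetic data the posterior means land near the generating values; real ILI data would add measurement-error and prior terms.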
Chen, Disheng; Lander, Gary R; Solomon, Glenn S; Flagg, Edward B
2017-01-20
Resonant photoluminescence excitation (RPLE) spectra of a neutral InGaAs quantum dot show unconventional line shapes that depend on the detection polarization. We characterize this phenomenon by performing polarization-dependent RPLE measurements and simulating the measured spectra with a three-level quantum model. The spectra are explained by interference between fields coherently scattered from the two fine structure split exciton states, and the measurements enable extraction of the steady-state coherence between the two exciton states.
Efficient Model Posing and Morphing Software
2014-04-01
Air Force Research Laboratory, 711th Human Performance Wing, Human Effectiveness Directorate, Bioeffects Division, Radio Frequency... The absorption of electromagnetic energy within human tissue depends upon anatomical posture and body
NASA Astrophysics Data System (ADS)
Sharma, Pankaj; Jain, Ajai
2014-12-01
Stochastic dynamic job shop scheduling problems with sequence-dependent setup times are among the most difficult classes of scheduling problems. This paper assesses the performance of nine dispatching rules in such a shop with respect to makespan, mean flow time, maximum flow time, mean tardiness, maximum tardiness, number of tardy jobs, total setups and mean setup time. A discrete event simulation model of a stochastic dynamic job shop manufacturing system is developed for the investigation, and nine dispatching rules identified from the literature are incorporated in it. The simulation experiments are conducted under a due date tightness factor of 3, a shop utilization of 90% and setup times less than processing times. Results indicate that the shortest setup time (SIMSET) rule provides the best performance for the mean flow time and number of tardy jobs measures. The job with similar setup and modified earliest due date (JMEDD) rule provides the best performance for the makespan, maximum flow time, mean tardiness, maximum tardiness, total setups and mean setup time measures.
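As a minimal sketch of how such dispatching rules select the next job from a machine queue, the toy job records and rule functions below are hypothetical (the study itself uses a full discrete event simulation):

```python
# Hypothetical queue records: (job_id, setup_time, processing_time, due_date)
queue = [
    ("J1", 4.0, 10.0, 30.0),
    ("J2", 1.5, 12.0, 25.0),
    ("J3", 3.0, 6.0, 20.0),
]

def simset(queue):
    """SIMSET rule: dispatch the queued job with the shortest setup time."""
    return min(queue, key=lambda job: job[1])

def edd(queue):
    """Earliest-due-date rule, shown for comparison."""
    return min(queue, key=lambda job: job[3])

next_by_setup = simset(queue)   # selects J2 (setup time 1.5)
next_by_due = edd(queue)        # selects J3 (due date 20.0)
```

In a full simulation these selectors would be invoked at every machine-release event, with setup times depending on the previously processed job.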
Laser Powered Launch Vehicle Performance Analyses
NASA Technical Reports Server (NTRS)
Chen, Yen-Sen; Liu, Jiwen; Wang, Ten-See (Technical Monitor)
2001-01-01
The purpose of this study is to establish the technical ground for modeling the physics of the laser powered pulse detonation phenomenon. Laser powered propulsion systems involve complex fluid dynamics, thermodynamics and radiative transfer processes. Successful predictions of the performance of laser powered launch vehicle concepts depend on sophisticated models that reflect the underlying flow physics, including laser ray tracing and focusing, inverse Bremsstrahlung (IB) effects, finite-rate air chemistry, thermal non-equilibrium, plasma radiation and detonation wave propagation. The proposed work will extend the base-line numerical model to an efficient design analysis tool. The proposed model is suitable for 3-D analysis using parallel computing methods.
Development of a bioenergetics model for the threespine stickleback Gasterosteus aculeatus
Hovel, Rachel A.; Beauchamp, David A.; Hansen, Adam G.; Sorel, Mark H.
2016-01-01
The Threespine Stickleback Gasterosteus aculeatus is widely distributed across northern hemisphere ecosystems, has ecological influence as an abundant planktivore, and is commonly used as a model organism, but the species lacks a comprehensive model to describe bioenergetic performance in response to varying environmental or ecological conditions. This study parameterized a bioenergetics model for the Threespine Stickleback using laboratory measurements to determine mass- and temperature-dependent functions for maximum consumption and routine respiration costs. Maximum consumption experiments were conducted across a range of temperatures from 7.5°C to 23.0°C and a range of fish weights from 0.5 to 4.5 g. Respiration experiments were conducted across a range of temperatures from 8°C to 28°C. Model sensitivity was consistent with other comparable models in that the mass-dependent parameters for maximum consumption were the most sensitive. Growth estimates based on the Threespine Stickleback bioenergetics model suggested that 22°C is the optimal temperature for growth when food is not limiting. The bioenergetics model performed well when used to predict independent, paired measures of consumption and growth observed from a separate wild population of Threespine Sticklebacks. Predicted values for consumption and growth (expressed as percent body weight per day) only deviated from observed values by 2.0%. Our model should provide insight into the physiological performance of this species across a range of environmental conditions and be useful for quantifying the trophic impact of this species in food webs containing other ecologically or economically important species.
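A Wisconsin-style mass- and temperature-dependent consumption function of the kind parameterized above can be sketched as follows. The coefficients and the dome-shaped temperature function (peaking at 22°C to echo the optimum reported above) are illustrative assumptions, not the fitted stickleback parameters:

```python
def f_temp(T, T_opt=22.0, width=12.0):
    """Illustrative dome-shaped temperature dependence, peaking at T_opt."""
    return max(0.0, 1.0 - ((T - T_opt) / width) ** 2)

def c_max(W, T, CA=0.3, CB=-0.25):
    """Mass- and temperature-dependent maximum consumption (g/g/day):
    Cmax = CA * W^CB * f(T).  CA and CB are illustrative placeholders,
    not the fitted stickleback values; CB < 0 gives the usual decline
    of mass-specific consumption with body mass W (g)."""
    return CA * W ** CB * f_temp(T)

def growth(W, T, p=0.6, resp_frac=0.4):
    """Toy energy budget: realized consumption (proportion p of Cmax)
    minus a fixed respiratory loss fraction."""
    return p * c_max(W, T) * (1.0 - resp_frac)

# Growth peaks at the optimal temperature when food is not limiting
rates = {T: growth(1.0, T) for T in (10, 16, 22, 28)}
```

A real implementation would use the fitted consumption and respiration parameters and an energy-density conversion rather than this reduced budget.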
NASA Astrophysics Data System (ADS)
Lotfy, K.; Sarkar, N.
2017-11-01
In this work, a novel generalized model of photothermal theory with two-temperature thermoelasticity based on memory-dependent derivative (MDD) theory is developed. A one-dimensional problem for an elastic semiconductor material with isotropic and homogeneous properties is considered. The problem is solved with the new MDD model under the influence of a mechanical force with photothermal excitation. The Laplace transform technique is used to remove the time-dependent terms in the governing equations, and the general solutions of some physical fields are obtained. The surface under consideration is traction-free and subjected to a time-dependent thermal shock. Numerical Laplace inversion is used to obtain numerical results for the physical quantities of the problem. Finally, the obtained results are presented and discussed graphically.
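Numerical Laplace inversion of the kind used above is commonly performed with the Gaver-Stehfest algorithm. The minimal sketch below (with an arbitrary even term count N = 12) inverts a known transform pair as a self-check; it is not tied to the paper's specific field equations:

```python
import math

def stehfest_coeffs(N=12):
    """Gaver-Stehfest weights V_k (N must be even)."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s):
       f(t) ~ (ln 2 / t) * sum_k V_k * F(k * ln 2 / t)."""
    V = stehfest_coeffs(N)
    ln2_t = math.log(2.0) / t
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

# Self-check against a known pair: F(s) = 1/(s + 1)  <=>  f(t) = exp(-t)
approx = invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0)
```

Gaver-Stehfest works well for smooth, non-oscillatory solutions; oscillatory responses usually need a different inversion scheme (e.g. Talbot or Fourier-series methods).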
Document page structure learning for fixed-layout e-books using conditional random fields
NASA Astrophysics Data System (ADS)
Tao, Xin; Tang, Zhi; Xu, Canhui
2013-12-01
In this paper, a model is proposed to learn the logical structure of fixed-layout document pages by combining a support vector machine (SVM) and conditional random fields (CRF). Features related to each logical label and their dependencies are extracted from various original Portable Document Format (PDF) attributes. Both local evidence and contextual dependencies are integrated in the proposed model so as to achieve better logical labeling performance. With the merits of the SVM as a local discriminative classifier and the CRF modeling contextual correlations of adjacent fragments, the model is capable of resolving ambiguities in semantic labels. The experimental results show that CRF-based models with both tree and chain graph structures outperform the SVM model, with an increase in macro-averaged F1 of about 10%.
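The combination of local discriminative scores with chain-structured contextual dependencies can be sketched as max-sum (Viterbi) decoding. The labels, local scores, and transition scores below are hypothetical stand-ins for the SVM outputs and learned CRF potentials:

```python
LABELS = ["title", "body", "footnote"]

# Hypothetical per-fragment local scores (stand-ins for SVM decision values).
local = [
    {"title": 2.0, "body": 0.5, "footnote": 0.1},
    {"title": 0.3, "body": 1.8, "footnote": 0.2},
    {"title": 0.2, "body": 1.5, "footnote": 1.4},
]

# Chain-CRF-style transition scores between adjacent fragment labels;
# unlisted pairs score 0.
trans = {
    ("title", "body"): 1.0, ("body", "body"): 0.8,
    ("body", "footnote"): 0.5,
}

def viterbi(local, trans, labels):
    """Max-sum decoding of the best label sequence along the chain."""
    best = {lab: (local[0][lab], [lab]) for lab in labels}
    for obs in local[1:]:
        new_best = {}
        for lab in labels:
            prev, (score, path) = max(
                best.items(),
                key=lambda kv: kv[1][0] + trans.get((kv[0], lab), 0.0))
            new_best[lab] = (score + trans.get((prev, lab), 0.0) + obs[lab],
                             path + [lab])
        best = new_best
    return max(best.values(), key=lambda sp: sp[0])[1]

labels_out = viterbi(local, trans, LABELS)
```

Here the transition bonus pulls the ambiguous third fragment (body 1.5 vs footnote 1.4) toward the label consistent with its neighbor, which is exactly how the contextual model resolves local ambiguity.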
Sun, Xingshu; Silverman, Timothy; Garris, Rebekah; ...
2016-07-18
In this study, we present a physics-based analytical model for copper indium gallium diselenide (CIGS) solar cells that describes the illumination- and temperature-dependent current-voltage (I-V) characteristics and accounts for the statistical shunt variation of each cell. The model is derived by solving the drift-diffusion transport equation so that its parameters are physical and, therefore, can be obtained from independent characterization experiments. The model is validated against CIGS I-V characteristics as a function of temperature and illumination intensity. This physics-based model can be integrated into a large-scale simulation framework to optimize the performance of solar modules, as well as predict the long-term output yields of photovoltaic farms under different environmental conditions.
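As a minimal illustration of solving an implicit diode-type I-V relation numerically, the sketch below uses the standard single-diode equation solved by bisection. The parameter values are illustrative placeholders, not the paper's drift-diffusion-derived CIGS parameters:

```python
import math

def diode_current(V, IL=0.035, I0=1e-9, n=1.5, Rs=0.5, Rsh=800.0, T=300.0):
    """Solve the implicit single-diode equation
       I = IL - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    for I at a given terminal voltage V, by bisection."""
    Vt = 1.380649e-23 * T / 1.602176634e-19  # thermal voltage kT/q
    def f(I):
        return (IL - I0 * (math.exp((V + I * Rs) / (n * Vt)) - 1)
                - (V + I * Rs) / Rsh - I)
    lo, hi = -IL, IL                          # bracket the root
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:               # root lies in [lo, mid]
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

i_sc = diode_current(0.0)     # short-circuit current, close to IL
i_mid = diode_current(0.35)   # current at mid-voltage, slightly lower
```

Temperature and illumination dependence would enter through IL(G) and I0(T), which is what makes a physics-based parameterization predictive across operating conditions.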
NASA Astrophysics Data System (ADS)
Arnaoudova, Kristina; Stanchev, Peter
2015-11-01
Business processes are a key asset for every organization, and the design of business process models is a foremost concern among an organization's functions. Business processes and their proper management depend intensely on the performance of software applications and technology solutions. This paper attempts to define a new conceptual model of an IT service provider, which can be examined as an IT-focused enterprise model, part of the Enterprise Architecture (EA) school.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, F.; Zimmerman, B.; Heard, F.
A number of N Reactor core heatup studies have been performed using the TRUMP-BD computer code. These studies were performed to address questions concerning the dependency of results on potential variations in the material properties and/or modeling assumptions. This report describes and documents a series of 31 TRUMP-BD runs that were performed to determine the sensitivity of calculated inner-fuel temperatures to a variety of TRUMP input parameters and to a change in the node density in a high-temperature-gradient region. The results of this study are based on the 32-in. model. 18 refs., 17 figs., 2 tabs.
Thermal Ablation Modeling for Silicate Materials
NASA Technical Reports Server (NTRS)
Chen, Yih-Kanq
2016-01-01
A general thermal ablation model for silicates is proposed. The model includes the mass losses through the balance between evaporation and condensation, and through the moving molten layer driven by surface shear force and pressure gradient. This model can be applied in the ablation simulation of meteoroids and of glassy ablators for spacecraft Thermal Protection Systems. Time-dependent axisymmetric computations are performed by coupling the fluid dynamics code, the Data-Parallel Line Relaxation program, with the material response code, the Two-dimensional Implicit Thermal Ablation simulation program, to predict the mass loss rates and shape change. The predicted mass loss rates will be compared with available data for model validation, and parametric studies will also be performed for meteoroid earth entry conditions.
Maurer, Christian; Baré, Jonathan; Kusmierczyk-Michulec, Jolanta; ...
2018-03-08
After performing a first multi-model exercise in 2015, a comprehensive and technically more demanding atmospheric transport modelling challenge was organized in 2016. Release data were provided by the Australian Nuclear Science and Technology Organization radiopharmaceutical facility in Sydney (Australia) for a one month period. Measured samples for the same time frame were gathered from six International Monitoring System stations in the Southern Hemisphere with distances to the source ranging between 680 (Melbourne) and about 17,000 km (Tristan da Cunha). Participants were prompted to work with unit emissions in pre-defined emission intervals (daily, half-daily, 3-hourly and hourly emission segment lengths) and, in order to perform a blind test, actual emission values were not provided to them. Despite the quite different settings of the two atmospheric transport modelling challenges, there is common evidence that, for long-range atmospheric transport, using temporally highly resolved emissions and highly space-resolved meteorological input fields has no significant advantage compared to using lower resolved ones. Likewise, an uncertainty of up to 20% in the daily stack emission data turns out to be acceptable for the purposes of a study like this. Model performance at individual stations is quite diverse, depending largely on successfully capturing boundary layer processes. No single model-meteorology combination performs best for all stations. Moreover, the station statistics do not depend on the distance between the source and the individual stations. Finally, it became more evident how future exercises need to be designed. Set-up parameters like the meteorological driver or the output grid resolution should be prescribed in order to enhance diversity as well as comparability among model runs.
CPMIP: measurements of real computational performance of Earth system models in CMIP6
NASA Astrophysics Data System (ADS)
Balaji, Venkatramani; Maisonnave, Eric; Zadeh, Niki; Lawrence, Bryan N.; Biercamp, Joachim; Fladrich, Uwe; Aloisio, Giovanni; Benson, Rusty; Caubel, Arnaud; Durachta, Jeffrey; Foujols, Marie-Alice; Lister, Grenville; Mocavero, Silvia; Underwood, Seth; Wright, Garrett
2017-01-01
A climate model represents a multitude of processes on a variety of timescales and space scales: a canonical example of multi-physics multi-scale modeling. The underlying climate system is physically characterized by sensitive dependence on initial conditions and natural stochastic variability, so very long integrations are needed to extract signals of climate change. Algorithms generally possess weak scaling and can be I/O- and/or memory-bound. Such weak-scaling, I/O-, and memory-bound multi-physics codes present particular challenges to computational performance. Traditional metrics of computational efficiency, such as performance counters and scaling curves, do not tell us enough about real sustained performance from climate models on different machines. They also do not provide a satisfactory basis for comparative information across models. We introduce a set of metrics that can be used for the study of computational performance of climate (and Earth system) models. These measures do not require specialized software or specific hardware counters, and should be accessible to anyone. They are independent of platform and underlying parallel programming models. We show how these metrics can be used to measure actually attained performance of Earth system models on different machines, and to identify the most fruitful areas of research and development for performance engineering. We present results for these measures for a diverse suite of models from several modeling centers, and propose to use these measures as the basis for CPMIP, a computational performance model intercomparison project (MIP).
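Two of the headline CPMIP-style metrics, simulated years per day (SYPD) and core-hours per simulated year (CHSY), need no special counters and can be computed directly from a run log:

```python
def sypd(sim_years, wall_hours):
    """Simulated years per wall-clock day: throughput of the model run."""
    return sim_years / (wall_hours / 24.0)

def chsy(cores, sim_years, wall_hours):
    """Core-hours consumed per simulated year: cost of the model run."""
    return cores * wall_hours / sim_years

# Hypothetical run: 10 simulated years in 20 wall-clock hours on 1024 cores
speed = sypd(10, 20)       # 12.0 simulated years per day
cost = chsy(1024, 10, 20)  # 2048.0 core-hours per simulated year
```

Because both metrics come from wall-clock accounting rather than hardware counters, they are portable across platforms and programming models, which is the point of the proposed measurement set.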
Chen, Gang; Xu, Zhengyuan; Ding, Haipeng; Sadler, Brian
2009-03-02
We consider outdoor non-line-of-sight deep ultraviolet (UV) solar blind communications at ranges up to 100 m, with different transmitter and receiver geometries. We propose an empirical channel path loss model, and fit the model based on extensive measurements. We observe range-dependent power decay with a power exponent that varies from 0.4 to 2.4 with varying geometry. We compare with the single scattering model, and show that the single scattering assumption leads to a model that is not accurate for small apex angles. Our model is then used to study fundamental communication system performance trade-offs among transmitted optical power, range, link geometry, data rate, and bit error rate. Both weak and strong solar background radiation scenarios are considered to bound detection performance. These results provide guidelines to system design.
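Fitting an empirical power-law path loss model L = ξ·r^α to measurements reduces to linear regression in log-log space. The data below are synthetic, generated from assumed ξ and α values rather than the paper's measurements:

```python
import math

# Synthetic (range in m, path loss) pairs generated from L = xi * r**alpha
xi_true, alpha_true = 1.0e9, 1.6
ranges = [10.0, 20.0, 40.0, 60.0, 80.0, 100.0]
losses = [xi_true * r ** alpha_true for r in ranges]

def fit_power_law(ranges, losses):
    """Least-squares fit of log L = log xi + alpha * log r."""
    xs = [math.log(r) for r in ranges]
    ys = [math.log(pl) for pl in losses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    xi = math.exp(my - alpha * mx)
    return xi, alpha

xi_hat, alpha_hat = fit_power_law(ranges, losses)
```

With real measurements, separate fits per transmitter-receiver geometry would recover the geometry-dependent exponents (0.4 to 2.4) reported above.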
NASA Astrophysics Data System (ADS)
Chen, Peng; Bai, Xian-Xu; Qian, Li-Jun; Choi, Seung-Bok
2017-06-01
This paper presents a new hysteresis model based on the force-displacement characteristics of magnetorheological (MR) fluid actuators (or devices) operated in squeeze mode. The idea of the proposed model originates from experimental observation of the field-dependent hysteretic behavior of MR fluids, which shows that, from the viewpoint of rate-independent hysteresis, a gap width-dependent hysteresis occurs in the force-displacement relationship rather than in the typical force-velocity relationship. To portray the hysteresis behavior effectively and accurately, gap width-dependent hysteresis elements, the nonlinear viscous effect and the inertial effect are considered in the formulation of the model. Then, a model-based feedforward force tracking control scheme is established through an observer that estimates the virtual displacement. The effectiveness of the proposed hysteresis model is validated through identification and prediction of the damping force of MR fluids in the squeeze mode. In addition, superior force tracking performance of the feedforward control based on the proposed hysteresis model is demonstrated using several tracking trajectories.
NASA Technical Reports Server (NTRS)
Yonekura, Emmi; Hall, Timothy M.
2011-01-01
A new statistical model for western North Pacific Ocean tropical cyclone genesis and tracks is developed and applied to estimate regionally resolved tropical cyclone landfall rates along the coasts of the Asian mainland, Japan, and the Philippines. The model is constructed on International Best Track Archive for Climate Stewardship (IBTrACS) 1945-2007 historical data for the western North Pacific. The model is evaluated in several ways, including comparing the stochastic spread in simulated landfall rates with historic landfall rates. Although certain biases have been detected, overall the model performs well on the diagnostic tests, for example reproducing the geographic distribution of landfall rates well. Western North Pacific cyclogenesis is influenced by El Nino-Southern Oscillation (ENSO). This dependence is incorporated in the model's genesis component to project the ENSO-genesis dependence onto landfall rates. There is a pronounced southeastward shift in cyclogenesis and a small but significant reduction in basinwide annual counts with increasing ENSO index value. On almost all regions of coast, landfall rates are significantly higher in a negative ENSO state (La Nina).
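The ENSO dependence of annual genesis counts can be sketched as a log-linear Poisson rate model. The coefficients below are illustrative assumptions (a negative slope to mimic the reduction with increasing ENSO index), not values fitted to IBTrACS:

```python
import math
import random

def genesis_rate(enso, a=math.log(25.0), b=-0.08):
    """Log-linear annual genesis rate: lambda = exp(a + b * ENSO).
    a and b are illustrative, not fitted to IBTrACS."""
    return math.exp(a + b * enso)

def poisson_draw(lam, rng):
    """Sample a Poisson count by inversion of the CDF."""
    u, k, p = rng.random(), 0, math.exp(-lam)
    cdf = p
    while cdf < u and p > 0.0:
        k += 1
        p *= lam / k
        cdf += p
    return k

def mean_annual_count(enso, years=1000, seed=7):
    """Average simulated annual genesis count for a fixed ENSO state."""
    rng = random.Random(seed)
    lam = genesis_rate(enso)
    return sum(poisson_draw(lam, rng) for _ in range(years)) / years

la_nina = mean_annual_count(-1.5)   # negative ENSO state: higher counts
el_nino = mean_annual_count(+1.5)
```

Propagating such a rate model through simulated tracks is what projects the ENSO-genesis dependence onto regional landfall rates.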
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hepburn, I.; De Schutter, E., E-mail: erik@oist.jp; Theoretical Neurobiology & Neuroengineering, University of Antwerp, Antwerp 2610
Spatial stochastic molecular simulations in biology are limited by the intense computation required to track molecules in space, in either a discrete-time or discrete-space framework, which has led in recent years to the development of parallel methods that can take advantage of the power of modern supercomputers. We systematically test suggested components of stochastic reaction-diffusion operator splitting in the literature and discuss their effects on accuracy. We introduce an operator splitting implementation for irregular meshes that enhances accuracy with minimal performance cost. We test a range of models in small-scale MPI simulations, from simple diffusion models to realistic biological models, and find that multi-dimensional geometry partitioning is an important consideration for optimum performance. We demonstrate performance gains of 1-3 orders of magnitude in the parallel implementation, with peak performance strongly dependent on model specification.
A High-Performance Cellular Automaton Model of Tumor Growth with Dynamically Growing Domains
Poleszczuk, Jan; Enderling, Heiko
2014-01-01
Tumor growth from a single transformed cancer cell up to a clinically apparent mass spans many spatial and temporal orders of magnitude. Implementation of cellular automata simulations of such tumor growth can be straightforward but computing performance often counterbalances simplicity. Computationally convenient simulation times can be achieved by choosing appropriate data structures, memory and cell handling as well as domain setup. We propose a cellular automaton model of tumor growth with a domain that expands dynamically as the tumor population increases. We discuss memory access, data structures and implementation techniques that yield high-performance multi-scale Monte Carlo simulations of tumor growth. We discuss tumor properties that favor the proposed high-performance design and present simulation results of the tumor growth model. We estimate to which parameters the model is the most sensitive, and show that tumor volume depends on a number of parameters in a non-monotonic manner. PMID:25346862
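A dynamically growing domain can be obtained essentially for free by storing occupied lattice sites in a hash set rather than a pre-allocated grid array. The toy cellular automaton below (with a hypothetical division probability) illustrates the idea; a high-performance implementation would instead expand a dense array with careful memory layout, as the paper discusses:

```python
import random

def step(cells, p_div=0.3, rng=random):
    """One Monte Carlo sweep: each cell attempts division into a free
    von Neumann neighbor.  Because cells live in a set, the domain grows
    with the population instead of being fixed in advance."""
    for (x, y) in list(cells):
        if rng.random() < p_div:
            free = [(x + dx, y + dy)
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if (x + dx, y + dy) not in cells]
            if free:                      # division blocked if no free site
                cells.add(rng.choice(free))
    return cells

random.seed(42)
cells = {(0, 0)}                          # single transformed cell
sizes = []
for _ in range(30):
    step(cells)
    sizes.append(len(cells))
```

The size trajectory is non-decreasing by construction, and growth slows as interior cells run out of free neighbors, a crude analogue of surface-limited tumor expansion.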
Solid rocket booster performance evaluation model. Volume 1: Engineering description
NASA Technical Reports Server (NTRS)
1974-01-01
The space shuttle solid rocket booster performance evaluation model (SRB-II) is made up of analytical and functional simulation techniques linked together so that a single pass through the model will predict the performance of the propulsion elements of a space shuttle solid rocket booster. The available options allow the user to predict static test performance, predict nominal and off nominal flight performance, and reconstruct actual flight and static test performance. Options selected by the user are dependent on the data available. These can include data derived from theoretical analysis, small scale motor test data, large motor test data and motor configuration data. The user has several options for output format that include print, cards, tape and plots. Output includes all major performance parameters (Isp, thrust, flowrate, mass accounting and operating pressures) as a function of time as well as calculated single point performance data. The engineering description of SRB-II discusses the engineering and programming fundamentals used, the function of each module, and the limitations of each module.
Cheng, Kung-Shan; Yuan, Yu; Li, Zhen; Stauffer, Paul R; Maccarini, Paolo; Joines, William T; Dewhirst, Mark W; Das, Shiva K
2009-04-07
In large multi-antenna systems, adaptive controllers can aid in steering the heat focus toward the tumor. However, the large number of sources can greatly increase the steering time. Additionally, controller performance can be degraded by changes in tissue perfusion, which vary non-linearly with temperature as well as with time and spatial position. The current work investigates whether a reduced-order controller with the assumption of piecewise constant perfusion is robust to temperature-dependent perfusion and achieves steering in a shorter time than required by a full-order controller. The reduced-order controller assumes that the optimal heating setting lies in a subspace spanned by the best heating vectors (virtual sources) of an initial, approximate, patient model. An initial, approximate, reduced-order model is iteratively updated by the controller, using feedback thermal images, until convergence of the heat focus to the tumor. Numerical tests were conducted in a patient model with a right lower leg sarcoma, heated in a 10-antenna cylindrical mini-annular phased array applicator operating at 150 MHz. A half-Gaussian model was used to simulate temperature-dependent perfusion. Simulated magnetic resonance temperature images were used as feedback at each iteration step. Robustness was validated for the controller starting from four approximate initial models: (1) a 'standard' constant perfusion lower leg model ('standard' implies a model that exactly models the patient except that perfusion is considered constant, i.e., not temperature dependent), (2) a model with electrical and thermal tissue properties varied from 50% higher to 50% lower than the standard model, (3) a simplified constant perfusion pure-muscle lower leg model with +/-50% deviated properties and (4) a standard model with the tumor position in the leg shifted by 1.5 cm. Convergence to the desired focus of heating in the tumor was achieved for all four simulated models.
The controller accomplished satisfactory therapeutic outcomes: approximately 80% of the tumor was heated to temperatures ≥43 degrees C and approximately 93% was maintained at temperatures <41 degrees C. Compared to the controller without model reduction, an approximately 9-25-fold reduction in convergence time was accomplished using approximately 2-3 orthonormal virtual sources. In the situations tested, the controller was robust to the presence of temperature-dependent perfusion. The results of this work can help to lay the foundation for real-time thermal control of multi-antenna hyperthermia systems in clinical situations where perfusion can change rapidly with temperature.
Asgharpour, Zahra; Zioupos, Peter; Graw, Matthias; Peldschus, Steffen
2014-03-01
Computer-aided methods such as finite-element simulation offer great potential in the forensic reconstruction of injury mechanisms. Numerous studies have been performed on understanding and analysing the mechanical properties of bone and the mechanism of its fracture. Determination of the mechanical properties of bones is made on the same basis used for other structural materials. The mechanical behaviour of bones is affected by the mechanical properties of the bone material, the geometry, the loading direction and mode and, of course, the loading rate. The strain rate dependency of the mechanical properties of cortical bone has been well demonstrated in the literature, but as many of these studies were performed on animal bones and at non-physiological strain rates, it is questionable how their results apply to the human situation. High strain rates dominate in many forensic applications, such as automotive crashes and assault scenarios. There is an overwhelming need for a model that can describe the complex behaviour of bone at lower strain rates as well as higher ones. Some attempts have been made to model the viscoelastic and viscoplastic properties of bone at high strain rates using constitutive mathematical models, with little demonstrated success. The main objective of the present study is to model the rate-dependent behaviour of bone based on experimental data. An isotropic material model of human cortical bone with strain rate dependency effects is implemented using the LS-DYNA material library. We employed a human finite element model called THUMS (Total Human Model for Safety), developed by Toyota R&D Labs and Wayne State University, USA. The finite element model of the human femur is extracted from the THUMS model. Different methods have been employed to develop a strain rate dependent material model for the femur bone. Results of one of the recent experimental studies on the human femur were employed to obtain the numerical model for the cortical femur.
A forensic application of the model is presented in which impacts to the arm are reconstructed using the finite element model of THUMS. The advantage of the numerical method is that a wide range of impact conditions can be easily reconstructed. Impact velocity was varied as a parameter to find the tolerance levels of injuries to the lower arm. The method can be further developed to study assaults and the injury mechanisms that can lead to severe traumatic injuries in forensic cases. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
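One common way to give an LS-DYNA-style material model strain-rate dependence is Cowper-Symonds scaling of the yield stress. The sketch below uses this relation with illustrative constants, not values fitted to cortical bone:

```python
def dynamic_yield_stress(sigma_static, strain_rate, C=2500.0, p=7.0):
    """Cowper-Symonds strain-rate scaling of yield stress:
       sigma_d = sigma_s * (1 + (eps_dot / C)^(1/p))
    sigma_static in MPa, strain_rate in 1/s.  C and p are illustrative
    placeholders, not fitted cortical-bone constants."""
    return sigma_static * (1.0 + (strain_rate / C) ** (1.0 / p))

quasi_static = dynamic_yield_stress(120.0, 0.001)  # near-static loading
impact = dynamic_yield_stress(120.0, 500.0)        # automotive/assault range
```

The scaling is monotone in strain rate, so impact-rate loading always yields a higher apparent strength than quasi-static loading, which is the qualitative behaviour a rate-dependent bone model must capture.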
NASA Astrophysics Data System (ADS)
Kishor Kumar, V. V.; Kuzhiveli, B. T.
2017-12-01
The performance of a Stirling cryocooler depends on the thermal and hydrodynamic properties of the regenerator in the system. CFD modelling is the best technique to design and predict the performance of a Stirling cooler. The accuracy of the simulation results depends on the hydrodynamic and thermal transport parameters used as the closure relations for the volume-averaged governing equations. A methodology has been developed to quantify the viscous and inertial resistance terms required for modelling the regenerator as a porous medium in Fluent. Using these terms, the steady and steady-periodic flow of helium through the regenerator was modelled and simulated. Comparison of the predicted and experimental pressure drop reveals the good predictive power of the correlation-based method. For oscillatory flow, the simulation could predict the exit pressure amplitude and the phase difference accurately. Therefore, the method was extended to obtain the Darcy permeability and Forchheimer's inertial coefficient of other wire mesh matrices applicable to Stirling coolers. Simulation of the regenerator using these parameters will help to better understand the thermal and hydrodynamic interactions between the working fluid and the regenerator material, and pave the way for high-performance, ultra-compact free displacers for miniature Stirling cryocoolers in the future.
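The viscous and inertial resistance terms quantified above enter a Darcy-Forchheimer pressure-gradient relation of the following form. The permeability K and inertial coefficient C used here are illustrative placeholders, not fitted wire-mesh values, and the gas properties are likewise illustrative:

```python
def pressure_gradient(v, mu=1.99e-5, rho=0.66, K=1.0e-10, C=1.0e4):
    """Darcy-Forchheimer pressure gradient (Pa/m) for flow through a
    porous regenerator matrix at superficial velocity v (m/s):
       dp/dx = (mu / K) * v  +  rho * C * v**2
    The first term is the viscous (Darcy) resistance, the second the
    inertial (Forchheimer) resistance.  K, C, mu, rho are illustrative."""
    return mu / K * v + rho * C * v ** 2

dp_slow = pressure_gradient(0.1)   # viscous term dominates
dp_fast = pressure_gradient(1.0)   # inertial term becomes significant
```

The superlinear growth of pressure drop with velocity is why both coefficients, not just the Darcy permeability, must be calibrated before the porous-medium regenerator model is predictive for oscillatory flow.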
A Perishable Inventory Model with Return
NASA Astrophysics Data System (ADS)
Setiawan, S. W.; Lesmono, D.; Limansyah, T.
2018-04-01
In this paper, we develop a mathematical model for a perishable inventory with return, assuming deterministic and inventory-dependent demand. By inventory-dependent demand, we mean that demand at a certain time depends on the available inventory at that time with a certain rate. In dealing with perishable items, we must consider a deteriorating rate factor that corresponds to the decreasing quality of goods. The model also involves purchasing, ordering, holding, shortage (backordering) and returning costs, which compose the total cost that we want to minimize. In the model we seek the optimal return time and order quantity. We assume that after some period of time, called the return time, perishable items can be returned to the supplier at some returning cost; the supplier will then replace them in the next delivery. Some numerical experiments are given to illustrate the model, and sensitivity analysis is performed as well. We found that as the deteriorating rate increases, the return time becomes shorter and the optimal order quantity and total cost increase. When considering the inventory-dependent demand factor, we found that as this factor increases, for a given deteriorating rate, the return time becomes shorter, the optimal order quantity becomes larger and the total cost increases.
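A simplified version of such a deteriorating-inventory cost model (constant demand, exponential deterioration, and a reduced cost structure without the return, shortage, and inventory-dependent-demand terms of the full model) can be sketched and minimized by grid search:

```python
import math

def order_quantity(D, theta, T):
    """Deteriorating inventory with constant demand D:
       I'(t) = -D - theta * I(t),  I(T) = 0
    gives order quantity Q = D * (e^(theta*T) - 1) / theta."""
    return D * math.expm1(theta * T) / theta

def cost_per_unit_time(T, D=100.0, theta=0.05, K=50.0, c=2.0, h=0.5):
    """Ordering + purchasing + holding cost rate over a cycle of length T.
    All cost parameters are illustrative placeholders."""
    Q = order_quantity(D, theta, T)
    holding = h * (Q - D * T) / theta   # h times the integral of I(t)
    return (K + c * Q + holding) / T

# Grid search for the cycle length minimizing the long-run cost rate
Ts = [0.1 * k for k in range(1, 101)]
T_opt = min(Ts, key=cost_per_unit_time)
# A higher deterioration rate shortens the optimal cycle
T_fast = min(Ts, key=lambda T: cost_per_unit_time(T, theta=0.2))
```

Even in this reduced setting the qualitative finding above reproduces: raising the deterioration rate shifts the cost minimum toward shorter cycles.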
International Space Station Configuration Analysis and Integration
NASA Technical Reports Server (NTRS)
Anchondo, Rebekah
2016-01-01
Ambitious engineering projects, such as NASA's International Space Station (ISS), require dependable modeling, analysis, visualization, and robotics to ensure that complex mission strategies are carried out cost effectively, sustainably, and safely. Learn how Booz Allen Hamilton's Modeling, Analysis, Visualization, and Robotics Integration Center (MAVRIC) team performs engineering analysis of the ISS Configuration based primarily on the use of 3D CAD models. To support mission planning and execution, the team tracks the configuration of ISS and maintains configuration requirements to ensure operational goals are met. The MAVRIC team performs multi-disciplinary integration and trade studies to ensure future configurations meet stakeholder needs.
Catchment area-based evaluation of the AMC-dependent SCS-CN-based rainfall-runoff models
NASA Astrophysics Data System (ADS)
Mishra, S. K.; Jain, M. K.; Pandey, R. P.; Singh, V. P.
2005-09-01
Using a large set of rainfall-runoff data from 234 watersheds in the USA, a catchment area-based evaluation of the modified version of the Mishra and Singh (2002a) model was performed. The model is based on the Soil Conservation Service Curve Number (SCS-CN) methodology and incorporates the antecedent moisture in computation of direct surface runoff. Comparison with the existing SCS-CN method showed that the modified version performed better than did the existing one on the data of all seven area-based groups of watersheds ranging from 0.01 to 310.3 km2.
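For reference, the standard SCS-CN rainfall-runoff relation that both model versions build on can be written in a few lines (textbook form; the Mishra-Singh modification additionally folds antecedent moisture into the computation):

```python
def scs_cn_runoff(P, CN, lam=0.2):
    """Direct surface runoff Q (mm) from storm rainfall P (mm).
    S  = potential maximum retention (mm), derived from the curve number CN;
    Ia = lam * S is the initial abstraction (lam = 0.2 classically)."""
    S = 25400.0 / CN - 254.0
    Ia = lam * S
    if P <= Ia:
        return 0.0          # all rainfall abstracted, no direct runoff
    return (P - Ia) ** 2 / (P - Ia + S)
```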
A model for the transfer of perceptual-motor skill learning in human behaviors.
Rosalie, Simon M; Müller, Sean
2012-09-01
This paper presents a preliminary model that outlines the mechanisms underlying the transfer of perceptual-motor skill learning in sport and everyday tasks. Perceptual-motor behavior is motivated by performance demands and evolves over time to increase the probability of success through adaptation. Performance demands at the time of an event create a unique transfer domain that specifies a range of potentially successful actions. Transfer comprises anticipatory subconscious and conscious mechanisms. The model also outlines how transfer occurs across a continuum, which depends on the individual's expertise and contextual variables occurring at the incidence of transfer.
A Context-Aware Model to Provide Positioning in Disaster Relief Scenarios
Moreno, Daniel; Ochoa, Sergio F.; Meseguer, Roc
2015-01-01
The effectiveness of the work performed during disaster relief efforts is highly dependent on the coordination of activities conducted by the first responders deployed in the affected area. Such coordination, in turn, depends on an appropriate management of geo-referenced information. Therefore, enabling first responders to count on positioning capabilities during these activities is vital to increase the effectiveness of the response process. The positioning methods used in this scenario must assume a lack of infrastructure-based communication and electrical energy, which usually characterizes affected areas. Although positioning systems such as the Global Positioning System (GPS) have been shown to be useful, we cannot assume that all devices deployed in the area (or most of them) will have positioning capabilities by themselves. Typically, many first responders carry devices that are not capable of performing positioning on their own, but that require such a service. In order to help increase the positioning capability of first responders in disaster-affected areas, this paper presents a context-aware positioning model that allows mobile devices to estimate their position based on information gathered from their surroundings. The performance of the proposed model was evaluated using simulations, and the obtained results show that mobile devices without positioning capabilities were able to use the model to estimate their position. Moreover, the accuracy of the positioning model has been shown to be suitable for conducting most first response activities. PMID:26437406
NASA Astrophysics Data System (ADS)
Park, Subok; Zhang, George Z.; Zeng, Rongping; Myers, Kyle J.
2014-03-01
A task-based assessment of image quality [1] for digital breast tomosynthesis (DBT) can be done in either the projected or reconstructed data space. As the choice of observer models and feature selection methods can vary depending on the type of task and data statistics, we previously investigated the performance of two channelized Hotelling observer models in conjunction with 2D Laguerre-Gauss (LG) and two implementations of partial least squares (PLS) channels, along with that of the Hotelling observer, in binary detection tasks involving DBT projections [2, 3]. The difference in these observers lies in how the spatial correlation in DBT angular projections is incorporated in the observer's strategy to perform the given task. In the current work, we extend our method to the reconstructed data space of DBT. We investigate how various model observers, including the aforementioned, compare in performing the binary detection of a spherical signal embedded in structured breast phantoms with the use of DBT slices reconstructed via filtered back projection. We explore how well the model observers incorporate the spatial correlation between different numbers of reconstructed DBT slices while varying the number of projections. For this, relatively small and large scan angles (24° and 96°) are used for comparison. Our results indicate that 1) given a particular scan angle, the number of projections needed to achieve the best performance is similar across all observer/channel combinations, i.e., Np = 25 for scan angle 96° and Np = 13 for scan angle 24°, and 2) given these sufficient numbers of projections, the number of slices for each observer to achieve the best performance differs depending on the channel/observer types, which is more pronounced in the narrow scan angle case.
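A minimal numpy sketch of a channelized Hotelling observer of the kind compared above, assuming precomputed channel vectors (LG or PLS channel generation is omitted; names and shapes are our assumptions):

```python
import numpy as np

def cho_snr(imgs_sig, imgs_bkg, channels):
    """Channelized Hotelling observer detectability (SNR).
    imgs_*  : (n_samples, n_pixels) sample images with/without the signal
    channels: (n_pixels, n_channels), e.g. Laguerre-Gauss or PLS channels"""
    v_sig = imgs_sig @ channels              # channel outputs per sample
    v_bkg = imgs_bkg @ channels
    dv = v_sig.mean(axis=0) - v_bkg.mean(axis=0)
    S = 0.5 * (np.cov(v_sig, rowvar=False) + np.cov(v_bkg, rowvar=False))
    w = np.linalg.solve(np.atleast_2d(S), dv)  # Hotelling template in channel space
    return float(np.sqrt(dv @ w))
```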
Perceptual Veridicality in Esthetic Communication: A Model, General Procedure, and Illustration.
ERIC Educational Resources Information Center
Holbrook, Morris B.; Bertges, Stephen A.
1981-01-01
Developed a model and tested the following hypotheses: (1) esthetic features--tempo, rhythm, dynamics, and phrasing in piano performance--are accurately perceived by audience members and (2) such perceptual veridicality does not depend upon one's degree of education/training and is therefore shared by critics and audience members. (PD)
Exchanging transportation networks between two GISs via the SDTS
DOT National Transportation Integrated Search
1997-05-01
Performing meaningful network analyses is greatly dependent upon accurate and complete transportation network models, which are digitized into a Geographic Information System (GIS) or, more often, imported from another GIS. Transportation netwo...
History dependent quantum random walks as quantum lattice gas automata
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shakeel, Asif (asif.shakeel@gmail.com); Love, Peter J. (plove@haverford.edu); Meyer, David A. (dmeyer@math.ucsd.edu)
Quantum Random Walks (QRW) were first defined as one-particle sectors of Quantum Lattice Gas Automata (QLGA). Recently, they have been generalized to include history dependence, either on previous coin (internal, i.e., spin or velocity) states or on previous position states. These models have the goal of studying the transition to classicality, or more generally, changes in the performance of quantum walks in algorithmic applications. We show that several history dependent QRW can be identified as one-particle sectors of QLGA. This provides a unifying conceptual framework for these models in which the extra degrees of freedom required to store the history information arise naturally as geometrical degrees of freedom on the lattice.
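For orientation, a plain (history-free) coined quantum walk step on a line takes only a few lines of numpy; the history-dependent variants discussed above enlarge the coin space with memory degrees of freedom. A minimal sketch under those assumptions:

```python
import numpy as np

def qrw_step(amp):
    """One step of a Hadamard-coined quantum random walk on a line.
    amp: (n_sites, 2) complex amplitudes, columns = coin states L, R."""
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    amp = amp @ H                 # apply the coin at every site
    out = np.zeros_like(amp)
    out[:-1, 0] = amp[1:, 0]      # left-moving component shifts left
    out[1:, 1] = amp[:-1, 1]      # right-moving component shifts right
    return out

state = np.zeros((41, 2), dtype=complex)
state[20, 0] = 1.0                # walker localized at the center
for _ in range(10):
    state = qrw_step(state)       # norm is conserved away from boundaries
```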
NASA Technical Reports Server (NTRS)
Park, Young W.; Montez, Moises N.
1994-01-01
A candidate onboard space navigation filter demonstrated excellent performance (less than 8 meter level RMS semi-major axis accuracy) in performing orbit determination of a low-Earth orbit Explorer satellite using single-frequency real GPS data. This performance is significantly better than predicted by other simulation studies using dual-frequency GPS data. The study results revealed the significance of two new modeling approaches evaluated in the work. One approach introduces a single-frequency ionospheric correction through pseudo-range and phase range averaging implementation. The other approach demonstrates a precise axis-dependent characterization of dynamic sample space uncertainty to compute a more accurate Kalman filter gain. Additionally, this navigation filter demonstrates a flexibility to accommodate both perturbational dynamic and observational biases required for multi-flight phase and inhomogeneous application environments. This paper reviews the potential application of these methods and the filter structure to terrestrial vehicle and positioning applications. Both the single-frequency ionospheric correction method and the axis-dependent state noise modeling approach offer valuable contributions in cost and accuracy improvements for terrestrial GPS receivers. With a modular design approach to either 'plug-in' or 'unplug' various force models, this multi-flight phase navigation filter design structure also provides a versatile GPS navigation software engine for both atmospheric and exo-atmospheric navigation or positioning use, thereby streamlining the flight phase or application-dependent software requirements. Thus, a standardized GPS navigation software engine that can reduce the development and maintenance cost of commercial GPS receivers is now possible.
Sheng, Li; Wang, Zidong; Tian, Engang; Alsaadi, Fuad E
2016-12-01
This paper deals with the H∞ state estimation problem for a class of discrete-time neural networks with stochastic delays subject to state- and disturbance-dependent noises (also called (x,v)-dependent noises) and fading channels. The time-varying stochastic delay takes values on certain intervals with known probability distributions. The system measurement is transmitted through fading channels described by the Rice fading model. The aim of the addressed problem is to design a state estimator such that the estimation performance is guaranteed in the mean-square sense against admissible stochastic time-delays, stochastic noises as well as stochastic fading signals. By employing the stochastic analysis approach combined with the Kronecker product, several delay-distribution-dependent conditions are derived to ensure that the error dynamics of the neuron states is stochastically stable with prescribed H∞ performance. Finally, a numerical example is provided to illustrate the effectiveness of the obtained results. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Hamilton, H. B.; Strangas, E.
1980-01-01
The time-dependent solution of the magnetic field is introduced as a method for accounting for the variation, in time, of the machine parameters in predicting and analyzing the performance of electrical machines. A time-dependent finite element method was used in combination with a likewise time-dependent construction of a grid for the air gap region. The Maxwell stress tensor was used to calculate the air gap torque from the magnetic vector potential distribution. Incremental inductances were defined and calculated as functions of time, depending on eddy currents and saturation. The currents in all the machine circuits were calculated in the time domain based on these inductances, which were continuously updated. The method was applied to a chopper-controlled DC series motor used for electric vehicle drive, and to a salient pole synchronous motor with damper bars. Simulation results were compared to experimentally obtained ones.
Analysis of Wind Tunnel Lateral Oscillatory Data of the F-16XL Aircraft
NASA Technical Reports Server (NTRS)
Klein, Vladislav; Murphy, Patrick C.; Szyba, Nathan M.
2004-01-01
Static and dynamic wind tunnel tests were performed on an 18% scale model of the F-16XL aircraft. These tests were performed over a wide range of angles of attack and sideslip with oscillation amplitudes from 5 deg. to 30 deg. and reduced frequencies from 0.073 to 0.269. Harmonic analysis was used to estimate Fourier coefficients and in-phase and out-of-phase components. For frequency dependent data from rolling oscillations, a two-step regression method was used to obtain unsteady models (indicial functions), and derivatives due to sideslip angle, roll rate and yaw rate from in-phase and out-of-phase components. Frequency dependence was found for angles of attack between 20 deg. and 50 deg. Reduced values of coefficient of determination and increased values of fit error were found for angles of attack between 35 deg. and 45 deg. An attempt to estimate model parameters from yaw oscillations failed, probably due to the low number of test cases at different frequencies.
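The in-phase and out-of-phase components mentioned above can be extracted with a least-squares fit of a single Fourier harmonic; a minimal sketch (our illustration, not the authors' code):

```python
import numpy as np

def harmonic_components(t, y, freq):
    """Least-squares Fourier coefficients of y(t) at one frequency:
        y(t) ≈ a*cos(w t) + b*sin(w t) + c
    Returns (a, b): the in-phase and out-of-phase components."""
    w = 2.0 * np.pi * freq
    A = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[0], coef[1]
```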
Ab initio folding of proteins using all-atom discrete molecular dynamics
Ding, Feng; Tsao, Douglas; Nie, Huifen; Dokholyan, Nikolay V.
2008-01-01
Discrete molecular dynamics (DMD) is a rapid sampling method used in protein folding and aggregation studies. Until now, DMD was used to perform simulations of simplified protein models in conjunction with structure-based force fields. Here, we develop an all-atom protein model and a transferable force field featuring packing, solvation, and environment-dependent hydrogen bond interactions. Using the replica exchange method, we perform folding simulations of six small proteins (20–60 residues) with distinct native structures. In all cases, native or near-native states are reached in simulations. For three small proteins, multiple folding transitions are observed and the computationally-characterized thermodynamics are in quantitative agreement with experiments. The predictive power of all-atom DMD highlights the importance of environment-dependent hydrogen bond interactions in modeling protein folding. The developed approach can be used for accurate and rapid sampling of conformational spaces of proteins and protein-protein complexes, and applied to protein engineering and design of protein-protein interactions. PMID:18611374
NASA Astrophysics Data System (ADS)
Kardan, Farshid; Cheng, Wai-Chi; Baverel, Olivier; Porté-Agel, Fernando
2016-04-01
Understanding, analyzing and predicting meteorological phenomena related to urban planning and the built environment are becoming more essential than ever to architectural and urban projects. Recently, various versions of RANS models have been established, but more validation cases are required to confirm their capability for wind flows. In the present study, the performance of recently developed RANS models, including the RNG k-ɛ, SST BSL k-ω and SST γ-Reθ, has been evaluated for the flow past a single block (which represents the idealized architectural scale). For validation purposes, the velocity streamlines and the vertical profiles of the mean velocities and variances were compared with published LES and wind tunnel experiment results. Furthermore, additional CFD simulations were performed to analyze the impact of regular/irregular mesh structures and grid resolutions for the selected turbulence model, in order to assess grid independence. Three grid resolutions (coarse, medium and fine) of Nx × Ny × Nz = 320 × 80 × 320, 160 × 40 × 160 and 80 × 20 × 80 for the computational domain, and nx × nz = 26 × 32, 13 × 16 and 6 × 8 grid points on the block edges, were chosen and tested. It can be concluded that, among all simulated RANS models, the SST γ-Reθ model performed best and agreed fairly well with the LES simulation and experimental results. It can also be concluded that the SST γ-Reθ model provides very satisfactory results in terms of grid dependency at the fine and medium grid resolutions in both regular and irregular structured meshes. On the other hand, despite a very good performance of the RNG k-ɛ model at the fine resolution and on regular structured grids, a disappointing performance of this model at the coarse and medium grid resolutions indicates that the RNG k-ɛ model is highly dependent on grid structure and grid resolution.
These quantitative validations are essential to assess the accuracy of RANS models for the simulation of flow in urban environments.
NASA Technical Reports Server (NTRS)
Chien, Steve; Kandt, R. Kirk; Roden, Joseph; Burleigh, Scott; King, Todd; Joy, Steve
1992-01-01
Scientific data preparation is the process of extracting usable scientific data from raw instrument data. This task involves noise detection (and subsequent noise classification and flagging or removal), extracting data from compressed forms, and construction of derivative or aggregate data (e.g. spectral densities or running averages). A software system called PIPE provides intelligent assistance to users developing scientific data preparation plans using a programming language called Master Plumber. PIPE provides this assistance capability by using a process description to create a dependency model of the scientific data preparation plan. This dependency model can then be used to verify syntactic and semantic constraints on processing steps to perform limited plan validation. PIPE also provides capabilities for using this model to assist in debugging faulty data preparation plans. In this case, the process model is used to focus the developer's attention upon those processing steps and data elements that were used in computing the faulty output values. Finally, the dependency model of a plan can be used to perform plan optimization and runtime estimation. These capabilities allow scientists to spend less time developing data preparation procedures and more time on scientific analysis tasks. Because the scientific data processing modules (called fittings) evolve to match scientists' needs, issues regarding maintainability are of prime importance in PIPE. This paper describes the PIPE system and describes how issues in maintainability affected the knowledge representation used in PIPE to capture knowledge about the behavior of fittings.
NASA Astrophysics Data System (ADS)
Larkin, K.; Ghommem, M.; Abdelkefi, A.
2018-05-01
Capacitive-based sensing microelectromechanical (MEMS) and nanoelectromechanical (NEMS) gyroscopes have significant advantages over conventional gyroscopes, such as low power consumption, batch fabrication, and possible integration with electronic circuits. However, inadequacies in the modeling of these inertial sensors have presented issues of reliability and functionality for micro-/nano-scale gyroscopes. In this work, a micromechanical model is developed to represent the unique microstructure of nanocrystalline materials and simulate the response of a micro-/nano-gyroscope comprising an electrostatically-actuated cantilever beam with a tip mass at the free end. Couple stress and surface elasticity theories are integrated into the classical Euler-Bernoulli beam model in order to derive a size-dependent model. This model is then used to investigate the influence of size-dependent effects on the static pull-in instability, the natural frequencies, and the performance output of gyroscopes as the scale decreases from micro- to nano-scale. The simulation results show significant changes in the static pull-in voltage and the natural frequency as the scale of the system is decreased. However, the differential frequency between the two vibration modes of the gyroscope is observed to drastically decrease as the size of the gyroscope is reduced. As such, the frequency-based operation mode may not be an efficient strategy for nano-gyroscopes. The results show that a strong coupling between the surface elasticity and material structure takes place when smaller grain sizes and higher void percentages are considered.
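For scale, the classical lumped parallel-plate estimate of the static pull-in voltage is sketched below; the paper's size-dependent beam model with couple stress and surface elasticity modifies this picture, and the numbers here are illustrative assumptions:

```python
import math

def pull_in_voltage(k, gap, area, eps0=8.854e-12):
    """Static pull-in voltage of a parallel-plate electrostatic actuator
    with effective stiffness k (N/m), initial gap (m) and electrode area (m^2):
        V_pi = sqrt(8 k g^3 / (27 eps0 A))
    Lumped-parameter textbook estimate, not the paper's model."""
    return math.sqrt(8.0 * k * gap ** 3 / (27.0 * eps0 * area))

# illustrative MEMS-scale numbers
v_pi = pull_in_voltage(k=1.0, gap=2e-6, area=1e-8)
```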
Curutchet, Carles; Cupellini, Lorenzo; Kongsted, Jacob; Corni, Stefano; Frediani, Luca; Steindal, Arnfinn Hykkerud; Guido, Ciro A; Scalmani, Giovanni; Mennucci, Benedetta
2018-03-13
Mixed multiscale quantum/molecular mechanics (QM/MM) models are widely used to explore the structure, reactivity, and electronic properties of complex chemical systems. Whereas such models typically include electrostatics and potentially polarization in so-called electrostatic and polarizable embedding approaches, respectively, nonelectrostatic dispersion and repulsion interactions are instead commonly described through classical potentials despite their quantum mechanical origin. Here we present an extension of the Tkatchenko-Scheffler semiempirical van der Waals (vdW-TS) scheme aimed at describing dispersion and repulsion interactions between quantum and classical regions within a QM/MM polarizable embedding framework. Starting from the vdW-TS expression, we define a dispersion and a repulsion term, both of them density-dependent and consistently based on a Lennard-Jones-like potential. We explore transferable atom type-based parametrization strategies for the MM parameters, based on either vdW-TS calculations performed on isolated fragments or on a direct estimation of the parameters from atomic polarizabilities taken from a polarizable force field. We investigate the performance of the implementation by computing self-consistent interaction energies for the S22 benchmark set, designed to represent typical noncovalent interactions in biological systems, in both equilibrium and out-of-equilibrium geometries. Overall, our results suggest that the present implementation is a promising strategy to include dispersion and repulsion in multiscale QM/MM models incorporating their explicit dependence on the electronic density.
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry
1998-01-01
This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non Uniform Memory Access (ccNUMA) architecture. We report measurement based performance of these parallelized benchmarks from four perspectives: efficacy of parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized version of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.
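The dependence of realized speedup on parallel fraction and locality overhead can be sketched with a toy Amdahl-style model (the parameters are illustrative, not Origin2000 measurements):

```python
def predicted_speedup(p, f_par, overhead_frac):
    """Speedup on p processors for a program with parallel fraction f_par,
    extended with a per-processor locality/synchronization overhead expressed
    as a fraction of sequential time. A simple Amdahl-style sketch."""
    t_norm = (1.0 - f_par) + f_par / p + overhead_frac * p
    return 1.0 / t_norm
```

With any nonzero overhead term, speedup peaks and then degrades as processors are added, which is the qualitative behavior the abstract attributes to architecture-specific data locality overhead.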
NASA Astrophysics Data System (ADS)
Alekseev, M. V.; Vozhakov, I. S.; Lezhnin, S. I.; Pribaturin, N. A.
2017-09-01
A comparative numerical simulation of the supercritical fluid outflow using thermodynamic equilibrium and non-equilibrium relaxation models of phase transition for different relaxation times has been performed. The model with a fixed relaxation time, based on the experimentally determined radius of liquid droplets, was compared with a model of dynamically changing relaxation time, calculated by formula (7) and depending on local parameters. It is shown that the relaxation time varies significantly depending on the thermodynamic conditions of the two-phase medium in the course of the outflow. The application of the proposed model with dynamic relaxation time leads to qualitatively correct results. The model can be used for both vaporization and condensation processes. It is shown that the model can be improved on the basis of processing experimental data on the distribution of the sizes of droplets formed during the breakup of the liquid jet.
Data mining of tree-based models to analyze freeway accident frequency.
Chang, Li-Yen; Chen, Wen-Chieh
2005-01-01
Statistical models, such as Poisson or negative binomial regression models, have been employed to analyze vehicle accident frequency for many years. However, these models have their own model assumptions and pre-defined underlying relationship between dependent and independent variables. If these assumptions are violated, the model could lead to erroneous estimation of accident likelihood. Classification and Regression Tree (CART), one of the most widely applied data mining techniques, has been commonly employed in business administration, industry, and engineering. CART does not require any pre-defined underlying relationship between target (dependent) variable and predictors (independent variables) and has been shown to be a powerful tool, particularly for dealing with prediction and classification problems. This study collected the 2001-2002 accident data of National Freeway 1 in Taiwan. A CART model and a negative binomial regression model were developed to establish the empirical relationship between traffic accidents and highway geometric variables, traffic characteristics, and environmental factors. The CART findings indicated that the average daily traffic volume and precipitation variables were the key determinants for freeway accident frequencies. By comparing the prediction performance between the CART and the negative binomial regression models, this study demonstrates that CART is a good alternative method for analyzing freeway accident frequencies.
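The core of CART is recursive binary splitting by squared-error reduction; a minimal one-split sketch on toy data (hypothetical numbers, not the Taiwan freeway data):

```python
def best_split(xs, ys):
    """One CART step: choose the threshold on a single predictor that
    minimizes the total squared error of the two child-node means."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    xs = [xs[i] for i in order]
    ys = [ys[i] for i in order]

    def sse(v):  # sum of squared errors about the mean
        m = sum(v) / len(v)
        return sum((y - m) ** 2 for y in v)

    best_err, best_thr = float("inf"), None
    for k in range(1, len(xs)):
        err = sse(ys[:k]) + sse(ys[k:])
        if err < best_err:
            best_err, best_thr = err, 0.5 * (xs[k - 1] + xs[k])
    return best_thr

# toy data: accident counts jump once daily traffic volume exceeds ~30
adt = [10, 15, 20, 25, 35, 40, 45, 50]
acc = [1, 1, 2, 1, 6, 7, 6, 8]
split = best_split(adt, acc)
```

A full CART model applies this split search recursively over all predictors, which is how variables such as traffic volume and precipitation emerge as key determinants.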
Pointwise influence matrices for functional-response regression.
Reiss, Philip T; Huang, Lei; Wu, Pei-Shien; Chen, Huaihou; Colcombe, Stan
2017-12-01
We extend the notion of an influence or hat matrix to regression with functional responses and scalar predictors. For responses depending linearly on a set of predictors, our definition is shown to reduce to the conventional influence matrix for linear models. The pointwise degrees of freedom, the trace of the pointwise influence matrix, are shown to have an adaptivity property that motivates a two-step bivariate smoother for modeling nonlinear dependence on a single predictor. This procedure adapts to varying complexity of the nonlinear model at different locations along the function, and thereby achieves better performance than competing tensor product smoothers in an analysis of the development of white matter microstructure in the brain. © 2017, The International Biometric Society.
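For the scalar-response special case, the influence (hat) matrix and its trace, the degrees of freedom, can be sketched directly; in the functional-response setting described above the analogous construction is applied pointwise along the function:

```python
import numpy as np

def hat_matrix(X):
    """Influence (hat) matrix H = X (X'X)^{-1} X' of a linear model.
    trace(H) gives the model degrees of freedom."""
    return X @ np.linalg.inv(X.T @ X) @ X.T

X = np.column_stack([np.ones(5), np.arange(5.0)])  # intercept + one predictor
H = hat_matrix(X)
df = np.trace(H)  # equals the number of regression coefficients here
```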
Shah, A A; Xing, W W; Triantafyllidis, V
2017-04-01
In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach.
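The POD step itself reduces to an SVD of the snapshot matrix; a minimal sketch with an energy-based truncation rule (the Bayesian emulation of snapshots for new parameter values is not shown):

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD basis from a snapshot matrix (n_dof x n_snapshots) via SVD,
    keeping the fewest modes that capture `energy` of the squared
    singular-value spectrum."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(frac, energy)) + 1
    return U[:, :r], s[:r]

# toy rank-2 snapshot matrix
snaps = np.zeros((10, 5))
snaps[0] = np.arange(1.0, 6.0)
snaps[1] = np.arange(5.0, 0.0, -1.0)
basis, svals = pod_basis(snaps)
```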
COLA with scale-dependent growth: applications to screened modified gravity models
NASA Astrophysics Data System (ADS)
Winther, Hans A.; Koyama, Kazuya; Manera, Marc; Wright, Bill S.; Zhao, Gong-Bo
2017-08-01
We present a general parallelized and easy-to-use code to perform numerical simulations of structure formation using the COLA (COmoving Lagrangian Acceleration) method for cosmological models that exhibit scale-dependent growth at the level of first and second order Lagrangian perturbation theory. For modified gravity theories we also include screening using a fast approximate method that covers all the main examples of screening mechanisms in the literature. We test the code by comparing it to full simulations of two popular modified gravity models, namely f(R) gravity and nDGP, and find good agreement in the modified gravity boost-factors relative to ΛCDM even when using a fairly small number of COLA time steps.
Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.
2015-12-07
The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower-length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.
Xing, W. W.; Triantafyllidis, V.
2017-01-01
In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach. PMID:28484327
Search for a Lorentz-violating sidereal signal with atmospheric neutrinos in IceCube
NASA Astrophysics Data System (ADS)
Abbasi, R.; Abdou, Y.; Abu-Zayyad, T.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; Benzvi, S.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Bose, D.; Böser, S.; Botner, O.; Braun, J.; Buitink, S.; Carson, M.; Chirkin, D.; Christy, B.; Clem, J.; Clevermann, F.; Cohen, S.; Colnard, C.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Davis, J. C.; de Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; Deyoung, T.; Díaz-Vélez, J. C.; Dierckxsens, M.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Fedynitch, A.; Feusels, T.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Geisler, M.; Gerhardt, L.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Goodman, J. A.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Homeier, A.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kemming, N.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Köhne, J.-H.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Koskinen, D. J.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Krings, T.; Kroll, G.; Kuehn, K.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Larson, M. J.; Lauer, R.; Lehmann, R.; Lünemann, J.; Madsen, J.; Majumdar, P.; Marotta, A.; Maruyama, R.; Mase, K.; Matis, H. 
S.; Matusik, M.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Middell, E.; Milke, N.; Miller, J.; Montaruli, T.; Morse, R.; Movit, S. M.; Nahnhauer, R.; Nam, J. W.; Naumann, U.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; O'Murchadha, A.; Ono, M.; Panknin, S.; Paul, L.; Pérez de Los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Porrata, R.; Posselt, J.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Ruhe, T.; Rutledge, D.; Ruzybayev, B.; Ryckbosch, D.; Sander, H.-G.; Santander, M.; Sarkar, S.; Schatto, K.; Schlenstedt, S.; Schmidt, T.; Schukraft, A.; Schultes, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Singh, K.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sullivan, G. W.; Swillens, Q.; Taavola, H.; Taboada, I.; Tamburro, A.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Tilav, S.; Toale, P. A.; Toscano, S.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; van Overloop, A.; van Santen, J.; Voge, M.; Voigt, B.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Wolf, M.; Woschnagg, K.; Xu, C.; Xu, X. W.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.
2010-12-01
A search for sidereal modulation in the flux of atmospheric muon neutrinos in IceCube was performed. Such a signal could be an indication of Lorentz-violating physics. Neutrino oscillation models, derivable from extensions to the standard model, allow for neutrino oscillations that depend on the neutrino’s direction of propagation. No such direction-dependent variation was found. A discrete Fourier transform method was used to constrain the Lorentz and CPT-violating coefficients in one of these models. Because of the unique high energy reach of IceCube, it was possible to improve constraints on certain Lorentz-violating oscillations by 3 orders of magnitude with respect to limits set by other experiments.
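The analysis idea, binning events in sidereal phase and checking the discrete Fourier transform for an excess harmonic amplitude, can be sketched on synthetic data (a minimal illustration, not the IceCube analysis chain; the bin count and injected modulation depth are arbitrary choices):

```python
import numpy as np

def sidereal_harmonic_power(phases, n_bins=24, harmonic=1):
    """Bin event sidereal phases (in [0, 1)) and return the DFT power of
    the chosen harmonic of the rate histogram, normalized by the mean
    (zeroth) component."""
    counts, _ = np.histogram(phases, bins=n_bins, range=(0.0, 1.0))
    spectrum = np.fft.rfft(counts)
    return (np.abs(spectrum[harmonic]) / np.abs(spectrum[0])) ** 2

rng = np.random.default_rng(0)

# A uniform (no-modulation) sample: harmonic power consistent with noise
flat = rng.uniform(0.0, 1.0, 100_000)
p_flat = sidereal_harmonic_power(flat)

# Inject a 20% sinusoidal sidereal modulation by rejection sampling
mod = rng.uniform(0.0, 1.0, 100_000)
mod = mod[rng.uniform(size=mod.size) < 0.5 * (1 + 0.2 * np.sin(2 * np.pi * mod))]
p_mod = sidereal_harmonic_power(mod)
```

The modulated sample yields a first-harmonic power orders of magnitude above the flat sample's noise floor; a real search compares the observed power against the distribution obtained from scrambled (time-randomized) datasets.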
Integrated Structural/Acoustic Modeling of Heterogeneous Panels
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Aboudi, Jacob; Arnold, Steven M.; Pennline, James A.
2012-01-01
A model for the dynamic response of heterogeneous media is presented. A given medium is discretized into a number of subvolumes, each of which may contain an elastic anisotropic material, void, or fluid, and time-dependent boundary conditions are applied to simulate impact or incident pressure waves. The full time-dependent displacement and stress response throughout the medium is then determined via an explicit solution procedure. The model is applied to simulate the coupled structural/acoustic response of foam core sandwich panels as well as aluminum panels with foam inserts. Emphasis is placed on the acoustic absorption performance of the panels versus weight and the effects of the arrangement of the materials and incident wave frequency.
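The explicit time-marching idea can be illustrated in one dimension: a displacement-driven boundary launches a pulse into a discretized heterogeneous bar, and the full time-dependent response is advanced with a leapfrog update (a minimal sketch with made-up material values and a crude free-end boundary, not the paper's subvolume formulation):

```python
import numpy as np

def propagate_1d(rho, E, dx, dt, n_steps, source):
    """Leapfrog (explicit) update of rho * u_tt = (E * u_x)_x on a
    heterogeneous 1D bar: compute stress at cell interfaces, then its
    divergence at interior nodes. Node 0 is displacement-driven."""
    n = rho.size
    u_prev = np.zeros(n)
    u = np.zeros(n)
    for step in range(n_steps):
        sigma = E[:-1] * np.diff(u) / dx            # interface stresses
        accel = np.zeros(n)
        accel[1:-1] = np.diff(sigma) / dx / rho[1:-1]
        u_next = 2 * u - u_prev + dt**2 * accel
        u_next[0] = source(step * dt)               # incident pulse
        u_next[-1] = u_next[-2]                     # crude free end
        u_prev, u = u, u_next
    return u

# Homogeneous bar, wave speed c = sqrt(E/rho) = 1; Courant number 0.5
rho = np.ones(401)
E = np.ones(401)
pulse = lambda t: np.sin(np.pi * t / 0.2) if t < 0.2 else 0.0
u = propagate_1d(rho, E, dx=0.01, dt=0.005, n_steps=400, source=pulse)
```

After 400 steps (t = 2.0) the pulse has traveled to x ≈ 2, and the region well ahead of the wavefront remains quiescent, which is the causality check one expects from an explicit solution procedure.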
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.
The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field-based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower-length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.
NASA Technical Reports Server (NTRS)
Murphy, M. R.; Awe, C. A.
1986-01-01
Six professionally active, retired captains rated the coordination and decision-making performance of sixteen aircrews while viewing videotapes of a simulated commercial air transport operation. The scenario featured a required diversion and a probable minimum-fuel situation. Seven-point Likert-type scales were used to rate variables based on a model of crew coordination and decision making. The variables drew on concepts such as decision difficulty, efficiency, and outcome quality, and on leader-subordinate concepts such as person- and task-oriented leader behavior and competency motivation of subordinate crewmembers. Five front-end variables of the model served in turn as dependent variables in a hierarchical regression procedure. Decision efficiency, command reversal, and decision quality explained 46% of the variance in safety performance. Decision efficiency and the captain's quality of within-crew communication explained 60% of the variance in decision quality, an alternative substantive dependent variable to safety performance. The variance of decision efficiency, crew coordination, and command reversal was in turn explained 78%, 80%, and 60%, respectively, by small numbers of preceding independent variables. A principal-component varimax factor analysis supported the model structure suggested by the regression analyses.
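The hierarchical regression idea, entering predictors in blocks and tracking the incremental variance explained in an outcome such as safety performance, can be sketched on synthetic data (variable names and effect sizes below are invented for illustration, not the study's values):

```python
import numpy as np

def r_squared(y, X):
    """R^2 of an OLS fit of y on the columns of X (intercept included)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Hierarchical (blockwise) entry: step 1 enters decision efficiency,
# step 2 adds decision quality, and the gain in R^2 is the increment.
rng = np.random.default_rng(1)
n = 200
efficiency = rng.normal(size=n)
quality = 0.7 * efficiency + rng.normal(scale=0.5, size=n)
safety = 0.6 * efficiency + 0.3 * quality + rng.normal(scale=0.5, size=n)

r2_step1 = r_squared(safety, efficiency[:, None])
r2_step2 = r_squared(safety, np.column_stack([efficiency, quality]))
```

Because the step-1 model is nested in the step-2 model, R^2 can only increase; the increment is the variance uniquely attributable to the newly entered block.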
NASA Astrophysics Data System (ADS)
Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire
2017-04-01
Nowadays, there is growing interest in small-scale rainfall information, provided by weather radars, for use in urban water management and decision making. In parallel, increasing interest is devoted to the development of fully distributed, grid-based models, following the increase in computational capability and the availability of the high-resolution GIS information needed to implement such models. However, the choice of an appropriate implementation scale that integrates both the catchment heterogeneity and the full rainfall variability measured by high-resolution radar technologies remains an open issue. This work proposes a two-step investigation of scale effects in urban hydrology and their impact on modeling. In the first step, fractal tools are used to highlight the scale dependency observed within the distributed data used to describe the catchment heterogeneity; both the structure of the sewer network and the distribution of impervious areas are analyzed. Then an intensive multi-scale modeling exercise is carried out to understand scaling effects on hydrological model performance. Investigations were conducted using a fully distributed, physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model was implemented at 17 spatial resolutions ranging from 100 m to 5 m, and modeling investigations were performed using both rain gauge rainfall information and high-resolution X-band radar data in order to assess the sensitivity of the model to small-scale rainfall variability. The results demonstrate the challenges that scale effects pose for urban hydrological modeling. In particular, the fractal analysis highlights the scale dependency observed within the distributed data used to implement hydrological models: patterns of geophysical data change with the observation pixel size.
The multi-scale modeling investigation performed with the Multi-Hydro model at 17 spatial resolutions confirms the effect of scale on hydrological model performance. Results were analyzed at three ranges of scales identified in the fractal analysis and confirmed in the modeling work. The sensitivity of the model to small-scale rainfall variability is discussed as well.
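The fractal (box-counting) analysis used to expose scale dependency in rasterized catchment data, such as an impervious-area map, can be sketched as follows (a minimal version on synthetic rasters; real analyses fit the log-log slope over a carefully chosen range of scales):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the box-counting fractal dimension of a binary raster:
    count occupied boxes at several box sizes, then fit the slope of
    log(count) versus log(size)."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # A box of side s is occupied if any cell inside it is set
        reduced = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(reduced.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

filled = np.ones((64, 64), dtype=bool)   # a plane: dimension ~ 2
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True                       # a line: dimension ~ 1
d_filled = box_counting_dimension(filled)
d_line = box_counting_dimension(line)
```

A sewer network or impervious-area raster typically yields a non-integer dimension, and a break in the log-log fit marks the scale ranges within which the data behave self-similarly.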
Regression analysis of sparse asynchronous longitudinal data.
Cao, Hongyuan; Zeng, Donglin; Fine, Jason P
2015-09-01
We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time point, with asynchronous data, the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time-invariant or time-dependent coefficients, under smoothness assumptions for the covariate processes similar to those for synchronous data. In both cases, the estimators are consistent and asymptotically normal but converge at slower rates than those achieved with synchronous data. Simulation studies show that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last-value-carried-forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus.
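The kernel-weighting idea can be sketched for the simplest case, a time-invariant slope with no intercept: every response observation is paired with every covariate observation, downweighted by a kernel in the time mismatch (a toy version with an assumed Gaussian kernel; the paper's estimators are more general):

```python
import numpy as np

def kernel_weighted_beta(resp_times, resp_vals, cov_times, cov_vals, h):
    """Kernel-weighted estimating-equation estimate of a time-invariant
    slope beta in E[Y(t)] = beta * X(t), from asynchronous response and
    covariate observations (a minimal sketch of the idea)."""
    num = den = 0.0
    for t, y in zip(resp_times, resp_vals):
        for s, x in zip(cov_times, cov_vals):
            w = np.exp(-0.5 * ((t - s) / h) ** 2)  # weight on time mismatch
            num += w * y * x
            den += w * x * x
    return num / den

# Asynchronous observation times; true model is E[Y(t)] = 2 * X(t)
rng = np.random.default_rng(2)
t_resp = np.sort(rng.uniform(0.0, 10.0, 200))
t_cov = np.sort(rng.uniform(0.0, 10.0, 200))
x = np.sin(t_cov)
y = 2.0 * np.sin(t_resp) + rng.normal(scale=0.1, size=200)
beta = kernel_weighted_beta(t_resp, y, t_cov, x, h=0.2)
```

The bandwidth h controls the bias-variance trade-off: wider kernels use more mismatched pairs (more bias from covariate drift), narrower kernels use fewer pairs (more variance), which is why the asynchronous estimator converges more slowly than its synchronous counterpart.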
Khalid, Ruzelan; Nawawi, Mohd Kamal M; Kawsar, Luthful A; Ghani, Noraida A; Kamil, Anton A; Mustafa, Adli
2013-01-01
M/G/C/C state-dependent queuing networks treat service rates as a function of the number of residing entities (e.g., pedestrians, vehicles, or products). However, modeling such dynamic rates is not supported in modern discrete event simulation (DES) software. We designed an approach to overcome this limitation and used it to construct an M/G/C/C state-dependent queuing model in Arena software. Using the model, we evaluated and analyzed the impact of various arrival rates on the throughput, the blocking probability, the expected service time, and the expected number of entities in a complex network topology. Results indicated that for each network there is a range of arrival rates over which the simulation results fluctuate drastically across replications, causing discrepancies between the simulation results and the analytical results. Detailed results showing how closely the simulation results tally with the analytical results, in both abstract and graphical forms, together with scientific justifications, have been documented and discussed.
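For reference, the analytical stationary distribution of a single M/G/C/C state-dependent queue, in the product form commonly attributed to Yuhaski and Smith and typically used to validate such simulations, can be computed directly; the capacity, rates, and linear speed-decay function below are illustrative only:

```python
import math

def mgcc_state_probs(lam, mean_service, capacity, speed):
    """Stationary distribution [P(0), ..., P(C)] of an M/G/C/C
    state-dependent queue: service is slowed by the factor speed(n)
    when n entities are present (speed(1) = 1 for a lone entity)."""
    weights = []
    for n in range(capacity + 1):
        prod = 1.0
        for i in range(1, n + 1):
            prod *= speed(i)
        weights.append((lam * mean_service) ** n / (math.factorial(n) * prod))
    total = sum(weights)
    return [w / total for w in weights]

# Example: linear speed decay with crowding, capacity 10
probs = mgcc_state_probs(lam=2.0, mean_service=1.0, capacity=10,
                         speed=lambda n: (10 - n + 1) / 10)
blocking = probs[-1]                 # probability an arrival is lost
throughput = 2.0 * (1 - blocking)    # accepted arrival rate
```

With speed(n) = 1 the formula reduces to the Erlang loss (M/M/C/C) distribution, which provides a convenient sanity check against the simulated blocking probabilities.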
NASA Technical Reports Server (NTRS)
Clayton, J. Louie; Ehle, Curt; Saxon, Jeff (Technical Monitor)
2002-01-01
RSRM nozzle liner components have been analyzed and tested to explore the occurrence of anomalous material performance known as pocketing erosion. Primary physical factors that contribute to pocketing seem to include the geometric permeability, which governs pore pressure magnitudes and hence load, and carbon fiber high-temperature tensile strength, which defines a material limiting capability. The study reports on the results of a coupled thermostructural finite element analysis of Carbon Cloth Phenolic (CCP) material tested at the Laser Hardened Material Evaluation Laboratory (the LHMEL facility). Modeled test configurations are limited to the special case where temperature gradients are oriented perpendicular to the composite material ply angle. Analyses were conducted using a transient, one-dimensional flow/thermal finite element code that models pore pressure and temperature distributions and, in an explicitly coupled formulation, passes this information to a 2-dimensional finite element structural model for determination of the stress/deformation behavior of the orthotropic fiber/matrix CCP. Pore pressures are generated by thermal decomposition of the phenolic resin, which evolves as a multi-component gas phase that is partially trapped in the porous microstructure of the composite. The resultant pressures are described using Darcy relationships that have been modified to permit a multi-species mass and momentum balance including water vapor condensation. Solution of the conjugate flow/thermal equations was performed using the SINDA code. Of particular importance to this problem was the implementation of a char- and deformation-state-dependent (geometric) permeability, describing a first-order interaction between the flow/thermal and structural models. Material property models are used to characterize the solid-phase mechanical stiffness and failure. Structural calculations were performed using the ABAQUS code.
Iterations were made between the two codes involving the dependent variables temperature, pressure, and across-ply strain level. Model result comparisons are made for three different surface heat rates, and dependent-variable sensitivities are discussed for the various cases.
Modeling the Coupled Chemo-Thermo-Mechanical Behavior of Amorphous Polymer Networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmerman, Jonathan A.; Nguyen, Thao D.; Xiao, Rui
2015-02-01
Amorphous polymers exhibit a rich landscape of time-dependent behavior including viscoelasticity, structural relaxation, and viscoplasticity. These time-dependent mechanisms can be exploited to achieve shape-memory behavior, which allows the material to store a programmed deformed shape indefinitely and to recover the undeformed shape entirely in response to a specific environmental stimulus. The shape-memory performance of amorphous polymers depends on the coordination of multiple physical mechanisms, and considerable opportunities exist to tailor the polymer structure and shape-memory programming procedure to achieve the desired performance. The goal of this project was to use a combination of theoretical, numerical, and experimental methods to investigate the effect of shape-memory programming, thermo-mechanical properties, and physical and environmental aging on shape-memory performance. Physical and environmental aging occurs during storage and through exposure to solvents, such as water, and can significantly alter the viscoelastic behavior and shape-memory behavior of amorphous polymers. This project, executed primarily by Professor Thao Nguyen and graduate student Rui Xiao at Johns Hopkins University in support of a DOE/NNSA Presidential Early Career Award in Science and Engineering (PECASE), developed a theoretical framework for the chemo-thermo-mechanical behavior of amorphous polymers to model the effects of physical aging and solvent-induced environmental factors on their thermoviscoelastic behavior.
Structural composite panel performance under long-term load
Theodore L. Laufenberg
1988-01-01
Information on the performance of wood-based structural composite panels under long-term load is currently needed to permit their use in engineered assemblies and systems. A broad assessment of the time-dependent properties of panels is critical for creating databases and models of the creep-rupture phenomenon that lead to reliability-based design procedures. This...
Ionospheric convection inferred from interplanetary magnetic field-dependent Birkeland currents
NASA Technical Reports Server (NTRS)
Rasmussen, C. E.; Schunk, R. W.
1988-01-01
Computer simulations of ionospheric convection have been performed, combining empirical models of Birkeland currents with a model of ionospheric conductivity in order to investigate IMF-dependent convection characteristics. Birkeland currents representing conditions in the northern polar cap for a negative IMF By component are used. Two possibilities are considered: (1) the morning cell shifting into the polar cap as the IMF turns northward, with this cell and a distorted evening cell providing for sunward flow in the polar cap; and (2) the existence of a three-cell pattern when the IMF is strongly northward.
HCFA and the states: politics and intergovernmental leverage.
Gormley, W T; Boccuti, C
2001-06-01
In this article, we seek to explain variations in the Health Care Financing Administration's (HCFA) relationship to state governments. After reviewing several alternative models of the policy-making process, we argue that the utility of each model depends on certain issue characteristics, especially salience and conflict. We further argue that HCFA's choice of intergovernmental tools, rooted in a political setting, depends on the same issue characteristics. We illustrate our arguments by examining HCFA's behavior during the Clinton administration and by focusing on four cases: HMO performance measurement, nursing home regulation, lead screening for children, and the Children's Health Insurance Program (CHIP).
Molecular Dynamics Modeling of Thermal Properties of Aluminum Near Melting Line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karavaev, A. V.; Dremov, V. V.; Sapozhnikov, F. A.
2006-08-03
In this work we present results of calculations of the thermal properties of the solid and liquid phases of aluminum at different densities and temperatures, using classical molecular dynamics with an EAM potential function. The dependence of the heat capacity CV on temperature and density has been analyzed. It was shown that as temperature increases, the behavior of the heat capacity CV deviates from the Dulong-Petit law; this may be explained by the influence of anharmonicity of the crystal lattice vibrations. A comparison of the heat capacity CV of the liquid phase with Grover's model has been performed. The dependence of the aluminum melting temperature on pressure has also been obtained.
2014-01-01
Background This study aims to suggest an approach that integrates multilevel models and eigenvector spatial filtering methods and apply it to a case study of self-rated health status in South Korea. In many previous health-related studies, multilevel models and single-level spatial regression are used separately. However, the two methods should be used in conjunction because the objectives of both approaches are important in health-related analyses. The multilevel model enables the simultaneous analysis of both individual and neighborhood factors influencing health outcomes. However, the results of conventional multilevel models are potentially misleading when spatial dependency across neighborhoods exists. Spatial dependency in health-related data indicates that health outcomes in nearby neighborhoods are more similar to each other than those in distant neighborhoods. Spatial regression models can address this problem by modeling spatial dependency. This study explores the possibility of integrating a multilevel model and eigenvector spatial filtering, an advanced spatial regression for addressing spatial dependency in datasets. Methods In this spatially filtered multilevel model, eigenvectors function as additional explanatory variables accounting for unexplained spatial dependency within the neighborhood-level error. The specification addresses the inability of conventional multilevel models to account for spatial dependency, and thereby, generates more robust outputs. Results The findings show that sex, employment status, monthly household income, and perceived levels of stress are significantly associated with self-rated health status. Residents living in neighborhoods with low deprivation and a high doctor-to-resident ratio tend to report higher health status. 
The spatially filtered multilevel model provides unbiased estimates and improves the explanatory power of the model compared to conventional multilevel models, although in this case study there are no changes in the signs of the parameters or the significance levels between the two models. Conclusions The integrated approach proposed in this paper is a useful tool for understanding the geographical distribution of self-rated health status within a multilevel framework. In future research, it would be useful to apply the spatially filtered multilevel model to other datasets in order to clarify the differences between the two models. It is anticipated that this integrated method will also outperform conventional models when it is used in other contexts. PMID:24571639
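The eigenvector spatial filtering step can be sketched as follows: eigenvectors of the doubly centered spatial weights matrix serve as synthetic map-pattern regressors added at the neighborhood level (a minimal construction on a toy lattice; real applications select eigenvectors by a Moran's I threshold or stepwise fit):

```python
import numpy as np

def moran_eigenvectors(W, k):
    """Leading k eigenvectors of M W M with M = I - 11'/n (the doubly
    centered spatial weights matrix). The leading eigenvectors capture
    large-scale positive spatial autocorrelation and can be entered as
    extra explanatory variables to soak up spatial dependency."""
    n = W.shape[0]
    M = np.eye(n) - np.ones((n, n)) / n
    MWM = M @ ((W + W.T) / 2) @ M
    vals, vecs = np.linalg.eigh(MWM)
    order = np.argsort(vals)[::-1]      # largest eigenvalue first
    return vecs[:, order[:k]]

# Toy example: rook adjacency on a 4 x 4 lattice of neighborhoods
n_side = 4
W = np.zeros((n_side**2, n_side**2))
for i in range(n_side**2):
    r, c = divmod(i, n_side)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < n_side and 0 <= cc < n_side:
            W[i, rr * n_side + cc] = 1.0
E = moran_eigenvectors(W, 3)   # three leading spatial filters
```

Because the filters absorb the spatially structured part of the neighborhood-level error, the remaining random effects come closer to the independence assumption of the conventional multilevel model.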
Van der Ende, Jan; Verhulst, Frank C; Tiemeier, Henning
2016-08-01
Internalizing and externalizing problems are associated with poor academic performance, both concurrently and longitudinally. Important questions are whether problems precede academic performance or vice versa, whether both internalizing and externalizing are associated with academic problems when simultaneously tested, and whether associations and their direction depend on the informant providing information. These questions were addressed in a sample of 816 children who were assessed four times. The children were 6-10 years at baseline and 14-18 years at the last assessment. Parent-reported internalizing and externalizing problems and teacher-reported academic performance were tested in cross-lagged models to examine bidirectional paths between these constructs. These models were compared with cross-lagged models testing paths between teacher-reported internalizing and externalizing problems and parent-reported academic performance. Both final models revealed similar pathways from mostly externalizing problems to academic performance. No paths emerged from internalizing problems to academic performance. Moreover, paths from academic performance to internalizing and externalizing problems were only found when teachers reported on children's problems and not for parent-reported problems. Additional model tests revealed that paths were observed in both childhood and adolescence. Externalizing problems place children at increased risk of poor academic performance and should therefore be the target for interventions.
Multi-objective optimization for generating a weighted multi-model ensemble
NASA Astrophysics Data System (ADS)
Lee, H.
2017-12-01
Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combine multiple performance metrics for global climate models and their dynamically downscaled regional climate simulations over North America and to generate a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance.
Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
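The trade-off step can be sketched with a Pareto (non-dominated) filter over per-model error metrics, followed by a simple inverse-error weighting of the surviving models (the error values and the weighting rule below are invented for illustration; the study's metrics, models, and optimization scheme differ):

```python
import numpy as np

def pareto_front(errors):
    """Indices of non-dominated rows, lower error being better on every
    metric: a row is dominated if some other row is <= everywhere and
    strictly < somewhere."""
    n = errors.shape[0]
    keep = []
    for i in range(n):
        dominated = any(
            np.all(errors[j] <= errors[i]) and np.any(errors[j] < errors[i])
            for j in range(n) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical per-model errors on two conflicting metrics
errors = np.array([[0.2, 0.9],
                   [0.5, 0.5],
                   [0.9, 0.2],
                   [0.8, 0.8]])   # last model is dominated by the second
front = pareto_front(errors)
# Weight the non-dominated models by inverse total error, normalized
w = 1.0 / errors[front].sum(axis=1)
weights = w / w.sum()
```

A plain average of the two metrics would have ranked the corner models and the balanced model very differently depending on metric scaling; the non-dominated filter sidesteps that by keeping every defensible trade-off before weights are assigned.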
NASA Astrophysics Data System (ADS)
van der Hilst, Floor
2018-03-01
The sustainability of biomass production for energy depends on site-specific biophysical and socio-economic conditions. New research using high-resolution ecosystem process modelling shows the trade-offs between economic and environmental performance of biomass production for an ethanol biorefinery.
Computational analysis of Variable Thrust Engine (VTE) performance
NASA Technical Reports Server (NTRS)
Giridharan, M. G.; Krishnan, A.; Przekwas, A. J.
1993-01-01
The Variable Thrust Engine (VTE) of the Orbital Maneuvering Vehicle (OMV) uses a hypergolic propellant combination of Monomethyl Hydrazine (MMH) and Nitrogen Tetroxide (NTO) as fuel and oxidizer, respectively. The performance of the VTE depends on a number of complex interacting phenomena such as atomization, spray dynamics, vaporization, turbulent mixing, convective/radiative heat transfer, and hypergolic combustion. This study involved the development of a comprehensive numerical methodology to facilitate detailed analysis of the VTE. An existing Computational Fluid Dynamics (CFD) code was extensively modified to include the following models: a two-liquid, two-phase Eulerian-Lagrangian spray model; a chemical equilibrium model; and a discrete ordinate radiation heat transfer model. The modified code was used to conduct a series of simulations to assess the effects of various physical phenomena and boundary conditions on the VTE performance. The details of the models and the results of the simulations are presented.
Hydraulic modeling of clay ceramic water filters for point-of-use water treatment.
Schweitzer, Ryan W; Cunningham, Jeffrey A; Mihelcic, James R
2013-01-02
The acceptability of ceramic filters for point-of-use water treatment depends not only on the quality of the filtered water, but also on the quantity of water the filters can produce. This paper presents two mathematical models for the hydraulic performance of ceramic water filters under typical usage. A model is developed for two common filter geometries: paraboloid- and frustum-shaped. Both models are calibrated and evaluated by comparison to experimental data. The hydraulic models are able to predict the following parameters as functions of time: water level in the filter (h), instantaneous volumetric flow rate of filtrate (Q), and cumulative volume of water produced (V). The models' utility is demonstrated by applying them to estimate how the volume of water produced depends on factors such as the filter shape and the frequency of filling. Both models predict that the volume of water produced can be increased by about 45% if users refill the filter three times per day versus only once per day. Also, the models predict that filter geometry affects the volume of water produced: for two filters with equal volume, equal wall thickness, and equal hydraulic conductivity, a filter that is tall and thin will produce as much as 25% more water than one which is shallow and wide. We suggest that the models can be used as tools to help optimize filter performance.
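The falling-head structure of such a hydraulic model can be sketched with a forward-Euler integration of dh/dt = -Q(h)/A(h) plus scheduled refills (the shape and rate functions below are arbitrary placeholders, not the paper's calibrated paraboloid or frustum models):

```python
import numpy as np

def simulate_filter(h0, plan_area, outflow, dt, t_end, refill_times=()):
    """Falling-head integration of a pot filter: dh/dt = -Q(h)/A(h),
    with optional refills back to the full level h0. Returns
    (times, levels, cumulative filtrate volume)."""
    times = np.arange(0.0, t_end, dt)
    h, volume = h0, 0.0
    levels = []
    for t in times:
        if any(abs(t - tr) < dt / 2 for tr in refill_times):
            h = h0                       # user refills the filter
        q = outflow(h) if h > 0 else 0.0
        volume += q * dt                 # cumulative volume produced
        h = max(h - q * dt / plan_area(max(h, 1e-9)), 0.0)
        levels.append(h)
    return times, np.array(levels), volume

# One fill per day vs. three fills per day (time in hours, arbitrary units)
area = lambda h: 1.0           # free-surface area (constant for simplicity)
darcy = lambda h: 0.2 * h      # filtrate flow proportional to head
_, _, vol_once = simulate_filter(1.0, area, darcy, 0.01, 24.0)
_, _, vol_thrice = simulate_filter(1.0, area, darcy, 0.01, 24.0,
                                   refill_times=(8.0, 16.0))
```

Because the flow rate decays as the head falls, refilling more often keeps the filter operating near its fastest regime, which is the qualitative mechanism behind the paper's finding that three fills per day yield substantially more water than one.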
El-Diasty, Mohammed; Pagiatakis, Spiros
2009-01-01
In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at -40 °C, -20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain optimal navigation solution for MEMS-based INS/GPS integration.
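The first-order core of an AR-based Gauss-Markov noise model can be sketched as follows: fit the lag-1 coefficient of a stationary noise record and convert it to a correlation time (synthetic data with an assumed 2 s correlation time; the paper develops higher-order AR models per temperature point):

```python
import numpy as np

def fit_gm1(x, dt):
    """Fit a first-order Gauss-Markov process
    x_k = exp(-dt/tau) * x_{k-1} + w_k to a zero-mean noise record and
    return the correlation time tau."""
    x = x - x.mean()
    phi = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])   # lag-1 AR coefficient
    return -dt / np.log(phi)

# Synthetic inertial-sensor noise: 2 s correlation time, 100 Hz sampling
rng = np.random.default_rng(3)
dt, tau_true = 0.01, 2.0
phi_true = np.exp(-dt / tau_true)
n = 200_000
w = rng.normal(size=n)
x = np.empty(n)
x[0] = 0.0
for k in range(1, n):
    x[k] = phi_true * x[k - 1] + w[k]
tau_est = fit_gm1(x, dt)
```

Repeating this fit on records collected at different chamber temperatures is what reveals the temperature dependence of the correlation time that the paper reports.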
Intersymbol Interference Investigations Using a 3D Time-Dependent Traveling Wave Tube Model
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Andro, Monty
2002-01-01
For the first time, a time-dependent, physics-based computational model has been used to provide a direct description of the effects of the traveling wave tube amplifier (TWTA) on modulated digital signals. The TWT model comprehensively takes into account the effects of frequency dependent AM/AM and AM/PM conversion; gain and phase ripple; drive-induced oscillations; harmonic generation; intermodulation products; and backward waves. Thus, signal integrity can be investigated in the presence of these sources of potential distortion as a function of the physical geometry and operating characteristics of the high power amplifier and the operational digital signal. This method promises superior predictive fidelity compared to methods using TWT models based on swept-amplitude and/or swept-frequency data. First, the TWT model using the three dimensional (3D) electromagnetic code MAFIA is presented. Then, this comprehensive model is used to investigate approximations made in conventional TWT black-box models used in communication system level simulations. To quantitatively demonstrate the effects these approximations have on digital signal performance predictions, including intersymbol interference (ISI), the MAFIA results are compared to the system level analysis tool, Signal Processing Workstation (SPW), using high order modulation schemes including 16- and 64-QAM.
Surface Adsorption in Nonpolarizable Atomic Models.
Whitmer, Jonathan K; Joshi, Abhijeet A; Carlton, Rebecca J; Abbott, Nicholas L; de Pablo, Juan J
2014-12-09
Many ionic solutions exhibit species-dependent properties, including surface tension and the salting-out of proteins. These effects may be loosely quantified in terms of the Hofmeister series, first identified in the context of protein solubility. Here, our interest is to develop atomistic models capable of capturing Hofmeister effects rigorously. Importantly, we aim to capture this dependence in computationally cheap "hard" ionic models, which do not exhibit dynamic polarization. To do this, we have performed an investigation detailing the effects of the water model on these properties. Though incredibly important, the role of water models in simulation of ionic solutions and biological systems is essentially unexplored. We quantify this via the ion-dependent surface attraction of the halide series (Cl, Br, I) and, in so doing, determine the relative importance of various hypothesized contributions to ionic surface free energies. Importantly, we demonstrate surface adsorption can result in hard ionic models combined with a thermodynamically accurate representation of the water molecule (TIP4Q). The effect observed in simulations of iodide is commensurate with previous calculations of the surface potential of mean force in rigid molecular dynamics and polarizable density-functional models. Our calculations are direct simulation evidence of the subtle but sensitive role of water thermodynamics in atomistic simulations.
Development of an algorithm to model an aircraft equipped with a generic CDTI display
NASA Technical Reports Server (NTRS)
Driscoll, W. C.; Houck, J. A.
1986-01-01
A model of human pilot performance of a tracking task using a generic Cockpit Display of Traffic Information (CDTI) display is developed from experimental data. The tracking task is to use CDTI in tracking a leading aircraft at a nominal separation of three nautical miles over a prescribed trajectory in space. The analysis of the data resulting from a factorial design of experiments reveals that the tracking task performance depends on the pilot and his experience at performing the task. Performance was not strongly affected by the type of control system used (velocity vector control wheel steering versus 3D automatic flight path guidance and control). The model that is developed and verified results in state trajectories whose difference from the experimental state trajectories is small compared to the variation due to the pilot and experience factors.
Side-information-dependent correlation channel estimation in hash-based distributed video coding.
Deligiannis, Nikos; Barbarien, Joeri; Jacobs, Marc; Munteanu, Adrian; Skodras, Athanassios; Schelkens, Peter
2012-04-01
In the context of low-cost video encoding, distributed video coding (DVC) has recently emerged as a potential candidate for uplink-oriented applications. This paper builds on a concept of correlation channel (CC) modeling, which expresses the correlation noise as being statistically dependent on the side information (SI). Compared with classical side-information-independent (SII) noise modeling adopted in current DVC solutions, it is theoretically proven that side-information-dependent (SID) modeling improves the Wyner-Ziv coding performance. Anchored in this finding, this paper proposes a novel algorithm for online estimation of the SID CC parameters based on already decoded information. The proposed algorithm enables bit-plane-by-bit-plane successive refinement of the channel estimation leading to progressively improved accuracy. Additionally, the proposed algorithm is included in a novel DVC architecture that employs a competitive hash-based motion estimation technique to generate high-quality SI at the decoder. Experimental results corroborate our theoretical gains and validate the accuracy of the channel estimation algorithm. The performance assessment of the proposed architecture shows remarkable and consistent coding gains over a germane group of state-of-the-art distributed and standard video codecs, even under strenuous conditions, i.e., large groups of pictures and highly irregular motion content.
The Routine Fitting of Kinetic Data to Models
Berman, Mones; Shahn, Ezra; Weiss, Marjory F.
1962-01-01
A mathematical formalism is presented for use with digital computers to permit the routine fitting of data to physical and mathematical models. Given a set of data, the mathematical equations describing a model, initial conditions for an experiment, and initial estimates for the values of model parameters, the computer program automatically proceeds to obtain a least squares fit of the data by an iterative adjustment of the values of the parameters. When the experimental measures are linear combinations of functions, the linear coefficients for a least squares fit may also be calculated. The values of both the parameters of the model and the coefficients for the sum of functions may be unknown independent variables, unknown dependent variables, or known constants. In the case of dependence, only linear dependencies are provided for in routine use. The computer program includes a number of subroutines, each one of which performs a special task. This permits flexibility in choosing various types of solutions and procedures. One subroutine, for example, handles linear differential equations, another, special non-linear functions, etc. The use of analytic or numerical solutions of equations is possible. PMID:13867975
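The iterative least-squares adjustment described above can be sketched as a small Gauss-Newton loop for a one-exponential kinetic model, y = A·exp(-k·t). This is a generic illustration, not the original program: the model form, the starting values, and the finite-difference Jacobian are simplifying assumptions.

```python
import math

def model(t, p):
    # One-exponential kinetic model: y = A * exp(-k * t), with p = (A, k).
    a, k = p
    return a * math.exp(-k * t)

def gauss_newton_fit(t, y, p0, n_iter=50, h=1e-7):
    """Iterative least-squares adjustment of (A, k), in the spirit of the
    routine fitting procedure described above (simplified sketch)."""
    p = list(p0)
    for _ in range(n_iter):
        r = [yi - model(ti, p) for ti, yi in zip(t, y)]
        # Finite-difference Jacobian of the model w.r.t. the parameters.
        J = []
        for ti in t:
            row = []
            for j in range(2):
                q = list(p)
                q[j] += h
                row.append((model(ti, q) - model(ti, p)) / h)
            J.append(row)
        # Normal equations (J^T J) dp = J^T r, solved in closed form (2x2).
        a11 = sum(row[0] * row[0] for row in J)
        a12 = sum(row[0] * row[1] for row in J)
        a22 = sum(row[1] * row[1] for row in J)
        b1 = sum(row[0] * ri for row, ri in zip(J, r))
        b2 = sum(row[1] * ri for row, ri in zip(J, r))
        det = a11 * a22 - a12 * a12
        p = [p[0] + (a22 * b1 - a12 * b2) / det,
             p[1] + (a11 * b2 - a12 * b1) / det]
    return p
```

Each iteration solves the normal equations for a parameter correction; on clean data the loop converges to the generating parameters.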
Compensation for loads during arm movements using equilibrium-point control.
Gribble, P L; Ostry, D J
2000-12-01
A significant problem in motor control is how information about movement error is used to modify control signals to achieve desired performance. A potential source of movement error and one that is readily controllable experimentally relates to limb dynamics and associated movement-dependent loads. In this paper, we have used a position control model to examine changes to control signals for arm movements in the context of movement-dependent loads. In the model, based on the equilibrium-point hypothesis, equilibrium shifts are adjusted directly in proportion to the positional error between desired and actual movements. The model is used to simulate multi-joint movements in the presence of both "internal" loads due to joint interaction torques, and externally applied loads resulting from velocity-dependent force fields. In both cases it is shown that the model can achieve close correspondence to empirical data using a simple linear adaptation procedure. An important feature of the model is that it achieves compensation for loads during movement without the need for either coordinate transformations between positional error and associated corrective forces, or inverse dynamics calculations.
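The adaptation scheme can be sketched numerically: a spring-like controller pulls a point mass toward a shifting equilibrium position, a velocity-dependent load perturbs the movement, and after each trial the equilibrium trajectory is shifted in proportion to the positional error. All dynamics parameters, gains, and the minimum-jerk desired path below are invented for illustration, not taken from the paper.

```python
def simulate_trial(lam, dt=0.01, m=1.0, k=50.0, b=8.0, c=15.0):
    """One movement: m*x'' = k*(lambda - x) - b*x' - c*x' (spring-like muscle
    with stiffness k, intrinsic damping b, velocity-dependent load c;
    hypothetical values)."""
    x = v = 0.0
    traj = []
    for l in lam:
        acc = (k * (l - x) - (b + c) * v) / m
        v += dt * acc
        x += dt * v
        traj.append(x)
    return traj

def adapt(x_desired, n_trials=15, alpha=0.5):
    """Trial-by-trial linear adaptation: shift the equilibrium trajectory in
    proportion to the positional error, as in the rule described above."""
    lam = list(x_desired)
    peak_errors = []
    for _ in range(n_trials):
        traj = simulate_trial(lam)
        err = [xd - x for xd, x in zip(x_desired, traj)]
        peak_errors.append(max(abs(e) for e in err))
        lam = [l + alpha * e for l, e in zip(lam, err)]
    return peak_errors
```

Across trials the peak tracking error shrinks without any inverse dynamics computation: the correction lives entirely in the shifted equilibria, which is the point made above.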
Li, Baoyue; Bruyneel, Luk; Lesaffre, Emmanuel
2014-05-20
A traditional Gaussian hierarchical model assumes a nested multilevel structure for the mean and a constant variance at each level. We propose a Bayesian multivariate multilevel factor model that assumes a multilevel structure for both the mean and the covariance matrix. That is, in addition to a multilevel structure for the mean, we also assume that the covariance matrix depends on covariates and random effects. This allows us to explore whether the covariance structure depends on the values of the higher levels and, as such, models heterogeneity in the variances and correlation structure of the multivariate outcome across the higher level values. The approach is applied to the three-dimensional vector of burnout measurements collected on nurses in a large European study to answer the research question of whether the covariance matrix of the outcomes depends on recorded system-level features in the organization of nursing care, but also on unrecorded factors that vary with countries, hospitals, and nursing units. Simulations illustrate the performance of our modeling approach. Copyright © 2013 John Wiley & Sons, Ltd.
Coupled ion redistribution and electronic breakdown in low-alkali boroaluminosilicate glass
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Doo Hyun, E-mail: cooldoo@add.re.kr; Randall, Clive, E-mail: car4@psu.edu; Furman, Eugene, E-mail: euf1@psu.edu
2015-08-28
Dielectrics with high electrostatic energy storage must have exceptionally high dielectric breakdown strength at elevated temperatures. Another important consideration in designing a high performance dielectric is understanding the thickness and temperature dependence of breakdown strengths. Here, we develop a numerical model that assumes coupled ionic redistribution and electronic breakdown, and apply it to predict the breakdown strength of low-alkali glass. The ionic charge transport of three likely charge carriers (Na⁺, H⁺/H₃O⁺, Ba²⁺) was used to calculate the ionic depletion width in low-alkali boroaluminosilicate, which can further be used for the breakdown modeling. This model predicts breakdown strengths in the 10⁸–10⁹ V/m range and also accounts for the experimentally observed two distinct thickness-dependent regions for breakdown. Moreover, the model successfully predicts the temperature-dependent breakdown strength for low-alkali glass from room temperature up to 150 °C. This model showed that breakdown strengths were governed by minority charge carriers in the form of ionic transport (mostly sodium) in these glasses.
NASA Astrophysics Data System (ADS)
Moslemipour, Ghorbanali
2018-07-01
This paper proposes a quadratic assignment-based mathematical model to deal with the stochastic dynamic facility layout problem. In this problem, product demands are assumed to be dependent, normally distributed random variables with known probability density function and covariance that change from period to period at random. To solve the proposed model, a novel hybrid intelligent algorithm is proposed by combining the simulated annealing and clonal selection algorithms. The proposed model and the hybrid algorithm are verified and validated using design-of-experiment and benchmark methods. The results show that the hybrid algorithm has an outstanding performance in terms of both solution quality and computational time. Besides, the proposed model can be used in both stochastic and deterministic situations.
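The quadratic-assignment core of the problem, and the simulated annealing half of the proposed hybrid, can be sketched as below. The clonal selection component and the stochastic, period-varying demands are omitted, and the cooling schedule and temperatures are arbitrary choices, so this is a minimal single-period sketch only.

```python
import math
import random

def qap_cost(perm, flow, dist):
    """Material-handling cost of a layout: facility perm[i] occupies site i."""
    n = len(perm)
    return sum(flow[perm[i]][perm[j]] * dist[i][j]
               for i in range(n) for j in range(n))

def simulated_annealing(flow, dist, t0=100.0, cooling=0.95, iters=2000, seed=1):
    """Anneal over layouts by random pairwise swaps with Metropolis acceptance."""
    rng = random.Random(seed)
    n = len(flow)
    perm = list(range(n))
    best = list(perm)
    t = t0
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        cand = list(perm)
        cand[i], cand[j] = cand[j], cand[i]
        delta = qap_cost(cand, flow, dist) - qap_cost(perm, flow, dist)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            perm = cand
            if qap_cost(perm, flow, dist) < qap_cost(best, flow, dist):
                best = list(perm)
        t *= cooling
    return best
```

By construction the returned layout never costs more than the starting layout, and on small instances it can be checked against exhaustive enumeration.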
Multi-water-bag models of ion temperature gradient instability in cylindrical geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coulette, David; Besse, Nicolas
2013-05-15
Ion temperature gradient instabilities play a major role in the understanding of anomalous transport in core fusion plasmas. In the considered cylindrical geometry, ion dynamics is described using a drift-kinetic multi-water-bag model for the parallel velocity dependency of the ion distribution function. In a first stage, global linear stability analysis is performed. From the obtained normal modes, parametric dependencies of the main spectral characteristics of the instability are then examined. Comparison of the multi-water-bag results with a reference continuous Maxwellian case allows us to evaluate the effects of discrete parallel velocity sampling induced by the multi-water-bag model. Differences between the global model and local models considered in previous works are discussed. Using results from linear, quasilinear, and nonlinear numerical simulations, an analysis of the first stage saturation dynamics of the instability is proposed, where the divergence between the three models is examined.
Finite element analysis of history-dependent damage in time-dependent fracture mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnaswamy, P.; Brust, F.W.; Ghadiali, N.D.
1993-11-01
The demands for structural systems to perform reliably under both severe and changing operating conditions continue to increase. Under these conditions, time-dependent straining and history-dependent damage become extremely important. This work focuses on studying creep crack growth using finite element (FE) analysis. Two important issues, namely, (1) the use of history-dependent constitutive laws, and (2) the use of various fracture parameters in predicting creep crack growth, have both been addressed in this work. The constitutive model used here is the one developed by Murakami and Ohno and is based on the concept of a creep hardening surface. An implicit FE algorithm for this model was first developed and verified for simple geometries and loading configurations. The numerical methodology developed here has been used to model stationary and growing cracks in CT specimens. Various fracture parameters, such as C₁, C*, T*, and J, were used to compare the numerical predictions with experimental results available in the literature. A comparison of the values of these parameters as a function of time has been made for both stationary and growing cracks. The merit of using each of these parameters has also been discussed.
Park, Jungkap; Saitou, Kazuhiro
2014-09-18
Multibody potentials accounting for cooperative effects of molecular interactions have shown better accuracy than typical pairwise potentials. The main challenge in the development of such potentials is to find relevant structural features that characterize tightly folded proteins. Also, the side-chains of residues adopt several specific, staggered conformations, known as rotamers, within protein structures. Different molecular conformations result in different dipole moments and induce charge reorientations. However, until now, modeling of the rotameric states of residues has not been incorporated into the development of multibody potentials for modeling non-bonded interactions in protein structures. In this study, we develop a new multibody statistical potential which can account for the influence of rotameric states on the specificity of atomic interactions. In this potential, named "rotamer-dependent atomic statistical potential" (ROTAS), the interaction between two atoms is specified not only by the distance and relative orientation but also by two state parameters concerning the rotameric states of the residues to which the interacting atoms belong. It was clearly found that the rotameric state is correlated with the specificity of atomic interactions. Such rotamer dependencies are not limited to a specific type or certain range of interactions. The performance of ROTAS was tested using 13 sets of decoys and was compared to those of existing atomic-level statistical potentials which incorporate orientation-dependent energy terms. The results show that ROTAS performs better than other competing potentials not only in native structure recognition, but also in best model selection and correlation coefficients between energy and model quality. A new multibody statistical potential, ROTAS, accounting for the influence of rotameric states on the specificity of atomic interactions, was developed and tested on decoy sets.
The results show that ROTAS has improved ability to recognize native structure from decoy models compared to other potentials. The effectiveness of ROTAS may provide insightful information for the development of many applications which require accurate side-chain modeling such as protein design, mutation analysis, and docking simulation.
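The general recipe behind knowledge-based potentials of this kind is the inverse-Boltzmann relation, E = -kT·ln(P_obs/P_ref); a rotamer-dependent potential such as ROTAS additionally conditions the observed counts on the rotameric states of the interacting residues. A minimal, state-free sketch follows; the bin layout and counts are invented for illustration.

```python
import math

def statistical_potential(counts, ref_counts):
    """Knowledge-based energies per distance bin via inverse Boltzmann:
    E = -ln(P_obs / P_ref), in kT units. In a rotamer-dependent potential the
    observed counts would additionally be conditioned on the rotameric states
    of the interacting residues; a single conditioning state is shown here."""
    n_obs = sum(counts)
    n_ref = sum(ref_counts)
    energies = []
    for c, r in zip(counts, ref_counts):
        if c > 0 and r > 0:
            energies.append(-math.log((c / n_obs) / (r / n_ref)))
        else:
            energies.append(0.0)   # crude sparse-data fallback
    return energies
```

Bins observed more often than the reference expects come out with negative (favorable) energies, and under-represented bins with positive ones.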
NASA Astrophysics Data System (ADS)
Mladenova, I. E.; Crow, W. T.; Teng, W. L.; Doraiswamy, P.
2010-12-01
Crop yield in crop production models is simulated as a function of weather, ground conditions, and management practices, and is driven by the amount of nutrients, heat, and water available in the root zone. It has been demonstrated that assimilation of satellite-derived soil moisture data has the potential to improve the model root-zone soil water (RZSW) information. However, the satellite estimates represent the moisture conditions of the top 3 cm to 5 cm of the soil profile, depending on system configuration and surface conditions (e.g., soil wetness, density of the canopy cover). The propagation of this superficial information throughout the profile depends on the model physics. In an Ensemble Kalman Filter (EnKF) data assimilation system, such as the one examined here, the update of each soil layer is done through the Kalman gain, K. K is a weighting factor that determines how much correction will be applied to the forecasts. Furthermore, K depends on the strength of the correlation between the surface and the root-zone soil moisture; the stronger this correlation is, the more the observations will impact the analysis. This means that even if the satellite-derived product has higher sensitivity and accuracy than the model estimates, the improvement of the RZSW will be negligible if the surface-root zone coupling is weak, the latter being determined by the model's subsurface physics. This research examines: (1) the strength of the vertical coupling in the Environmental Policy Integrated Climate (EPIC) model over corn- and soybean-covered fields in Iowa, US, (2) the potential to improve EPIC RZSW information through assimilation of satellite soil moisture data derived from the Advanced Microwave Scanning Radiometer (AMSR-E), and (3) the impact of the vertical coupling on the EnKF performance.
Five degrees of freedom linear state-space representation of electrodynamic thrust bearings
NASA Astrophysics Data System (ADS)
Van Verdeghem, J.; Kluyskens, V.; Dehez, B.
2017-09-01
Electrodynamic bearings can provide stable and contactless levitation of rotors while operating at room temperature. Because they rely solely on passive phenomena, specific models have to be developed to study the forces they exert and the resulting rotordynamics. In recent years, models allowing us to describe the axial dynamics of a large range of electrodynamic thrust bearings have been derived. However, these bearings being devised to be integrated into fully magnetic suspensions, the existing models still suffer from restrictions. Indeed, assuming the spin speed varies slowly, a rigid rotor is characterised by five independent degrees of freedom, whereas early models only considered the axial degree. This paper presents a model free of the previous limitations. It consists of a linear state-space representation describing the rotor's complete dynamics by considering the impact of the rotor's axial, radial, and angular displacements as well as the gyroscopic effects. This set of ten equations depends on twenty parameters whose identification can be easily performed through static finite element simulations or quasi-static experimental measurements. The model stresses the intrinsic decoupling between the axial dynamics and the other degrees of freedom, as well as the existence of electrodynamic angular torques restoring the rotor to its nominal position. Finally, a stability analysis performed on the model highlights the presence of two conical whirling modes related to the angular dynamics, namely the nutation and precession motions. The former, whose intrinsic stability depends on the ratio between the polar and transverse moments of inertia, can be easily stabilised through external damping, whereas the latter, which is stable up to an instability threshold linked to the angular electrodynamic cross-coupling stiffness, is less affected by that damping.
Contributions to lateral balance control in ambulatory older adults.
Sparto, Patrick J; Newman, A B; Simonsick, E M; Caserotti, P; Strotmeyer, E S; Kritchevsky, S B; Yaffe, K; Rosano, C
2018-06-01
In older adults, impaired control of standing balance in the lateral direction is associated with the increased risk of falling. Assessing the factors that contribute to impaired standing balance control may identify areas to address to reduce falls risk. To investigate the contributions of physiological factors to standing lateral balance control. Two hundred twenty-two participants from the Pittsburgh site of the Health, Aging and Body Composition Study had lateral balance control assessed using a clinical sensory integration balance test (standing on level and foam surface with eyes open and closed) and a lateral center of pressure tracking test using visual feedback. The center of pressure was recorded from a force platform. Multiple linear regression models examined contributors of lateral control of balance performance, including concurrently measured tests of lower extremity sensation, knee extensor strength, executive function, and clinical balance tests. Models were adjusted for age, body mass index, and sex. Larger lateral sway during the sensory integration test performed on foam was associated with longer repeated chair stands time. During the lateral center of pressure tracking task, the error in tracking increased at higher frequencies; greater error was associated with worse executive function. The relationship between sway performance and physical and cognitive function differed between women and men. Contributors to control of lateral balance were task-dependent. Lateral standing performance on an unstable surface may be more dependent upon general lower extremity strength, whereas visual tracking performance may be more dependent upon cognitive factors. Lateral balance control in ambulatory older adults is associated with deficits in strength and executive function.
Modeling the Magnetopause Shadowing Loss during the October 2012 Dropout Event
NASA Astrophysics Data System (ADS)
Tu, Weichao; Cunningham, Gregory
2017-04-01
The relativistic electron flux in Earth's outer radiation belt is observed to drop by orders of magnitude on timescales of a few hours, an event known as a radiation belt dropout. Where do the electrons go during the dropouts? This is one of the most important outstanding questions in radiation belt studies. Radiation belt electrons can be lost either by precipitation into the atmosphere or by transport across the magnetopause into interplanetary space. The latter mechanism is called magnetopause shadowing, usually combined with outward radial diffusion of electrons due to the sharp radial gradient it creates. In order to quantify the relative contribution of these two mechanisms to radiation belt dropout, we performed an event study on the October 2012 dropout event observed by the Van Allen Probes. First, the precipitating MeV electrons observed by multiple NOAA POES satellites at low altitude did not show evidence of enhanced precipitation during the dropout, which suggested that precipitation was not the dominant loss mechanism for the event. Then, in order to simulate the magnetopause shadowing loss and outward radial diffusion during the dropout, we applied a radial diffusion model with electron lifetimes on the order of electron drift periods outside the last closed drift shell. In addition, realistic and event-specific inputs of radial diffusion coefficients (DLL) and the last closed drift shell (LCDS) were implemented in the model. Specifically, we used the new DLL developed by Cunningham [JGR 2016], which were estimated in the realistic TS04 [Tsyganenko and Sitnov, JGR 2005] storm-time magnetic field model and include a physical K (second adiabatic invariant) or pitch angle dependence. An event-specific LCDS traced in the TS04 model with realistic K dependence was also implemented. Our simulation results showed that these event-specific inputs are critical to explaining the electron dropout during the event.
The new DLL greatly improved the model performance in low L* regions (L* < 3.6) compared to the empirical Kp-dependent DLL [Brautigam and Albert, JGR 2000] used in previous radial diffusion models. Combining the event-specific DLL and LCDS, our model well captured the magnetopause shadowing loss and reproduced the electron dropout at L* = 4.0-4.5. In addition, we found that the K-dependent LCDS is critical to reproducing the pitch angle dependence of the observed electron dropout.
Repetition priming in selective attention: A TVA analysis.
Ásgeirsson, Árni Gunnar; Kristjánsson, Árni; Bundesen, Claus
2015-09-01
Current behavior is influenced by events in the recent past. In visual attention, this is expressed in many variations of priming effects. Here, we investigate color priming in a brief-exposure digit-recognition task. Observers performed a masked odd-one-out singleton recognition task where the target color either repeated or changed between subsequent trials. Performance was measured by recognition accuracy over exposure durations. The purpose of the study was to replicate earlier findings of perceptual priming in brief displays and to model those results based on a Theory of Visual Attention (TVA; Bundesen, 1990). We tested 4 different definitions of a generic TVA model and assessed their explanatory power. Our hypothesis was that priming effects could be explained by selective mechanisms, and that target-color repetitions would only affect the selectivity parameter (α) of our models. Repeating target colors enhanced performance for all 12 observers. As predicted, this was only true under conditions that required selection of a target among distractors, but not when a target was presented alone. Model fits by TVA were obtained with a trial-by-trial maximum likelihood estimation procedure that estimated 4 to 15 free parameters, depending on the particular model. We draw two main conclusions. Color priming can be modeled simply as a change in selectivity between conditions of repetition or swap of target color. Depending on the desired resolution of analysis, priming can accurately be modeled by a simple four-parameter model, in which VSTM capacity and spatial biases of attention are ignored, or in more fine-grained fashion by a 10-parameter model that takes these aspects into account. Copyright © 2015 Elsevier B.V. All rights reserved.
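The selectivity account can be sketched with a minimal fixed-capacity TVA-style accuracy function: the target's processing rate is its share of a total capacity C, with distractor attentional weights scaled by a selectivity parameter α, and accuracy is the probability that the target is encoded before the mask. All parameter values here are hypothetical, not the fitted ones from the study.

```python
import math

def p_correct(tau, alpha, C=40.0, t0=0.02, n_distractors=5):
    """TVA-style accuracy for a masked display of duration tau (seconds).
    The target (weight 1) competes with n_distractors of weight alpha; a
    smaller alpha means better selectivity, as after a target-color
    repetition on the priming account above. Values are hypothetical."""
    if tau <= t0:
        return 0.0                     # nothing is encoded before t0
    v_target = C / (1.0 + alpha * n_distractors)
    return 1.0 - math.exp(-v_target * (tau - t0))
```

Lowering α raises accuracy at every exposure duration, which is how a pure selectivity change reproduces the repetition benefit without touching capacity or spatial bias parameters.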
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dirian, Yves; Foffa, Stefano; Kunz, Martin
We present a comprehensive and updated comparison with cosmological observations of two non-local modifications of gravity previously introduced by our group, the so-called RR and RT models. We implement the background evolution and the cosmological perturbations of the models in a modified Boltzmann code, using CLASS. We then test the non-local models against the Planck 2015 TT, TE, EE and Cosmic Microwave Background (CMB) lensing data, isotropic and anisotropic Baryonic Acoustic Oscillations (BAO) data, JLA supernovae, H₀ measurements and growth rate data, and we perform Bayesian parameter estimation. We then compare the RR, RT and ΛCDM models using the Savage-Dickey method. We find that the RT model and ΛCDM perform equally well, while the performance of the RR model with respect to ΛCDM depends on whether or not we include a prior on H₀ based on local measurements.
Developing a physiologically based approach for modeling plutonium decorporation therapy with DTPA.
Kastl, Manuel; Giussani, Augusto; Blanchardon, Eric; Breustedt, Bastian; Fritsch, Paul; Hoeschen, Christoph; Lopez, Maria Antonia
2014-11-01
To develop a physiologically based compartmental approach for modeling plutonium decorporation therapy with the chelating agent diethylenetriaminepentaacetic acid (Ca-DTPA/Zn-DTPA). Model calculations were performed using the software package SAAM II (©The Epsilon Group, Charlottesville, Virginia, USA). The Luciani/Polig compartmental model, with an age-dependent description of the bone recycling processes, was used for the biokinetics of plutonium. The Luciani/Polig model was slightly modified in order to account for the speciation of plutonium in blood and for the different affinities of the present chemical species for DTPA. The introduction of two separate blood compartments, describing low-molecular-weight complexes of plutonium (Pu-LW) and transferrin-bound plutonium (Pu-Tf), respectively, and one additional compartment describing plutonium in the interstitial fluids was performed successfully. The next step of the work is the modeling of the chelation process, coupling the physiologically modified structure with the biokinetic model for DTPA. Results of animal studies performed under controlled conditions will enable a better understanding of the principles of the mechanisms involved.
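Structurally, such compartmental biokinetic models are systems of linear ODEs with first-order transfer between compartments. A generic single-step integrator is sketched below; the real Luciani/Polig and DTPA models involve many more compartments, age-dependent rates, and chelation terms, so the two-compartment example in the test is purely illustrative.

```python
def compartment_step(q, k, dt):
    """One Euler step of a linear compartmental model: q[i] is the content of
    compartment i, and k[i][j] the fractional transfer rate (per unit time)
    from compartment i to compartment j. Generic sketch only."""
    n = len(q)
    dq = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i != j and k[i][j]:
                flow = k[i][j] * q[i]
                dq[i] -= flow   # leaves compartment i ...
                dq[j] += flow   # ... and enters compartment j
    return [qi + dt * dqi for qi, dqi in zip(q, dq)]
```

Because every outflow from one compartment is an inflow to another, total content is conserved at each step, a basic sanity check for any such model before loss or excretion pathways are added.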
Investigating the Effect of Damage Progression Model Choice on Prognostics Performance
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Roychoudhury, Indranil; Narasimhan, Sriram; Saha, Sankalita; Saha, Bhaskar; Goebel, Kai
2011-01-01
The success of model-based approaches to systems health management depends largely on the quality of the underlying models. In model-based prognostics, it is especially the quality of the damage progression models, i.e., the models describing how damage evolves as the system operates, that determines the accuracy and precision of remaining useful life predictions. Several common forms of these models are generally assumed in the literature, but are often not supported by physical evidence or physics-based analysis. In this paper, using a centrifugal pump as a case study, we develop different damage progression models. In simulation, we investigate how model changes influence prognostics performance. Results demonstrate that, in some cases, simple damage progression models are sufficient. But, in general, the results show a clear need for damage progression models that are accurate over long time horizons under varied loading conditions.
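The sensitivity of remaining-useful-life (RUL) predictions to the assumed damage progression form can be seen even in one dimension: integrate d' = f(d) from the current damage level to a failure threshold under two different assumed forms. Both forms and all constants below are invented for illustration, not taken from the pump case study.

```python
def rul(damage_rate, d0, d_fail, dt=0.01, t_max=1e4):
    """Remaining useful life: integrate the damage progression model
    d' = damage_rate(d) from current damage d0 to the failure threshold."""
    d, t = d0, 0.0
    while d < d_fail and t < t_max:
        d += dt * damage_rate(d)
        t += dt
    return t

# Two candidate damage progression forms that agree poorly in extrapolation:
def linear_rate(d):
    return 0.05            # constant damage accumulation rate

def exponential_rate(d):
    return 0.25 * d        # damage accelerates with accumulated damage
```

Starting from the same state, the two forms give RULs that differ by roughly a factor of two, which is the long-horizon divergence the paper's results warn about.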
An Integrated Model of Cognitive Control in Task Switching
ERIC Educational Resources Information Center
Altmann, Erik M.; Gray, Wayne D.
2008-01-01
A model of cognitive control in task switching is developed in which controlled performance depends on the system maintaining access to a code in episodic memory representing the most recently cued task. The main constraint on access to the current task code is proactive interference from old task codes. This interference and the mechanisms that…
A Case Study on a Combination NDVI Forecasting Model Based on the Entropy Weight Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Shengzhi; Ming, Bo; Huang, Qiang
It is critically important to accurately predict NDVI (Normalized Difference Vegetation Index), which helps guide regional ecological remediation and environmental management. In this study, a combination forecasting model (CFM) was proposed to improve the performance of NDVI predictions in the Yellow River Basin (YRB) based on three individual forecasting models, i.e., the Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Support Vector Machine (SVM) models. The entropy weight method was employed to determine the weight coefficient for each individual model depending on its predictive performance. Results showed that: (1) ANN exhibits the highest fitting capability among the four forecasting models in the calibration period, whilst its generalization ability becomes weak in the validation period; MLR has a poor performance in both calibration and validation periods; the predicted results of CFM in the calibration period have the highest stability; (2) CFM generally outperforms all individual models in the validation period, and can improve the reliability and stability of predicted results by combining the strengths while reducing the weaknesses of individual models; (3) the performances of all forecasting models are better in dense vegetation areas than in sparse vegetation areas.
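The entropy weight method can be sketched directly: treat each model's per-period performance scores as a distribution, compute its normalized entropy, and weight each model by one minus that entropy, so models with more informative (less uniform) performance profiles receive larger weights. The score matrix in the test is made up for illustration.

```python
import math

def entropy_weights(scores):
    """Entropy weight method: scores[i][j] is the performance score of model j
    in period i. A model whose scores are uniform across periods has maximal
    entropy, carries no discriminating information, and gets ~zero weight."""
    m = len(scores)                    # number of periods
    n = len(scores[0])                 # number of candidate models
    ents = []
    for j in range(n):
        col = [scores[i][j] for i in range(m)]
        s = sum(col)
        ents.append(-sum((v / s) * math.log(v / s) for v in col if v > 0)
                    / math.log(m))
    total = sum(1.0 - e for e in ents)
    return [(1.0 - e) / total for e in ents]

def combine(preds, weights):
    """Combination forecast: entropy-weighted sum of individual predictions."""
    return sum(p * w for p, w in zip(preds, weights))
```

The weights sum to one, so the combined forecast is a convex-like blend of the individual models, dominated by whichever model's performance profile is most informative.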
Performance monitoring can boost turboexpander efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
McIntire, R.
1982-07-05
Focuses on the turboexpander/refrigeration system's radial expander and radial compressor. Explains that radial expander efficiency depends on mass flow rate, inlet pressure, inlet temperature, discharge pressure, gas composition, and shaft speed. Discusses quantifying the performance of the separate components over a range of operating conditions; estimating the increase in performance associated with any hardware change; and developing an analytical (computer) model of the entire system by using the performance curves of individual components. Emphasizes antisurge control and modifying Q/N (flow rate/shaft speed).
Note: extraction of temperature-dependent interfacial resistance of thermoelectric modules.
Chen, Min
2011-11-01
This article discusses an approach for extracting the temperature dependency of the electrical interfacial resistance associated with thermoelectric devices. The method combines a traditional module-level test rig and a nonlinear numerical model of thermoelectricity to minimize measurement errors in the interfacial resistance. The extracted results represent useful data for investigating the characteristics of thermoelectric module resistance and comparing the performance of various modules. © 2011 American Institute of Physics.
Chow, Stephanie S.; Romo, Ranulfo; Brody, Carlos D.
2010-01-01
In a complex world, a sensory cue may prompt different actions in different contexts. A laboratory example of context-dependent sensory processing is the two-stimulus-interval discrimination task. In each trial, a first stimulus (f1) must be stored in short-term memory and later compared with a second stimulus (f2), for the animal to come to a binary decision. Prefrontal cortex (PFC) neurons need to interpret the f1 information in one way (perhaps with a positive weight) and the f2 information in an opposite way (perhaps with a negative weight), although they come from the very same secondary somatosensory cortex (S2) neurons; therefore, a functional sign inversion is required. This task thus provides a clear example of context-dependent processing. Here we develop a biologically plausible model of a context-dependent signal transformation of the stimulus encoding from S2 to PFC. To ground our model in experimental neurophysiology, we use neurophysiological data recorded by R. Romo’s laboratory from both cortical area S2 and PFC in monkeys performing the task. Our main goal is to use experimentally observed context-dependent modulations of firing rates in cortical area S2 as the basis for a model that achieves a context-dependent inversion of the sign of S2 to PFC connections. This is done without requiring any changes in connectivity (Salinas, 2004b). We (1) characterize the experimentally observed context-dependent firing rate modulation in area S2, (2) construct a model that results in the sign transformation, and (3) characterize the robustness and consequent biological plausibility of the model. PMID:19494146
The impact of the achievement motive on athletic performance in adolescent football players.
Zuber, Claudia; Conzelmann, Achim
2014-01-01
Researchers largely agree that there is a positive relationship between achievement motivation and athletic performance, which is why the achievement motive is viewed as a potential criterion for talent. However, the underlying mechanism behind this relationship remains unclear. In talent and performance models, main effect, mediator and moderator models have been suggested. A longitudinal study was carried out among 140 13-year-old football talents, using structural equation modelling to determine which model best explains how hope for success (HS) and fear of failure (FF), the two aspects of the achievement motive, together with motor skills and abilities affect performance. Over a period of half a year, HS can to some extent explain athletic performance, but this relationship is not mediated by the volume of training, sport-specific skills or abilities; nor does the achievement motive act as a moderating variable. Contrary to expectations, FF does not explain any part of performance. Aside from HS, however, motor abilities and in particular skills also predict a significant part of performance. The study confirms the widespread assumption that the development of athletic performance in football depends on multiple factors, and in particular that HS is worth watching in the medium term as a predictor of talent.
Development of system reliability models for railway bridges.
DOT National Transportation Integrated Search
2012-07-01
Performance of the railway transportation network depends on the reliability of railway bridges, which can be affected by various forms of deterioration and extreme environmental conditions. More than half of the railway bridges in the US were built ...
Models and techniques for evaluating the effectiveness of aircraft computing systems
NASA Technical Reports Server (NTRS)
Meyer, J. F.
1978-01-01
The development of system models that can provide a basis for the formulation and evaluation of aircraft computer system effectiveness, the formulation of quantitative measures of system effectiveness, and the development of analytic and simulation techniques for evaluating the effectiveness of a proposed or existing aircraft computer are described. Specific topics covered include: system models; performability evaluation; capability and functional dependence; computation of trajectory set probabilities; and hierarchical modeling of an air transport mission.
Electrode performance parameters for a radioisotope-powered AMTEC for space power applications
NASA Technical Reports Server (NTRS)
Underwood, M. L.; O'Connor, D.; Williams, R. M.; Jeffries-Nakamura, B.; Ryan, M. A.; Bankston, C. P.
1992-01-01
The alkali metal thermal-to-electric converter (AMTEC) is a device for the direct conversion of heat to electricity. Recently a design of an AMTEC using a radioisotope heat source was described, but the optimum condenser temperature was higher than the temperatures used in the laboratory to develop the electrode performance model. Laboratory experiments have now confirmed the dependence of two model parameters over a broader range of condenser and electrode temperatures for two candidate electrode compositions. One parameter, the electrochemical exchange current density at the reaction interface, is independent of the condenser temperature and depends only upon the collision rate of sodium at the reaction zone. The second, a morphological parameter that measures the mass-transport resistance through the electrode, is independent of condenser and electrode temperatures for molybdenum electrodes. For rhodium-tungsten electrodes, however, this parameter increases with decreasing electrode temperature, indicating an activated mass-transport mechanism such as surface diffusion.
NASA Astrophysics Data System (ADS)
Dittmann, Niklas; Splettstoesser, Janine; Helbig, Nicole
2018-03-01
We calculate the frequency-dependent equilibrium noise of a mesoscopic capacitor in time-dependent density functional theory (TDDFT). The capacitor is modeled as a single-level quantum dot with on-site Coulomb interaction and tunnel coupling to a nearby reservoir. The noise spectra are derived from linear-response conductances via the fluctuation-dissipation theorem. Thereby, we analyze the performance of a recently derived exchange-correlation potential with time-nonlocal density dependence in the finite-frequency linear-response regime. We compare our TDDFT noise spectra with real-time perturbation theory and find excellent agreement for noise frequencies below the reservoir temperature.
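The fluctuation-dissipation step used here admits a compact statement. In one common (symmetrized) convention, the equilibrium noise spectrum follows from the real part of the linear-response conductance as

```latex
S(\omega) \;=\; \hbar\omega \,\coth\!\left(\frac{\hbar\omega}{2 k_{\mathrm{B}} T}\right) \operatorname{Re} G(\omega),
```

where prefactors of two vary between conventions; in the classical limit $\hbar\omega \ll k_{\mathrm{B}} T$ this reduces to the Johnson-Nyquist form $S(\omega) \approx 2 k_{\mathrm{B}} T \operatorname{Re} G(\omega)$.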
Team deliberate practice in medicine and related domains: a consideration of the issues.
Harris, Kevin R; Eccles, David W; Shatzer, John H
2017-03-01
A better understanding of the factors influencing medical team performance and accounting for expert medical team performance should benefit medical practice. Therefore, the aim here is to highlight key issues with using deliberate practice to improve medical team performance, especially given the success of deliberate practice for developing individual expert performance in medicine and other domains. Highlighting these issues will inform the development of training for medical teams. The authors first describe team coordination and its critical role in medical teams. Presented next are the cognitive mechanisms that allow expert performers to accurately interpret the current situation via the creation of an accurate mental "model" of the current situation, known as a situation model. Following this, the authors propose that effective team performance depends at least in part on team members having similar models of the situation, known as a shared situation model. The authors then propose guiding principles for implementing team deliberate practice in medicine and describe how team deliberate practice can be used in an attempt to reduce barriers inherent in medical teams to the development of shared situation models. The paper concludes with considerations of limitations, and future research directions, concerning the implementation of team deliberate practice within medicine.
Atomistic modelling of magnetic nano-granular thin films
NASA Astrophysics Data System (ADS)
Agudelo-Giraldo, J. D.; Arbeláez-Echeverry, O. D.; Restrepo-Parra, E.
2018-03-01
In this work, a complete model for studying the magnetic behaviour of polycrystalline thin films at the nanoscale was developed. The model includes terms such as the exchange interaction, the dipolar interaction and various types of anisotropy. For the first term, an exchange interaction depending on the interatomic distance was used in order to quantify the interaction, mainly at grain boundaries. The third term includes crystalline, surface and boundary anisotropies. Special attention was paid to the disorder vector that determines the loss of cubic symmetry in the crystalline structure. For the dipolar interaction, an implementation similar to the fast multipole method (FMM) was used. Using these tools, modelling and simulations were carried out varying the number of grains, and the results showed a strong dependence of the magnetic properties on this parameter. Comparisons of the critical temperature and the saturation magnetization as functions of the number of grains were performed for samples with and without the surface and boundary anisotropies and the dipolar interaction. It was observed that including these terms decreased both the critical temperature and the saturation magnetization; furthermore, whether or not the disorder parameters were included, both the critical temperature and the saturation magnetization exhibited a range of values that also depends on the number of grains. This critical interval arises because each grain can undergo the transition to the ferromagnetic state at a different critical temperature. Zero-field-cooling (ZFC), field-cooling (FC) and field-cooling-in-warming (FCW) protocols were needed to understand the mono-domain regime around the transition temperature, owing to the high probability of a superparamagnetic (SPM) state.
Positional dependence of the SNPP VIIRS SD BRDF degradation factor
NASA Astrophysics Data System (ADS)
Lei, Ning; Chen, Xuexia; Chang, Tiejun; Xiong, Xiaoxiong
2017-09-01
The Visible Infrared Imaging Radiometer Suite (VIIRS) aboard the Suomi National Polar-orbiting Partnership (SNPP) satellite is a passive scanning radiometer and an imager. The VIIRS regularly performs on-orbit radiometric calibration of its reflective solar bands (RSBs) through observing an onboard sunlit solar diffuser (SD). The reflectance of the SD changes over time and the change is denoted as the SD bidirectional reflectance distribution function degradation factor. The degradation factor, measured by an onboard solar diffuser stability monitor, has been shown to be both incident sunlight and outgoing direction dependent. In this Proceeding, we investigate the factor's dependence on SD position. We develop a model to relate the SD degradation factor with the amount of solar exposure. We use Earth measurements to evaluate the effectiveness of the model.
Gering, Kevin L.
2013-01-01
A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics. The computing system also analyzes the cell information of the electrochemical cell with a Butler-Volmer (BV) expression modified to determine exchange current density of the electrochemical cell by including kinetic performance information related to pulse-time dependence, electrode surface availability, or a combination thereof. A set of sigmoid-based expressions may be included with the modified-BV expression to determine kinetic performance as a function of pulse time. The determined exchange current density may be used with the modified-BV expression, with or without the sigmoid expressions, to analyze other characteristics of the electrochemical cell. Model parameters can be defined in terms of cell aging, making the overall kinetics model amenable to predictive estimates of cell kinetic performance along the aging timeline.
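For illustration, the classic Butler-Volmer current-overpotential relation, together with a sigmoid pulse-time scaling of the exchange current density, can be sketched as below. The sigmoid form and its parameters (`t_half`, `k`) are illustrative assumptions, not the specific modified-BV expression of this system:

```python
import math

def butler_volmer(eta, i0, alpha=0.5, T=298.15):
    """Classic Butler-Volmer current density (A/m^2) for overpotential eta (V),
    exchange current density i0, and charge-transfer coefficient alpha."""
    F, R = 96485.0, 8.314  # Faraday constant (C/mol), gas constant (J/mol/K)
    f = F / (R * T)
    return i0 * (math.exp(alpha * f * eta) - math.exp(-(1.0 - alpha) * f * eta))

def pulse_modified_i0(i0_ref, t_pulse, t_half=1.0, k=2.0):
    """Hypothetical sigmoid scaling of the exchange current density with pulse
    time (seconds). This functional form is an illustrative placeholder, not
    the patented sigmoid-based expression."""
    z = k * (math.log10(t_pulse) - math.log10(t_half))
    return i0_ref / (1.0 + math.exp(-z))
```

At zero overpotential the anodic and cathodic branches cancel, so the net current vanishes, which makes a convenient sanity check for the implementation.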
NASA Astrophysics Data System (ADS)
Moreno-Camacho, Carlos A.; Montoya-Torres, Jairo R.; Vélez-Gallego, Mario C.
2018-06-01
Only a few studies in the available scientific literature address the problem of having a group of workers that do not share identical levels of productivity during the planning horizon. This study considers a workforce scheduling problem in which the actual processing time is a function of the scheduling sequence to represent the decline in workers' performance, evaluating two classical performance measures separately: makespan and maximum tardiness. Several mathematical models are compared with each other to highlight the advantages of each approach. The mathematical models are tested with randomly generated instances available from a public e-library.
Axial Compressor Reversed Flow Performance.
1985-05-01
5.3.2. Axial Temperature Profiles. Time-average axial temperature profiles were acquired through the use of exposed...on the above questions, or any additional details concerning the current application, future potential, or other value of this research. Please use the...were heavily dependent upon the model used for defining compressor post-stall performance, both steady state and transient, especially in the reversed ...
The dependence of the properties of optical fibres on length
NASA Astrophysics Data System (ADS)
Poppett, C. L.; Allington-Smith, J. R.
2010-05-01
We investigate the dependence on length of optical fibres used in astronomy, especially the focal ratio degradation (FRD) which places constraints on the performance of fibre-fed spectrographs used for multiplexed spectroscopy. To this end, we present a modified version of the FRD model proposed by Carrasco & Parry to quantify the number of scattering defects within an optical fibre using a single parameter. The model predicts many trends which are seen experimentally, for example, a decrease in FRD as core diameter increases, and also as wavelength increases. However, the model also predicts a strong dependence on FRD with length that is not seen experimentally. By adapting the single fibre model to include a second fibre, we can quantify the amount of FRD due to stress caused by the method of termination. By fitting the model to experimental data, we find that polishing the fibre causes more stress to be induced in the end of the fibre compared to a simple cleave technique. We estimate that the number of scattering defects caused by polishing is approximately double that produced by cleaving. By placing limits on the end effect, the model can be used to estimate the residual-length dependence in very long fibres, such as those required for Extremely Large Telescopes, without having to carry out costly experiments. We also use our data to compare different methods of fibre termination.
Nicolas, Renaud; Sibon, Igor; Hiba, Bassem
2015-01-01
The diffusion-weighted-dependent attenuation of the MRI signal E(b) is extremely sensitive to microstructural features. The aim of this study was to determine which mathematical model of the E(b) signal most accurately describes it in the brain. The models compared were the monoexponential model, the stretched exponential model, the truncated cumulant expansion (TCE) model, the biexponential model, and the triexponential model. Acquisition was performed with nine b-values up to 2500 s/mm² in 12 healthy volunteers. The goodness-of-fit was studied with F-tests and with the Akaike information criterion. Tissue contrasts were differentiated with a multiple comparison corrected nonparametric analysis of variance. F-test showed that the TCE model was better than the biexponential model in gray and white matter. Corrected Akaike information criterion showed that the TCE model has the best accuracy and produced the most reliable contrasts in white matter among all models studied. In conclusion, the TCE model was found to be the best model to infer the microstructural properties of brain tissue.
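The competing signal models compared in the study have simple closed forms. A minimal sketch is given below; the parameterizations and the least-squares AIC formula are the standard textbook ones, not the authors' fitting code:

```python
import numpy as np

def mono_exp(b, S0, D):
    """Monoexponential diffusion decay E(b) = S0 * exp(-b D)."""
    return S0 * np.exp(-b * D)

def stretched_exp(b, S0, DDC, alpha):
    """Stretched exponential with distributed diffusion coefficient DDC."""
    return S0 * np.exp(-(b * DDC) ** alpha)

def tce(b, S0, D, K):
    """Truncated cumulant expansion (kurtosis) model with excess kurtosis K."""
    return S0 * np.exp(-b * D + (K / 6.0) * (b * D) ** 2)

def bi_exp(b, S0, f, D_fast, D_slow):
    """Biexponential model: fast fraction f, slow fraction (1 - f)."""
    return S0 * (f * np.exp(-b * D_fast) + (1.0 - f) * np.exp(-b * D_slow))

def aic(rss, n_points, n_params):
    """Akaike information criterion for a least-squares fit with residual
    sum of squares rss; lower values indicate the preferred model."""
    return n_points * np.log(rss / n_points) + 2 * n_params
```

Each model nests the monoexponential one as a limiting case (K = 0, alpha = 1, or f = 1), which is why penalized criteria such as the AIC, rather than raw residuals, are needed to compare them fairly.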
Nonequilibrium simulations of model ionomers in an oscillating electric field
Ting, Christina L.; Sorensen-Unruh, Karen E.; Stevens, Mark J.; ...
2016-07-25
Here, we perform molecular dynamics simulations of a coarse-grained model of ionomer melts in an applied oscillating electric field. The frequency-dependent conductivity and susceptibility are calculated directly from the current density and polarization density, respectively. At high frequencies, we find a peak in the real part of the conductivity due to plasma oscillations of the ions. At lower frequencies, the dynamic response of the ionomers depends on the ionic aggregate morphology in the system, which consists of either percolated or isolated aggregates. We show that the dynamic response of the model ionomers to the applied oscillating field can be understood by comparison with relevant time scales in the systems, obtained from independent calculations.
Communication: Coordinate-dependent diffusivity from single molecule trajectories
NASA Astrophysics Data System (ADS)
Berezhkovskii, Alexander M.; Makarov, Dmitrii E.
2017-11-01
Single-molecule observations of biomolecular folding are commonly interpreted using the model of one-dimensional diffusion along a reaction coordinate, with a coordinate-independent diffusion coefficient. Recent analysis, however, suggests that more general models are required to account for single-molecule measurements performed with high temporal resolution. Here, we consider one such generalization: a model where the diffusion coefficient can be an arbitrary function of the reaction coordinate. Assuming Brownian dynamics along this coordinate, we derive an exact expression for the coordinate-dependent diffusivity in terms of the splitting probability within an arbitrarily chosen interval and the mean transition path time between the interval boundaries. This formula can be used to estimate the effective diffusion coefficient along a reaction coordinate directly from single-molecule trajectories.
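One ingredient of this construction, the splitting probability within an interval, can also be estimated directly from simulated Brownian trajectories. The sketch below uses a simple Euler scheme with the standard Itô spurious-drift term for a coordinate-dependent D(x); it illustrates the quantities involved, not the paper's closed-form estimator:

```python
import math
import random

def splitting_probability(D, dD_dx, x0, a, b, n_traj=200, dt=1e-4, seed=2):
    """Fraction of overdamped Brownian trajectories started at x0 that reach
    boundary b before boundary a, with coordinate-dependent diffusivity D(x).
    Euler step (Ito convention): dx = D'(x) dt + sqrt(2 D(x) dt) * N(0, 1)."""
    rng = random.Random(seed)
    hits_b = 0
    for _ in range(n_traj):
        x = x0
        while a < x < b:  # propagate until either boundary is crossed
            x += dD_dx(x) * dt + math.sqrt(2.0 * D(x) * dt) * rng.gauss(0.0, 1.0)
        hits_b += int(x >= b)
    return hits_b / n_traj
```

For constant D the splitting probability from the midpoint of [a, b] is 1/2, which provides a quick sanity check on the discretization.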
Turbulence modeling and experiments
NASA Technical Reports Server (NTRS)
Shabbir, Aamir
1992-01-01
The best way of verifying turbulence models is to do a direct comparison between the various terms and their models. The success of this approach depends upon the availability of data for the exact correlations (both experimental and DNS). The other approach involves numerically solving the differential equations and then comparing the results with the data. The results of such a computation will depend upon the accuracy of all the modeled terms and constants. Because of this, it is sometimes difficult to find the cause of a poor performance by a model. However, such a calculation is still meaningful in other ways, as it shows how a complete Reynolds stress model performs. Thirteen homogeneous flows are numerically computed using second order closure models. We concentrate only on those models which use a linear (or quasi-linear) model for the rapid term. This therefore includes the Launder, Reece and Rodi (LRR) model; the isotropization of production (IP) model; and the Speziale, Sarkar, and Gatski (SSG) model. We examine which of the three models performs better, along with their weaknesses, if any. The other work reported deals with the experimental balances of the second moment equations for a buoyant plume. Despite the tremendous amount of activity toward second order closure modeling of turbulence, very little experimental information is available about the budgets of the second moment equations. Part of the problem stems from our inability to measure the pressure correlations. However, if everything else appearing in these equations is known from experiment, the pressure correlations can be obtained as the closing terms. This is the closest we can come to obtaining these terms from experiment, and despite the measurement errors which might be present in such balances, the resulting information will be extremely useful for turbulence modelers.
The purpose of this part of the work was to provide such balances of the Reynolds stress and heat flux equations for the buoyant plume.
A consistent framework for Horton regression statistics that leads to a modified Hack's law
Furey, P.R.; Troutman, B.M.
2008-01-01
A statistical framework is introduced that resolves important problems with the interpretation and use of traditional Horton regression statistics. The framework is based on a univariate regression model that leads to an alternative expression for the Horton ratio, connects Horton regression statistics to distributional simple scaling, and improves the accuracy in estimating Horton plot parameters. The model is used to examine data for drainage area A and mainstream length L from two groups of basins located in different physiographic settings. Results show that confidence intervals for the Horton plot regression statistics are quite wide. Nonetheless, an analysis of covariance shows that regression intercepts, but not regression slopes, can be used to distinguish between basin groups. The univariate model is generalized to include n > 1 dependent variables. For the case where the dependent variables represent ln A and ln L, the generalized model performs somewhat better at distinguishing between basin groups than two separate univariate models. The generalized model leads to a modification of Hack's law where L depends on both A and Strahler order ω. Data show that ω plays a statistically significant role in the modified Hack's law expression. © 2008 Elsevier B.V.
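Hack's law, L = c A^h, and the modified form with Strahler order ω as an extra regressor can be fitted by ordinary least squares in log space. The sketch below uses illustrative coefficients, not the paper's fitted values:

```python
import numpy as np

def fit_hacks_law(area, length):
    """Fit Hack's law L = c * A**h by ordinary least squares in log-log space.
    Returns (c, h)."""
    x, y = np.log(np.asarray(area)), np.log(np.asarray(length))
    h, log_c = np.polyfit(x, y, 1)  # slope is the Hack exponent h
    return np.exp(log_c), h

def fit_modified_hacks_law(area, order, length):
    """Fit ln L = b0 + b1 ln A + b2 * omega, adding Strahler order omega as a
    regressor (a sketch of the modified Hack's law idea). Returns (b0, b1, b2)."""
    X = np.column_stack([np.ones(len(area)), np.log(area), np.asarray(order)])
    coef, *_ = np.linalg.lstsq(X, np.log(length), rcond=None)
    return coef
```

On synthetic data generated exactly from either model, both fits recover the generating coefficients, which is a useful check before applying them to noisy basin data.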
A Temperature-Dependent Battery Model for Wireless Sensor Networks.
Rodrigues, Leonardo M; Montez, Carlos; Moraes, Ricardo; Portugal, Paulo; Vasques, Francisco
2017-02-22
Energy consumption is a major issue in Wireless Sensor Networks (WSNs), as nodes are powered by chemical batteries with an upper bounded lifetime. Estimating the lifetime of batteries is a difficult task, as it depends on several factors, such as operating temperatures and discharge rates. Analytical battery models can be used for estimating both the battery lifetime and the voltage behavior over time. Still, available models usually do not consider the impact of operating temperatures on the battery behavior. The target of this work is to extend the widely-used Kinetic Battery Model (KiBaM) to include the effect of temperature on the battery behavior. The proposed Temperature-Dependent KiBaM (T-KiBaM) is able to handle operating temperatures, providing better estimates for the battery lifetime and voltage behavior. The performed experimental validation shows that T-KiBaM achieves an average accuracy error smaller than 0.33%, when estimating the lifetime of Ni-MH batteries for different temperature conditions. In addition, T-KiBaM significantly improves the original KiBaM voltage model. The proposed model can be easily adapted to handle other battery technologies, enabling the consideration of different WSN deployments.
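A minimal sketch of a temperature-aware KiBaM discharge loop is given below. The two-well kinetics are the standard KiBaM equations; the linear capacity derating with temperature is a placeholder assumption for illustration, whereas T-KiBaM fits its temperature terms to measured Ni-MH data:

```python
def simulate_kibam(capacity_ah, c, k, current_a, dt_h=0.01, temp_c=25.0):
    """Kinetic Battery Model (KiBaM) constant-current discharge.
    c: fraction of capacity in the available well; k: well-flow rate (1/h).
    Temperature enters as a hypothetical linear capacity derating below 25 C
    (an illustrative assumption, not the T-KiBaM fitted form).
    Returns the capacity (Ah) delivered before the available well empties."""
    eff = 1.0 - 0.005 * (25.0 - temp_c)       # placeholder derating factor
    cap = capacity_ah * max(eff, 0.0)
    y1, y2 = c * cap, (1.0 - c) * cap         # available / bound charge wells
    delivered = 0.0
    while y1 > 0.0:
        h1, h2 = y1 / c, y2 / (1.0 - c)       # well "heights"
        flow = k * (h2 - h1)                  # charge migrating to the available well
        y1 += (-current_a + flow) * dt_h
        y2 += -flow * dt_h
        if y1 > 0.0:
            delivered += current_a * dt_h
    return delivered
```

Because the bound well cannot refill the available well fast enough at the end of discharge, the delivered capacity is always below the nominal capacity, and a colder cell delivers less under this derating assumption.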
NASA Astrophysics Data System (ADS)
Ege, Kerem; Roozen, N. B.; Leclère, Quentin; Rinaldi, Renaud G.
2018-07-01
In the context of aeronautics, automotive and construction applications, the design of light multilayer plates with optimized vibroacoustical damping and isolation performances remains a major industrial challenge and a hot topic of research. This paper focuses on the vibrational behavior of three-layered sandwich composite plates in a broad-band frequency range. Several aspects are studied through measurement techniques and analytical modelling of a steel/polymer/steel plate sandwich system. A contactless measurement of the velocity field of plates using a scanning laser vibrometer is performed, from which the equivalent single layer complex rigidity (apparent bending stiffness and apparent damping) in the mid/high frequency ranges is estimated. The results are combined with low/mid frequency estimations obtained with a high-resolution modal analysis method so that the frequency dependent equivalent Young's modulus and equivalent loss factor of the composite plate are identified for the whole [40 Hz-20 kHz] frequency band. The results are in very good agreement with an equivalent single layer analytical modelling based on wave propagation analysis (model of Guyader). The comparison with this model allows identifying the frequency dependent complex modulus of the polymer core layer through inverse resolution. Dynamical mechanical analysis measurements are also performed on the polymer layer alone and compared with the values obtained through the inverse method. Again, a good agreement between these two estimations over the broad-band frequency range demonstrates the validity of the approach.
NASA Astrophysics Data System (ADS)
Wylie, Scott; Watson, Simon
2013-04-01
Any past, current or projected future wind farm developments are highly dependent on localised climatic conditions. For example, the mean wind speed, one of the main factors in assessing the economic feasibility of a wind farm, can vary significantly over length scales no greater than the size of a typical wind farm. Any additional heterogeneity at a potential site, such as forestry, can further affect the wind resource, quite apart from the additional difficulty of installation. If a wind farm is sited in an environmentally sensitive area, then the ability to predict the wind farm performance and possible impacts on the important localised climatic conditions is of increased importance. Siting of wind farms in environmentally sensitive areas is not uncommon, such as areas of peat-land as in this example. Areas of peat-land are important sinks for atmospheric carbon, but their ability to sequester carbon is highly dependent on the local climatic conditions. An operational wind farm's impact on such an area was investigated using CFD. Validation of the model outputs was carried out using field measurements from three automatic weather stations (AWS) located throughout the site. The study focuses on validation of both wind speed and turbulence measurements, whilst also assessing the model's ability to predict wind farm performance. The use of CFD to model the variation in wind speed over heterogeneous terrain, including wind turbine effects, is increasing in popularity. Encouraging results have increased confidence in the performance of CFD in complex terrain with features such as steep slopes and forests, which are not well modelled by the widely used linear models such as WAsP and MS-Micro.
Using concurrent measurements from three stationary AWS across the wind farm will allow detailed validation of the model-predicted flow characteristics, whilst aggregated power output information will allow an assessment of how accurately the model setup can predict wind farm performance. Given the influence of the local climatic conditions on the peat-land's ability to sequester carbon, accurate predictions of the local wind and turbulence features will allow us to quantify any possible wind farm influences. This work was carried out using the commercially available Reynolds Averaged Navier-Stokes (RANS) CFD package ANSYS CFX. Utilising the Windmodeller add-on in CFX, a series of simulations was carried out to assess wind flow interactions through and around the wind farm, incorporating features such as terrain, forestry and rotor wake interactions. Particular attention was paid to forestry effects, as the AWS are located close to forestry. Different Leaf Area Densities (LAD) were tested to assess how sensitive the model's output was to this change.
A multimodal approach to estimating vigilance using EEG and forehead EOG.
Zheng, Wei-Long; Lu, Bao-Liang
2017-04-01
Covert aspects of ongoing user mental states provide key context information for user-aware human computer interactions. In this paper, we focus on the problem of estimating the vigilance of users using EEG and EOG signals. The PERCLOS index as vigilance annotation is obtained from eye tracking glasses. To improve the feasibility and wearability of vigilance estimation devices for real-world applications, we adopt a novel electrode placement for forehead EOG and extract various eye movement features, which contain the principal information of traditional EOG. We explore the effects of EEG from different brain areas and combine EEG and forehead EOG to leverage their complementary characteristics for vigilance estimation. Considering that the vigilance of users is a dynamically changing process, because the intrinsic mental states of users involve temporal evolution, we introduce continuous conditional neural field and continuous conditional random field models to capture dynamic temporal dependency. We propose a multimodal approach to estimating vigilance by combining EEG and forehead EOG and incorporating the temporal dependency of vigilance into model training. The experimental results demonstrate that modality fusion can improve the performance compared with a single modality, that EOG and EEG contain complementary information for vigilance estimation, and that the temporal dependency-based models can enhance the performance of vigilance estimation. From the experimental results, we observe that theta and alpha frequency activities are increased, while gamma frequency activities are decreased, in drowsy states in contrast to awake states. The forehead setup allows for the simultaneous collection of EEG and EOG and achieves comparable performance using only four shared electrodes in comparison with the temporal and posterior sites.
Dual-memory processes in crack cocaine dependents: The effects of childhood neglect on recall.
Tractenberg, Saulo G; Viola, Thiago W; Gomes, Carlos F A; Wearick-Silva, Luis Eduardo; Kristensen, Christian H; Stein, Lilian M; Grassi-Oliveira, Rodrigo
2015-01-01
Exposure to adversities during sensitive periods of neurodevelopment is associated with the subsequent development of substance dependence and exerts harmful, long-lasting effects upon memory functioning. In this study, we investigated the relationship between childhood neglect (CN) and memory using a dual-process model that quantifies recollective and non-recollective retrieval processes in crack cocaine dependents. Eighty-four female crack cocaine-dependent inpatients who did (N = 32) or did not (N = 52) report a history of CN received multiple opportunities to study and recall a short list composed of familiar and concrete words and then received a delayed-recall test. Crack cocaine dependents with a history of CN showed worse performance on free-recall tests than did dependents without a history of CN; this finding was associated with declines in recollective retrieval (direct access) rather than non-recollective retrieval. In addition, we found no evidence of group differences in forgetting rates between immediate- and delayed-recall tests. The results support developmental models of traumatology and suggest that neglect of crack cocaine dependents in early life disrupts the adult memory processes that support the retrieval of detailed representations of events from the past.
Psychophysical and perceptual performance in a simulated-scotoma model of human eye injury
NASA Astrophysics Data System (ADS)
Brandeis, R.; Egoz, I.; Peri, D.; Sapiens, N.; Turetz, J.
2008-02-01
Macular scotomas, affecting visual functioning, characterize many eye and neurological diseases such as AMD, diabetes mellitus, multiple sclerosis, and macular hole. In this work, foveal visual field defects were modeled, and their effects were evaluated on spatial contrast sensitivity and on a task of stimulus detection and aiming. The modeled occluding scotomas, of different sizes, were superimposed on the stimuli presented on the computer display and were stabilized on the retina using a mono Purkinje eye tracker. Spatial contrast sensitivity was evaluated using square-wave grating stimuli, whose contrast thresholds were measured using the method of constant stimuli with "catch trials". The detection task consisted of a triple conjunctive visual search display of size (in visual angle), contrast, and background (simple, low-level features vs. complex, high-level features). Search/aiming accuracy as well as reaction time (R.T.) measures were used for performance evaluation. Artificially generated scotomas suppressed spatial contrast sensitivity in a size-dependent manner, similar to previous studies. The deprivation effect was dependent on spatial frequency, consistent with retinal inhomogeneity models. Stimulus detection time was slowed more in the complex-background search condition than in the simple-background condition. Detection speed was dependent on scotoma size and stimulus size. In contrast, visually guided aiming was more sensitive to the scotoma effect in the simple-background search condition than in the complex-background condition. Both stimulus-aiming R.T. and accuracy (precision targeting) were impaired as a function of scotoma size and stimulus size. The data can be explained by models distinguishing between saliency-based, parallel and serial search processes guiding visual attention, which are supported by underlying retinal as well as neural mechanisms.
Stöckl, Anna L; Kihlström, Klara; Chandler, Steven; Sponberg, Simon
2017-04-05
Flight control in insects is heavily dependent on vision. Thus, in dim light, the decreased reliability of visual signal detection also has consequences for insect flight. We have an emerging understanding of the neural mechanisms that different species employ to adapt the visual system to low light. Much less explored, however, are comparative analyses of how low light affects the flight behaviour of insect species, and the corresponding links between physiological adaptations and behaviour. We investigated whether the flower tracking behaviour of three hawkmoth species with different diel activity patterns revealed luminance-dependent adaptations, using a system identification approach. We found clear luminance-dependent differences in flower tracking in all three species, which were explained by a simple luminance-dependent delay model that generalized across species. We discuss physiological and anatomical explanations for the variance in tracking responses that could not be explained by such simple models. Differences between species could not be explained by the simple delay model alone. However, in several cases, they could be explained through the addition of a second model parameter, a simple scaling term, that captures the responsiveness of each species to flower movements. Thus, we demonstrate here that much of the variance in the luminance-dependent flower tracking responses of hawkmoths with different diel activity patterns can be captured by simple models of neural processing. This article is part of the themed issue 'Vision in dim light'. © 2017 The Author(s).
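The delay-and-scale model described above can be sketched minimally (this is not the authors' code; the 100 ms delay and the gain are illustrative, not fitted hawkmoth parameters): the moth responds to the flower stimulus as it was some time ago, scaled by a responsiveness term, so a fixed delay costs more phase lag at higher tracking frequencies.

```python
import math

def tracking_response(t, freq, delay, gain=1.0):
    """Moth position at time t while tracking sinusoidal flower motion of
    frequency `freq` (Hz): respond to the stimulus as it was `delay`
    seconds ago, scaled by `gain` (the responsiveness term)."""
    return gain * math.sin(2.0 * math.pi * freq * (t - delay))

# Phase lag introduced by a fixed 100 ms delay grows with frequency:
for f in (0.5, 2.0):
    print(f"{f} Hz: {360.0 * f * 0.1:.0f} deg lag")
```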
The effects of deep level traps on the electrical properties of semi-insulating CdZnTe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zha, Gangqiang; Yang, Jian; Xu, Lingyan
2014-01-28
Deep level traps have considerable effects on the electrical properties and radiation detection performance of high resistivity CdZnTe. A deep-trap model for high resistivity CdZnTe was proposed in this paper. The high resistivity mechanism and the electrical properties were analyzed based on this model. High resistivity CdZnTe with high trap ionization energy E_t can withstand high bias voltages. The leakage current is dependent on both the deep traps and the shallow impurities. The performance of a CdZnTe radiation detector will deteriorate at low temperatures, and the way in which sub-bandgap light excitation could improve the low temperature performance can be explained using the deep trap model.
Multilingual Twitter Sentiment Classification: The Role of Human Annotators
Mozetič, Igor; Grčar, Miha; Smailović, Jasmina
2016-01-01
What are the limits of automated Twitter sentiment classification? We analyze a large set of manually labeled tweets in different languages, use them as training data, and construct automated classification models. It turns out that the quality of classification models depends much more on the quality and size of training data than on the type of the model trained. Experimental results indicate that there is no statistically significant difference between the performance of the top classification models. We quantify the quality of training data by applying various annotator agreement measures, and identify the weakest points of different datasets. We show that the model performance approaches the inter-annotator agreement when the size of the training set is sufficiently large. However, it is crucial to regularly monitor the self- and inter-annotator agreements since this improves the training datasets and consequently the model performance. Finally, we show that there is strong evidence that humans perceive the sentiment classes (negative, neutral, and positive) as ordered. PMID:27149621
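The annotator agreement measures mentioned above include chance-corrected statistics such as Cohen's kappa for two annotators. A minimal sketch (not the authors' code; the example labels are invented):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators labelling the same items:
    chance-corrected agreement, 1 = perfect, 0 = chance level."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

ann1 = ["pos", "neg", "neu", "pos", "neg", "pos"]
ann2 = ["pos", "neg", "pos", "pos", "neg", "neu"]
print(round(cohens_kappa(ann1, ann2), 3))  # → 0.455
```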
A statistical analysis of the daily streamflow hydrograph
NASA Astrophysics Data System (ADS)
Kavvas, M. L.; Delleur, J. W.
1984-03-01
In this study a periodic statistical analysis of daily streamflow data in Indiana, U.S.A., was performed to gain some new insight into the stochastic structure which describes the daily streamflow process. This analysis was performed by the periodic mean and covariance functions of the daily streamflows, by the time- and peak-discharge-dependent recession limb of the daily streamflow hydrograph, by the time- and discharge-exceedance-level (DEL)-dependent probability distribution of the hydrograph peak interarrival time, and by the time-dependent probability distribution of the time to peak discharge. Some new statistical estimators were developed and used in this study. In general features, this study has shown that: (a) the persistence properties of daily flows depend on the storage state of the basin at the specified time origin of the flow process; (b) the daily streamflow process is time irreversible; (c) the probability distribution of the daily hydrograph peak interarrival time depends both on the occurrence time of the peak from which the interarrival time originates and on the discharge exceedance level; and (d) if the daily streamflow process is modeled as the release from a linear watershed storage, this release should depend on the state of the storage and on the time of the release, as the persistence properties and the recession limb decay rates were observed to change with the state of the watershed storage and time. Therefore, a time-varying reservoir system needs to be considered if the daily streamflow process is to be modeled as the release from a linear watershed storage.
Will, Johanna L; Eckart, Moritz T; Rosenow, Felix; Bauer, Sebastian; Oertel, Wolfgang H; Schwarting, Rainer K W; Norwood, Braxton A
2013-06-15
The human serial reaction time task (SRTT) has widely been used to study the neural basis of implicit learning. It is well documented, in both human and animal studies, that striatal dopaminergic processes play a major role in this task. However, findings on the role of the hippocampus - which is mainly associated with declarative memory - in implicit learning and performance are less consistent. We used a SRTT to evaluate implicit learning and performance in rats with perforant pathway stimulation-induced hippocampal neuron loss; a clinically relevant animal model of mesial temporal lobe epilepsy (MTLE-HS). As has been previously reported for the Sprague-Dawley strain, 8 h of continuous stimulation in male Wistar rats reliably induced widespread neuron loss in areas CA3 and CA1 with a characteristic sparing of CA2 and the granule cells. Histological analysis revealed that hippocampal volume was reduced by an average of 44%. Despite this severe hippocampal injury, rats showed superior performance in our instrumental SRTT, namely shorter reaction times without a loss in accuracy, especially during the second half of our 16-day testing period. These results demonstrate that a hippocampal lesion can improve performance in a rat SRTT, which is probably due to enhanced instrumental performance. In line with our previous findings based on ibotenic-acid-induced hippocampal lesions, these data support the hypothesis that loss or impairment of hippocampal function can enhance specific task performance, especially when it is dependent on procedural (striatum-dependent) mechanisms with minimal spatial requirements. As the animal model used here exhibits the defining characteristics of MTLE-HS, these findings may have implications for the study and management of patients with MTLE. Copyright © 2013 Elsevier B.V. All rights reserved.
Benchmarking test of empirical root water uptake models
NASA Astrophysics Data System (ADS)
dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman
2017-01-01
Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation. For medium and high R, the Feddes RWU model cannot properly mimic the root uptake dynamics predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios. For high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model. The proposed models are better at predicting RWU patterns similar to those of the physical model. The statistical indices point to them as the best alternatives for mimicking the RWU predictions of the physical model.
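The Feddes-type reduction function referred to above is commonly defined as a piecewise-linear stress factor of soil pressure head. A minimal sketch follows; the threshold heads are illustrative defaults, not the crop-specific values of this study:

```python
def feddes_alpha(h, h1=-10.0, h2=-25.0, h3=-400.0, h4=-8000.0):
    """Feddes-type stress reduction factor (0..1) vs. soil pressure head h
    (cm, more negative = drier). Uptake is zero wetter than h1 (oxygen
    stress) or drier than h4 (wilting), optimal between h2 and h3, and
    linear on the two ramps in between. Thresholds here are illustrative."""
    if h >= h1 or h <= h4:
        return 0.0
    if h > h2:                          # wet-side ramp between h1 and h2
        return (h1 - h) / (h1 - h2)
    if h < h3:                          # dry-side ramp between h3 and h4
        return (h - h4) / (h3 - h4)
    return 1.0                          # optimal range h2..h3

print(feddes_alpha(-100.0))  # optimal range → 1.0
print(feddes_alpha(-17.5))   # halfway down the wet-side ramp → 0.5
```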
Multi-Topic Tracking Model for dynamic social network
NASA Astrophysics Data System (ADS)
Li, Yuhua; Liu, Changzheng; Zhao, Ming; Li, Ruixuan; Xiao, Hailing; Wang, Kai; Zhang, Jun
2016-07-01
The topic tracking problem has attracted much attention in the last decades. However, existing approaches rarely consider network structures and textual topics together. In this paper, we propose a novel statistical model based on a dynamic Bayesian network, namely the Multi-Topic Tracking Model for Dynamic Social Network (MTTD). It takes the influence phenomenon, the selection phenomenon, the document generative process and the evolution of textual topics into account. Specifically, in our MTTD model, a Gibbs random field is defined to model the influence of the historical status of users in the network and the interdependency between them, in order to account for the influence phenomenon. To address the selection phenomenon, a stochastic block model is used to model the link generation process based on the users' interest in topics. Probabilistic Latent Semantic Analysis (PLSA) is used to describe the document generative process according to the users' interests. Finally, the dependence on the historical topic status is also considered, to ensure the continuity of the topic itself in the topic evolution model. The Expectation Maximization (EM) algorithm is utilized to estimate parameters in the proposed MTTD model. Empirical experiments on real datasets show that the MTTD model performs better than Popular Event Tracking (PET) and Dynamic Topic Model (DTM) in generalization performance, topic interpretability performance, topic content evolution and topic popularity evolution performance.
NASA Astrophysics Data System (ADS)
Engeland, Kolbjorn; Steinsland, Ingelin
2014-05-01
This study introduces a methodology for the construction of probabilistic inflow forecasts for multiple catchments and lead times, and investigates criteria for the evaluation of multivariate forecasts. A post-processing approach is used, and a Gaussian model is applied for transformed variables. The post-processing model has two main components, the mean model and the dependency model. The mean model is used to estimate the marginal distributions of forecasted inflow for each catchment and lead time, whereas the dependency model is used to estimate the full multivariate distribution of forecasts, i.e. co-variances between catchments and lead times. In operational situations, it is a straightforward task to use the models to sample inflow ensembles which inherit the dependencies between catchments and lead times. The methodology was tested and demonstrated in the river systems linked to the Ulla-Førre hydropower complex in southern Norway, where simultaneous probabilistic forecasts for five catchments and ten lead times were constructed. The methodology exhibits sufficient flexibility to utilize deterministic flow forecasts from a numerical hydrological model as well as statistical forecasts such as persistent forecasts and sliding window climatology forecasts. It also deals with variation in the relative weights of these forecasts with both catchment and lead time. When evaluating predictive performance in original space using cross validation, the case study found that it is important to include the persistent forecast for the initial lead times and the hydrological forecast for medium-term lead times. Sliding window climatology forecasts become more important for the latest lead times. Furthermore, operationally important features in this case study, such as heteroscedasticity, lead-time-varying between-lead-time dependency and lead-time-varying between-catchment dependency, are captured.
Two criteria were used for evaluating the added value of the dependency model. The first one was the energy score (ES), a multi-dimensional generalization of the continuous ranked probability score (CRPS). ES was calculated for all lead times and catchments together, for each catchment across all lead times, and for each lead time across all catchments. The second criterion was to use the CRPS for forecasted inflows accumulated over several lead times and catchments. The results showed that ES was not very sensitive to the correct covariance structure, whereas the CRPS for accumulated flows was more suitable for evaluating the dependency model. This indicates that it is more appropriate to evaluate relevant univariate variables that depend on the dependency structure than to evaluate the multivariate forecast directly.
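The energy score discussed above has a standard sample-based estimator over ensemble members; in one dimension the same formula is a CRPS estimator, which is why ES generalizes the CRPS. A minimal sketch with an invented two-member ensemble (not the study's data):

```python
import math

def energy_score(ensemble, obs):
    """Sample-based energy score for a multivariate ensemble forecast.

    ES = mean_i ||x_i - y|| - 0.5 * mean_{i,j} ||x_i - x_j|| over the m
    ensemble members x_i and the observation y; in one dimension this
    estimator reduces to the CRPS.
    """
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    m = len(ensemble)
    term1 = sum(dist(x, obs) for x in ensemble) / m
    term2 = sum(dist(xi, xj) for xi in ensemble for xj in ensemble) / (2 * m * m)
    return term1 - term2

# Two-member ensemble of inflows at two catchments, observation (1.0, 2.0):
print(energy_score([(0.0, 2.0), (2.0, 2.0)], (1.0, 2.0)))  # → 0.5
```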
Correcting evaluation bias of relational classifiers with network cross validation
Neville, Jennifer; Gallagher, Brian; Eliassi-Rad, Tina; ...
2011-01-04
Recently, a number of modeling techniques have been developed for data mining and machine learning in relational and network domains where the instances are not independent and identically distributed (i.i.d.). These methods specifically exploit the statistical dependencies among instances in order to improve classification accuracy. However, there has been little focus on how these same dependencies affect our ability to draw accurate conclusions about the performance of the models. More specifically, the complex link structure and attribute dependencies in relational data violate the assumptions of many conventional statistical tests and make it difficult to use these tests to assess the models in an unbiased manner. In this work, we examine the task of within-network classification and the question of whether two algorithms will learn models that will result in significantly different levels of performance. We show that the commonly used form of evaluation (paired t-test on overlapping network samples) can result in an unacceptable level of Type I error. Furthermore, we show that Type I error increases as (1) the correlation among instances increases and (2) the size of the evaluation set increases (i.e., the proportion of labeled nodes in the network decreases). Lastly, we propose a method for network cross-validation that combined with paired t-tests produces more acceptable levels of Type I error while still providing reasonable levels of statistical power (i.e., 1–Type II error).
Mueller, Evelyn A; Bengel, Juergen; Wirtz, Markus A
2013-12-01
This study aimed to develop a self-description assessment instrument to measure work performance in patients with musculoskeletal diseases. In terms of the International Classification of Functioning, Disability and Health (ICF), work performance is defined as the degree of meeting the work demands (activities) at the actual workplace (environment). To account for the fact that work performance depends on the work demands of the job, we strove to develop item banks that allow flexible use of item subgroups depending on the specific work demands of the patients' jobs. Item development included the collection of work tasks from the literature and content validation through expert surveys and patient interviews. The resulting 122 items were answered by 621 patients with musculoskeletal diseases. Exploratory factor analysis to ascertain dimensionality and Rasch analysis (partial credit model) for each of the resulting dimensions were performed. Exploratory factor analysis resulted in four dimensions, and subsequent Rasch analysis led to the following item banks: 'impaired productivity' (15 items), 'impaired cognitive performance' (18), 'impaired coping with stress' (13) and 'impaired physical performance' (low physical workload 20 items, high physical workload 10 items). The item banks exhibited person separation indices (reliability) between 0.89 and 0.96. The assessment of work performance adds the activities component to the more commonly employed participation component of the ICF model. The four item banks can be adapted to specific jobs where necessary without losing comparability of person measures, as the item banks are based on Rasch analysis.
Johnson, Fred A.; Jensen, Gitte H.; Madsen, Jesper; Williams, Byron K.
2014-01-01
We explored the application of dynamic-optimization methods to the problem of pink-footed goose (Anser brachyrhynchus) management in western Europe. We were especially concerned with the extent to which uncertainty in population dynamics influenced an optimal management strategy, the gain in management performance that could be expected if uncertainty could be eliminated or reduced, and whether an adaptive or robust management strategy might be most appropriate in the face of uncertainty. We combined three alternative survival models with three alternative reproductive models to form a set of nine annual-cycle models for pink-footed geese. These models represent a wide range of possibilities concerning the extent to which demographic rates are density dependent or independent, and the extent to which they are influenced by spring temperatures. We calculated state-dependent harvest strategies for these models using stochastic dynamic programming and an objective function that maximized sustainable harvest, subject to a constraint on desired population size. As expected, attaining the largest mean objective value (i.e., the relative measure of management performance) depended on the ability to match a model-dependent optimal strategy with its generating model of population dynamics. The nine models suggested widely varying objective values regardless of the harvest strategy, with the density-independent models generally producing higher objective values than models with density-dependent survival. In the face of uncertainty as to which of the nine models is most appropriate, the optimal strategy assuming that both survival and reproduction were a function of goose abundance and spring temperatures maximized the expected minimum objective value (i.e., maxi–min). In contrast, the optimal strategy assuming equal model weights minimized the expected maximum loss in objective value. 
The expected value of eliminating model uncertainty was an increase in objective value of only 3.0%. This value represents the difference between the best that could be expected if the most appropriate model were known and the best that could be expected in the face of model uncertainty. The value of eliminating uncertainty about the survival process was substantially higher than that associated with the reproductive process, which is consistent with evidence that variation in survival is more important than variation in reproduction in relatively long-lived avian species. Comparing the expected objective value if the most appropriate model were known with that of the maxi–min robust strategy, we found the value of eliminating uncertainty to be an expected increase of 6.2% in objective value. This result underscores the conservatism of the maxi–min rule and suggests that risk-neutral managers would prefer the optimal strategy that maximizes expected value, which is also the strategy that is expected to minimize the maximum loss (i.e., a strategy based on equal model weights). The low value of information calculated for pink-footed geese suggests that a robust strategy (i.e., one in which no learning is anticipated) could be as nearly effective as an adaptive one (i.e., a strategy in which the relative credibility of models is assessed through time). Of course, an alternative explanation for the low value of information is that the set of population models we considered was too narrow to represent key uncertainties in population dynamics. Yet we know that questions about the presence of density dependence must be central to the development of a sustainable harvest strategy. And while there are potentially many environmental covariates that could help explain variation in survival or reproduction, our admission of models in which vital rates are drawn randomly from reasonable distributions represents a worst-case scenario for management. 
We suspect that much of the value of the various harvest strategies we calculated is derived from the fact that they are state dependent, such that appropriate harvest rates depend on population abundance and weather conditions, as well as our focus on an infinite time horizon for sustainability.
Modeling of Density-Dependent Flow based on the Thermodynamically Constrained Averaging Theory
NASA Astrophysics Data System (ADS)
Weigand, T. M.; Schultz, P. B.; Kelley, C. T.; Miller, C. T.; Gray, W. G.
2016-12-01
The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for density-dependent flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as diffusion arising from gradients associated with pressure and activity; and the ability to describe both high- and low-concentration displacement. The TCAT model is presented, closure relations for the model are postulated based on microscale averages, and parameter estimation is performed on a subset of the experimental data. Due to the sharpness of the fronts, an adaptive moving mesh technique was used to ensure grid-independent solutions within the run time constraints. The optimized parameters are then used for forward simulations and compared to the set of experimental data not used for the parameter estimation.
A Computer Model for Analyzing Volatile Removal Assembly
NASA Technical Reports Server (NTRS)
Guo, Boyun
2010-01-01
A computer model simulates reactional gas/liquid two-phase flow processes in porous media. A typical process is the oxygen/wastewater flow in the Volatile Removal Assembly (VRA) in the Closed Environment Life Support System (CELSS) installed in the International Space Station (ISS). The volatile organics in the wastewater are combusted by oxygen gas to form clean water and carbon dioxide, which is dissolved in the water phase. The model predicts the oxygen gas concentration profile in the reactor, which is an indicator of reactor performance. In this innovation, a mathematical model is included in the computer model for calculating the mass transfer from the gas phase to the liquid phase. The amount of mass transfer depends on several factors, including gas-phase concentration, distribution, and reaction rate. For a given reactor dimension, these factors depend on pressure and temperature in the reactor and on the composition and flow rate of the influent.
Multivariate Boosting for Integrative Analysis of High-Dimensional Cancer Genomic Data
Xiong, Lie; Kuan, Pei-Fen; Tian, Jianan; Keles, Sunduz; Wang, Sijian
2015-01-01
In this paper, we propose a novel multivariate component-wise boosting method for fitting multivariate response regression models under the high-dimension, low-sample-size setting. Our method is motivated by modeling the association among different biological molecules based on multiple types of high-dimensional genomic data. In particular, we are interested in two applications: studying the influence of DNA copy number alterations on RNA transcript levels and investigating the association between DNA methylation and gene expression. For this purpose, we model the dependence of RNA expression levels on DNA copy number alterations and the dependence of gene expression on DNA methylation through multivariate regression models and utilize a boosting-type method to handle the high dimensionality as well as model the possible nonlinear associations. The performance of the proposed method is demonstrated through simulation studies. Finally, our multivariate boosting method is applied to two breast cancer studies. PMID:26609213
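Component-wise boosting for multivariate responses can be sketched as a toy linear L2-boosting variant (this is not the authors' implementation, which handles nonlinear base learners): at each step, the single best (predictor, response) univariate least-squares fit to the current residuals is applied with a shrinkage factor.

```python
def componentwise_boost(X, Y, steps=300, nu=0.1):
    """Toy multivariate component-wise L2 boosting (illustrative only).

    X: n x p predictors, Y: n x q responses, as lists of lists (assumed
    centred). Each step fits every (predictor, response) univariate
    least-squares slope on the current residuals and applies only the
    single best one, shrunk by the step size `nu`.
    Returns the p x q coefficient matrix.
    """
    n, p, q = len(X), len(X[0]), len(Y[0])
    B = [[0.0] * q for _ in range(p)]
    R = [row[:] for row in Y]                       # current residuals
    for _ in range(steps):
        best = None
        for j in range(p):
            sxx = sum(X[i][j] ** 2 for i in range(n)) or 1e-12
            for k in range(q):
                b = sum(X[i][j] * R[i][k] for i in range(n)) / sxx
                gain = b * b * sxx                  # SSE reduction of this fit
                if best is None or gain > best[0]:
                    best = (gain, j, k, b)
        _, j, k, b = best
        B[j][k] += nu * b
        for i in range(n):
            R[i][k] -= nu * b * X[i][j]
    return B

# Noise-free toy data with orthogonal predictors: Y[:,0] = 2*x0, Y[:,1] = -x1
X = [[1.0, 1.0], [-1.0, 1.0], [1.0, -1.0], [-1.0, -1.0]]
Y = [[2.0 * x0, -x1] for x0, x1 in X]
B = componentwise_boost(X, Y)
print(round(B[0][0], 2), round(B[1][1], 2))  # → 2.0 -1.0
```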
Incorporating signal-dependent noise for hyperspectral target detection
NASA Astrophysics Data System (ADS)
Morman, Christopher J.; Meola, Joseph
2015-05-01
The majority of hyperspectral target detection algorithms are developed from statistical data models employing stationary background statistics or white Gaussian noise models. Stationary background models are inaccurate as a result of two separate physical processes. First, varying background classes often exist in the imagery that possess different clutter statistics. Many algorithms can account for this variability through the use of subspaces or clustering techniques. The second physical process, which is often ignored, is a signal-dependent sensor noise term. For photon counting sensors that are often used in hyperspectral imaging systems, sensor noise increases as the measured signal level increases as a result of Poisson random processes. This work investigates the impact of this sensor noise on target detection performance. A linear noise model is developed describing sensor noise variance as a linear function of signal level. The linear noise model is then incorporated for detection of targets using data collected at Wright Patterson Air Force Base.
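The linear noise model described above can be sketched minimally (the coefficients are illustrative, not calibrated sensor values from the collect): variance is a signal-independent floor plus a shot-noise term growing with signal level, and a detector can then down-weight bright, noisy bands by the inverse variance.

```python
def noise_variance(signal, a=2.0, b=0.05):
    """Linear signal-dependent noise model: a signal-independent floor `a`
    (e.g. read noise) plus a shot-noise term growing linearly with the
    measured signal, var = a + b * signal. Coefficients are illustrative."""
    return a + b * signal

# Inverse-variance weights for whitening a detection statistic, per band:
bands = [100.0, 400.0, 900.0]
weights = [1.0 / noise_variance(s) for s in bands]
print([round(w, 4) for w in weights])  # → [0.1429, 0.0455, 0.0213]
```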
Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel
2016-10-01
We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.
Performance monitoring can boost turboexpander efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
McIntire, R.
1982-07-05
This paper discusses ways of improving the productivity of the turboexpander/refrigeration system's radial expander and radial compressor through systematic review of component performance. It reviews several techniques to determine the performance of an expander and compressor. It suggests that any performance improvement program requires quantifying the performance of separate components over a range of operating conditions; estimating the increase in performance associated with any hardware change; and developing an analytical (computer) model of the entire system by using the performance curve of individual components. The model is used to quantify the economic benefits of any change in the system, either a change in operating procedures or a hardware modification. Topics include proper ways of using antisurge control valves and modifying flow rate/shaft speed (Q/N). It is noted that compressor efficiency depends on the incidence angle of the blade at the rotor leading edge and the angle of the incoming gas stream.
A hybrid variational ensemble data assimilation for the HIgh Resolution Limited Area Model (HIRLAM)
NASA Astrophysics Data System (ADS)
Gustafsson, N.; Bojarova, J.; Vignes, O.
2014-02-01
A hybrid variational ensemble data assimilation has been developed on top of the HIRLAM variational data assimilation. It provides the possibility of applying a flow-dependent background error covariance model during the data assimilation at the same time as full rank characteristics of the variational data assimilation are preserved. The hybrid formulation is based on an augmentation of the assimilation control variable with localised weights to be assigned to a set of ensemble member perturbations (deviations from the ensemble mean). The flow-dependency of the hybrid assimilation is demonstrated in single simulated observation impact studies and the improved performance of the hybrid assimilation in comparison with pure 3-dimensional variational as well as pure ensemble assimilation is also proven in real observation assimilation experiments. The performance of the hybrid assimilation is comparable to the performance of the 4-dimensional variational data assimilation. The sensitivity to various parameters of the hybrid assimilation scheme and the sensitivity to the applied ensemble generation techniques are also examined. In particular, the inclusion of ensemble perturbations with a lagged validity time has been examined with encouraging results.
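The hybrid background-error covariance at the core of such schemes is, conceptually, a weighted sum of a static (climatological) covariance and a localized ensemble sample covariance. A minimal sketch with an invented two-variable state and equal weights (not HIRLAM code, which works through an augmented control variable rather than an explicit matrix):

```python
def hybrid_covariance(B_static, perturbations, beta_static=0.5, beta_ens=0.5,
                      loc=None):
    """Hybrid background-error covariance: weighted sum of a static
    covariance and the sample covariance of ensemble perturbations
    (deviations from the ensemble mean), optionally Schur-localised
    element-wise via `loc`. Weights and state size are illustrative."""
    n, m = len(B_static), len(perturbations)
    B_ens = [[sum(p[i] * p[j] for p in perturbations) / (m - 1)
              for j in range(n)] for i in range(n)]
    if loc is not None:
        B_ens = [[B_ens[i][j] * loc[i][j] for j in range(n)] for i in range(n)]
    return [[beta_static * B_static[i][j] + beta_ens * B_ens[i][j]
             for j in range(n)] for i in range(n)]

B_static = [[1.0, 0.0], [0.0, 1.0]]
perts = [[1.0, 1.0], [-1.0, -1.0], [0.0, 0.0]]  # already mean-centred
print(hybrid_covariance(B_static, perts))  # → [[1.0, 0.5], [0.5, 1.0]]
```

The flow-dependency enters through the perturbations: correlations present in the ensemble (here, between the two state variables) appear in the hybrid covariance even when the static part is diagonal.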
New scheduling rules for a dynamic flexible flow line problem with sequence-dependent setup times
NASA Astrophysics Data System (ADS)
Kia, Hamidreza; Ghodsypour, Seyed Hassan; Davoudpour, Hamid
2017-09-01
In the literature, multi-objective dynamic scheduling problems and simple priority rules are widely studied. While simple priority rules are not efficient enough, owing to their simplicity and lack of general insight, composite dispatching rules perform well because they are derived from experiments. In this paper, a dynamic flexible flow line problem with sequence-dependent setup times is studied. The objective of the problem is minimization of mean flow time and mean tardiness. A 0-1 mixed integer model of the problem is formulated. Since the problem is NP-hard, four new composite dispatching rules are proposed to solve it by applying a genetic programming framework and choosing proper operators. Furthermore, a discrete-event simulation model is built to examine the performance of scheduling rules, considering the four new heuristic rules and six heuristic rules adapted from the literature. The experimental results show that the composite dispatching rules produced by genetic programming outperform the others in minimizing mean flow time and mean tardiness.
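A composite dispatching rule of the kind evolved by genetic programming can be sketched as a weighted arithmetic combination of job attributes; the weights and attribute names below are illustrative, not the rules from the paper.

```python
def composite_priority(job, now, w=(1.0, 0.5, 0.2)):
    # hypothetical composite rule mixing processing time, the
    # sequence-dependent setup time, and due-date slack
    slack = job["due"] - now - job["proc"]
    return w[0] * job["proc"] + w[1] * job["setup"] + w[2] * slack

def dispatch(queue, now):
    # schedule the job with the smallest composite priority value
    return min(queue, key=lambda j: composite_priority(j, now))
```

Genetic programming searches over such arithmetic combinations (operators and weights), which is why the resulting rules outperform single-attribute rules like shortest-processing-time.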
Vakanski, A; Ferguson, JM; Lee, S
2016-01-01
Objective: The objective of the proposed research is to develop a methodology for modeling and evaluation of human motions, which will potentially benefit patients undertaking a physical rehabilitation therapy (e.g., following a stroke or due to other medical conditions). The ultimate aim is to allow patients to perform home-based rehabilitation exercises using a sensory system for capturing the motions, where an algorithm will retrieve the trajectories of a patient's exercises, perform data analysis by comparing the performed motions to a reference model of prescribed motions, and send the analysis results to the patient's physician with recommendations for improvement. Methods: The modeling approach employs an artificial neural network, consisting of layers of recurrent neuron units and layers of neuron units for estimating a mixture density function over the spatio-temporal dependencies within the human motion sequences. Input data are sequences of motions related to an exercise prescribed by a physiotherapist to a patient and recorded with a motion capture system. An autoencoder subnet is employed for reducing the dimensionality of captured sequences of human motions, complemented with a mixture density subnet for probabilistic modeling of the motion data using a mixture of Gaussian distributions. Results: The proposed neural network architecture produced a model for sets of human motions represented with a mixture of Gaussian density functions. The mean log-likelihood of observed sequences was employed as a performance metric in evaluating the consistency of a subject's performance relative to the reference dataset of motions. A publicly available dataset of human motions captured with Microsoft Kinect was used for validation of the proposed method. Conclusion: The article presents a novel approach for modeling and evaluation of human motions with a potential application in home-based physical therapy and rehabilitation.
The described approach employs the recent progress in the field of machine learning and neural networks in developing a parametric model of human motions, by exploiting the representational power of these algorithms to encode nonlinear input-output dependencies over long temporal horizons. PMID:28111643
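The mean log-likelihood metric used to score a subject's performance can be sketched in one dimension as follows: a simplified stand-in for the mixture-density subnet's output, with illustrative mixture parameters.

```python
import math

def mixture_loglik(x, weights, means, stds):
    # log p(x) under a 1-D mixture of Gaussians
    p = sum(w * math.exp(-0.5 * ((x - m) / s) ** 2)
            / (s * math.sqrt(2.0 * math.pi))
            for w, m, s in zip(weights, means, stds))
    return math.log(p)

def mean_loglik(sequence, weights, means, stds):
    # consistency score of a performed motion sequence vs. the model:
    # higher average log-likelihood = closer to the reference motions
    return sum(mixture_loglik(x, weights, means, stds)
               for x in sequence) / len(sequence)
```

In the paper's setting the mixture parameters are produced per time step by the network; here they are fixed to keep the metric itself visible.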
Rebaudo, François; Faye, Emile; Dangles, Olivier
2016-01-01
A large body of literature has recently recognized the role of microclimates in controlling the physiology and ecology of species, yet the relevance of fine-scale climatic data for modeling species performance and distribution remains a matter of debate. Using a 6-year monitoring of three potato moth species, major crop pests in the tropical Andes, we asked whether the spatiotemporal resolution of temperature data affects the predictions of models of moth performance and distribution. For this, we used three different climatic data sets: (i) the WorldClim dataset (global dataset), (ii) air temperature recorded using data loggers (weather station dataset), and (iii) air crop canopy temperature (microclimate dataset). We developed a statistical procedure to calibrate all datasets to monthly and yearly variation in temperatures, while keeping both spatial and temporal variances (air monthly temperature at 1 km² for the WorldClim dataset, air hourly temperature for the weather station, and air minute temperature over 250 m radius disks for the microclimate dataset). Then, we computed pest performances based on these three datasets. Results for temperatures ranging from 9 to 11°C revealed discrepancies in the simulation outputs in both survival and development rates depending on the spatiotemporal resolution of the temperature dataset. Temperature and simulated pest performances were then combined into multiple linear regression models to compare predicted vs. field data. We used an additional set of study sites to test the ability of our model results to be extrapolated over larger scales. Results showed that the model implemented with microclimatic data best predicted observed pest abundances for our study sites, but was less accurate than the global dataset model when applied at larger scales.
Our simulations therefore stress the importance of choosing the temperature dataset according to the question at hand in order to accurately predict species abundances. In conclusion, keeping in mind that the mismatch between the size of organisms and the scale at which climate data are collected and modeled remains a key issue, temperature dataset selection should be balanced against the desired output spatiotemporal scale for better predicting pest dynamics and developing efficient pest management strategies.
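Why the dataset's resolution matters can be illustrated with a minimal rate model: averaging a nonlinear (here, thresholded) development rate over fine-grained temperatures differs from evaluating the rate at the mean temperature. The threshold and slope below are hypothetical, not the moths' measured parameters.

```python
def development_rate(T, T0=8.0, k=0.01):
    # hypothetical linear degree-day rate above a threshold T0 (°C)
    return max(0.0, k * (T - T0))

def mean_rate(temps):
    # rate averaged over a temperature series; with a nonlinear rate
    # curve this differs from the rate at the mean temperature
    return sum(development_rate(T) for T in temps) / len(temps)
```

For example, a series alternating between 7°C and 13°C has a mean of 10°C, but mean_rate([7, 13]) = 0.025 while development_rate(10) = 0.02: coarse (averaged) temperature data systematically misestimates performance near thresholds.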
Shipboard Electrical System Modeling for Early-Stage Design Space Exploration
2013-04-01
method is demonstrated in several system studies. I. INTRODUCTION The integrated engineering plant (IEP) of an electric warship can be viewed as a... which it must operate [2], [4]. The desired IEP design should be dependable [5]. The operability metric has previously been defined as a measure of... the performance of an IEP during a specific scenario [2]. Dependability metrics have been derived from the operability metric as measures of the IEP
Three-dimensional effects for radio frequency antenna modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carter, M.D.; Batchelor, D.B.; Stallings, D.C.
1994-10-15
Electromagnetic field calculations for radio frequency (rf) antennas in two dimensions (2-D) neglect finite antenna length effects as well as the feeders leading to the main current strap. The 2-D calculations predict that the return currents in the sidewalls of the antenna structure depend strongly on the plasma parameters, but this prediction is suspect because of experimental evidence. To study the validity of the 2-D approximation, the Multiple Antenna Implementation System (MAntIS) has been used to perform three-dimensional (3-D) modeling of the power spectrum, plasma loading, and inductance for a relevant loop antenna design. Effects on antenna performance caused by feeders to the main current strap and conducting sidewalls are considered. The modeling shows that the feeders affect the launched power spectrum in an indirect way by forcing the driven rf current to return in the antenna structure rather than the plasma, as in the 2-D model. It has also been found that poloidal dependencies in the plasma impedance matrix can reduce the loading from that predicted in the 2-D model. For some plasma parameters, the combined 3-D effects can lead to a reduction in the predicted loading by as much as a factor of 2 from that given by the 2-D model, even with end-effect corrections for the 2-D model.
NASA Astrophysics Data System (ADS)
Pogosov, V. V.; Reva, V. I.
2018-04-01
Self-consistent computations of the monovacancy formation energy are performed for Na_N, Mg_N, and Al_N (12 < N ≤ 168) spherical clusters within the stable-jellium drop model. Scenarios of Schottky vacancy formation and "bubble vacancy blowing" are considered. It is shown that the asymptotic behavior of the size dependences of the vacancy formation energy differs between these two mechanisms, and that the difference between the characteristics of a charged and a neutral cluster is entirely determined by the difference between the ionization potentials of the clusters and the energies of electron attachment to them.
Nitramine smokeless propellant research
NASA Technical Reports Server (NTRS)
1977-01-01
A transient ballistics and combustion model was derived to represent the closed vessel experiment that is widely used to characterize propellants. The model incorporates the nitramine combustion mechanisms. A computer program was developed to solve the time-dependent equations and was applied to explain aspects of closed vessel behavior. It is found that the rate of pressurization in the closed vessel is insufficient, at pressures of interest, to augment the burning rate through time-dependent processes. A series of T-burner experiments was performed to compare the combustion instability characteristics of nitramine (HMX) propellants and ammonium perchlorate (AP) propellants. It is found that the inclusion of HMX consistently renders the propellant more stable.
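The time-dependent closed-vessel equations can be sketched, in drastically simplified form, as a single pressurization ODE with a Vieille-type burn law; all constants below are illustrative, not the report's nitramine parameters.

```python
def closed_vessel_pressure(P0=1.0e6, a=1.0e-3, n=0.9, c=5.0e2,
                           dt=1.0e-4, steps=1000):
    # toy model: dP/dt = c * r(P), with burning rate r = a * P**n
    # (Vieille's law); explicit Euler integration in time
    P = P0
    for _ in range(steps):
        P += dt * c * a * P ** n
    return P
```

Even this toy form shows the key qualitative feature: pressure rises monotonically but smoothly, so the pressurization rate stays far below what would be needed to augment the burning rate dynamically.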
Xuan Chi; Barry Goodwin
2012-01-01
Spatial and temporal relationships among agricultural prices have been an important topic of applied research for many years. Such research is used to investigate the performance of markets and to examine linkages up and down the marketing chain. This research has empirically evaluated price linkages by using correlation and regression models and, later, linear and...
ERIC Educational Resources Information Center
Tak, Susanne; Plaisier, Marco; van Rooij, Iris
2008-01-01
To explain human performance on the "Traveling Salesperson" problem (TSP), MacGregor, Ormerod, and Chronicle (2000) proposed that humans construct solutions according to the steps described by their convex-hull algorithm. Focusing on tour length as the dependent variable, and using only random or semirandom point sets, the authors…
Coupled modelling of groundwater flow-heat transport for assessing river-aquifer interactions
NASA Astrophysics Data System (ADS)
Engeler, I.; Hendricks Franssen, H. J.; Müller, R.; Stauffer, F.
2010-05-01
A three-dimensional finite element model for coupled variably saturated groundwater flow and heat transport was developed for the aquifer below the city of Zurich. The piezometric heads in the aquifer are strongly influenced by the river Limmat, which, in the model region, loses water to the aquifer. The river-aquifer interaction was modelled with the standard linear leakage concept. Coupling was implemented by considering the temperature dependence of the hydraulic conductivity and of the leakage coefficient (via water viscosity), together with density-dependent transport. Calibration was performed for isothermal conditions by inverse modelling using the pilot point method. Independent model testing was carried out by residual analysis against the available dense monitoring network for both piezometric heads and groundwater temperature. The comparison of model results and measurements showed high accuracy for temperature, except in the southern part of the model area, where substantial geological heterogeneity is expected that could not be reproduced by the model. The comparison of simulated and measured heads showed that, especially in the vicinity of the river Limmat, model results were improved by a temperature-dependent leakage coefficient. Residuals were reduced by up to 30% compared to isothermal leakage coefficients. This holds particularly for regions where the river stage is considerably above the groundwater level. Furthermore, additional analysis confirmed prior findings that seepage rates during flood events cannot be reproduced with the implemented linear leakage concept. Infiltration during flood events is larger than expected, which can potentially be attributed to additional infiltration areas.
It is concluded that the temperature-dependent leakage concept significantly improves the model results for this study area, and we expect this to hold for other areas as well.
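The viscosity coupling of the leakage coefficient can be sketched with a Vogel-type viscosity correlation; the coefficients below are common textbook values for liquid water, and the study's exact formulation may differ.

```python
def water_viscosity(T_kelvin):
    # Vogel-type correlation for the dynamic viscosity of liquid
    # water (Pa*s); valid roughly between 0 and 100 °C
    return 2.414e-5 * 10.0 ** (247.8 / (T_kelvin - 140.0))

def leakage_coefficient(L_ref, T_kelvin, T_ref=283.15):
    # leakage scales like hydraulic conductivity, i.e. inversely with
    # viscosity, so warmer river water infiltrates more readily
    return L_ref * water_viscosity(T_ref) / water_viscosity(T_kelvin)
```

This is why an isothermal leakage coefficient leaves systematic head residuals near the river: seasonal river temperature swings modulate the effective leakage by tens of percent.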
Synthetic Proxy Infrastructure for Task Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Junghans, Christoph; Pavel, Robert
The Synthetic Proxy Infrastructure for Task Evaluation is a proxy application designed to support application developers in gauging the performance of various task granularities when determining how best to utilize task-based programming models. The infrastructure provides examples of common communication patterns with a synthetic workload, intended to supply performance data for evaluating programming model and platform overheads when choosing a task granularity for task decomposition. It is presented as a reference implementation of a proxy application with run-time configurable input and output task dependencies, ranging from an embarrassingly parallel scenario to patterns with stencil-like dependencies on nearest neighbors. Once all inputs (if any) are satisfied, each task executes a synthetic workload (a simple DGEMM in this case) of varying size and passes all outputs (if any) to the next tasks. The intent is for this reference implementation to be re-implemented as a proxy app in different programming models, providing the same infrastructure and allowing application developers to simulate their own communication needs to assist in task decomposition under various models on a given platform.
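The task pattern described above can be sketched as follows; this is not the released code, just a serial illustration of a nearest-neighbour dependency chain feeding a small matrix multiply as the synthetic DGEMM-like workload.

```python
import numpy as np

def run_chain(n_tasks, size=32, seed=0):
    # each task consumes its left neighbour's output (stencil-like
    # dependency) and performs a small matmul standing in for DGEMM;
    # the first task starts from the identity (no inputs to satisfy)
    rng = np.random.default_rng(seed)
    prev = np.eye(size)
    outputs = []
    for _ in range(n_tasks):
        work = rng.random((size, size))
        prev = work @ prev              # synthetic workload
        outputs.append(prev)
    return outputs
```

Varying `size` changes the task granularity; timing this chain against an embarrassingly parallel variant (drop the dependency on `prev`) exposes the scheduling overhead a runtime adds per task.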
Analyses on hydrophobicity and attractiveness of all-atom distance-dependent potentials
Shirota, Matsuyuki; Ishida, Takashi; Kinoshita, Kengo
2009-01-01
Accurate model evaluation is a crucial step in protein structure prediction. For this purpose, statistical potentials, which evaluate a model structure based on the observed atomic distance frequencies in comparison with those in reference states, have been widely used. The reference state is a virtual state in which all of the atomic interactions are turned off, and it provides a standard against which to measure the observed frequencies. In this study, we examined seven all-atom distance-dependent potentials with different reference states. We observed that the variations of atom pair composition and of distance distributions in the reference states produced systematic changes in the hydrophobic and attractive characteristics of the potentials. The performance evaluations with the CASP7 structures indicated that a preference for hydrophobic interactions improved the correlation between the energy and the GDT-TS score, but decreased the Z-score of the native structure. The attractiveness of the potential improved both the correlation and the Z-score for template-based modeling targets, but the benefit was smaller for free modeling targets. These results indicate that the performances of the potentials were more strongly influenced by their characteristics than by the accuracy of the definitions of the reference states. PMID:19588493
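The inverse-Boltzmann form common to such distance-dependent potentials can be sketched as below; the reference state enters through the expected count, and the counts in the example are hypothetical.

```python
import math

def pair_score(n_obs, n_ref, kT=1.0):
    # E(a, b, d) = -kT * ln(N_obs / N_ref) for atom types a, b in
    # distance bin d; N_obs comes from known structures, N_ref from
    # the chosen reference state. Negative = favourable contact.
    return -kT * math.log(n_obs / n_ref)
```

A model structure is scored by summing `pair_score` over all atom pairs and distance bins; changing the reference state changes N_ref, which is exactly how the hydrophobic and attractive biases discussed above arise.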
A Robust Bayesian Random Effects Model for Nonlinear Calibration Problems
Fong, Y.; Wakefield, J.; De Rosa, S.; Frahm, N.
2013-01-01
Summary: In the context of a bioassay or an immunoassay, calibration means fitting a curve, usually nonlinear, through the observations collected on a set of samples containing known concentrations of a target substance, and then using the fitted curve and observations collected on samples of interest to predict the concentrations of the target substance in those samples. Recent technological advances have greatly improved our ability to quantify minute amounts of substance from a tiny volume of biological sample. This has in turn led to a need to improve statistical methods for calibration. In this paper, we focus on developing calibration methods robust to dependent outliers. We introduce a novel normal mixture model with dependent error terms to model the experimental noise. In addition, we propose a re-parameterization of the five-parameter logistic nonlinear regression model that allows us to better incorporate prior information. We examine the performance of our methods with simulation studies and show that they lead to a substantial increase in performance measured in terms of mean squared error of estimation and a measure of average prediction accuracy. A real data example from the HIV Vaccine Trials Network Laboratory is used to illustrate the methods. PMID:22551415
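The five-parameter logistic (5PL) curve at the heart of such calibration can be sketched as below, using the common convention of a and d for the asymptotes, c for the location, b for the slope, and g for the asymmetry; the paper's re-parameterization differs from this standard form.

```python
def logistic5(x, a, b, c, d, g):
    # standard 5PL curve: f(x) = d + (a - d) / (1 + (x / c)**b)**g;
    # with g = 1 it reduces to the symmetric 4PL
    return d + (a - d) / (1.0 + (x / c) ** b) ** g
```

Calibration fits (a, b, c, d, g) to standards of known concentration, then inverts the fitted curve to read concentrations off observed responses for the samples of interest.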
Complex Sequencing Rules of Birdsong Can be Explained by Simple Hidden Markov Processes
Katahira, Kentaro; Suzuki, Kenta; Okanoya, Kazuo; Okada, Masato
2011-01-01
Complex sequencing rules observed in birdsongs provide an opportunity to investigate the neural mechanism for generating complex sequential behaviors. To relate findings from birdsong studies to other sequential behaviors such as human speech and musical performance, it is crucial to characterize the statistical properties of the sequencing rules in birdsongs. However, these properties have not yet been fully addressed. In this study, we investigate the statistical properties of the complex birdsong of the Bengalese finch (Lonchura striata var. domestica). Based on manually annotated syllable labels, we first show that there are significant higher-order context dependencies in Bengalese finch songs; that is, which syllable appears next depends on more than one previous syllable. We then analyze acoustic features of the song and show that higher-order context dependencies can be explained using first-order hidden state transition dynamics with redundant hidden states. This model corresponds to hidden Markov models (HMMs), well-known statistical models with a wide range of applications in time series modeling. Song annotation with these first-order hidden state models agreed well with manual annotation; the score was comparable to that of a second-order HMM and surpassed the zeroth-order model (the Gaussian mixture model, GMM), which does not use context information. Our results imply that hierarchical representation with hidden state dynamics may underlie the neural implementation for generating complex behavioral sequences with higher-order dependencies. PMID:21915345
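How redundant hidden states let a first-order chain produce higher-order visible dependencies can be shown with a minimal sketch; the states and syllables below are invented for illustration, not Bengalese finch data.

```python
import random

def sample_song(trans, emit, start, length, seed=0):
    # first-order hidden-state dynamics: the next state depends only on
    # the current state, yet two hidden states may emit the same
    # syllable, so the visible sequence shows longer-range context
    rng = random.Random(seed)
    state, syllables = start, []
    for _ in range(length):
        syllables.append(emit[state])
        nxt = list(trans[state])
        state = rng.choices(nxt, weights=[trans[state][s] for s in nxt])[0]
    return "".join(syllables)
```

With hidden states A1 and A2 both emitting "a" but leading to "b" and "c" respectively, the syllable after an "a" depends on what preceded that "a": a second-order rule at the syllable level, first-order at the state level.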
ERIC Educational Resources Information Center
Cox, Cody B.; Yang, Yan; Dicke-Bohmann, Amy K.
2014-01-01
The purpose of this study was to propose and test a model of the effects of cultural factors on Hispanic protégés' expectations for and experiences with their mentors. Specifically, the proposed model posits that cultural orientation predicts the mentorship functions protégés desire, and the positive impact of these mentorship functions depends on…
CFDP Performance over Weather-dependent Ka-band Channel
NASA Technical Reports Server (NTRS)
Sung, I. U.; Gao, Jay L.
2006-01-01
This study presents an analysis of the delay performance of the CCSDS File Delivery Protocol (CFDP) over a weather-dependent Ka-band channel. The Ka-band channel condition is determined by the strength of the atmospheric noise temperature, which is weather dependent. Noise temperature data collected from the Deep Space Network (DSN) Madrid site is used to characterize the correlations between good and bad channel states in a two-state Markov model. Specifically, the probability distribution of file delivery latency using the CFDP deferred Negative Acknowledgement (NAK) mode is derived and quantified. Deep space communication scenarios with different file sizes and bit error rates (BERs) are studied and compared. Furthermore, we also examine the sensitivity of our analysis with respect to different data sampling methods. Our analysis shows that while the weather-dependent channel results in only fairly small increases in the average number of CFDP retransmissions required, the maximum number of transmissions required to complete delivery at the 99th percentile is, on the other hand, significantly larger for the weather-dependent channel, due to the significant correlation of poor weather states.
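The two-state weather channel can be sketched as a Markov chain driving per-attempt loss; the probabilities below are illustrative, not the DSN Madrid statistics, and the delivery process is reduced to a single repeated attempt rather than full CFDP NAK signalling.

```python
import random

def attempts_to_deliver(p_stay_good, p_stay_bad, loss_good, loss_bad,
                        seed=1):
    # good/bad weather follows a two-state Markov chain; count the
    # (re)transmission attempts until one succeeds
    rng = random.Random(seed)
    state, attempts = "good", 0
    while True:
        attempts += 1
        loss = loss_good if state == "good" else loss_bad
        if rng.random() >= loss:                 # attempt got through
            return attempts
        stay = p_stay_good if state == "good" else p_stay_bad
        if rng.random() >= stay:                 # weather state flips
            state = "bad" if state == "good" else "good"
```

Because bad states persist (high p_stay_bad), failures cluster: the mean attempt count barely moves, but the tail of the distribution grows, which is the 99th-percentile effect found in the study.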
ACME Priority Metrics (A-PRIME)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Katherine J; Zender, Charlie; Van Roekel, Luke
A-PRIME is a collection of scripts designed to provide Accelerated Climate Model for Energy (ACME) model developers and analysts with a variety of analyses needed to determine whether the model is producing the desired results, depending on the goals of the simulation. The software is based on csh scripts at the top level, enabling scientists to provide the input parameters. Within the scripts, the csh scripts call code that performs the postprocessing of the raw data and creates plots for visual assessment.
NASA Astrophysics Data System (ADS)
Evin, Guillaume; Favre, Anne-Catherine; Hingray, Benoit
2018-02-01
We present a multi-site stochastic model for the generation of average daily temperature, which includes a flexible parametric distribution and a multivariate autoregressive process. Different versions of this model are applied to a set of 26 stations located in Switzerland. The importance of specific statistical characteristics of the model (seasonality, marginal distributions of standardized temperature, spatial and temporal dependence) is discussed. In particular, the proposed marginal distribution is shown to improve the reproduction of extreme temperatures (minima and maxima). We also demonstrate that the frequency and duration of cold spells and heat waves are dramatically underestimated when the autocorrelation of temperature is not taken into account in the model. An adequate representation of these characteristics can be crucial depending on the field of application, and we discuss potential implications in different contexts (agriculture, forestry, hydrology, human health).
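The multivariate autoregressive core of such a generator can be sketched as below for standardized temperature; this is a simplified stand-in for the model in the paper, with an assumed AR(1) structure and an illustrative Cholesky factor carrying the inter-site dependence.

```python
import numpy as np

def simulate_temps(n_days, phi, L, seed=0):
    # multivariate AR(1): x_t = phi * x_{t-1} + L @ eps_t, where L is
    # a Cholesky factor of the inter-site covariance and eps_t is
    # standard normal noise; x is standardized temperature per site
    rng = np.random.default_rng(seed)
    n_sites = L.shape[0]
    x = np.zeros((n_days, n_sites))
    for t in range(1, n_days):
        x[t] = phi * x[t - 1] + L @ rng.standard_normal(n_sites)
    return x
```

Setting phi = 0 removes the temporal autocorrelation: runs of consecutive warm or cold days become too short, which is exactly why the paper finds heat waves and cold spells dramatically underestimated without it.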
Managing Analysis Models in the Design Process
NASA Technical Reports Server (NTRS)
Briggs, Clark
2006-01-01
Design of large, complex space systems depends on significant model-based support for exploration of the design space. Integrated models predict system performance in mission-relevant terms given design descriptions and multiple physics-based numerical models. Both the design activities and the modeling activities warrant explicit process definitions and active process management to protect the project from excessive risk. Software and systems engineering processes have been formalized and similar formal process activities are under development for design engineering and integrated modeling. JPL is establishing a modeling process to define development and application of such system-level models.
A Method to Assess Flux Hazards at CSP Plants to Reduce Avian Mortality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Clifford K.; Wendelin, Timothy; Horstman, Luke
A method to evaluate avian flux hazards at concentrating solar power (CSP) plants has been developed. A heat-transfer model has been coupled to simulations of the irradiance in the airspace above a CSP plant to determine the feather temperature along prescribed bird flight paths. Probabilistic modeling results show that the irradiance and assumed feather properties (thickness, absorptance, heat capacity) have the most significant impact on the simulated feather temperature, which can increase rapidly (hundreds of degrees Celsius in seconds) depending on the parameter values. The avian flux hazard model is being combined with a plant performance model to identify alternative heliostat standby aiming strategies that minimize both avian flux hazards and negative impacts on plant performance.
Physical and numerical modeling of hydrophysical processes at the site of underwater pipelines
NASA Astrophysics Data System (ADS)
Garmakova, M. E.; Degtyarev, V. V.; Fedorova, N. N.; Shlychkov, V. A.
2018-03-01
The paper outlines issues related to ensuring the operational safety of underwater pipelines that are at risk of accidents. The research is based on physical and mathematical modeling of local bottom erosion in the area of the pipeline location. The experimental studies were performed at the Hydraulics Laboratory of the Department of Hydraulic Engineering Construction, Safety and Ecology of NSUACE (Sibstrin). The physical experiments revealed that the intensity of bottom-soil reshaping depends on the burial depth of the pipeline. The ANSYS software was used for numerical modeling of the erosion of the sandy bottom beneath the pipeline, and computational results were compared at various mass flow rates.
A method to assess flux hazards at CSP plants to reduce avian mortality
NASA Astrophysics Data System (ADS)
Ho, Clifford K.; Wendelin, Timothy; Horstman, Luke; Yellowhair, Julius
2017-06-01
A method to evaluate avian flux hazards at concentrating solar power plants (CSP) has been developed. A heat-transfer model has been coupled to simulations of the irradiance in the airspace above a CSP plant to determine the feather temperature along prescribed bird flight paths. Probabilistic modeling results show that the irradiance and assumed feather properties (thickness, absorptance, heat capacity) have the most significant impact on the simulated feather temperature, which can increase rapidly (hundreds of degrees Celsius in seconds) depending on the parameter values. The avian flux hazard model is being combined with a plant performance model to identify alternative heliostat standby aiming strategies that minimize both avian flux hazards and negative impacts on plant performance.
Ion thruster performance model
NASA Technical Reports Server (NTRS)
Brophy, J. R.
1984-01-01
A model of ion thruster performance is developed for high flux density, cusped magnetic field thruster designs. This model is formulated in terms of the average energy required to produce an ion in the discharge chamber plasma and the fraction of these ions that are extracted to form the beam. The direct loss of high energy (primary) electrons from the plasma to the anode is shown to have a major effect on thruster performance. The model provides simple algebraic equations enabling one to calculate the beam ion energy cost, the average discharge chamber plasma ion energy cost, the primary electron density, the primary-to-Maxwellian electron density ratio and the Maxwellian electron temperature. Experiments indicate that the model correctly predicts the variation in plasma ion energy cost for changes in propellant gas (Ar, Kr and Xe), grid transparency to neutral atoms, beam extraction area, discharge voltage, and discharge chamber wall temperature. The model and experiments indicate that thruster performance may be described in terms of only four thruster configuration dependent parameters and two operating parameters. The model also suggests that improved performance should be exhibited by thruster designs which extract a large fraction of the ions produced in the discharge chamber, which have good primary electron and neutral atom containment and which operate at high propellant flow rates.
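The model's central bookkeeping can be illustrated in a few lines. The sketch below is our own simplification, not Brophy's full set of algebraic equations, and the function name and numbers are illustrative: it encodes the relation that the energy cost per beam ion is the discharge-chamber plasma ion energy cost divided by the fraction of produced ions that are extracted into the beam.

```python
def beam_ion_energy_cost(plasma_ion_cost_eV: float, extracted_fraction: float) -> float:
    """Energy cost per beam ion: ions produced but not extracted still cost energy,
    so a low extracted fraction inflates the effective cost per beam ion."""
    if not 0.0 < extracted_fraction <= 1.0:
        raise ValueError("extracted fraction must be in (0, 1]")
    return plasma_ion_cost_eV / extracted_fraction

# If each plasma ion costs ~150 eV to produce and 80% of ions reach the beam,
# each beam ion effectively costs 187.5 eV.
print(beam_ion_energy_cost(150.0, 0.8))
```

This is why the abstract stresses designs that "extract a large fraction of the ions produced": the same plasma ion cost yields a lower beam ion cost as the extracted fraction rises.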
Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.
Ouyang, Yicun; Yin, Hujun
2018-05-01
Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently one-step predictors, that is, they predict one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and the accumulation of uncertainty or error. The main existing approaches, iterative and independent, either apply a one-step model recursively or fit a separate model for each prediction horizon. They generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, the varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented to various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in component AR models of various predicting horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over the existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
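The two baseline strategies the VLM models are contrasted with can be sketched on a synthetic AR(1) series. This is a minimal illustration of the iterative and independent (direct) approaches only, not the VLM model itself; all names and parameter values are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic AR(1) series: x_t = 0.8 * x_{t-1} + noise
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.8 * x[t - 1] + rng.normal(scale=0.1)

def fit_ar1(series):
    a, b = series[:-1], series[1:]
    return float(a @ b / (a @ a))            # least-squares AR(1) coefficient

def iterative_forecast(series, horizon):
    """One-step model applied recursively: errors compound over the horizon."""
    phi, y, out = fit_ar1(series), series[-1], []
    for _ in range(horizon):
        y = phi * y                          # feed each prediction back in
        out.append(y)
    return out

def direct_forecast(series, horizon):
    """Independent approach: a separate regression of x_{t+h} on x_t per horizon h."""
    out = []
    for h in range(1, horizon + 1):
        a, b = series[:-h], series[h:]
        out.append(float(a @ b / (a @ a)) * series[-1])
    return out

print(iterative_forecast(x, 3))
print(direct_forecast(x, 3))
```

Neither baseline models the joint dependence of the points inside the horizon, which is the gap the VLM mixture is designed to fill.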
The Influence of Intrinsic Framework Flexibility on Adsorption in Nanoporous Materials
Witman, Matthew; Ling, Sanliang; Jawahery, Sudi; ...
2017-03-30
For applications of metal–organic frameworks (MOFs) such as gas storage and separation, flexibility is often seen as a parameter that can tune material performance. In this work we aim to determine the optimal flexibility for the shape-selective separation of similarly sized molecules (e.g., Xe/Kr mixtures). To obtain systematic insight into how flexibility impacts this type of separation, we develop a simple analytical model that predicts a material’s Henry-regime adsorption and selectivity as a function of flexibility. We elucidate the complex dependence of selectivity on a framework’s intrinsic flexibility, whereby performance is either improved or reduced with increasing flexibility, depending on the material’s pore size characteristics. However, the selectivity of a material with the pore size and chemistry that already maximizes selectivity in the rigid approximation is continuously diminished with increasing flexibility, demonstrating that the globally optimal separation exists within an entirely rigid pore. Molecular simulations show that our simple model predicts performance trends that are observed when screening the adsorption behavior of flexible MOFs. These flexible simulations provide better agreement with experimental adsorption data in a high-performance material than is achieved when modeling this framework as rigid, an approximation typically made in high-throughput screening studies. We conclude that, for shape-selective adsorption applications, the globally optimal material will have the optimal pore size/chemistry and minimal intrinsic flexibility, even though other, nonoptimal materials’ selectivity can actually be improved by flexibility. Equally important, we find that flexible simulations can be critical for correctly modeling adsorption in these types of systems.
The Influence of Intrinsic Framework Flexibility on Adsorption in Nanoporous Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witman, Matthew; Ling, Sanliang; Jawahery, Sudi
For applications of metal–organic frameworks (MOFs) such as gas storage and separation, flexibility is often seen as a parameter that can tune material performance. In this work we aim to determine the optimal flexibility for the shape-selective separation of similarly sized molecules (e.g., Xe/Kr mixtures). To obtain systematic insight into how flexibility impacts this type of separation, we develop a simple analytical model that predicts a material’s Henry-regime adsorption and selectivity as a function of flexibility. We elucidate the complex dependence of selectivity on a framework’s intrinsic flexibility, whereby performance is either improved or reduced with increasing flexibility, depending on the material’s pore size characteristics. However, the selectivity of a material with the pore size and chemistry that already maximizes selectivity in the rigid approximation is continuously diminished with increasing flexibility, demonstrating that the globally optimal separation exists within an entirely rigid pore. Molecular simulations show that our simple model predicts performance trends that are observed when screening the adsorption behavior of flexible MOFs. These flexible simulations provide better agreement with experimental adsorption data in a high-performance material than is achieved when modeling this framework as rigid, an approximation typically made in high-throughput screening studies. We conclude that, for shape-selective adsorption applications, the globally optimal material will have the optimal pore size/chemistry and minimal intrinsic flexibility, even though other, nonoptimal materials’ selectivity can actually be improved by flexibility. Equally important, we find that flexible simulations can be critical for correctly modeling adsorption in these types of systems.
A Range-Normalization Model of Context-Dependent Choice: A New Model and Evidence
Camerer, Colin
2012-01-01
Most utility theories of choice assume that the introduction of an irrelevant option (called the decoy) to a choice set does not change the preference between existing options. On the contrary, a wealth of behavioral data demonstrates the dependence of preference on the decoy and on the context in which the options are presented. Nevertheless, the neural mechanisms underlying context-dependent preference are poorly understood. In order to shed light on these mechanisms, we design and perform a novel experiment to measure within-subject decoy effects. We find within-subject decoy effects similar to what has been shown previously with between-subject designs. More importantly, we find that not only are the decoy effects correlated, pointing to similar underlying mechanisms, but also that these effects increase with the distance of the decoy from the original options. To explain these observations, we construct a plausible neuronal model that can account for decoy effects based on the trial-by-trial adjustment of neural representations to the set of available options. This adjustment mechanism, which we call range normalization, occurs when the nervous system is required to represent different stimuli distinguishably while being limited to bounded neural activity. The proposed model captures our experimental observations and makes new predictions about the influence of the choice set size on the decoy effects, which are in contrast to those of previous models of context-dependent choice preference. Critically, unlike previous psychological models, the computational resource required by our range-normalization model does not increase exponentially as the set size increases. Our results show that context-dependent choice behavior, which is commonly perceived as an irrational response to the presence of irrelevant options, could be a natural consequence of the biophysical limits of neural representation in the brain. PMID:22829761
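The core of the range-normalization idea admits a deterministic caricature: if represented values are rescaled by the range of the current choice set, an irrelevant low-value decoy stretches the range and compresses the represented difference between the original options. The numbers below are illustrative, and this sketch omits the trial-by-trial neural adjustment described in the abstract.

```python
def range_normalize(values):
    """Scale each option's value by the range of the current choice set,
    mimicking a bounded neural representation adapted to the options on offer."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Two target options alone:
print(range_normalize([3.0, 4.0]))       # [0.0, 1.0]
# Adding a low-value decoy stretches the range: the gap between the two
# original options shrinks from 1.0 to about 0.33.
print(range_normalize([3.0, 4.0, 1.0]))
```

With noisy (rather than exact) representations, a compressed gap translates into more frequent preference reversals, which is one intuition for why decoy distance matters.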
Multivariate Non-Symmetric Stochastic Models for Spatial Dependence Models
NASA Astrophysics Data System (ADS)
Haslauer, C. P.; Bárdossy, A.
2017-12-01
A copula-based multivariate framework allows more flexibility to describe different kinds of dependence than models relying on the confining assumption of symmetric Gaussian dependence: different quantiles can be modelled with a different degree of dependence, and it is demonstrated why this can be expected given process understanding. Maximum-likelihood-based multivariate parameter estimation yields stable and reliable results; not only are improved cross-validation-based measures of uncertainty obtained, but also a more realistic spatial structure of uncertainty compared to second-order models of dependence. As much information as is available is included in the parameter estimation: incorporating censored measurements (e.g., below the detection limit, or above the sensitive range of the measurement device) yields more realistic spatial models, and the proportion of true zeros can be jointly estimated with, and distinguished from, censored measurements, which allows estimates of the age of a contaminant in the system. Secondary information (categorical and on the ratio scale) has been used to improve the estimation of the primary variable. These copula-based multivariate statistical techniques are demonstrated on hydraulic conductivity observations at the Borden (Canada) site, the MADE site (USA), and a large regional groundwater-quality data-set in south-west Germany. Fields of spatially distributed K were simulated with identical marginal distributions and identical second-order spatial moments, yet substantially differing solute-transport characteristics when numerical tracer tests were performed. A statistical methodology is shown that allows the delineation of a boundary layer separating homogeneous parts of a spatial data-set. The effects of this boundary layer (macro structure) and the spatial dependence of K (micro structure) on solute-transport behaviour are shown.
NASA Astrophysics Data System (ADS)
Griffin, J.; Clark, D.; Allen, T.; Ghasemi, H.; Leonard, M.
2017-12-01
Standard probabilistic seismic hazard assessment (PSHA) simulates earthquake occurrence as a time-independent process. However, paleoseismic studies in slowly deforming regions such as Australia show compelling evidence that large earthquakes on individual faults cluster within active periods, followed by long periods of quiescence. Therefore, the instrumental earthquake catalog, which forms the basis of PSHA earthquake recurrence calculations, may only capture the state of the system over the period of the catalog. Together this means that the data informing our PSHA may not be truly time-independent. This poses challenges in developing PSHAs for typical design probabilities (such as 10% in 50 years probability of exceedance): Is the present state observed through the instrumental catalog useful for estimating the next 50 years of earthquake hazard? Can paleo-earthquake data, which show variations in earthquake frequency over time-scales of tens of thousands of years or more, be robustly included in such PSHA models? Can a single PSHA logic tree be useful over a range of different probabilities of exceedance? In developing an updated PSHA for Australia, decadal-scale data based on instrumental earthquake catalogs (i.e. alternative area-based source models and smoothed seismicity models) are integrated with paleo-earthquake data through inclusion of a fault source model. Use of time-dependent non-homogeneous Poisson models allows earthquake clustering to be modeled on fault sources with sufficient paleo-earthquake data. This study assesses the performance of alternative models by extracting decade-long segments of the instrumental catalog, developing earthquake probability models based on the remaining catalog, and testing performance against the extracted component of the catalog. Although this provides insights into model performance over the short term, for longer timescales it is recognised that model choice is subject to considerable epistemic uncertainty. Therefore a formal expert elicitation process has been used to assign weights to alternative models for the 2018 update to Australia's national PSHA.
NASA Astrophysics Data System (ADS)
Yan, Peng; Zhang, Yangming
2018-06-01
High-performance scanning of nano-manipulators is widely deployed in various precision engineering applications such as SPM (scanning probe microscopes), where trajectory tracking of sophisticated reference signals is a challenging control problem. The situation is further complicated when the rate-dependent hysteresis of the piezoelectric actuators and the stress-stiffening-induced nonlinear stiffness of the flexure mechanism are considered. In this paper, a novel control framework is proposed to achieve high-precision tracking of a piezoelectric nano-manipulator subjected to hysteresis and stiffness nonlinearities. An adaptive parameterized rate-dependent Prandtl-Ishlinskii model is constructed and the corresponding adaptive-inverse-model-based online compensation is derived. Meanwhile, a robust adaptive control architecture is further introduced to improve the tracking accuracy and robustness of the compensated system, where the parametric uncertainties of the nonlinear dynamics can be well eliminated by on-line estimation. Comparative experimental studies of the proposed control algorithm are conducted on a PZT-actuated nano-manipulating stage, where hysteresis modeling accuracy and excellent tracking performance are demonstrated in real-time implementations, with significant improvement over existing results.
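The rate-independent core of a Prandtl-Ishlinskii model is a weighted superposition of play (backlash) operators. The sketch below implements only that classical core; the paper's adaptive, rate-dependent extension (which makes thresholds and weights functions of the input rate and adapts them online) is omitted, and the thresholds and weights used here are arbitrary.

```python
def play_operator(u, r, y0=0.0):
    """Classical play (backlash) operator with threshold r:
    the output follows the input only once it escapes a dead band of width 2r."""
    y, out = y0, []
    for v in u:
        y = min(max(y, v - r), v + r)
        out.append(y)
    return out

def prandtl_ishlinskii(u, thresholds, weights):
    """Rate-independent PI hysteresis: weighted sum of play operators."""
    ops = [play_operator(u, r) for r in thresholds]
    return [sum(w * op[t] for w, op in zip(weights, ops))
            for t in range(len(u))]

# Up-then-down input: the output does not retrace its path (hysteresis).
print(prandtl_ishlinskii([0, 2, 0], thresholds=[0.0, 1.0], weights=[0.5, 0.5]))
```

Because each play operator is invertible in a known way, a PI model of this form admits an analytic inverse, which is what makes it attractive for the inverse-compensation scheme described in the abstract.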
Estimation of signal-dependent noise level function in transform domain via a sparse recovery model.
Yang, Jingyu; Gan, Ziqiao; Wu, Zhaoyang; Hou, Chunping
2015-05-01
This paper proposes a novel algorithm to estimate the noise level function (NLF) of signal-dependent noise (SDN) from a single image based on the sparse representation of NLFs. Noise level samples are estimated from the high-frequency discrete cosine transform (DCT) coefficients of nonlocal-grouped low-variation image patches. Then, an NLF recovery model based on the sparse representation of NLFs under a trained basis is constructed to recover NLF from the incomplete noise level samples. Confidence levels of the NLF samples are incorporated into the proposed model to promote reliable samples and weaken unreliable ones. We investigate the behavior of the estimation performance with respect to the block size, sampling rate, and confidence weighting. Simulation results on synthetic noisy images show that our method outperforms existing state-of-the-art schemes. The proposed method is evaluated on real noisy images captured by three types of commodity imaging devices, and shows consistently excellent SDN estimation performance. The estimated NLFs are incorporated into two well-known denoising schemes, nonlocal means and BM3D, and show significant improvements in denoising SDN-polluted images.
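A stripped-down version of the first step above, pooling high-frequency DCT coefficients of image blocks into a noise-level estimate, might look like the following. This is our simplification for a constant (not signal-dependent) noise level: the block size, the choice of corner coefficients, and the MAD-based estimator are assumptions, not the paper's exact algorithm, which further groups low-variation patches nonlocally and recovers a full noise level function.

```python
import numpy as np
from scipy.fft import dctn

def estimate_noise_sigma(image, block=8):
    """Pool the highest-frequency DCT coefficients of each block and convert
    their median absolute deviation into a noise-sigma estimate (MAD / 0.6745
    is the usual robust estimator of a Gaussian standard deviation)."""
    h, w = image.shape
    coeffs = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            c = dctn(image[i:i + block, j:j + block], norm="ortho")
            coeffs.extend(c[block - 2:, block - 2:].ravel())  # bottom-right corner
    return float(np.median(np.abs(coeffs)) / 0.6745)

# Smooth ramp plus Gaussian noise of known sigma: the smooth content lives in
# low frequencies, so the high-frequency corner is almost pure noise.
rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 50, 64), np.ones(64))
noisy = clean + rng.normal(scale=5.0, size=(64, 64))
print(estimate_noise_sigma(noisy))   # close to 5
```

For signal-dependent noise one would bin these per-block estimates by the block's mean intensity, which is where the paper's sparse NLF recovery takes over.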
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoynev, S.; et al.
The development of Nb3Sn quadrupole magnets for the High-Luminosity LHC upgrade is a joint venture between the US LHC Accelerator Research Program (LARP) and CERN, with the goal of fabricating large-aperture quadrupoles for the LHC interaction regions (IR). The inner triplet (low-β) NbTi quadrupoles in the IR will be replaced by the stronger Nb3Sn magnets, supporting the LHC program of a 10-fold increase in integrated luminosity after the foreseen upgrades. Previously, LARP conducted successful tests of short and long models with up to 120 mm aperture. The first short 150 mm aperture quadrupole model, MQXFS1, was assembled with coils fabricated by both CERN and LARP. The magnet demonstrated strong performance at Fermilab’s vertical magnet test facility, reaching the LHC operating limits. This paper reports the latest results from MQXFS1 tests with changed pre-stress levels. The overall magnet performance, including quench training and memory, ramp rate and temperature dependence, is also summarized.
Incorporating measurement error in n = 1 psychological autoregressive modeling.
Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
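The bias the authors describe is easy to reproduce: adding white measurement noise to a latent AR(1) series attenuates the naive lag-1 autoregressive estimate toward zero. The sketch below is our own illustration with arbitrary parameter values, not the authors' AR+WN or ARMA estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n, phi, sigma_e, sigma_m = 5000, 0.7, 1.0, 1.0

# Latent AR(1) process, then white measurement noise on top
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(scale=sigma_e)
y = x + rng.normal(scale=sigma_m, size=n)

def lag1_ar_estimate(series):
    """Naive AR(1) coefficient: lag-1 autocovariance over variance."""
    a, b = series[:-1], series[1:]
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (a @ a))

print(lag1_ar_estimate(x))   # close to the true 0.7
print(lag1_ar_estimate(y))   # attenuated toward 0 by the measurement error
```

The attenuation factor is var(x) / (var(x) + sigma_m^2), so the more of the total variance that is measurement error (30-50% in the empirical application above), the stronger the underestimation.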
Hararuk, Oleksandra; Smith, Matthew J; Luo, Yiqi
2015-06-01
Long-term carbon (C) cycle feedbacks to climate depend on the future dynamics of soil organic carbon (SOC). Current models show low predictive accuracy at simulating contemporary SOC pools, which can be improved through parameter estimation. However, major uncertainty remains in global soil responses to climate change, particularly uncertainty in how the activity of soil microbial communities will respond. To date, the role of microbes in SOC dynamics has been implicitly described by decay rate constants in most conventional global carbon cycle models. Explicitly including microbial biomass dynamics in C cycle model formulations has shown potential to improve model predictive performance when assessed against global SOC databases. This study aimed to constrain the parameters of two soil microbial models with data, evaluate the improvements in performance of those calibrated models in predicting contemporary carbon stocks, and compare the SOC responses to climate change and their uncertainties between microbial and conventional models. Microbial models with calibrated parameters explained 51% of variability in the observed total SOC, whereas a calibrated conventional model explained 41%. The microbial models, when forced with climate and soil carbon input predictions from the 5th Coupled Model Intercomparison Project (CMIP5), produced stronger soil C responses to 95 years of climate change than any of the 11 CMIP5 models. The calibrated microbial models predicted between 8% (2-pool model) and 11% (4-pool model) soil C losses, compared with CMIP5 model projections which ranged from a 7% loss to a 22.6% gain. Lastly, we observed unrealistic oscillatory SOC dynamics in the 2-pool microbial model. The 4-pool model also produced oscillations, but they were less prominent and could be avoided, depending on the parameter values. © 2014 John Wiley & Sons Ltd.
Application of the NUREG/CR-6850 EPRI/NRC Fire PRA Methodology to a DOE Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom Elicson; Bentley Harwood; Richard Yorg
2011-03-01
The application of the NUREG/CR-6850 EPRI/NRC fire PRA methodology to a DOE facility presented several challenges. This paper documents the process and discusses several insights gained during development of the fire PRA. A brief review of the tasks performed is provided, with particular focus on the following: • Tasks 5 and 14: Fire-induced risk model and fire risk quantification. A key lesson learned was to begin model development and quantification as early as possible in the project, using screening values and simplified modeling if necessary. • Tasks 3 and 9: Fire PRA cable selection and detailed circuit failure analysis. In retrospect, it would have been beneficial to perform the model development and quantification in two phases, with detailed circuit analysis applied during phase 2. This would have allowed for development of a robust model and quantification earlier in the project and would have provided insights into where to focus the detailed circuit analysis efforts. • Tasks 8 and 11: Scoping fire modeling and detailed fire modeling. More focus should be placed on detailed fire modeling and less on scoping fire modeling; this was the approach taken for the fire PRA. • Task 14: Fire risk quantification. Typically, multiple safe shutdown (SSD) components fail during a given fire scenario, so dependent failure analysis is critical to obtaining a meaningful fire risk quantification. Dependent failure analysis for the fire PRA presented several challenges, which will be discussed in the full paper.
Artificial Neural Network Models for Long Lead Streamflow Forecasts using Climate Information
NASA Astrophysics Data System (ADS)
Kumar, J.; Devineni, N.
2007-12-01
Information on season-ahead streamflow forecasts is very beneficial for the operation and management of water supply systems. Daily streamflow conditions at any particular reservoir primarily depend on atmospheric and land surface conditions, including soil moisture and snow pack. On the other hand, recent studies suggest that developing long-lead streamflow forecasts (3 months ahead) typically depends on exogenous climatic conditions, particularly Sea Surface Temperature (SST) conditions in the tropical oceans. Examples of such oceanic influences are the El Nino Southern Oscillation (ENSO) and the Pacific Decadal Oscillation (PDO). Identification of such conditions that influence the moisture transport into a given basin poses many challenges, given the nonlinear dependency between the predictors (SST) and the predictand (streamflows). In this study, we apply both linear and nonlinear dependency measures to identify the predictors that influence the winter flows into the Neuse basin. The predictor identification approach adopted here employs measures ranging from simple correlation coefficients to Spearman rank correlation for detecting nonlinear dependency. All these dependency measures are employed with a lag-3 time series of the high-flow season (January - February - March) using 75 years (1928-2002) of streamflows recorded into Falls Lake, Neuse River Basin. Developing streamflow forecasts contingent on these exogenous predictors will play an important role towards improved water supply planning and management. Recently, soft computing techniques such as artificial neural networks (ANNs) have provided an alternative method to solve complex problems efficiently. ANNs are data-driven models which train on the examples given to them, function as universal approximators, and are nonlinear in nature. This paper presents a study aimed at using climatic predictors for 3-month-lead-time streamflow forecasts.
ANN models representing the physical process of the system are developed between the identified predictors and the predictand. The predictors used are the scores of a Principal Component Analysis (PCA). The models were tested and validated. Feed-forward multi-layer perceptron (MLP) neural networks trained using the back-propagation algorithm are employed in the current study. The performance of the ANN-model forecasts is evaluated using various measures such as the correlation coefficient and the root mean square error (RMSE). The preliminary results show that ANNs can efficiently forecast long-lead-time streamflows using climatic predictors.
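The training loop described above can be sketched in NumPy: a feed-forward MLP fitted by back-propagation of squared-error gradients. The synthetic predictors and target below merely stand in for the PCA scores and streamflows; the layer sizes, learning rate, and epoch count are our illustrative choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy regression data: 3 predictors (stand-in for PCA scores) -> 1 target
X = rng.normal(size=(200, 3))
y = (0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2]).reshape(-1, 1)

W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)   # linear output
lr = 0.1

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    pred = h @ W2 + b2
    err = pred - y
    # Back-propagation: gradients of 0.5 * mean squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)         # tanh derivative
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

rmse = float(np.sqrt(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)))
print(round(rmse, 3))
```

The RMSE computed at the end is the same evaluation measure the abstract mentions; in practice it would be reported on held-out validation data rather than the training set.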
Fractional time-dependent apparent viscosity model for semisolid foodstuffs
NASA Astrophysics Data System (ADS)
Yang, Xu; Chen, Wen; Sun, HongGuang
2017-10-01
The difficulty in describing thixotropic behavior in semisolid foodstuffs is the time-dependent nature of apparent viscosity under constant shear rate. In this study, we propose a novel theoretical model via fractional derivatives to address this demand from industry. The present model adopts the critical parameter of the fractional derivative order α to describe the corresponding time-dependent thixotropic behavior. More interestingly, the parameter α provides a quantitative insight for discriminating foodstuffs. With the re-exploration of three groups of experimental data (tehineh, balangu, and natillas), the proposed methodology is validated, showing good applicability and efficiency. The results show that the present fractional apparent viscosity model performs successfully for the tested foodstuffs in the shear rate range of 50-150 s^{-1}. The fractional order α decreases with increasing temperature at low temperatures (below 50 °C) but increases with growing shear rate, while the ideal initial viscosity k decreases with increasing temperature, shear rate, and ingredient content. It is observed that the magnitude of α is capable of characterizing the thixotropy of semisolid foodstuffs.
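The simplest reduced form consistent with the behaviour described, an apparent viscosity decaying as a power law in time whose exponent plays the role of the fractional order, can be fitted by log-log least squares. This power-law form is our illustrative stand-in, not the paper's full fractional-derivative constitutive model, and the numbers are synthetic.

```python
import numpy as np

def fit_power_law(t, eta):
    """Fit eta(t) = k * t**(-alpha) by linear least squares in log-log space:
    log(eta) = log(k) - alpha * log(t)."""
    A = np.vstack([np.ones_like(t), -np.log(t)]).T
    coef, *_ = np.linalg.lstsq(A, np.log(eta), rcond=None)
    log_k, alpha = coef
    return float(np.exp(log_k)), float(alpha)

t = np.linspace(1.0, 100.0, 50)      # time under constant shear
eta = 12.0 * t ** -0.4               # synthetic thixotropic decay
k, alpha = fit_power_law(t, eta)
print(round(k, 3), round(alpha, 3))  # recovers k = 12, alpha = 0.4
```

On real rheometer data the fitted exponent would be the quantity used, as in the abstract, to compare the thixotropy of different foodstuffs at a given temperature and shear rate.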
Khalid, Ruzelan; M. Nawawi, Mohd Kamal; Kawsar, Luthful A.; Ghani, Noraida A.; Kamil, Anton A.; Mustafa, Adli
2013-01-01
M/G/C/C state-dependent queuing networks consider service rates as a function of the number of residing entities (e.g., pedestrians, vehicles, and products). However, modeling such dynamic rates is not supported in modern discrete-event simulation (DES) software. We designed an approach to work around this limitation and used it to construct an M/G/C/C state-dependent queuing model in Arena software. Using the model, we have evaluated and analyzed the impacts of various arrival rates on the throughput, the blocking probability, the expected service time and the expected number of entities in a complex network topology. Results indicated that, for each network, there is a range of arrival rates where the simulation results fluctuate drastically across replications, causing the simulation results and analytical results to exhibit discrepancies. Detailed results showing how closely the simulation and analytical results tally, in both tabular and graphical forms, together with scientific justifications, have been documented and discussed. PMID:23560037
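The analytical side that such simulations are checked against has a well-known product form (the M/G/C/C state-dependent result of Cheah and Smith): the stationary probability of n occupants is proportional to (λT)^n / (n! · f(1)···f(n)), where T is the free-flow service time and f(n) scales the service rate when n entities are present. A sketch under those assumptions, with a linear speed-density function f chosen as an example:

```python
import math

def mgcc_state_probs(lam, free_flow_time, capacity, f):
    """Stationary occupancy distribution of a single M/G/C/C state-dependent
    queue: p_n proportional to (lam*T)**n / (n! * f(1)*...*f(n))."""
    weights, prod = [1.0], 1.0
    for n in range(1, capacity + 1):
        prod *= f(n)
        weights.append((lam * free_flow_time) ** n / (math.factorial(n) * prod))
    total = sum(weights)
    return [w / total for w in weights]

cap = 20
# Assumed example: service rate falls off linearly as the corridor fills up.
f = lambda n: max(1.0 - (n - 1) / cap, 1e-6)
probs = mgcc_state_probs(lam=2.0, free_flow_time=1.5, capacity=cap, f=f)
blocking = probs[-1]   # probability an arrival finds the system full
print(round(blocking, 4))
```

Comparing this blocking probability with the replication-averaged simulation estimate is exactly the kind of tally the abstract reports, and it is where the discrepancies at certain arrival rates show up.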
Onboard Navigation Systems Characteristics
NASA Technical Reports Server (NTRS)
1979-01-01
The space shuttle onboard navigation systems characteristics are described. A standard source of equations and numerical data for use in error analyses and mission simulations related to space shuttle development is reported. The sensor characteristics described are used for shuttle onboard navigation performance assessment. The use of complete models in the studies depends on the analyses to be performed, the capabilities of the computer programs, and the availability of computer resources.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-23
...-bonding between the skin and honeycomb core. Such reworks were also performed on some rudders fitted on... as a result of de-bonding between the skin and honeycomb core. Such reworks were also performed on..., depending on findings, the application of corrective actions for those rudders where production reworks have...
SPARC GENERATED CHEMICAL PROPERTIES DATABASE FOR USE IN NATIONAL RISK ASSESSMENTS
The SPARC (Sparc Performs Automated Reasoning in Chemistry) Model was used to provide temperature dependent algorithms used to estimate chemical properties for approximately 200 chemicals of interest to the promulgation of the Hazardous Waste Identification Rule (HWIR) . Proper...
NASA Astrophysics Data System (ADS)
Kenny, Natasha A.; Warland, Jon S.; Brown, Robert D.; Gillespie, Terry G.
2009-09-01
This study assessed the performance of the COMFA outdoor thermal comfort model on subjects performing moderate to vigorous physical activity. Field tests were conducted on 27 subjects performing 30 min of steady-state activity (walking, running, and cycling) in an outdoor environment. The predicted COMFA budgets were compared to the actual thermal sensation (ATS) votes provided by participants during each 5-min interval. The results revealed a normal distribution in the subjects’ ATS votes, with 82% of votes received in categories 0 (neutral) to +2 (warm). The ATS votes were significantly dependent upon sex, air temperature, short- and long-wave radiation, wind speed, and metabolic activity rate. There was a significant positive correlation between the ATS and predicted budgets (Spearman’s rho = 0.574, P < 0.01). However, the predicted budgets did not display a normal distribution, and the model produced erroneous estimates of the heat and moisture exchange between the human body and the ambient environment in 6% of the cases.
NASA Astrophysics Data System (ADS)
Lin, Shangfei; Sheng, Jinyu
2017-12-01
Depth-induced wave breaking is the primary dissipation mechanism for ocean surface waves in shallow waters. Different parameterizations have been developed to represent the depth-induced wave-breaking process in ocean surface wave models. The performance of six commonly used parameterizations in simulating significant wave heights (SWHs) is assessed in this study. The main differences between these six parameterizations lie in their representations of the breaker index and the fraction of breaking waves. Laboratory and field observations consisting of 882 cases from 14 sources of published observational data are used in the assessment. We demonstrate that the six parameterizations perform reasonably well in shallow waters, but each has its own limitations and drawbacks. The widely used parameterization suggested by Battjes and Janssen (1978, BJ78) underpredicts the SWHs in locally generated wave conditions and overpredicts them in remotely generated wave conditions over flat bottoms. This drawback of BJ78 was addressed by a parameterization suggested by Salmon et al. (2015, SA15), but SA15 has relatively larger errors in SWHs over sloping bottoms than BJ78. We follow SA15 and propose a new parameterization in which, as in SA15, the breaker index depends on the normalized water depth in deep waters. In shallow waters, the breaker index of the new parameterization has a nonlinear dependence on the local bottom slope rather than the linear dependence used in SA15. Overall, this new parameterization has the best performance, with an average scatter index of ∼8.2%, in comparison with the three best-performing existing parameterizations, whose average scatter indices lie between 9.2% and 13.6%.
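For orientation, in BJ78-type models the fraction of breaking waves Qb is defined implicitly by (1 - Qb)/ln(Qb) = -(Hrms/Hmax)^2 with Hmax = γ·d. A minimal sketch that solves this relation by bisection (it assumes a constant breaker index γ = 0.73; the parameterizations compared above make γ depth- or slope-dependent):

```python
import math

def fraction_breaking(h_rms, depth, gamma=0.73):
    """Fraction of breaking waves Qb from the implicit BJ78-type relation
    (1 - Qb) / ln(Qb) = -(Hrms / Hmax)**2, with Hmax = gamma * depth."""
    b = h_rms / (gamma * depth)
    if b >= 1.0:
        return 1.0                       # saturated: all waves breaking
    if b < 0.05:
        return math.exp(-1.0 / (b * b))  # asymptotic root for very small b
    f = lambda q: (1.0 - q) / math.log(q) + b * b
    lo, hi = 1e-300, 1.0 - 1e-12         # f(lo) > 0, f(hi) < 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Qb grows as the wave field approaches the depth-limited height
q_mild = fraction_breaking(h_rms=0.5, depth=2.0)
q_steep = fraction_breaking(h_rms=1.0, depth=2.0)
```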
The use of neural network technology to model swimming performance.
Silva, António José; Costa, Aldo Manuel; Oliveira, Paulo Moura; Reis, Victor Machado; Saavedra, José; Perl, Jurgen; Rouboa, Abel; Marinho, Daniel Almeida
2007-01-01
The aims of the present study were to identify the factors which are able to explain performance in the 200 m individual medley and 400 m front crawl events in young swimmers, to model the performance in those events using non-linear mathematical methods through artificial neural networks (multi-layer perceptrons), and to assess the precision of the neural network models in predicting performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry-land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamic, hydrostatic and bioenergetic characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between the preponderant variables for each gender and swim performance in the 200 m medley and 400 m front crawl events were developed. For this purpose a feed-forward neural network (multilayer perceptron) with three neurons in a single hidden layer was used. The prognostic precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach to the resolution of complex problems such as performance modeling and talent identification in swimming and, possibly, in a wide variety of sports.
Key points: The non-linear analysis resulting from the use of a feed-forward neural network allowed the development of four performance models. The mean difference between the true results and those estimated by each of the four neural network models was low. The neural network tool can be a good approach to performance modeling, as an alternative to standard statistical models that presume well-defined distributions and independence among all inputs. The use of neural networks in sports science allowed us to create realistic models for swimming performance prediction based on previously selected criteria related to the dependent variable (performance).
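To make the architecture concrete, here is a minimal sketch of a feed-forward network with a single hidden layer of three tanh neurons and one linear output, trained by plain gradient descent. The (input, target) pairs and learning rate are invented for the sketch and do not come from the study:

```python
import math, random

random.seed(1)

def mlp_init(n_in, n_hidden=3):
    rnd = lambda: random.uniform(-0.5, 0.5)
    return {"w1": [[rnd() for _ in range(n_in)] for _ in range(n_hidden)],
            "b1": [rnd() for _ in range(n_hidden)],
            "w2": [rnd() for _ in range(n_hidden)],
            "b2": rnd()}

def forward(net, x):
    # hidden layer: three tanh neurons; output: single linear unit
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(net["w1"], net["b1"])]
    y = sum(w * hj for w, hj in zip(net["w2"], h)) + net["b2"]
    return h, y

def train_step(net, x, t, lr=0.05):
    # one gradient-descent update for squared error L = (y - t)^2 / 2
    h, y = forward(net, x)
    err = y - t
    for j, hj in enumerate(h):
        grad_h = err * net["w2"][j] * (1 - hj * hj)  # back through tanh
        net["w2"][j] -= lr * err * hj
        for i, xi in enumerate(x):
            net["w1"][j][i] -= lr * grad_h * xi
        net["b1"][j] -= lr * grad_h
    net["b2"] -= lr * err
    return 0.5 * err * err

# hypothetical normalized test-battery scores mapped to a normalized swim time
data = [([0.2, 0.7], 0.5), ([0.9, 0.1], 0.4),
        ([0.5, 0.5], 0.8), ([0.1, 0.2], 0.1)]
net = mlp_init(n_in=2)
first = sum(train_step(net, x, t) for x, t in data)
for _ in range(500):
    last = sum(train_step(net, x, t) for x, t in data)
```

The training loss should shrink over the epochs, which is all this toy setup is meant to show.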
NASA Astrophysics Data System (ADS)
Shaari, Safizan; Naka, Shigeki; Okada, Hiroyuki
2018-04-01
We investigated the gate-bias and temperature dependence of the voltage-current (V-I) characteristics of dinaphtho[2,3-b:2‧,3‧-d]thiophene with MoO3/Au electrodes. The insertion of the MoO3 layer significantly improved the device performance. The temperature-dependent V-I characteristics were evaluated and could be well fitted by the Schottky thermionic emission model, with barrier heights in the ranges of 33-57 and 49-73 meV under the forward- and reverse-biased regimes, respectively. However, at a gate voltage of 0 V, at which a small activation energy was obtained, another conduction mechanism at the grain boundary had to be considered. From the obtained results, we concluded that two conduction mechanisms govern charge injection at the metal electrode-organic semiconductor interface: Schottky thermionic emission and conduction in the organic thin-film layer and at the grain boundary.
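In the thermionic emission picture, the saturation current follows I0 ∝ T²·exp(-φB/(kB·T)), so a barrier height can be read off the slope of a Richardson plot, ln(I0/T²) versus 1/T. A minimal sketch with synthetic noiseless data (the 50 meV barrier and the prefactor are assumed for illustration, chosen only to fall in the range quoted above):

```python
import math

K_B = 8.617333e-5  # Boltzmann constant, eV/K

def barrier_from_richardson(temps, sat_currents):
    """Extract a Schottky barrier height (eV) from a Richardson plot:
    ln(I0/T^2) = const - phi_B/(k_B*T), so the slope vs 1/T is -phi_B/k_B."""
    xs = [1.0 / t for t in temps]
    ys = [math.log(i0 / t ** 2) for t, i0 in zip(temps, sat_currents)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * K_B

# synthetic saturation currents for an assumed 50 meV barrier (hypothetical)
phi_true = 0.050
temps = [200.0, 225.0, 250.0, 275.0, 300.0]
i0 = [1e-6 * t ** 2 * math.exp(-phi_true / (K_B * t)) for t in temps]
phi_est = barrier_from_richardson(temps, i0)
```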
NASA Astrophysics Data System (ADS)
Constantin, Lucian A.; Fabiano, Eduardo; Della Sala, Fabio
2018-05-01
Orbital-free density functional theory (OF-DFT) promises to describe the electronic structure of very large quantum systems, since its computational cost scales linearly with the system size. However, the accuracy of OF-DFT strongly depends on the approximation made for the kinetic energy (KE) functional. To date, the most accurate KE functionals are nonlocal functionals based on the linear-response kernel of the homogeneous electron gas, i.e., the jellium model. Here, we use the linear-response kernel of the jellium-with-gap model to construct a simple nonlocal KE functional (named KGAP) which depends on the band-gap energy. In the limit of a vanishing energy gap (i.e., in the case of metals), KGAP is equivalent to the Smargiassi-Madden (SM) functional, which is accurate for metals. For a series of semiconductors (with different energy gaps), KGAP performs much better than SM, with results close to those of state-of-the-art functionals with sophisticated density-dependent kernels.
Regression analysis of sparse asynchronous longitudinal data
Cao, Hongyuan; Zeng, Donglin; Fine, Jason P.
2015-01-01
Summary: We consider estimation of regression models for sparse asynchronous longitudinal observations, where time-dependent responses and covariates are observed intermittently within subjects. Unlike with synchronous data, where the response and covariates are observed at the same time points, with asynchronous data the observation times are mismatched. Simple kernel-weighted estimating equations are proposed for generalized linear models with either time-invariant or time-dependent coefficients, under smoothness assumptions for the covariate processes similar to those for synchronous data. In both cases the estimators are consistent and asymptotically normal, but they converge at slower rates than those achieved with synchronous data. Simulation studies show that the methods perform well with realistic sample sizes and may be superior to a naive application of methods for synchronous data based on an ad hoc last-value-carried-forward approach. The practical utility of the methods is illustrated on data from a study on human immunodeficiency virus. PMID:26568699
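The core idea for a time-invariant coefficient can be sketched as follows: every response observation is paired with every covariate observation, and each pair is down-weighted by a kernel in the time mismatch. This toy version (simulated data, an Epanechnikov kernel, and a hand-picked bandwidth, none taken from the paper) recovers the slope of a simple linear model:

```python
import math, random

random.seed(0)

def epanechnikov(u):
    return 0.75 * (1.0 - u * u) if abs(u) < 1.0 else 0.0

def async_beta(t_y, y, t_x, x, h):
    """Kernel-weighted least-squares slope for Y(t) = beta*X(t) + eps when
    responses and covariates are observed at mismatched times."""
    num = den = 0.0
    for ti, yi in zip(t_y, y):
        for sj, xj in zip(t_x, x):
            w = epanechnikov((ti - sj) / h)
            num += w * xj * yi
            den += w * xj * xj
    return num / den

# toy asynchronous data: smooth covariate X(t) = sin(t), true beta = 2
t_x = sorted(random.uniform(0, 6) for _ in range(80))
t_y = sorted(random.uniform(0, 6) for _ in range(80))
x = [math.sin(t) for t in t_x]
y = [2.0 * math.sin(t) + random.gauss(0, 0.1) for t in t_y]
beta_hat = async_beta(t_y, y, t_x, x, h=0.3)
```

Because the covariate process is smooth, mismatched pairs inside the bandwidth still carry information, which is the premise of the kernel-weighting approach.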
Manycore Performance-Portability: Kokkos Multidimensional Array Library
Edwards, H. Carter; Sunderland, Daniel; Porter, Vicki; ...
2012-01-01
Large, complex scientific and engineering application codes have a significant investment in computational kernels that implement their mathematical models. Porting these computational kernels to the collection of modern manycore accelerator devices is a major challenge, in that these devices have diverse programming models, application programming interfaces (APIs), and performance requirements. The Kokkos Array programming model provides a library-based approach to implement computational kernels that are performance-portable to CPU-multicore and GPGPU accelerator devices. This programming model is based upon three fundamental concepts: (1) manycore compute devices, each with its own memory space, (2) data-parallel kernels, and (3) multidimensional arrays. Kernel execution performance is, especially for NVIDIA® devices, extremely dependent on data access patterns. The optimal data access pattern can be different for different manycore devices, potentially leading to different implementations of computational kernels specialized for different devices. The Kokkos Array programming model supports performance-portable kernels by (1) separating data access patterns from computational kernels through a multidimensional array API and (2) introducing device-specific data access mappings when a kernel is compiled. An implementation of Kokkos Array is available through Trilinos [Trilinos website, http://trilinos.sandia.gov/, August 2011].
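Kokkos itself is a C++ library, but the central idea, keeping the kernel's index arithmetic fixed while the layout decides the memory mapping, can be sketched in a few lines of Python. The `View2D` class below is hypothetical and is not the Kokkos API:

```python
class View2D:
    """Minimal sketch of a layout-polymorphic 2-D array view: kernels index
    a[i, j]; the layout chooses the memory mapping (Kokkos-style)."""
    def __init__(self, rows, cols, layout="right"):
        self.rows, self.cols = rows, cols
        self.data = [0.0] * (rows * cols)
        if layout == "right":        # row-major: cache-friendly on CPUs
            self.strides = (cols, 1)
        else:                        # "left": column-major, the order that
            self.strides = (1, rows)  # coalesces thread accesses on GPUs
    def idx(self, i, j):
        return i * self.strides[0] + j * self.strides[1]
    def __getitem__(self, ij):
        return self.data[self.idx(*ij)]
    def __setitem__(self, ij, v):
        self.data[self.idx(*ij)] = v

# the same "kernel" works unchanged for either layout
for layout in ("right", "left"):
    a = View2D(3, 4, layout)
    for i in range(3):
        for j in range(4):
            a[i, j] = 10 * i + j
    assert a[2, 3] == 23
```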
NASA Astrophysics Data System (ADS)
Ni, Fang; Nakatsukasa, Takashi
2018-04-01
To describe quantal collective phenomena, it is useful to requantize the time-dependent mean-field dynamics. We study the time-dependent Hartree-Fock-Bogoliubov (TDHFB) theory for the two-level pairing Hamiltonian and compare the results of different quantization methods. The method that constructs microscopic wave functions from TDHFB trajectories fulfilling the Einstein-Brillouin-Keller quantization condition turns out to be the most accurate; it is based on the stationary-phase approximation to the path integral. We also examine the performance of the collective model which assumes that the pairing gap parameter is the collective coordinate. The applicability of the collective model is limited for nuclear pairing with a small number of single-particle levels, because the pairing gap parameter represents only half of the pairing collective space.
Viscoelastic Properties of Collagen-Adhesive Composites under Water Saturated and Dry Conditions
Singh, Viraj; Misra, Anil; Parthasarathy, Ranganathan; Ye, Qiang; Spencer, Paulette
2014-01-01
To investigate the time- and rate-dependent mechanical properties of collagen-adhesive composites, creep and monotonic experiments were performed under dry and wet conditions. The composites were prepared by infiltration of dentin adhesive into demineralized bovine dentin. Experimental results show that at small stress levels under dry conditions, the composite and the neat adhesive behave similarly. In wet conditions, on the other hand, the composites are significantly softer and weaker than the neat adhesives. The behavior in the wet condition is found to be affected by the hydrophilicity of both the adhesive and the collagen. Since the adhesive-collagen composites are a part of the complex construct that forms the adhesive-dentin interface, their presence will affect the overall performance of the restoration. We find that a Kelvin-Voigt model with at least four elements is required to fit the creep compliance data, indicating that the adhesive-collagen composites are complex polymers with several characteristic time scales whose mechanical behavior will be significantly affected by loading rates and frequencies. Such mechanical properties have not been widely investigated for these types of materials. The derived model provides the additional advantage that it can be exploited to extract other viscoelastic properties which are generally time consuming to obtain experimentally. The calibrated model is utilized to obtain the stress relaxation function, the frequency-dependent storage and loss moduli, and the rate-dependent elastic modulus. PMID:24753362
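For reference, the creep compliance of a generalized Kelvin-Voigt model is J(t) = J0 + Σi Ji·(1 - exp(-t/τi)), an instantaneous spring plus retardation branches with distinct time scales. A minimal sketch with made-up compliances and retardation times (not the calibrated values from the paper):

```python
import math

def creep_compliance(t, j0, branches):
    """Generalized Kelvin-Voigt creep compliance:
    J(t) = J0 + sum_i Ji * (1 - exp(-t / tau_i)),
    where `branches` is a list of (Ji, tau_i) pairs."""
    return j0 + sum(ji * (1.0 - math.exp(-t / tau)) for ji, tau in branches)

# hypothetical parameters with well-separated retardation times
branches = [(0.02, 0.5), (0.05, 10.0), (0.08, 200.0)]
times = [0.0, 1.0, 10.0, 100.0, 1000.0]
j = [creep_compliance(t, 0.10, branches) for t in times]
```

The compliance rises monotonically from J0 toward J0 + ΣJi, with each branch contributing on its own time scale, which is why several elements are needed to fit data spanning decades of time.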
Kutasy, Balazs; Friedmacher, Florian; Pes, Lara; Coyle, David; Doi, Takashi; Paradisi, Francesca; Puri, Prem
2016-04-01
Low pulmonary retinol levels and a disrupted retinoid signaling pathway (RSP) have been implicated in the pathogenesis of congenital diaphragmatic hernia (CDH) and associated pulmonary hypoplasia (PH). It has been demonstrated that nitrofen disturbs the main retinol-binding protein (RBP)-dependent trophoblastic retinol transport. Several studies have demonstrated that prenatal treatment with retinoic acid (RA) can reverse PH in the nitrofen-induced CDH model. We hypothesized that maternal administration of RA can increase trophoblastic RBP-dependent retinol transport in a nitrofen model of CDH. Pregnant rats were treated with nitrofen or vehicle on gestational day 9 (D9) and sacrificed on D21. RA was given i.p. on D18, D19, and D20. Retinol and RA levels were measured using high-performance liquid chromatography. Immunohistochemistry was performed to evaluate trophoblastic expression of RBP. Expression levels of the primary RSP genes were determined using quantitative real-time PCR and immunohistochemistry. Markedly increased trophoblastic RBP immunoreactivity was observed in CDH+RA compared to CDH. Significantly increased serum and pulmonary retinol and RA levels were detected in CDH+RA compared to CDH. Pulmonary expression of RSP genes and proteins was increased in CDH+RA compared to CDH. Increased trophoblastic RBP expression and retinol transport after antenatal administration of RA suggest that retinol-triggered RSP activation may attenuate CDH-associated PH by elevating serum and pulmonary retinol levels.
Computational fluid dynamics (CFD) simulation of a newly designed passive particle sampler.
Sajjadi, H; Tavakoli, B; Ahmadi, G; Dhaniyala, S; Harner, T; Holsen, T M
2016-07-01
In this work a series of computational fluid dynamics (CFD) simulations were performed to predict the deposition of particles on a newly designed passive dry deposition (Pas-DD) sampler. The sampler uses a parallel plate design and a conventional polyurethane foam (PUF) disk as the deposition surface. The deposition of particles with sizes between 0.5 and 10 μm was investigated for two different geometries of the Pas-DD sampler for different wind speeds and various angles of attack. To evaluate the mean flow field, the k-ɛ turbulence model was used and turbulent fluctuating velocities were generated using the discrete random walk (DRW) model. The CFD software ANSYS-FLUENT was used for performing the numerical simulations. It was found that the deposition velocity increased with particle size or wind speed. The modeled deposition velocities were in general agreement with the experimental measurements and they increased when flow entered the sampler with a non-zero angle of attack. The particle-size dependent deposition velocity was also dependent on the geometry of the leading edge of the sampler; deposition velocities were more dependent on particle size and wind speeds for the sampler without the bend in the leading edge of the deposition plate, compared to a flat plate design. Foam roughness was also found to have a small impact on particle deposition. Copyright © 2016 Elsevier Ltd. All rights reserved.
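The reported increase of deposition velocity with particle size is consistent with the gravitational settling component, which for small particles follows Stokes' law. A minimal sketch of that component only (unit-density spheres in room-temperature air; the slip correction and the turbulent contribution that the DRW model represents are ignored here):

```python
def stokes_settling_velocity(d_p, rho_p=1000.0, mu=1.81e-5, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in air via Stokes'
    law, v_s = rho_p * d_p**2 * g / (18 * mu); valid at low Reynolds number.
    d_p is the particle diameter in metres."""
    return rho_p * d_p ** 2 * g / (18.0 * mu)

# gravitational component for the 0.5-10 um range studied above
velocities = {d: stokes_settling_velocity(d * 1e-6)
              for d in (0.5, 1.0, 5.0, 10.0)}
```

The quadratic dependence on diameter is why the largest particles dominate dry deposition onto the PUF disk in calm conditions.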
Chian, Chih-Feng; Hwang, Yi-Ting; Terng, Harn-Jing; Lee, Shih-Chun; Chao, Tsui-Yi; Chang, Hung; Ho, Ching-Liang; Wu, Yi-Ying; Perng, Wann-Cherng
2016-08-02
Peripheral blood mononuclear cell (PBMC)-derived gene signatures were investigated for their potential use in the early detection of non-small cell lung cancer (NSCLC). Our study included 187 patients with NSCLC and 310 age- and gender-matched controls, plus an independent set containing 29 patients for validation. Eight significant NSCLC-associated genes were identified, including DUSP6, EIF2S3, GRB2, MDM2, NF1, POLDIP2, RNF4, and WEE1. The logistic model containing these significant markers was able to distinguish subjects with NSCLC from controls with excellent performance: 80.7% sensitivity, 90.6% specificity, and an area under the receiver operating characteristic curve (AUC) of 0.924. Repeated random sub-sampling (100 repetitions) was used to validate the performance of the classification training models, with an average AUC of 0.92. Additional cross-validation using the independent set resulted in a sensitivity of 75.86%. Furthermore, six age/gender-dependent genes, CPEB4, EIF2S3, GRB2, MCM4, RNF4, and STAT2, were identified using an age and gender stratification approach. STAT2 and WEE1 were found to be stage-dependent using a stage-stratified subpopulation. We conclude that these logistic models using different signatures for total and stratified samples are potential complementary tools for assessing the risk of NSCLC.
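The reported AUC has a simple rank interpretation: it is the probability that a randomly chosen case scores higher than a randomly chosen control. A minimal sketch with made-up classifier scores (not the study's data):

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """AUC as P(random case score > random control score), i.e. the
    Mann-Whitney U statistic divided by n_pos * n_neg; ties count one half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(pos_scores) * len(neg_scores))

def sens_spec(pos_scores, neg_scores, threshold):
    # one operating point on the ROC curve
    sens = sum(s >= threshold for s in pos_scores) / len(pos_scores)
    spec = sum(s < threshold for s in neg_scores) / len(neg_scores)
    return sens, spec

pos = [0.9, 0.8, 0.75, 0.4]   # hypothetical model scores, NSCLC cases
neg = [0.1, 0.3, 0.35, 0.7]   # hypothetical model scores, controls
auc = auc_mann_whitney(pos, neg)
sens, spec = sens_spec(pos, neg, threshold=0.5)
```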
Haque, Muhammad E; Franklin, Tammy; Bokhary, Ujala; Mathew, Liby; Hack, Bradley K; Chang, Anthony; Puri, Tipu S; Prasad, Pottumarthi V
2014-04-01
To evaluate longitudinal changes in renal oxygenation and diffusion measurements in a model of reversible unilateral ureteral obstruction (rUUO), which has been shown to induce chronic renal functional deficits in a strain-dependent way: C57BL/6 mice show a higher degree of functional deficit compared with BALB/c mice. Because hypoxia and the development of fibrosis are associated with chronic kidney diseases and are responsible for progression, we hypothesized that MRI measurements would be able to monitor the longitudinal changes in this model and would show strain-dependent differences in response. Blood oxygenation level dependent (BOLD) and diffusion MRI measurements were performed at three time points over a 30-day period in mice with rUUO. The studies were performed on a 4.7T scanner with the mice anesthetized with isoflurane, before UUO and at 2 and 28 days post-release of 6 days of obstruction. We found that at the early time point (∼2 days after releasing the obstruction), the relative oxygenation in C57BL/6 mice was lower compared with BALB/c. Diffusion measurements were lower at this time point and reached statistical significance in BALB/c mice. These methods may prove valuable in better understanding the natural progression of kidney diseases and in evaluating novel interventions to limit progression. Copyright © 2013 Wiley Periodicals, Inc.
Reversible and Irreversible Time-Dependent Behavior of GRCop-84
NASA Technical Reports Server (NTRS)
Lerch, Bradley A.; Arnold, Steven M.; Ellis, David L.
2017-01-01
A series of mechanical tests were conducted on a high-conductivity copper alloy, GRCop-84, in order to understand the time-dependent response of this material. Tensile, creep, and stress relaxation tests were performed over a wide range of temperatures, strain rates, and stress levels to excite various amounts of time-dependent behavior. At low applied stresses the deformation behavior was found to be fully reversible. Above a certain stress, termed the viscoelastic threshold, irreversible deformation was observed. At these higher stresses the deformation was observed to be viscoplastic. Both reversible and irreversible regions contained time-dependent deformation. These experimental data are documented to enable characterization of constitutive models to aid in the design of high-temperature components.
Electron Transport in Tellurium Nanowires
NASA Astrophysics Data System (ADS)
Berezovets, V. A.; Kumzerov, Yu. A.; Firsov, Yu. A.
2018-02-01
The temperature and magnetic field dependences of the voltage-current characteristics of tellurium nanowires, manufactured by inserting tellurium into chrysotile asbestos pores from a melt, have been measured within a broad range of temperatures and magnetic fields. The results are analyzed by comparison with the predictions of theoretical models developed for one-dimensional structures. The obtained dependences correspond most closely to the predictions of Luttinger liquid theory. This result agrees with the concept that the dominant conduction mechanism in such one-dimensional wires does not depend on the material inserted into the pores, but only on the dimension of the conducting wires.
The influence of enterprise resource planning (ERP) systems' performance on earnings management
NASA Astrophysics Data System (ADS)
Tsai, Wen-Hsien; Lee, Kuen-Chang; Liu, Jau-Yang; Lin, Sin-Jin; Chou, Yu-Wei
2012-11-01
We analyse whether there is a linkage between performance measures of enterprise resource planning (ERP) systems and earnings management. We find that earnings management decreases with the higher performance of ERP systems. The empirical result is as expected. We further analyse how the dimension of the DeLone and McLean model of information systems success affects earnings management. We find that the relationship between the performance of ERP systems and earnings management depends on System Quality after ERP implementation. The more System Quality improves, the more earnings management is reduced.
Constraints on a scale-dependent bias from galaxy clustering
NASA Astrophysics Data System (ADS)
Amendola, L.; Menegoni, E.; Di Porto, C.; Corsi, M.; Branchini, E.
2017-01-01
We forecast the future constraints on scale-dependent parametrizations of galaxy bias and their impact on the estimate of cosmological parameters from the power spectrum of galaxies measured in a spectroscopic redshift survey. For the latter we assume a wide survey at relatively large redshifts, similar to the planned Euclid survey, as the baseline for future experiments. To assess the impact of the bias we perform a Fisher matrix analysis, and we adopt two different parametrizations of scale-dependent bias. The fiducial models for galaxy bias are calibrated using mock catalogs of Hα-emitting galaxies mimicking the expected properties of the objects that will be targeted by the Euclid survey. In our analysis we obtain two main results. First, allowing for a scale-dependent bias does not significantly increase the errors on the other cosmological parameters apart from the rms amplitude of density fluctuations, σ8, and the growth index γ, whose uncertainties increase by a factor up to 2, depending on the bias model adopted. Second, we find that the accuracy in the linear bias parameter b0 can be estimated to within 1%-2% at various redshifts regardless of the fiducial model. The nonlinear bias parameters have significantly larger errors that depend on the model adopted. Despite this, in the more realistic scenarios departures from the simple linear bias prescription can be detected with a ∼2σ significance at each redshift explored. Finally, we use the Fisher matrix formalism to assess the impact of assuming an incorrect bias model and find that the systematic errors induced on the cosmological parameters are similar to or even larger than the statistical ones.
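To illustrate the forecasting machinery (a generic toy, not the survey-specific computation): for a Gaussian likelihood the Fisher matrix is F_ab = Σk (∂μk/∂θa)(∂μk/∂θb)/σk², and marginalized 1σ errors are the square roots of the diagonal of its inverse. A two-parameter example showing why marginalizing over an extra nuisance (e.g. bias) parameter inflates errors:

```python
import math

def fisher_matrix(derivs, sigma):
    """F_ab = sum_k dmu_k/dtheta_a * dmu_k/dtheta_b / sigma_k**2."""
    p = len(derivs[0])
    F = [[0.0] * p for _ in range(p)]
    for dk, s in zip(derivs, sigma):
        for a in range(p):
            for b in range(p):
                F[a][b] += dk[a] * dk[b] / s ** 2
    return F

def invert2x2(F):
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return [[F[1][1] / det, -F[0][1] / det],
            [-F[1][0] / det, F[0][0] / det]]

# toy model mu_k = theta0 + theta1 * x_k observed at a few points
xs = [0.0, 0.5, 1.0, 1.5]
derivs = [(1.0, x) for x in xs]       # (dmu/dtheta0, dmu/dtheta1)
F = fisher_matrix(derivs, [0.1] * len(xs))
C = invert2x2(F)
marg = math.sqrt(C[0][0])             # marginalized 1-sigma error on theta0
cond = 1.0 / math.sqrt(F[0][0])       # error if theta1 were held fixed
```

The marginalized error is never smaller than the conditional one; the degradation grows with the degeneracy between the two parameters.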
Factors influencing crime rates: an econometric analysis approach
NASA Astrophysics Data System (ADS)
Bothos, John M. A.; Thomopoulos, Stelios C. A.
2016-05-01
The scope of the present study is to research the dynamics that determine the commission of crimes in US society. Our study is part of a model we are developing to understand urban crime dynamics and to enhance citizens' "perception of security" in large urban environments. The main targets of our research are to highlight the dependence of crime rates on certain social and economic factors and on basic elements of state anticrime policies. In conducting our research, we use as guides previous relevant studies on crime dependence, which have been performed with similar quantitative analyses in mind, regarding the dependence of crime on certain social and economic factors using statistics and econometric modelling. Our first approach consists of conceptual state-space dynamic cross-sectional econometric models that incorporate a feedback loop describing crime as a feedback process. In order to define the model variables dynamically, we use statistical analysis of crime records and of records on social and economic conditions and policing characteristics (such as police force size and policing results, e.g. crime arrests), to determine their influence as independent variables on crime, the dependent variable of our model. The econometric models we apply in this first approach are an exponential log-linear model and a logit model. In a second approach, we try to study the evolution of violent crime through time in the US, independently as an autonomous social phenomenon, using autoregressive and moving-average time-series econometric models. Our findings show that there are certain social and economic characteristics that affect the formation of crime rates in the US, either positively or negatively. Furthermore, the results of our time-series econometric modelling show that violent crime, viewed solely and independently as a social phenomenon, correlates with previous years' crime rates and depends on the social and economic environment's conditions during previous years.
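As a sketch of the logit ingredient (the data and the "economic stressor" variable below are invented for illustration, not the study's records), a binary outcome is modeled as P(y=1|x) = 1/(1 + e^-(β0+β1x)) and fitted by gradient ascent on the log-likelihood:

```python
import math

def fit_logit(xs, ys, lr=0.5, epochs=2000):
    """Logit model P(y=1|x) = 1/(1+exp(-(b0+b1*x))), fitted by gradient
    ascent on the Bernoulli log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p          # score equations of the logit model
            g1 += (y - p) * x
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1

# hypothetical data: indicator of a high crime rate vs an economic stressor
xs = [0.1, 0.3, 0.4, 0.6, 0.7, 0.9]
ys = [0,   0,   0,   1,   1,   1]
b0, b1 = fit_logit(xs, ys)
p_low = 1.0 / (1.0 + math.exp(-(b0 + b1 * 0.1)))
p_high = 1.0 / (1.0 + math.exp(-(b0 + b1 * 0.9)))
```

A positive fitted β1 means the predicted probability of the outcome rises with the regressor, which is how sign and strength of each factor's influence are read off such models.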
NASA Astrophysics Data System (ADS)
Schneider, E. A.; Deinert, M. R.; Cady, K. B.
2006-10-01
The balance of isotopes in a nuclear reactor core is key to understanding the overall performance of a given fuel cycle. This balance is in turn most strongly affected by the time and energy-dependent neutron flux. While many large and involved computer packages exist for determining this spectrum, a simplified approach amenable to rapid computation is missing from the literature. We present such a model, which accepts as inputs the fuel element/moderator geometry and composition, reactor geometry, fuel residence time and target burnup and we compare it to OECD/NEA benchmarks for homogeneous MOX and UOX LWR cores. Collision probability approximations to the neutron transport equation are used to decouple the spatial and energy variables. The lethargy dependent neutron flux, governed by coupled integral equations for the fuel and moderator/coolant regions is treated by multigroup thermalization methods, and the transport of neutrons through space is modeled by fuel to moderator transport and escape probabilities. Reactivity control is achieved through use of a burnable poison or adjustable control medium. The model calculates the buildup of 24 actinides, as well as fission products, along with the lethargy dependent neutron flux and the results of several simulations are compared with benchmarked standards.
Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation
NASA Astrophysics Data System (ADS)
Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong
2018-04-01
The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artefacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modelling engine performs better than an isotropic migration.
Human factors with nonhumans - Factors that affect computer-task performance
NASA Technical Reports Server (NTRS)
Washburn, David A.
1992-01-01
There are two general strategies that may be employed for 'doing human factors research with nonhuman animals'. First, one may use the methods of traditional human factors investigations to examine the nonhuman animal-to-machine interface. Alternatively, one might use performance by nonhuman animals as a surrogate for or model of performance by a human operator. Each of these approaches is illustrated with data in the present review. Chronic ambient noise was found to have a significant but inconsequential effect on computer-task performance by rhesus monkeys (Macaca mulatta). Additional data supported the generality of findings such as these to humans, showing that rhesus monkeys are appropriate models of human psychomotor performance. It is argued that ultimately the interface between comparative psychology and technology will depend on the coordinated use of both strategies of investigation.
Implementing team huddles in small rural hospitals: How does the Kotter model of change apply?
Baloh, Jure; Zhu, Xi; Ward, Marcia M
2017-12-17
To examine how the process of change prescribed in Kotter's change model applies in implementing team huddles, and to assess the impact of the execution of early change phases on change success in later phases. Kotter's model can help to guide hospital leaders to implement change and potentially to improve success rates. However, the model is understudied, particularly in health care. We followed eight hospitals implementing team huddles for 2 years, interviewing the change teams quarterly to inquire about implementation progress. We assessed how the hospitals performed in the three overarching phases of the Kotter model, and examined whether performance in the initial phase influenced subsequent performance. In half of the hospitals, change processes were congruent with Kotter's model, and performance in the initial phase influenced success in subsequent phases. In the other hospitals, change processes were incongruent with the model, and success depended on implementation scope and the strategies employed. We found mixed support for the Kotter model. It better fits implementations that aim to spread to multiple hospital units. When the scope is limited, changes can be successful even when steps are skipped. Kotter's model can be a useful guide for nurse managers implementing changes. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Schmid, Philipp; Liewald, Mathias
2011-08-01
The forming behavior of metastable austenitic stainless steel is mainly dominated by the temperature-dependent TRIP effect (transformation-induced plasticity). The strong dependence of material properties on the temperature level during forming means that temperature must be considered in the FE analysis. The strain-induced formation of α'-martensite from austenite can be represented in finite element programs using suitable models such as the Haensel model. This paper discusses the determination of parameters for a fully thermal-mechanical forming simulation in LS-DYNA based on the Haensel material model. The measurement of martensite evolution in non-isothermal tensile tests was performed on metastable austenitic stainless steel EN 1.4301 at different rolling directions between 0° and 90°. This allows an estimation of the influence of the rolling direction on martensite formation. Of specific importance is the accuracy of the martensite content measured by magnetic induction methods (Feritscope). The examination of different factors, such as the stress dependence of the magnetisation, the blank thickness, and the numerous calibration curves, reveals a substantial influence on the parameter determination for the material models. The parameters obtained for the Haensel model and the temperature-dependent friction coefficients are used to simulate the forming process of a real component and to validate its implementation in the commercial code LS-DYNA.
Goodness-Of-Fit Test for Nonparametric Regression Models: Smoothing Spline ANOVA Models as Example.
Teran Hidalgo, Sebastian J; Wu, Michael C; Engel, Stephanie M; Kosorok, Michael R
2018-06-01
Nonparametric regression models do not require the specification of the functional form between the outcome and the covariates. Despite their popularity, few diagnostic statistics are available for them in comparison to their parametric counterparts. We propose a goodness-of-fit test for nonparametric regression models with a linear smoother form. In particular, we apply this testing framework to smoothing spline ANOVA models. The test can consider two sources of lack of fit: whether covariates that are not currently in the model need to be included, and whether the current model fits the data well. The proposed method derives estimated residuals from the model. Then, statistical dependence between the estimated residuals and the covariates is assessed using the Hilbert-Schmidt independence criterion (HSIC). If dependence exists, the model does not capture all the variability in the outcome associated with the covariates; otherwise, the model fits the data well. The bootstrap is used to obtain p-values. Application of the method is demonstrated with a neonatal mental development data analysis. We demonstrate correct type I error as well as power performance through simulations.
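The residual-versus-covariate dependence check at the heart of such a test can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it uses a biased Gaussian-kernel HSIC estimator with a fixed bandwidth and a permutation null rather than the bootstrap described in the abstract, and all data are synthetic.

```python
import numpy as np

def gaussian_gram(x, sigma):
    # Pairwise squared distances mapped through a Gaussian kernel
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC between two 1-D samples: tr(K H L H) / n^2."""
    n = len(x)
    K = gaussian_gram(x, sigma)
    L = gaussian_gram(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    return np.trace(K @ H @ L @ H) / n ** 2

def hsic_perm_test(x, y, n_perm=200, sigma=1.0, seed=0):
    """Permutation p-value: shuffling y approximates the independence null."""
    rng = np.random.default_rng(seed)
    stat = hsic(x, y, sigma)
    null = [hsic(x, rng.permutation(y), sigma) for _ in range(n_perm)]
    return stat, float(np.mean([s >= stat for s in null]))

rng = np.random.default_rng(1)
x = rng.normal(size=100)
resid_dep = x ** 2 + 0.1 * rng.normal(size=100)   # residuals depend on covariate
resid_ind = rng.normal(size=100)                  # residuals independent of covariate
_, p_dep = hsic_perm_test(x, resid_dep)
_, p_ind = hsic_perm_test(x, resid_ind)
```

A small p-value for `p_dep` signals remaining structure (lack of fit), while a large `p_ind` is consistent with an adequate model.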
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom Elicson; Bentley Harwood; Jim Bouchard
Over a 12 month period, a fire PRA was developed for a DOE facility using the NUREG/CR-6850 EPRI/NRC fire PRA methodology. The fire PRA modeling included calculation of fire severity factors (SFs) and fire non-suppression probabilities (PNS) for each safe shutdown (SSD) component considered in the fire PRA model. The SFs were developed by performing detailed fire modeling through a combination of CFAST fire zone model calculations and Latin Hypercube Sampling (LHS). Component damage times and automatic fire suppression system actuation times calculated in the CFAST LHS analyses were then input to a time-dependent model of fire non-suppression probability. The fire non-suppression probability model is based on the modeling approach outlined in NUREG/CR-6850 and is supplemented with plant specific data. This paper presents the methodology used in the DOE facility fire PRA for modeling fire-induced SSD component failures and includes discussions of modeling techniques for: • Development of time-dependent fire heat release rate profiles (required as input to CFAST), • Calculation of fire severity factors based on CFAST detailed fire modeling, and • Calculation of fire non-suppression probabilities.
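The severity-factor workflow combines deterministic fire modeling with Latin Hypercube Sampling of uncertain inputs. The sketch below shows only the sampling-and-comparison step, using SciPy's LHS generator; the uniform time distributions and their bounds are invented for illustration, and a real analysis would take damage and actuation times from CFAST runs and plant data.

```python
import numpy as np
from scipy.stats import qmc

# Stratified (Latin Hypercube) draws over two uncertain inputs
sampler = qmc.LatinHypercube(d=2, seed=42)
u = sampler.random(1000)            # uniform [0, 1)^2, stratified per dimension

# Hypothetical distributions, for illustration only:
# component damage time ~ U(5, 30) min; suppression time ~ U(2, 40) min
damage_time = 5.0 + 25.0 * u[:, 0]
suppression_time = 2.0 + 38.0 * u[:, 1]

# The component fails only in trials where damage occurs before suppression,
# giving a crude non-suppression probability estimate
p_ns = float(np.mean(damage_time < suppression_time))
```

With the uniform bounds above, the analytic answer is about 0.59, so the estimate should land near that value.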
Sun, Yanqing; Sun, Liuquan; Zhou, Jie
2013-07-01
This paper studies the generalized semiparametric regression model for longitudinal data where the covariate effects are constant for some covariates and time-varying for others. Different link functions can be used to allow more flexible modelling of longitudinal data. The nonparametric components of the model are estimated using a local linear estimating equation and the parametric components are estimated through a profile estimating function. The method automatically adjusts for heterogeneity of sampling times, allowing the sampling strategy to depend on the past sampling history as well as possibly time-dependent covariates without specifically modelling such dependence. A K-fold cross-validation bandwidth selection is proposed as a working tool for locating an appropriate bandwidth. A criterion for selecting the link function is proposed to provide a better fit to the data. Large sample properties of the proposed estimators are investigated. Large sample pointwise and simultaneous confidence intervals for the regression coefficients are constructed. Formal hypothesis testing procedures are proposed to check for the covariate effects and whether the effects are time-varying. A simulation study is conducted to examine the finite sample performance of the proposed estimation and hypothesis testing procedures. The methods are illustrated with a data example.
Quantifying and Mitigating the Effect of Preferential Sampling on Phylodynamic Inference
Karcher, Michael D.; Palacios, Julia A.; Bedford, Trevor; Suchard, Marc A.; Minin, Vladimir N.
2016-01-01
Phylodynamics seeks to estimate effective population size fluctuations from molecular sequences of individuals sampled from a population of interest. One way to accomplish this task is to formulate an observed sequence data likelihood that exploits a coalescent model for the sampled individuals’ genealogy, and then to integrate over all possible genealogies via Monte Carlo or, less efficiently, to condition on one genealogy estimated from the sequence data. However, when analyzing sequences sampled serially through time, current methods implicitly assume either that sampling times are fixed deterministically by the data collection protocol or that their distribution does not depend on the size of the population. Through simulation, we first show that, when sampling times do probabilistically depend on effective population size, estimation methods may be systematically biased. To correct for this deficiency, we propose a new model that explicitly accounts for preferential sampling by modeling the sampling times as an inhomogeneous Poisson process dependent on effective population size. We demonstrate that in the presence of preferential sampling our new model not only reduces bias, but also improves estimation precision. Finally, we compare the performance of currently used phylodynamic methods with our proposed model through clinically relevant, seasonal human influenza examples. PMID:26938243
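The preferential-sampling model treats sampling times as an inhomogeneous Poisson process whose intensity tracks effective population size. A standard way to simulate such a process is Lewis-Shedler thinning, sketched below with an invented seasonal Ne(t) trajectory and proportionality constant; the paper's actual inference machinery is not reproduced here.

```python
import numpy as np

def sample_inhom_poisson(rate_fn, t_max, rate_max, seed=0):
    """Lewis-Shedler thinning: simulate an inhomogeneous Poisson process on [0, t_max].

    rate_max must dominate rate_fn(t) everywhere on the interval.
    """
    rng = np.random.default_rng(seed)
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)       # candidate from homogeneous process
        if t > t_max:
            break
        if rng.uniform() < rate_fn(t) / rate_max:  # accept with prob lambda(t)/lambda_max
            times.append(t)
    return np.array(times)

# Preferential sampling: intensity proportional to a hypothetical effective
# population size trajectory Ne(t) with seasonal peaks (illustrative values)
ne = lambda t: 10.0 + 8.0 * np.sin(2.0 * np.pi * t) ** 2
beta = 2.0
times = sample_inhom_poisson(lambda t: beta * ne(t), t_max=10.0,
                             rate_max=beta * 18.0, seed=3)
```

The expected number of samples is beta times the integral of Ne(t), about 280 here, with more samples clustered where Ne(t) peaks.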
MacAlpine, Sara; Deline, Chris; Dobos, Aron
2017-03-16
Shade obstructions can significantly impact the performance of photovoltaic (PV) systems. Although there are many models for partially shaded PV arrays, there is a lack of information available regarding their accuracy and uncertainty when compared with actual field performance. This work assesses the recorded performance of 46 residential PV systems, equipped with either string-level or module-level inverters, under a variety of shading conditions. We compare their energy production data to annual PV performance predictions, with a focus on the practical models developed here for National Renewable Energy Laboratory's system advisor model software. This includes assessment of shade extent on each PV system by using traditional onsite surveys and newer 3D obstruction modelling. The electrical impact of shade is modelled by either a nonlinear performance model or assumption of linear impact with shade extent, depending on the inverter type. When applied to the fleet of residential PV systems, performance is predicted with median annual bias errors of 2.5% or less, for systems with up to 20% estimated shading loss. The partial shade models are not found to add appreciable uncertainty to annual predictions of energy production for this fleet of systems but do introduce a monthly root-mean-square error of approximately 4%-9% due to seasonal effects. Here the use of a detailed 3D model results in similar or improved accuracy over site survey methods, indicating that, with proper description of shade obstructions, modelling of partially shaded PV arrays can be done completely remotely, potentially saving time and cost.
Leffondré, Karen; Abrahamowicz, Michal; Siemiatycki, Jack
2003-12-30
Case-control studies are typically analysed using the conventional logistic model, which does not directly account for changes in the covariate values over time. Yet, many exposures may vary over time. The most natural alternative to handle such exposures would be to use the Cox model with time-dependent covariates. However, its application to case-control data opens the question of how to manipulate the risk sets. Through a simulation study, we investigate how the accuracy of the estimates of Cox's model depends on the operational definition of risk sets and/or on some aspects of the time-varying exposure. We also assess the estimates obtained from conventional logistic regression. The lifetime experience of a hypothetical population is first generated, and a matched case-control study is then simulated from this population. We control the frequency, the age at initiation, and the total duration of exposure, as well as the strengths of their effects. All models considered include a fixed-in-time covariate and one or two time-dependent covariate(s): the indicator of current exposure and/or the exposure duration. Simulation results show that none of the models always performs well. The discrepancies between the odds ratios yielded by logistic regression and the 'true' hazard ratio depend on both the type of the covariate and the strength of its effect. In addition, it seems that logistic regression has difficulty separating the effects of inter-correlated time-dependent covariates. By contrast, each of the two versions of Cox's model systematically induces either a serious under-estimation or a moderate over-estimation bias. The magnitude of the latter bias is proportional to the true effect, suggesting that an improved manipulation of the risk sets may eliminate, or at least reduce, the bias. Copyright 2003 John Wiley & Sons, Ltd.
Generating Performance Models for Irregular Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav
2017-05-30
Many applications have irregular behavior --- non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches --- that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; Page Rank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.
Xu, Chen; Reece, Charles E.; Kelley, Michael J.
2016-03-22
A simplified numerical model has been developed to simulate nonlinear superconducting radiofrequency (SRF) losses on Nb surfaces. This study focuses exclusively on excessive surface resistance (Rs) losses due to the microscopic topographical magnetic field enhancements. When the enhanced local surface magnetic field exceeds the superconducting critical transition magnetic field Hc, small volumes of surface material may become normal conducting and increase the effective surface resistance without inducing a quench. We seek to build an improved quantitative characterization of this qualitative model. Using topographic data from typical buffered chemical polish (BCP)- and electropolish (EP)-treated fine grain niobium, we have estimated the resulting field-dependent losses and extrapolated this model to the implications for cavity performance. The model predictions correspond well to the characteristic BCP versus EP high field Q0 performance differences for fine grain niobium. Lastly, we describe the algorithm of the model, its limitations, and the effects of this nonlinear loss contribution on SRF cavity performance.
Scale effect challenges in urban hydrology highlighted with a distributed hydrological model
NASA Astrophysics Data System (ADS)
Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire
2018-01-01
Hydrological models are extensively used in urban water management, development and evaluation of future scenarios and research activities. There is a growing interest in the development of fully distributed and grid-based models. However, some complex questions related to scale effects are not yet fully understood and still remain open issues in urban hydrology. In this paper we propose a two-step investigation framework to illustrate the extent of scale effects in urban hydrology. First, fractal tools are used to highlight the scale dependence observed within distributed data input into urban hydrological models. Then an intensive multi-scale modelling work is carried out to understand scale effects on hydrological model performance. Investigations are conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model is implemented at 17 spatial resolutions ranging from 100 to 5 m. Results clearly exhibit scale effect challenges in urban hydrology modelling. The applicability of fractal concepts highlights the scale dependence observed within distributed data. Patterns of geophysical data change when the size of the observation pixel changes. The multi-scale modelling investigation confirms scale effects on hydrological model performance. Results are analysed over three ranges of scales identified in the fractal analysis and confirmed through modelling. This work also discusses some remaining issues in urban hydrology modelling related to the availability of high-quality data at high resolutions, model numerical instabilities and computation time requirements. The main findings of this paper enable a replacement of traditional methods of model calibration by innovative methods of model resolution alteration based on the spatial data variability and scaling of flows in urban hydrology.
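The fractal analysis applied to distributed input data typically rests on box counting: cover the data with boxes of decreasing size and examine how the number of occupied boxes scales. A minimal sketch (not the Multi-Hydro toolchain) for a 2-D binary mask:

```python
import numpy as np

def box_count(mask, sizes):
    """Number of occupied boxes at each box size for a square 2-D binary mask."""
    counts = []
    n = mask.shape[0]
    for s in sizes:
        # Partition the mask into s-by-s blocks and test each for occupancy
        b = mask[: n - n % s, : n - n % s].reshape(n // s, s, -1, s)
        counts.append(int(b.any(axis=(1, 3)).sum()))
    return np.array(counts)

def fractal_dimension(mask, sizes):
    """Slope of log N(s) versus log(1/s): the box-counting dimension."""
    counts = box_count(mask, sizes)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a completely filled square is space-filling, dimension ~2
mask = np.ones((256, 256), dtype=bool)
d = fractal_dimension(mask, sizes=[2, 4, 8, 16, 32])
```

Applied to rasterized land-use or imperviousness maps at several resolutions, a non-integer slope indicates the kind of scale dependence the paper highlights.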
Status Report On Preparation of a Revised Longwave Communication Link Vulnerability Code.
1981-10-01
[Figure residue; captions garbled in source, apparently illustrating possible burst and antenna configurations (gallery or earth-detached; buried-ground antenna), Figure 3.] ...shows simplified flow diagrams for the ionization models required for LOS and ionospheric-dependent propagation calculations. The time loop could be placed outside the path loop for ionospheric-dependent propagation, but this requires considerable storage in order to perform nonequilibrium ionization
Khan, Naiman A.; Baym, Carol L.; Monti, Jim M.; Raine, Lauren B.; Drollette, Eric S.; Scudder, Mark R.; Moore, R. Davis; Kramer, Arthur F.; Hillman, Charles H.; Cohen, Neal J.
2014-01-01
Objective: To assess associations between adiposity and hippocampal-dependent and hippocampal-independent memory forms among prepubertal children. Study design: Prepubertal children (7–9-year-olds, n = 126), classified as non-overweight (<85th percentile BMI-for-age [n = 73]) or overweight/obese (≥85th percentile BMI-for-age [n = 53]), completed relational (hippocampal-dependent) and item (hippocampal-independent) memory tasks, and performance was assessed with both direct (behavioral accuracy) and indirect (preferential disproportionate viewing [PDV]) measures. Adiposity (% whole-body fat mass, subcutaneous abdominal adipose tissue, visceral adipose tissue, and total abdominal adipose tissue) was assessed using DXA. Backward regressions identified significant (P < 0.05) predictive models of memory performance. Covariates included age, sex, pubertal timing, socioeconomic status, IQ, oxygen consumption (VO2max), and body mass index (BMI) z-score. Results: Among overweight/obese children, total abdominal adipose tissue was a significant negative predictor of relational memory behavioral accuracy, and pubertal timing together with socioeconomic status jointly predicted the PDV measure of relational memory. In contrast, among non-overweight children, male sex predicted item memory behavioral accuracy, and a model consisting of socioeconomic status and BMI z-score jointly predicted the PDV measure of relational memory. Conclusions: Regional, and not whole-body, fat deposition was selectively and negatively associated with hippocampal-dependent relational memory among overweight/obese prepubertal children. PMID:25454939
Khan, Naiman A; Baym, Carol L; Monti, Jim M; Raine, Lauren B; Drollette, Eric S; Scudder, Mark R; Moore, R Davis; Kramer, Arthur F; Hillman, Charles H; Cohen, Neal J
2015-02-01
To assess associations between adiposity and hippocampal-dependent and hippocampal-independent memory forms among prepubertal children. Prepubertal children (age 7-9 years; n = 126), classified as non-overweight (<85th percentile body mass index [BMI]-for-age [n = 73]) or overweight/obese (≥85th percentile BMI-for-age [n = 53]), completed relational (hippocampal-dependent) and item (hippocampal-independent) memory tasks. Performance was assessed with both direct (behavioral accuracy) and indirect (preferential disproportionate viewing [PDV]) measures. Adiposity (ie, percent whole-body fat mass, subcutaneous abdominal adipose tissue, visceral adipose tissue, and total abdominal adipose tissue) was assessed by dual-energy X-ray absorptiometry. Backward regression identified significant (P < .05) predictive models of memory performance. Covariates included age, sex, pubertal timing, socioeconomic status (SES), IQ, oxygen consumption, and BMI z-score. Among overweight/obese children, total abdominal adipose tissue was a significant negative predictor of relational memory behavioral accuracy, and pubertal timing together with SES jointly predicted the PDV measure of relational memory. In contrast, among non-overweight children, male sex predicted item memory behavioral accuracy, and a model consisting of SES and BMI z-score jointly predicted the PDV measure of relational memory. Regional, but not whole-body, fat deposition was selectively and negatively associated with hippocampal-dependent relational memory among overweight/obese prepubertal children. Copyright © 2015 Elsevier Inc. All rights reserved.
Performance Evaluation of Bluetooth Low Energy: A Systematic Review.
Tosi, Jacopo; Taffoni, Fabrizio; Santacatterina, Marco; Sannino, Roberto; Formica, Domenico
2017-12-13
Small, compact and embedded sensors are a pervasive technology in everyday life for a wide number of applications (e.g., wearable devices, domotics, e-health systems, etc.). In this context, wireless transmission plays a key role, and among available solutions, Bluetooth Low Energy (BLE) is gaining more and more popularity. BLE merges together good performance, low-energy consumption and widespread diffusion. The aim of this work is to review the main methodologies adopted to investigate BLE performance. The first part of this review is an in-depth description of the protocol, highlighting the main characteristics and implementation details. The second part reviews the state of the art on BLE characteristics and performance. In particular, we analyze throughput, maximum number of connectable sensors, power consumption, latency and maximum reachable range, with the aim of identifying the current limits of BLE technology. The main results can be summarized as follows: throughput may theoretically reach the limit of ~230 kbps, but actual applications analyzed in this review show throughputs limited to ~100 kbps; the maximum reachable range is strictly dependent on the radio power, and it goes up to a few tens of meters; the maximum number of nodes in the network depends on connection parameters, on the network architecture and specific device characteristics, but it is usually lower than 10; power consumption and latency are largely modeled and analyzed and are strictly dependent on a huge number of parameters. Most of these characteristics are based on analytical models, but there is a need for rigorous experimental evaluations to understand the actual limits.
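The gap between the ~230 kbps theoretical limit and the ~100 kbps seen in practice follows from connection-event arithmetic. The toy calculation below assumes a 20-byte application payload, four data packets per connection event and a 7.5 ms connection interval; these are common textbook figures for BLE 4.x, not values taken from the reviewed studies.

```python
def ble_throughput_kbps(payload_bytes=20, packets_per_event=4, conn_interval_ms=7.5):
    """Rough BLE application-layer throughput under simple assumptions.

    payload_bytes: usable application payload per data packet (assumed)
    packets_per_event: data packets exchanged per connection event (device-dependent)
    conn_interval_ms: connection interval (7.5 ms is the BLE minimum)
    """
    bits_per_event = payload_bytes * 8 * packets_per_event
    return bits_per_event / (conn_interval_ms / 1000.0) / 1000.0  # kbps

t = ble_throughput_kbps()   # 20 B x 4 packets every 7.5 ms
```

With these assumptions the estimate lands around 85 kbps, consistent with the ~100 kbps ceiling reported for real applications.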
Performance Evaluation of Bluetooth Low Energy: A Systematic Review
Taffoni, Fabrizio; Santacatterina, Marco; Sannino, Roberto
2017-01-01
Small, compact and embedded sensors are a pervasive technology in everyday life for a wide number of applications (e.g., wearable devices, domotics, e-health systems, etc.). In this context, wireless transmission plays a key role, and among available solutions, Bluetooth Low Energy (BLE) is gaining more and more popularity. BLE merges together good performance, low-energy consumption and widespread diffusion. The aim of this work is to review the main methodologies adopted to investigate BLE performance. The first part of this review is an in-depth description of the protocol, highlighting the main characteristics and implementation details. The second part reviews the state of the art on BLE characteristics and performance. In particular, we analyze throughput, maximum number of connectable sensors, power consumption, latency and maximum reachable range, with the aim of identifying the current limits of BLE technology. The main results can be summarized as follows: throughput may theoretically reach the limit of ~230 kbps, but actual applications analyzed in this review show throughputs limited to ~100 kbps; the maximum reachable range is strictly dependent on the radio power, and it goes up to a few tens of meters; the maximum number of nodes in the network depends on connection parameters, on the network architecture and specific device characteristics, but it is usually lower than 10; power consumption and latency are largely modeled and analyzed and are strictly dependent on a huge number of parameters. Most of these characteristics are based on analytical models, but there is a need for rigorous experimental evaluations to understand the actual limits. PMID:29236085
Effinger, Angela; O'Driscoll, Caitriona M; McAllister, Mark; Fotaki, Nikoletta
2018-05-16
Drug product performance in patients with gastrointestinal (GI) diseases can be altered compared to healthy subjects due to pathophysiological changes. In this review, relevant differences in patients with inflammatory bowel diseases, coeliac disease, irritable bowel syndrome and short bowel syndrome are discussed, and possible in vitro and in silico tools to predict drug product performance in this patient population are assessed. Drug product performance was altered in patients with GI diseases compared to healthy subjects, as assessed in a limited number of studies for some drugs. Underlying causes include observed pathophysiological alterations such as differences in GI transit time, in the composition of the GI fluids and in GI permeability. Additionally, alterations in the abundance of metabolising enzymes and transporter systems were observed. The effect of a GI disease on each parameter is not always evident, as it may depend on the location and the state of the disease. The impact of a pathophysiological change on drug bioavailability depends on the physicochemical characteristics of the drug, the pharmaceutical formulation and drug metabolism. In vitro and in silico methods to predict drug product performance in patients with GI diseases are currently limited but could be a useful tool to improve drug therapy. Development of suitable in vitro dissolution and in silico models for patients with GI diseases can improve their drug therapy. The likelihood that these models provide accurate predictions depends on the knowledge of pathophysiological alterations, and thus further assessment of physiological differences is essential. © 2018 Royal Pharmaceutical Society.
Analysis of Dependencies and Impacts of Metroplex Operations
NASA Technical Reports Server (NTRS)
DeLaurentis, Daniel A.; Ayyalasomayajula, Sricharan
2010-01-01
This report documents research performed by Purdue University under subcontract to the George Mason University (GMU) for the Metroplex Operations effort sponsored by NASA's Airportal Project. Purdue University conducted two tasks in support of the larger efforts led by GMU: a) a literature review on metroplex operations followed by identification and analysis of metroplex dependencies, and b) the analysis of impacts of metroplex operations on the larger U.S. domestic airline service network. The tasks are linked in that the ultimate goal is an understanding of the role of dependencies among airports in a metroplex in causing delays both locally and network-wide. The Purdue team has formulated a system-of-systems framework to analyze metroplex dependencies (including simple metrics to quantify them) and develop compact models to predict delays based on network structure. These metrics and models were developed to provide insights for planners to formulate tailored policies and operational strategies that streamline metroplex operations and mitigate delays and congestion.
NASA Astrophysics Data System (ADS)
Korelin, Ivan A.; Porshnev, Sergey V.
2018-01-01
The paper demonstrates the possibility of calculating the characteristics of the flow of visitors passing through checkpoints at venues hosting mass events. The mathematical model is based on a non-stationary queuing system (NQS) in which the dependence of the request input rate on time is described by a function chosen so that its properties resemble the real arrival-rate profiles of visitors entering a stadium for football matches. A piecewise-constant approximation of this function is used when performing statistical modeling of the NQS. The authors calculated how the queue length and the waiting time for service (time in queue) depend on time for different laws. The time required to serve the entire queue and the number of visitors entering the stadium by the beginning of the match were also calculated. We found the dependence of the macroscopic quantitative characteristics of the NQS on the number of averaging sections of the input rate.
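The piecewise-constant NQS approximation can be illustrated with a small single-channel simulation. The arrival-rate profile, service rate and segment durations below are invented stand-ins for the stadium data described in the abstract.

```python
import numpy as np

def simulate_queue(rate_segments, mu, seed=0):
    """M(t)/M/1 queue with a piecewise-constant arrival rate.

    rate_segments: list of (duration, lambda) pairs, e.g. a ramp-up before kickoff.
    mu: service rate of the single checkpoint channel.
    Returns arrival times and each visitor's waiting time in the queue.
    """
    rng = np.random.default_rng(seed)
    arrivals, t0 = [], 0.0
    for dur, lam in rate_segments:            # piecewise-constant approximation
        n = rng.poisson(lam * dur)            # arrivals in this segment
        arrivals.extend(np.sort(rng.uniform(t0, t0 + dur, n)))
        t0 += dur
    waits, free_at = [], 0.0
    for a in arrivals:                        # FIFO service at one channel
        start = max(a, free_at)
        waits.append(start - a)
        free_at = start + rng.exponential(1.0 / mu)
    return np.array(arrivals), np.array(waits)

# Hypothetical pre-match profile (minutes): the arrival rate peaks before kickoff,
# briefly exceeding the service rate so a queue builds up
arr, waits = simulate_queue([(30, 0.5), (30, 2.0), (30, 6.0)], mu=4.0, seed=7)
```

Refining the segmentation of the input-rate function (more, shorter segments) is exactly the "number of averaging sections" question the paper studies.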
Pellegrino Baena, Cristina; Goulart, Alessandra Carvalho; Santos, Itamar de Souza; Suemoto, Claudia Kimie; Lotufo, Paulo Andrade; Bensenor, Isabela Judith
2017-01-01
Background: The association between migraine and cognitive performance is unclear. We analyzed whether migraine is associated with cognitive performance among participants of the Brazilian Longitudinal Study of Adult Health, ELSA-Brasil. Methods: Cross-sectional analysis, including participants with complete information about migraine and aura at baseline. Headache status (no headaches, non-migraine headaches, migraine without aura and migraine with aura), based on the International Headache Society classification, was used as the independent variable in the multilinear regression models, using the category "no headache" as reference. Cognitive performance was measured with the Consortium to Establish a Registry for Alzheimer's Disease word list memory test (CERAD-WLMT), the semantic fluency test (SFT), and the Trail Making Test version B (TMTB). Z-scores for each cognitive test and a composite global score were created and analyzed as dependent variables. Multivariate models were adjusted for age, gender, education, race, coronary heart disease, heart failure, hypertension, diabetes, dyslipidemia, body mass index, smoking, alcohol use, physical activity, depression, and anxiety. In women, the models were further adjusted for hormone replacement therapy. Results: We analyzed 4208 participants. Of these, 19% presented migraine without aura and 10.3% presented migraine with aura. All migraine headaches were associated with poorer cognitive performance (linear coefficient β; 95% CI) at TMTB -0.083 (-0.160; -0.008) and a poorer global z-score -0.077 (-0.152; -0.002). Also, migraine without aura was associated with poorer cognitive performance at TMTB -0.084 (-0.160; -0.008) and global z-score -0.077 (-0.152; -0.002). Conclusion: In participants of the ELSA study, all migraine headaches and migraine without aura were significantly and independently associated with poorer cognitive performance.
Generic icing effects on forward flight performance of a model helicopter rotor
NASA Technical Reports Server (NTRS)
Tinetti, Ana F.; Korkan, Kenneth D.
1989-01-01
An experimental program using a commercially available model helicopter has been conducted in the TAMU 7 ft x 10 ft Subsonic Wind Tunnel to investigate main rotor performance degradation due to generic ice adhesion. Base and iced performance data were gathered as functions of fuselage incidence, blade collective pitch, main rotor rotational velocity, and freestream velocity. The experimental values have shown that, in general, the presence of generic ice introduces decrements in performance caused by leading edge separation regions and increased surface roughness. In addition to the expected changes in aerodynamic forces caused by variations in test Reynolds number, forward flight data seemed to be influenced by changes in freestream and rotational velocity. The dependence of the data upon such velocity variations was apparently enhanced by increases in blade chord.
Performance testing and analysis results of AMTEC cells for space applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borkowski, C.A.; Barkan, A.; Hendricks, T.J.
1998-01-01
Testing and analysis has shown that AMTEC (Alkali Metal Thermal to Electric Conversion) (Weber, 1974) cells can reach the performance (power) levels required by a variety of space applications. The performance of an AMTEC cell is highly dependent on the thermal environment to which it is subjected. A guard heater assembly has been designed, fabricated, and used to expose individual AMTEC cells to various thermal environments. The design and operation of the guard heater assembly will be discussed. Performance test results of an AMTEC cell operated under guard-heated conditions to simulate an adiabatic cell wall thermal environment are presented. Experimental data and analytic model results are compared to illustrate validation of the model. © 1998 American Institute of Physics.
Value-at-Risk forecasts by a spatiotemporal model in Chinese stock market
NASA Astrophysics Data System (ADS)
Gong, Pu; Weng, Yingliang
2016-01-01
This paper generalizes a recently proposed spatial autoregressive model and introduces a spatiotemporal model for forecasting stock returns. We support the view that stock returns are affected not only by the absolute values of factors such as firm size, book-to-market ratio and momentum but also by the relative values of factors like trading volume ranking and market capitalization ranking in each period. This article studies a new method for constructing stocks' reference groups, called the quartile method. Applying the method empirically to the Shanghai Stock Exchange 50 Index, we compare the daily volatility forecasting performance and the out-of-sample forecasting performance of Value-at-Risk (VaR) estimated by different models. The empirical results show that the spatiotemporal model performs surprisingly well in terms of capturing spatial dependences among individual stocks, and it produces more accurate VaR forecasts than the other three models introduced in the previous literature. Moreover, the findings indicate that both allowing for serial correlation in the disturbances and using time-varying spatial weight matrices can greatly improve the predictive accuracy of a spatial autoregressive model.
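Out-of-sample VaR evaluation of the kind reported here usually proceeds by rolling forecasts and counting violations. The sketch below uses plain historical VaR on synthetic i.i.d. returns purely to show the backtest mechanics; the paper's spatiotemporal model itself is not reproduced.

```python
import numpy as np

def historical_var(returns, alpha=0.05):
    """One-day historical Value-at-Risk: the loss at the alpha-quantile of returns."""
    return -np.quantile(returns, alpha)

def violation_rate(returns, var_forecasts):
    """Fraction of days on which the realized loss exceeds the VaR forecast."""
    return float(np.mean(-returns > var_forecasts))

rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.01, size=2000)   # synthetic daily returns (illustrative)
window = 250                           # one trading year of history per forecast
vars_, realized = [], []
for t in range(window, len(r)):
    vars_.append(historical_var(r[t - window:t]))  # rolling out-of-sample forecast
    realized.append(r[t])
rate = violation_rate(np.array(realized), np.array(vars_))
```

A well-calibrated 5% VaR forecast should be violated on roughly 5% of days; large deviations from that rate are the usual red flag in VaR backtests.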
Skill of Predicting Heavy Rainfall Over India: Improvement in Recent Years Using UKMO Global Model
NASA Astrophysics Data System (ADS)
Sharma, Kuldeep; Ashrit, Raghavendra; Bhatla, R.; Mitra, A. K.; Iyengar, G. R.; Rajagopal, E. N.
2017-11-01
The quantitative precipitation forecast (QPF) performance for heavy rains is still a challenge, even for the most advanced state-of-the-art high-resolution Numerical Weather Prediction (NWP) modeling systems. This study aims to evaluate the performance of the UK Met Office Unified Model (UKMO) over India for prediction of high rainfall amounts (>2 and >5 cm/day) during the monsoon period (JJAS) from 2007 to 2015 in short-range forecasts up to Day 3. Among the various modeling upgrades and improvements in the parameterizations during this period, the model horizontal resolution improved from 40 km in 2007 to 17 km in 2015. The skill of short-range rainfall forecasts has improved in the UKMO model in recent years, mainly due to increased horizontal and vertical resolution along with improved physics schemes. Categorical verification carried out using four verification metrics, namely probability of detection (POD), false alarm ratio (FAR), frequency bias (Bias) and critical success index (CSI), indicates that QPF has improved by >29% and >24% in terms of POD and FAR, respectively. Additionally, verification scores such as EDS (Extreme Dependency Score), EDI (Extremal Dependence Index) and SEDI (Symmetric EDI) are used, with special emphasis on verification of extreme and rare rainfall events. These scores also show an improvement of 60% (EDS) and >34% (EDI and SEDI) during the period of study, suggesting an improved skill in predicting heavy rains.
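All of the cited verification metrics are functions of the 2x2 contingency table of forecast versus observed events. A compact sketch with an invented table (hits, false alarms, misses, correct negatives) follows; the EDS formula used is the standard one, EDS = 2 ln((a+c)/n)/ln(a/n) - 1.

```python
import math

def categorical_scores(hits, false_alarms, misses, correct_negatives):
    """Standard 2x2 contingency-table scores used in QPF verification."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    n = a + b + c + d
    return {
        "POD":  a / (a + c),            # probability of detection
        "FAR":  b / (a + b),            # false alarm ratio
        "Bias": (a + b) / (a + c),      # frequency bias
        "CSI":  a / (a + b + c),        # critical success index
        # Extreme Dependency Score: stable for rare events, unlike CSI
        "EDS":  2.0 * math.log((a + c) / n) / math.log(a / n) - 1.0,
    }

# Hypothetical counts for a heavy-rain threshold on 1000 grid-day samples
scores = categorical_scores(hits=50, false_alarms=20, misses=30,
                            correct_negatives=900)
```

Note that POD, FAR, Bias and CSI all degenerate as the event becomes rarer, which is exactly why extreme-dependency scores such as EDS, EDI and SEDI are preferred for rare-event verification.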
Color model comparative analysis for breast cancer diagnosis using H and E stained images
NASA Astrophysics Data System (ADS)
Li, Xingyu; Plataniotis, Konstantinos N.
2015-03-01
Digital cancer diagnosis is a research realm where signal processing techniques are used to analyze and classify color histopathology images. Unlike grayscale image analysis in magnetic resonance imaging or X-ray, colors in histopathology images convey a large amount of histological information and thus play a significant role in cancer diagnosis. Although color information is widely used in histopathology work, to date there are few studies on color model selection for feature extraction in cancer diagnosis schemes. This paper addresses the problem of color space selection for digital cancer classification using H and E stained images, and investigates the effectiveness of various color models (RGB, HSV, CIE L*a*b*, and a stain-dependent H and E decomposition model) in breast cancer diagnosis. In particular, we build a diagnosis framework as a comparison benchmark and take specific concerns of medical decision systems into account in the evaluation. The evaluation methodologies include feature discriminative power evaluation and final diagnosis performance comparison. Experimentation on a publicly accessible histopathology image set suggests that the H and E decomposition model outperforms the other assessed color spaces. To explain the varying performance of the color spaces, our analysis via mutual information estimation demonstrates that the color components in the H and E model are less dependent, and thus most feature discriminative power is concentrated in one channel instead of spreading out among channels as in the other color spaces.
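As a concrete illustration of moving between the color models compared here, Python's standard library already provides the RGB-to-HSV transform (the stain-dependent H and E decomposition, by contrast, requires an estimated stain matrix and is not sketched):

```python
import colorsys

# RGB channels normalized to [0, 1]; a fully saturated red pixel
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
# pure red maps to hue 0 with full saturation and full value
```

Separating chromatic content (hue, saturation) from intensity (value) is exactly the kind of channel decorrelation the abstract's mutual-information argument is about.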
NASA Astrophysics Data System (ADS)
Dixit, V. K.; Porwal, S.; Singh, S. D.; Sharma, T. K.; Ghosh, Sandip; Oak, S. M.
2014-02-01
Temperature dependence of the photoluminescence (PL) peak energy of bulk and quantum well (QW) structures is studied by using a new phenomenological model for including the effect of localized states. In general an anomalous S-shaped temperature dependence of the PL peak energy is observed for many materials which is usually associated with the localization of excitons in band-tail states that are formed due to potential fluctuations. Under such conditions, the conventional models of Varshni, Viña and Passler fail to replicate the S-shaped temperature dependence of the PL peak energy and provide inconsistent and unrealistic values of the fitting parameters. The proposed formalism persuasively reproduces the S-shaped temperature dependence of the PL peak energy and provides an accurate determination of the exciton localization energy in bulk and QW structures along with the appropriate values of material parameters. An example of a strained InAs0.38P0.62/InP QW is presented by performing detailed temperature and excitation intensity dependent PL measurements and subsequent in-depth analysis using the proposed model. Versatility of the new formalism is tested on a few other semiconductor materials, e.g. GaN, nanotextured GaN, AlGaN and InGaN, which are known to have a significant contribution from the localized states. A quantitative evaluation of the fractional contribution of the localized states is essential for understanding the temperature dependence of the PL peak energy of bulk and QW structures having a large contribution of the band-tail states.
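The paper's own phenomenological model is not given in the abstract, but a widely used form that produces the S-shape discussed here augments the conventional Varshni relation with a band-tail localization term σ²/(k_B T). The sketch below is that generic form, not the paper's model, and the GaAs-like Varshni parameters in the usage line are purely illustrative:

```python
KB = 8.617333262e-5  # Boltzmann constant in eV/K

def pl_peak_energy(T, E0, alpha, beta, sigma=0.0):
    """Varshni band-gap shrinkage, E0 - alpha*T^2/(T + beta), minus a
    band-tail localization red-shift sigma^2/(KB*T) (zero when sigma = 0)."""
    varshni = E0 - alpha * T ** 2 / (T + beta)
    if sigma == 0.0:
        return varshni
    return varshni - sigma ** 2 / (KB * T)

# GaAs-like parameters (illustrative): E0 = 1.519 eV, alpha = 5.405e-4 eV/K, beta = 204 K
e_300 = pl_peak_energy(300.0, 1.519, 5.405e-4, 204.0)
```

With a nonzero σ the localization term dominates at low temperature, pulling the peak below the Varshni curve and producing the characteristic S-shape when fitted against data.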
Benchmarking hydrological model predictive capability for UK River flows and flood peaks.
NASA Astrophysics Data System (ADS)
Lane, Rosanna; Coxon, Gemma; Freer, Jim; Wagener, Thorsten
2017-04-01
Data and hydrological models are now available for national hydrological analyses. However, hydrological model performance varies between catchments, and lumped conceptual models are not able to produce adequate simulations everywhere. This study aims to benchmark hydrological model performance for catchments across the United Kingdom within an uncertainty analysis framework. We have applied four hydrological models from the FUSE framework to 1128 catchments across the UK. These models are all lumped models run at a daily timestep, but they differ in model structural architecture and process parameterisations, therefore producing different but equally plausible simulations. We apply FUSE over a 20-year period from 1988 to 2008, within a GLUE Monte Carlo uncertainty analysis framework. Model performance was evaluated for each catchment, model structure and parameter set using standard performance metrics. These were calculated both for the whole time series and to assess seasonal differences in model performance. The GLUE uncertainty analysis framework was then applied to produce simulated 5th and 95th percentile uncertainty bounds for the daily flow time series, and additionally the annual maximum prediction bounds for each catchment. The results show that model performance varies significantly in space and time depending on catchment characteristics including climate, geology and human impact. We identify regions where models are systematically failing to produce good results, and present reasons why this could be the case. We also identify regions or catchment characteristics where one model performs better than others, and have explored which structural component or parameterisation enables certain models to produce better simulations in these catchments. Model predictive capability was assessed for each catchment by examining the ability of the models to produce discharge prediction bounds that successfully bound the observed discharge.
These results improve our understanding of the predictive capability of simple conceptual hydrological models across the UK and help us to identify where further effort is needed to develop modelling approaches to better represent different catchment and climate typologies.
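The GLUE bound computation described above reduces, at each timestep, to a likelihood-weighted percentile over the behavioral ensemble. A minimal sketch follows; the function name and toy numbers are ours, and real applications derive the weights from a likelihood measure such as Nash-Sutcliffe efficiency rather than using equal weights:

```python
def glue_bounds(ensemble, weights, lower=0.05, upper=0.95):
    """Likelihood-weighted lower/upper percentile bounds at one timestep.
    ensemble: simulated flows from the behavioral parameter sets;
    weights: normalized likelihood weights summing to 1."""
    pairs = sorted(zip(ensemble, weights))
    lo = hi = pairs[-1][0]
    cum = 0.0
    for flow, w in pairs:
        prev, cum = cum, cum + w
        if prev < lower <= cum:   # cumulative weight crosses the lower quantile
            lo = flow
        if prev < upper <= cum:   # cumulative weight crosses the upper quantile
            hi = flow
    return lo, hi

# ten behavioral simulations of one day's flow, equally weighted (toy numbers)
bounds = glue_bounds(list(range(1, 11)), [0.1] * 10)
```

Repeating this over every timestep yields the 5th/95th percentile time series used to check whether the observed discharge falls inside the prediction bounds.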
Effectiveness of diaphragmatic stimulation with single-channel electrodes in rabbits*
Ghedini, Rodrigo Guellner; Espinel, Julio de Oliveira; Felix, Elaine Aparecida; Paludo, Artur de Oliveira; Mariano, Rodrigo; Holand, Arthur Rodrigo Ronconi; Andrade, Cristiano Feijó
2013-01-01
Every year, a large number of individuals become dependent on mechanical ventilation because of a loss of diaphragm function. The most common causes are cervical spinal trauma and neuromuscular diseases. We have developed an experimental model to evaluate the performance of electrical stimulation of the diaphragm in rabbits using single-channel electrodes implanted directly into the muscle. Various current intensities (10, 16, 20, and 26 mA) produced tidal volumes above the baseline value, showing that this model is effective for the study of diaphragm performance at different levels of electrical stimulation. PMID:24068272
Alvarellos-González, Alberto; Pazos, Alejandro; Porto-Pazos, Ana B.
2012-01-01
The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem. PMID:22649480
NASA Astrophysics Data System (ADS)
Wegrzyński, Wojciech; Krajewski, Grzegorz; Kimbar, Grzegorz
2018-01-01
This paper proposes a new device that may be used as a component of natural smoke ventilation systems: an external aerodynamic baffle used to limit the wind effect at the most adverse angle. Natural ventilation is affected not only by the external wind speed but also by the angle of wind attack. It has been shown that at angles between 45° and 60° the performance of such devices is lowest. For this reason an additional device is proposed: an external baffle that could hypothetically increase performance at these angles. The purpose of this paper is to explore this idea through numerical modelling of such external elements on a validated natural ventilator model, using the ANSYS® Fluent® CFD software.
Constrained optimization via simulation models for new product innovation
NASA Astrophysics Data System (ADS)
Pujowidianto, Nugroho A.
2017-11-01
We consider the problem of constrained optimization in which decision makers aim to optimize a primary performance measure while constraining secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out different possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.
Assessment and Improvement of GOCE based Global Geopotential Models Using Wavelet Decomposition
NASA Astrophysics Data System (ADS)
Erol, Serdar; Erol, Bihter; Serkan Isik, Mustafa
2016-07-01
The contribution of recent Earth gravity field satellite missions, specifically the GOCE mission, has led to significant improvements in the quality of gravity field models in both accuracy and resolution. However, the performance and quality of each released model vary not only with the spatial location on the Earth but also across the bands of the spectral expansion. Therefore, assessing global model performance through validation with in situ data in various territories on the Earth is essential for clarifying their actual local performance. Besides this, spectral evaluation and quality assessment of the signal in each part of the spherical harmonic expansion spectrum is essential for a clear decision on the commission error content of a model and for determining the optimal degree that yields the best results. The latter analyses also provide a perspective on, and a comparison of, the global behavior of the models, and an opportunity to report the sequential improvement of the models with mission developments and hence the contribution of new mission data. This study reviews spectral assessment results for the recently released GOCE based global geopotential models DIR-R5 and TIM-R5, enhanced using EGM2008 as the reference model, against terrestrial data in Turkey. Besides reporting the GOCE mission contribution to the models in Turkish territory, a possible improvement in the spectral quality of these models, via wavelet decomposition of the bands that are highly contaminated by noise, is pursued. In the analyses, the aim is to achieve an optimal amount of improvement while conserving the useful component of the GOCE signal as much as possible, fusing the filtered GOCE based models with EGM2008 in the appropriate spectral bands.
The investigation also contains an assessment of the coherence and correlation between the Earth gravity field parameters (free-air gravity anomalies and geoid undulations) derived from the validated geopotential models and terrestrial data (GPS/leveling, terrestrial gravity observations, DTM, etc.), as well as the WGM2012 products. In conclusion, the numerical results clarify the performance of the assessed models in Turkish territory and verify the potential of wavelet decomposition for the improvement of the geopotential models.
Giorgio Vacchiano; John D. Shaw; R. Justin DeRose; James N. Long
2008-01-01
Diameter increment is an important variable in modeling tree growth. Most facets of predicted tree development are dependent in part on diameter or diameter increment, the most commonly measured stand variable. The behavior of the Forest Vegetation Simulator (FVS) largely relies on the performance of the diameter increment model and the subsequent use of predicted dbh...
Estimating Classifier Accuracy Using Noisy Expert Labels
estimators to real-world problems is limited. We apply the estimators to labels simulated from three models of the expert labeling process and also four real ... that conditional dependence between experts negatively impacts estimator performance. On two of the real datasets, the estimators clearly outperformed the
EVALUATING DEGRADATION RATES OF CHLORINATED ORGANICS IN GROUNDWATER USING ANALYTICAL MODELS
The persistence and fate of organic contaminants in the environment largely depends on their rate of degradation. Most studies of degradation rate are performed in the lab where chemical conditions can be controlled precisely. Unfortunately, literature values for lab degradation ...
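The analytical models referred to here typically assume first-order degradation kinetics; a minimal sketch (symbol names are ours):

```python
import math

def first_order_conc(c0, k, t):
    """First-order decay, C(t) = C0 * exp(-k*t): the usual analytical model
    for contaminant degradation, with rate constant k in 1/time."""
    return c0 * math.exp(-k * t)

def half_life(k):
    """Time for the concentration to fall to half its initial value."""
    return math.log(2) / k
```

Fitting k to field concentration data, rather than relying on lab-derived literature values, is the practical step the abstract is pointing at.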
NASA Astrophysics Data System (ADS)
Zhang, Wei; Wang, Jun
2017-09-01
In an attempt to reproduce the price dynamics of financial markets, a stochastic agent-based financial price model is proposed and investigated via a stochastic exclusion process. The exclusion process, one of the interacting particle systems, is usually thought of as modeling particle motion (with a conserved number of particles) in a continuous time Markov process. In this work, the process is utilized to imitate the trading interactions among investing agents, in order to explain some stylized facts found in financial time series dynamics. To better understand the correlation behaviors of the proposed model, a new time-dependent intrinsic detrended cross-correlation (TDI-DCC) is introduced and performed, and autocorrelation analyses are also applied in the empirical research. Furthermore, to verify the rationality of the financial price model, actual return series are comparatively studied against the simulated ones. The comparison of return behaviors reveals that this financial price dynamics model can reproduce some correlation features of actual stock markets.
Supervised Learning Using Spike-Timing-Dependent Plasticity of Memristive Synapses.
Nishitani, Yu; Kaneko, Yukihiro; Ueda, Michihito
2015-12-01
We propose a supervised learning model that enables error backpropagation for spiking neural network hardware. The method is modeled by modifying an existing model to suit the hardware implementation. An example of a network circuit for the model is also presented. In this circuit, a three-terminal ferroelectric memristor (3T-FeMEM), which is a field-effect transistor with a gate insulator composed of ferroelectric materials, is used as an electric synapse device to store the analog synaptic weight. Our model can be implemented by reflecting the network error in the write voltage of the 3T-FeMEMs and introducing a spike-timing-dependent learning function to the device. An XOR problem was successfully demonstrated as a benchmark task by numerical simulations using the circuit properties to estimate the learning performance. In principle, the learning time per step of this supervised learning model and circuit is independent of the number of neurons in each layer, promising high-speed and low-power calculation in large-scale neural networks.
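The spike-timing-dependent learning function mentioned above is, in its generic pair-based form, an exponential window in the spike-time difference. The sketch below is that textbook form, not the 3T-FeMEM circuit model itself, and the amplitudes and time constant are illustrative:

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP weight update for dt = t_post - t_pre (in ms):
    potentiate when the presynaptic spike precedes the postsynaptic one,
    depress when it follows."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

In a memristive implementation, dt is encoded in the overlap of pre- and post-synaptic voltage pulses, so the device itself realizes this window without digital arithmetic.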
Robust inference in discrete hazard models for randomized clinical trials.
Nguyen, Vinh Q; Gillen, Daniel L
2012-10-01
Time-to-event data in which failures are only assessed at discrete time points are common in many clinical trials. Examples include oncology studies where events are observed through periodic screenings such as radiographic scans. When the survival endpoint is acknowledged to be discrete, common methods for the analysis of observed failure times include the discrete hazard models (e.g., the discrete-time proportional hazards and the continuation ratio model) and the proportional odds model. In this manuscript, we consider estimation of a marginal treatment effect in discrete hazard models where the constant treatment effect assumption is violated. We demonstrate that the estimator resulting from these discrete hazard models is consistent for a parameter that depends on the underlying censoring distribution. An estimator that removes the dependence on the censoring mechanism is proposed and its asymptotic distribution is derived. Basing inference on the proposed estimator allows for statistical inference that is scientifically meaningful and reproducible. Simulation is used to assess the performance of the presented methodology in finite samples.
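Discrete hazard models like those discussed here are commonly fit by expanding each subject into person-period records and running a binary regression on the result; a minimal sketch of the expansion step (the record layout is ours, and the paper's censoring-robust marginal estimator is not reproduced):

```python
def person_period(records):
    """Expand (id, time, event) survival records into person-period rows.
    A discrete-time hazard model is then an ordinary binary regression of
    y on period (and covariates) over these rows."""
    rows = []
    for sid, time, event in records:
        for period in range(1, time + 1):
            # y = 1 only in the period where the failure is observed
            y = 1 if (event == 1 and period == time) else 0
            rows.append((sid, period, y))
    return rows

# subject "a" fails at period 3; subject "b" is censored at period 2
rows = person_period([("a", 3, 1), ("b", 2, 0)])
```

Censored subjects simply contribute all-zero outcome rows up to their last observed period, which is how the censoring mechanism enters the likelihood.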
Anomalous pH-Dependent Nanofluidic Salinity Gradient Power.
Yeh, Li-Hsien; Chen, Fu; Chiou, Yu-Ting; Su, Yen-Shao
2017-12-01
Previous studies on nanofluidic salinity gradient power (NSGP), where energy associated with the salinity gradient can be harvested with ion-selective nanopores, all suggest that nanofluidic devices having higher surface charge density should have higher performance, including osmotic power and conversion efficiency. In this manuscript, this viewpoint is challenged and anomalous, counterintuitive pH-dependent NSGP behaviors are reported. For example, with equal pH deviation from its isoelectric point (IEP), the nanopore at pH < IEP is shown to have smaller surface charge density but remarkably higher NSGP performance than that at pH > IEP. Moreover, for sufficiently low pH, the NSGP performance decreases with lowering pH (increasing nanopore charge density). As a result, a maximum osmotic power density as high as 5.85 kW m -2 can be generated along with a conversion efficiency of 26.3% achieved for a single alumina nanopore at pH 3.5 under a 1000-fold concentration ratio. Using a rigorous model that considers the surface equilibrium reactions on the pore wall, it is shown that these counterintuitive surface-charge-dependent NSGP behaviors result from the pH-dependent ion concentration polarization effect, which degrades the effective concentration ratio across the nanopore. These findings provide significant insight for the design of next-generation, high-performance NSGP devices. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Volkova, Svitlana; Ayton, Ellyn; Porterfield, Katherine; Corley, Courtney D.
2017-01-01
This work is the first to take advantage of recurrent neural networks to predict influenza-like illness (ILI) dynamics from various linguistic signals extracted from social media data. Unlike other approaches that rely on timeseries analysis of historical ILI data and the state-of-the-art machine learning models, we build and evaluate the predictive power of neural network architectures based on Long Short Term Memory (LSTMs) units capable of nowcasting (predicting in “real-time”) and forecasting (predicting the future) ILI dynamics in the 2011 – 2014 influenza seasons. To build our models we integrate information people post in social media e.g., topics, embeddings, word ngrams, stylistic patterns, and communication behavior using hashtags and mentions. We then quantitatively evaluate the predictive power of different social media signals and contrast the performance of the state-of-the-art regression models with neural networks using a diverse set of evaluation metrics. Finally, we combine ILI and social media signals to build a joint neural network model for ILI dynamics prediction. Unlike the majority of the existing work, we specifically focus on developing models for local rather than national ILI surveillance, specifically for military rather than general populations in 26 U.S. and six international locations, and analyze how model performance depends on the amount of social media data available per location. Our approach demonstrates several advantages: (a) Neural network architectures that rely on LSTM units trained on social media data yield the best performance compared to previously used regression models. (b) Previously under-explored language and communication behavior features are more predictive of ILI dynamics than stylistic and topic signals expressed in social media.
(c) Neural network models learned exclusively from social media signals yield comparable or better performance to the models learned from ILI historical data, thus, signals from social media can be potentially used to accurately forecast ILI dynamics for the regions where ILI historical data is not available. (d) Neural network models learned from combined ILI and social media signals significantly outperform models that rely solely on ILI historical data, which adds to a great potential of alternative public sources for ILI dynamics prediction. (e) Location-specific models outperform previously used location-independent models e.g., U.S. only. (f) Prediction results significantly vary across geolocations depending on the amount of social media data available and ILI activity patterns. (g) Model performance improves with more tweets available per geo-location e.g., the error gets lower and the Pearson score gets higher for locations with more tweets. PMID:29244814
Dependency-based long short term memory network for drug-drug interaction extraction.
Wang, Wei; Yang, Xi; Yang, Canqun; Guo, Xiaowei; Zhang, Xiang; Wu, Chengkun
2017-12-28
Drug-drug interaction (DDI) extraction needs assistance from automated methods to cope with the explosive growth of biomedical texts. In recent years, deep neural network based models have been developed to address this need and have made significant progress in relation identification. We propose a dependency-based deep neural network model for DDI extraction. By introducing dependency-based techniques into a bi-directional long short term memory network (Bi-LSTM), we build three channels, namely, a Linear channel, a DFS channel and a BFS channel. Each channel is constructed with three network layers, from bottom up: an embedding layer, an LSTM layer and a max pooling layer. In the embedding layer, we extract two types of features: distance-based features and dependency-based features. In the LSTM layer, a Bi-LSTM is applied in each channel to better capture relation information. Max pooling is then used to select optimal features from the entire encoded sequence. Finally, we concatenate the outputs of all channels and feed the result to a softmax layer for relation identification. To the best of our knowledge, our model achieves new state-of-the-art performance with an F-score of 72.0% on the DDIExtraction 2013 corpus. Moreover, our approach obtains a much higher recall than existing methods. The dependency-based Bi-LSTM model can learn effective relation information with less feature engineering in the task of DDI extraction. Furthermore, the experimental results show that our model excels at balancing precision and recall.
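The BFS channel described above walks the sentence's dependency graph between the two candidate drug mentions. A minimal sketch of that traversal follows; the toy sentence and edges are invented, and the real model feeds the resulting token sequence into an embedding and Bi-LSTM stack rather than returning it directly:

```python
from collections import deque

def bfs_path(edges, start, goal):
    """Shortest path between two tokens in an (undirected view of a)
    dependency graph, found by breadth-first search."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, set()).add(v)
        graph.setdefault(v, set()).add(u)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# toy dependency edges for "aspirin increases warfarin effect" (head, dependent)
edges = [("increases", "aspirin"), ("increases", "effect"), ("effect", "warfarin")]
path = bfs_path(edges, "aspirin", "warfarin")
```

The token sequence along this path carries the syntactic context linking the two drug mentions, which is why it makes a useful input channel alongside the raw linear word order.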