Using Design-Based Latent Growth Curve Modeling with Cluster-Level Predictor to Address Dependency
ERIC Educational Resources Information Center
Wu, Jiun-Yu; Kwok, Oi-Man; Willson, Victor L.
2014-01-01
The authors compared the effects of using the true Multilevel Latent Growth Curve Model (MLGCM) with single-level regular and design-based Latent Growth Curve Models (LGCM) with or without the higher-level predictor on various criterion variables for multilevel longitudinal data. They found that random effect estimates were biased when the…
Quality Quandaries: Predicting a Population of Curves
Fugate, Michael Lynn; Hamada, Michael Scott; Weaver, Brian Phillip
2017-12-19
We present a random effects spline regression model that provides an integrated approach for analyzing functional data, i.e., curves, when the shape of the curves is not parametrically specified. An analysis using this model is presented that makes inferences about a population of curves as well as features of the curves.
NASA Astrophysics Data System (ADS)
Chamidah, Nur; Rifada, Marisa
2016-03-01
There is a significant correlation between the weight and height of children; therefore, simultaneous model estimation is better than a partial, single-response approach. In this study we investigate sex differences in the growth curves of children from birth up to two years of age in Surabaya, Indonesia, based on a biresponse model. The data were collected in a longitudinal, representative sample of healthy children from the Surabaya population and consist of two response variables, weight (kg) and height (cm), with age (months) as the predictor variable. Based on the generalized cross-validation criterion, the biresponse model with a local linear estimator gives optimal bandwidths of 1.41 and 1.56 for the boys' and girls' growth curves, with determination coefficients (R2) of 99.99% and 99.98%, respectively. Both curves satisfy the goodness-of-fit criterion, i.e., a determination coefficient close to one. The growth-curve patterns of boys and girls also differ, with the boys' median growth curve lying above the girls'.
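As a rough illustration of the estimation machinery described above, the sketch below implements a local linear kernel estimator with the bandwidth chosen by generalized cross-validation (GCV). It is a minimal single-response sketch on toy data with an assumed Gaussian kernel; the paper's biresponse model fits weight and height jointly, which this does not attempt.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear estimate of E[y | x = x0] with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)           # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])   # local design matrix
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]                                    # intercept = fit at x0

def gcv(x, y, h):
    """Generalized cross-validation score for bandwidth h."""
    n = len(x)
    fitted, trace_L = np.empty(n), 0.0
    for i in range(n):
        w = np.exp(-0.5 * ((x - x[i]) / h) ** 2)
        X = np.column_stack([np.ones_like(x), x - x[i]])
        row = np.linalg.solve(X.T @ (w[:, None] * X), X.T * w)[0]  # smoother row
        fitted[i] = row @ y
        trace_L += row[i]
    return np.mean((y - fitted) ** 2) / (1.0 - trace_L / n) ** 2

rng = np.random.default_rng(0)
age = np.linspace(0.0, 24.0, 49)                                  # months
weight = 3.3 + 7.5 * (1 - np.exp(-age / 7.0)) + 0.2 * rng.standard_normal(49)
best_h = min(np.linspace(0.5, 3.0, 26), key=lambda h: gcv(age, weight, h))
w12 = local_linear(12.0, age, weight, best_h)  # fitted weight at 12 months
```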
NASA Astrophysics Data System (ADS)
Nanda, Tarun; Kumar, B. Ravi; Singh, Vishal
2017-11-01
Micromechanical modeling is used to predict a material's tensile flow-curve behavior from its microstructural characteristics. This research develops a simplified micromechanical modeling approach for predicting the flow-curve behavior of dual-phase steels. The existing literature reports two broad approaches for determining the tensile flow curve of these steels; the modeling approach developed in this work attempts to overcome specific limitations of both. The approach combines a dislocation-based strain-hardening method with the rule of mixtures. In the first step of modeling, the dislocation-based strain-hardening method was employed to predict the tensile behavior of the individual ferrite and martensite phases. In the second step, the individual flow curves were combined using the rule of mixtures to obtain the composite dual-phase flow behavior. To check the accuracy of the proposed model, four distinct dual-phase microstructures comprising different ferrite grain sizes, martensite fractions, and carbon contents in martensite were processed by annealing experiments. The true stress-strain curves for the various microstructures were predicted with the newly developed micromechanical model, and the results matched closely with those of actual tensile tests. This micromechanical modeling approach can thus be used to predict and optimize the tensile flow behavior of dual-phase steels.
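A minimal sketch of the second modeling step, the rule of mixtures, is given below. The phase-level curves are illustrative Hollomon-type power laws with assumed constants, standing in for the dislocation-based strain-hardening laws used in the paper.

```python
import numpy as np

def hollomon(strain, K, n):
    """Illustrative phase flow curve sigma = K * eps^n (a placeholder for the
    dislocation-based strain-hardening law used in the paper)."""
    return K * strain ** n

strain = np.linspace(1e-3, 0.15, 200)
sigma_ferrite = hollomon(strain, K=800.0, n=0.25)       # MPa, assumed constants
sigma_martensite = hollomon(strain, K=2500.0, n=0.08)   # MPa, assumed constants

f_m = 0.3  # martensite volume fraction (assumed)
# rule of mixtures: composite stress is the phase-fraction-weighted sum
sigma_dp = f_m * sigma_martensite + (1.0 - f_m) * sigma_ferrite
```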
Precipitation frequency analysis based on regional climate simulations in Central Alberta
NASA Astrophysics Data System (ADS)
Kuo, Chun-Chao; Gan, Thian Yew; Hanrahan, Janel L.
2014-03-01
A Regional Climate Model (RCM), MM5 (the Fifth Generation Pennsylvania State University/National Center for Atmospheric Research mesoscale model), is used to simulate summer precipitation in Central Alberta. MM5 was set up with a one-way, three-domain nested framework, with domain resolutions of 27, 9, and 3 km, respectively, and forced with ERA-Interim reanalysis data of ECMWF (European Centre for Medium-Range Weather Forecasts). The objective is to develop high-resolution, grid-based Intensity-Duration-Frequency (IDF) curves based on the simulated annual maximums of precipitation (AMP) data for durations ranging from 15 min to 24 h. The performance of MM5 was assessed in terms of simulated rainfall intensity, precipitable water, and 2-m air temperature. Next, the grid-based IDF curves derived from MM5 were compared to IDF curves derived from six RCMs of the North American Regional Climate Change Assessment Program (NARCCAP) set up with 50-km grids and driven with NCEP-DOE (National Centers for Environmental Prediction-Department of Energy) Reanalysis II data, and to regional IDF curves derived from observed rain gauge data (RG-IDF). The results indicate that the 6-h simulated precipitable water and 2-m temperature agree well with the ERA-Interim reanalysis data. However, compared to RG-IDF curves, IDF curves based on the simulated precipitation data of MM5 are overestimated, especially for the 2-year return period. In contrast, IDF curves developed from NARCCAP data suffer from underestimation and differ more from RG-IDF curves than the MM5 IDF curves. The overestimation of the MM5 IDF curves was corrected by a quantile-based bias correction method. By dynamically downscaling the ERA-Interim reanalysis and applying bias correction, it is possible to develop IDF curves useful for regions with limited or no rain gauge data. This estimation process can be further extended to predict future grid-based IDF curves subject to possible climate change impacts based on climate change projections of GCMs (general circulation models) of the IPCC (Intergovernmental Panel on Climate Change).
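The quantile-based bias correction mentioned above can be sketched as empirical quantile mapping: each simulated value is mapped to the observed value with the same non-exceedance probability. The function below is a minimal sketch with toy data; the paper's exact correction procedure may differ.

```python
import numpy as np

def quantile_map(model_hist, obs, model_new):
    """Empirical quantile mapping: replace each new model value by the observed
    quantile having the same non-exceedance probability in the model's CDF."""
    probs = np.linspace(0.01, 0.99, 99)
    model_q = np.quantile(model_hist, probs)
    obs_q = np.quantile(obs, probs)
    # interpolate each new value through the model CDF onto the observed quantiles
    return np.interp(model_new, model_q, obs_q)

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 10.0, 1000)   # observed AMP, toy data
mm5 = rng.gamma(2.0, 14.0, 1000)   # biased simulated AMP, toy data
corrected = quantile_map(mm5, obs, mm5)
```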
Equivalence of binormal likelihood-ratio and bi-chi-squared ROC curve models
Hillis, Stephen L.
2015-01-01
A basic assumption for a meaningful diagnostic decision variable is that there is a monotone relationship between it and its likelihood ratio. This relationship, however, generally does not hold for a decision variable that results in a binormal ROC curve. As a result, receiver operating characteristic (ROC) curve estimation based on the assumption of a binormal ROC-curve model produces improper ROC curves that have “hooks,” are not concave over the entire domain, and cross the chance line. Although in practice this “improperness” is usually not noticeable, sometimes it is evident and problematic. To avoid this problem, Metz and Pan proposed basing ROC-curve estimation on the assumption of a binormal likelihood-ratio (binormal-LR) model, which states that the decision variable is an increasing transformation of the likelihood-ratio function of a random variable having normal conditional diseased and nondiseased distributions. However, their development is not easy to follow. I show that the binormal-LR model is equivalent to a bi-chi-squared model in the sense that the families of corresponding ROC curves are the same. The bi-chi-squared formulation provides an easier-to-follow development of the binormal-LR ROC curve and its properties in terms of well-known distributions. PMID:26608405
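For reference, the binormal model gives the ROC curve in closed form. With Phi the standard normal CDF and (a, b) the binormal intercept and slope,

```latex
\mathrm{ROC}(t) = \Phi\!\left(a + b\,\Phi^{-1}(t)\right), \qquad 0 \le t \le 1 .
```

Unless b = 1, the implied likelihood ratio is not monotone in the decision variable, which is the source of the hooks and chance-line crossings described above.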
NASA Astrophysics Data System (ADS)
Ritschel, Christoph; Ulbrich, Uwe; Névir, Peter; Rust, Henning W.
2017-12-01
For several hydrological modelling tasks, precipitation time series with a high (i.e. sub-daily) resolution are indispensable. The data are, however, not always available, and thus model simulations are used to compensate. A canonical class of stochastic models for sub-daily precipitation are Poisson cluster processes, with the original Bartlett-Lewis (OBL) model as a prominent representative. The OBL model has been shown to well reproduce certain characteristics found in observations. Our focus is on intensity-duration-frequency (IDF) relationships, which are of particular interest in risk assessment. Based on a high-resolution precipitation time series (5 min) from Berlin-Dahlem, OBL model parameters are estimated and IDF curves are obtained on the one hand directly from the observations and on the other hand from OBL model simulations. Comparing the resulting IDF curves suggests that the OBL model is able to reproduce the main features of IDF statistics across several durations but cannot capture rare events (here an event with a return period larger than 1000 years on the hourly timescale). In this paper, IDF curves are estimated based on a parametric model for the duration dependence of the scale parameter in the generalized extreme value distribution; this allows us to obtain a consistent set of curves over all durations. We use the OBL model to investigate the validity of this approach based on simulated long time series.
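A minimal sketch of the construction described above, assuming a Koutsoyiannis-type power-law dependence of the GEV scale on duration d (the paper's exact parameterization may differ):

```latex
I_d \sim \mathrm{GEV}\!\left(\mu(d), \sigma(d), \xi\right), \qquad \sigma(d) = \sigma_0\, d^{-\eta},
```

so that the intensity with return period T at duration d follows from the GEV quantile function,

```latex
i(d, T) = \mu(d) + \frac{\sigma(d)}{\xi}\left[\left(-\ln\!\left(1 - 1/T\right)\right)^{-\xi} - 1\right],
```

yielding a single consistent family of IDF curves across all durations.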
Modelling rating curves using remotely sensed LiDAR data
Nathanson, Marcus; Kean, Jason W.; Grabs, Thomas J.; Seibert, Jan; Laudon, Hjalmar; Lyon, Steve W.
2012-01-01
Accurate stream discharge measurements are important for many hydrological studies. In remote locations, however, it is often difficult to obtain streamflow information because of the difficulty in making the discharge measurements necessary to define stage-discharge relationships (rating curves). This study investigates the feasibility of defining rating curves by using a fluid mechanics-based model constrained with topographic data from an airborne LiDAR scan. The study was carried out for an 8-m-wide channel in the boreal landscape of northern Sweden. LiDAR data were used to define the channel geometry above a low-flow water surface along the 90-m surveyed reach, and the channel topography below the water surface was estimated using the simple assumption of a flat streambed. The roughness for the modelled reach was back-calculated from a single measurement of discharge. The topographic and roughness information was then used to model a rating curve. To isolate the potential influence of the flat-bed assumption, a 'hybrid model' rating curve was developed on the basis of data combined from the LiDAR scan and a detailed ground survey. The hybrid-model rating curve agreed with the direct measurements of discharge, and the LiDAR-model rating curve agreed equally well with the medium- and high-flow measurements, based on confidence intervals calculated from the direct measurements. The discrepancy between the LiDAR-model rating curve and the low-flow measurements was likely due to reduced roughness associated with unresolved submerged bed topography; scanning during periods of low flow can help minimize this deficiency. These results suggest that combined ground surveys and LiDAR scans, or multifrequency LiDAR scans that see 'below' the water surface (bathymetric LiDAR), could be useful in generating the data needed to run such a fluid mechanics-based model. This opens a realm of possibility to remotely sense and monitor streamflows in channels in remote locations.
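A minimal sketch of a fluid mechanics-based rating curve of this kind: discharge is computed from surveyed cross-section geometry with Manning's equation. The single-section Manning formula, the toy geometry, and the roughness value are assumptions; the study's model resolves flow resistance along the reach in more detail.

```python
import numpy as np

def manning_rating_curve(y, z, n_manning, slope, stages):
    """Discharge Q(h) from a surveyed cross-section via Manning's equation,
    Q = (1/n) * A * R^(2/3) * sqrt(S), in SI units."""
    qs = []
    for h in stages:
        depth = np.clip(h - z, 0.0, None)
        area = np.sum(0.5 * (depth[:-1] + depth[1:]) * np.diff(y))  # flow area
        zs = np.minimum(z, h)                      # bed clipped at water surface
        wet = depth > 0
        seg = wet[:-1] | wet[1:]                   # submerged segments only
        wetted = np.sum(np.hypot(np.diff(y)[seg], np.diff(zs)[seg]))
        R = area / wetted if wetted > 0 else 0.0   # hydraulic radius
        qs.append(area * R ** (2.0 / 3.0) * np.sqrt(slope) / n_manning)
    return np.array(qs)

# toy 8-m-wide cross-section with a flat bed (the paper's low-flow assumption)
y = np.linspace(0.0, 9.0, 91)
z = np.where((y > 0.5) & (y < 8.5), 0.0, 1.5)      # flat bed between steep banks
Q = manning_rating_curve(y, z, n_manning=0.05, slope=0.003,
                         stages=np.linspace(0.1, 1.4, 14))
```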
Unthank, Michael D.; Newson, Jeremy K.; Williamson, Tanja N.; Nelson, Hugh L.
2012-01-01
Flow- and load-duration curves were constructed from the model outputs of the U.S. Geological Survey's Water Availability Tool for Environmental Resources (WATER) application for streams in Kentucky. The WATER application was designed to access multiple geospatial datasets to generate more than 60 years of statistically based streamflow data for Kentucky. It enables a user to graphically select a site on a stream and generate an estimated hydrograph and flow-duration curve for the watershed upstream of that point. The flow-duration curves are constructed by calculating the exceedance probability of the modeled daily streamflows. User-defined water-quality criteria and (or) sampling results can be loaded into the WATER application to construct load-duration curves that are based on the modeled streamflow results. Estimates of flow and streamflow statistics were derived from TOPographically Based Hydrological MODEL (TOPMODEL) simulations in the WATER application. A modified TOPMODEL code, SDP-TOPMODEL (Sinkhole Drainage Process-TOPMODEL), was used to simulate daily mean discharges over the period of record for 5 karst and 5 non-karst watersheds in Kentucky in order to verify the calibrated model. A statistical evaluation of the model's verification simulations showed that calibration criteria established by previous WATER application reports were met, thus ensuring the model's ability to provide acceptably accurate estimates of discharge at gaged and ungaged sites throughout Kentucky. The flow-duration intervals are expressed as percentages, with zero corresponding to the highest stream discharge in the streamflow record. Load-duration curves are constructed by applying the loading equation (Load = Flow * Water-quality criterion) at each flow interval.
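A minimal sketch of the two curve constructions quoted above, using Weibull plotting positions for the exceedance probability (the WATER application's exact plotting convention is not specified in the abstract):

```python
import numpy as np

def flow_duration_curve(daily_q):
    """Return (exceedance %, sorted flows); 0% corresponds to the highest discharge."""
    q = np.sort(daily_q)[::-1]                      # descending
    n = len(q)
    exceed = 100.0 * np.arange(1, n + 1) / (n + 1)  # Weibull plotting positions
    return exceed, q

def load_duration_curve(daily_q, criterion):
    """Load = Flow * criterion at each flow-duration interval (units omitted)."""
    exceed, q = flow_duration_curve(daily_q)
    return exceed, q * criterion

rng = np.random.default_rng(0)
q = rng.lognormal(mean=2.0, sigma=1.0, size=60 * 365)  # ~60 years of toy daily flows
pct, fdc = flow_duration_curve(q)
pct, ldc = load_duration_curve(q, criterion=0.5)
```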
A novel model of magnetorheological damper with hysteresis division
NASA Astrophysics Data System (ADS)
Yu, Jianqiang; Dong, Xiaomin; Zhang, Zonglun
2017-10-01
Due to the complex nonlinearity of magnetorheological (MR) behavior, modeling MR dampers is a challenge, and a simple and effective MR damper model remains a work in progress. A novel model of an MR damper is proposed in this study using a force-velocity hysteresis division method. With this division idea, a typical hysteresis loop of an MR damper can be divided simply into two novel curves: the backbone curve and the branch curve. Exponential-family functions that capture the characteristics of the two curves simplify the model and improve identification efficiency. To illustrate and validate the novel phenomenological model, a dual-end MR damper is designed and tested, and the characteristics of the two curves are investigated based on the experimental data. To simplify parameter identification and obtain reversibility, a maximum force part, a non-dimensional backbone part, and a non-dimensional branch part are derived from the two curves; the maximum force part and the non-dimensional parts are combined multiplicatively. The maximum force part depends on the current and the maximum velocity. The non-dominated sorting genetic algorithm II (NSGA-II), based on a design of experiments (DOE), is employed to identify the parameters of the normalized shape functions. Comparative analysis based on the identification results shows that the novel model, with few identification parameters, has higher accuracy and better predictive ability.
Review Article: A comparison of flood and earthquake vulnerability assessment indicators
NASA Astrophysics Data System (ADS)
de Ruiter, Marleen C.; Ward, Philip J.; Daniell, James E.; Aerts, Jeroen C. J. H.
2017-07-01
In a cross-disciplinary study, we carried out an extensive literature review to increase understanding of the vulnerability indicators used in earthquake and flood vulnerability assessments. We provide insights into potential improvements in both fields by identifying and comparing quantitative vulnerability indicators grouped into physical and social categories. Next, a selection of index- and curve-based vulnerability models that use these indicators is described, comparing characteristics such as temporal and spatial aspects. Earthquake vulnerability methods traditionally have a strong focus on object-based physical attributes used in vulnerability-curve-based models, while flood vulnerability studies focus more on indicators applied to aggregated land-use classes in curve-based models. In assessing the differences and similarities between indicators used in earthquake and flood vulnerability models, we only include models that separately assess one of the two hazard types. Flood vulnerability studies could be improved using approaches from earthquake studies, such as developing object-based physical vulnerability-curve assessments and incorporating time-of-day-based building occupation patterns. Likewise, earthquake assessments could learn from flood studies by refining their selection of social vulnerability indicators. Based on the lessons obtained in this study, we recommend that future studies explore risk assessment methodologies across different hazard types.
The estimation of branching curves in the presence of subject-specific random effects.
Elmi, Angelo; Ratcliffe, Sarah J; Guo, Wensheng
2014-12-20
Branching curves are a technique for modeling curves that change trajectory at a change (branching) point. Currently, the estimation framework is limited to independent data, and smoothing splines are used for estimation. This article aims to extend the branching curve framework to the longitudinal data setting where the branching point varies by subject. If the branching point is modeled as a random effect, then the longitudinal branching curve framework is a semiparametric nonlinear mixed effects model. Given existing issues with using random effects within a smoothing spline, we express the model as a B-spline based semiparametric nonlinear mixed effects model. Simple, clever smoothness constraints are enforced on the B-splines at the change point. The method is applied to Women's Health data where we model the shape of the labor curve (cervical dilation measured longitudinally) before and after treatment with oxytocin (a labor stimulant). Copyright © 2014 John Wiley & Sons, Ltd.
The 1974 AVCR Young Scholar Paper: An Open-System Model of Learning
ERIC Educational Resources Information Center
Winn, William
1975-01-01
Rejecting the cybernetic model of the learner, the author offers an open-system model based on von Bertalanffy's equation for growth of the living organism. The model produces four learning curves, not just the logarithmic curve produced by the successive approximations of the cybernetic model. (Editor)
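For reference, von Bertalanffy's growth equation, on which the model is based, balances anabolic and catabolic terms (the exponents shown are the classical choice for organisms; the paper reinterprets the variables for learning):

```latex
\frac{dW}{dt} = \eta\, W^{2/3} - \kappa\, W ,
```

where W is the size of the organism and eta and kappa are anabolism and catabolism constants.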
A Bayesian beta distribution model for estimating rainfall IDF curves in a changing climate
NASA Astrophysics Data System (ADS)
Lima, Carlos H. R.; Kwon, Hyun-Han; Kim, Jin-Young
2016-09-01
The estimation of intensity-duration-frequency (IDF) curves for rainfall data is a classical task in hydrology, supporting a variety of water resources projects, including urban drainage and the design of flood control structures. In a changing climate, however, traditional approaches based on historical rainfall records and the stationarity assumption can be inadequate and lead to poor estimates of rainfall intensity quantiles. Climate change scenarios built on General Circulation Models offer a way to assess and estimate future changes in spatial and temporal rainfall patterns, but at best at the daily scale, which is coarser than the sub-daily resolution (e.g. hours) required to estimate IDF curves directly. In this paper we propose a novel methodology based on a four-parameter beta distribution to estimate IDF curves conditioned on the observed (or simulated) daily rainfall, which becomes the time-varying upper bound of the updated nonstationary beta distribution. The inference is conducted in a Bayesian framework that better accounts for the uncertainty in the model parameters when building the IDF curves. The proposed model is tested using rainfall data from four stations located in South Korea and projected climate change scenarios for Representative Concentration Pathways (RCPs) 6 and 8.5 from the Met Office Hadley Centre HadGEM3-RA model. The results show that the developed model fits the historical data as well as the traditional Generalized Extreme Value (GEV) distribution but is able to produce future IDF curves that differ significantly from the historically based IDF curves. For the stations and RCP scenarios analysed in this work, the proposed model predicts an increase in the intensity of extreme rainfalls of short duration with long return periods.
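For reference, the density of the four-parameter beta distribution on a bounded support [l, u]; in the proposed model the upper bound u is time-varying, set by the observed or simulated daily rainfall:

```latex
f(x \mid p, q, l, u) = \frac{(x - l)^{\,p-1}\,(u - x)^{\,q-1}}{B(p, q)\,(u - l)^{\,p+q-1}}, \qquad l \le x \le u .
```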
A semiparametric separation curve approach for comparing correlated ROC data from multiple markers
Tang, Liansheng Larry; Zhou, Xiao-Hua
2012-01-01
In this article we propose a separation curve method to identify the range of false positive rates for which two ROC curves differ or one ROC curve is superior to the other. Our method is based on a general multivariate ROC curve model, including interaction terms between discrete covariates and false positive rates. It is applicable with most existing ROC curve models. Furthermore, we introduce a semiparametric least squares ROC estimator and apply the estimator to the separation curve method. We derive a sandwich estimator for the covariance matrix of the semiparametric estimator. We illustrate the application of our separation curve method through two real life examples. PMID:23074360
Can hydraulic-modelled rating curves reduce uncertainty in high flow data?
NASA Astrophysics Data System (ADS)
Westerberg, Ida; Lam, Norris; Lyon, Steve W.
2017-04-01
Flood risk assessments rely on accurate discharge data records. Establishing a reliable rating curve for calculating discharge from stage at a gauging station normally takes years of data collection efforts. Estimation of high flows is particularly difficult, as high flows occur rarely and are often practically difficult to gauge. Hydraulically-modelled rating curves can be derived based on as few as two concurrent stage-discharge and water-surface slope measurements at different flow conditions, which means that a reliable rating curve can, potentially, be derived much faster than a traditional rating curve based on numerous stage-discharge gaugings. In this study we compared the uncertainty in discharge data that resulted from these two rating curve modelling approaches. We applied both methods to a Swedish catchment, accounting for uncertainties in the stage-discharge gauging and water-surface slope data for the hydraulic model, and in the stage-discharge gauging data and rating-curve parameters for the traditional method. We focused our analyses on high-flow uncertainty and the factors that could reduce it. In particular, we investigated which data uncertainties were most important, and at what flow conditions the gaugings should preferably be taken. First results show that the hydraulically-modelled rating curves were more sensitive to uncertainties in the calibration measurements of discharge than of water-surface slope. The uncertainty of the hydraulically-modelled rating curves was lowest within the range of the three calibration stage-discharge gaugings (i.e. between median and two times median flow), whereas uncertainties were higher outside this range. For instance, at the highest observed stage of the 24-year stage record, the 90% uncertainty band was -15% to +40% of the official rating curve. Additional gaugings at high flows (i.e. four to five times median flow) would likely reduce those uncertainties substantially. These first results show the potential of hydraulically-modelled rating curves, particularly where the calibration gaugings are of high quality and cover a wide range of flow conditions.
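For contrast with the hydraulic approach, the traditional method can be sketched as fitting a power-law rating curve Q = a(h - h0)^b to stage-discharge gaugings; the toy gaugings and starting values below are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def rating(h, a, b, h0):
    """Power-law rating curve Q = a * (h - h0)^b."""
    return a * np.clip(h - h0, 1e-6, None) ** b

# toy gaugings: stage (m) and discharge (m^3/s)
h_obs = np.array([0.42, 0.55, 0.71, 0.90, 1.15, 1.40])
q_obs = np.array([0.8, 1.6, 3.1, 5.9, 10.8, 17.5])

popt, pcov = curve_fit(rating, h_obs, q_obs, p0=[10.0, 2.0, 0.2], maxfev=10000)
perr = np.sqrt(np.diag(pcov))  # rough parameter uncertainty
q_high = rating(1.8, *popt)    # extrapolated high-flow estimate (most uncertain)
```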
Makowska, Joanna; Bagiñska, Katarzyna; Makowski, Mariusz; Jagielska, Anna; Liwo, Adam; Kasprzykowski, Franciszek; Chmurzyñski, Lech; Scheraga, Harold A
2006-03-09
We compared the ability of two theoretical methods of pH-dependent conformational calculations to reproduce experimental potentiometric titration curves of two models of peptides: Ac-K5-NHMe in 95% methanol (MeOH)/5% water mixture and Ac-XX(A)7OO-NH2 (XAO) (where X is diaminobutyric acid, A is alanine, and O is ornithine) in water, methanol (MeOH), and dimethyl sulfoxide (DMSO), respectively. The titration curve of the former was taken from the literature, and the curve of the latter was determined in this work. The first theoretical method involves a conformational search using the electrostatically driven Monte Carlo (EDMC) method with a low-cost energy function (ECEPP/3 plus the SRFOPT surface-solvation model, assuming that all titratable groups are uncharged) and subsequent reevaluation of the free energy at a given pH with the Poisson-Boltzmann equation, considering variable protonation states. In the second procedure, molecular dynamics (MD) simulations are run with the AMBER force field and the generalized Born model of electrostatic solvation, and the protonation states are sampled during constant-pH MD runs. In all three solvents, the first pKa of XAO is strongly downshifted compared to the value for the reference compounds (ethylamine and propylamine, respectively); the water and methanol curves have one, and the DMSO curve has two jumps characteristic of remarkable differences in the dissociation constants of acidic groups. The predicted titration curves of Ac-K5-NHMe are in good agreement with the experimental ones; better agreement is achieved with the MD-based method. The titration curves of XAO in methanol and DMSO, calculated using the MD-based approach, trace the shape of the experimental curves, reproducing the pH jump, while those calculated with the EDMC-based approach and the titration curve in water calculated using the MD-based approach have smooth shapes characteristic of the titration of weak multifunctional acids with small differences between the dissociation constants. Nevertheless, quantitative agreement between theoretically predicted and experimental titration curves is not achieved in all three solvents even with the MD-based approach, which is manifested by a smaller pH range of the calculated titration curves with respect to the experimental curves. The poorer agreement obtained for water than for the nonaqueous solvents suggests a significant role of specific solvation in water, which cannot be accounted for by the mean-field solvation models.
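A minimal sketch of the smooth weak-acid limit described above: for independent titratable sites, the average number of bound protons is a sum of Henderson-Hasselbalch terms. The pKa values below are illustrative assumptions; site-site coupling, which produces the pH jumps, is deliberately ignored.

```python
import numpy as np

def bound_protons(pH, pKas):
    """Average protons bound to independent titratable sites:
    sum_i 1 / (1 + 10^(pH - pKa_i))  (Henderson-Hasselbalch per site)."""
    pH = np.asarray(pH, dtype=float)
    return sum(1.0 / (1.0 + 10.0 ** (pH - pk)) for pk in pKas)

pH = np.linspace(2.0, 12.0, 200)
# illustrative pKa set for four basic side chains (assumed values)
theta = bound_protons(pH, pKas=[7.5, 8.9, 9.8, 10.5])
```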
Li, Yi; Chen, Yuren
2016-12-30
To make driving assistance systems more humanized, this study focused on predicting and assisting drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in the driver's vision. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and mechanical status (in the last second). A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. The model results revealed that the driver-vision lane model and visual elements have an important influence on drivers' perception-response time. Compared with passive roadside safety infrastructure, proper visual geometry design, timely visual guidance, and the completeness of a curve's visual information are significant factors for drivers' perception-response time.
A mathematical function for the description of nutrient-response curve
Ahmadi, Hamed
2017-01-01
Several mathematical equations have been proposed for modeling nutrient-response curves in animals and humans, justified by goodness of fit and/or biological mechanism. In this paper, a functional form of a generalized quantitative model based on the Rayleigh distribution principle is derived for describing nutrient-response phenomena. The three parameters governing the curve a) have biological interpretations, b) may be used to calculate reliable estimates of nutrient-response relationships, and c) provide the basis for deriving relationships between nutrient and physiological responses. The new function was successfully applied to fit nutritional data obtained from 6 experiments covering a wide range of nutrients and responses. An evaluation and comparison based on simulated data sets were also done to check the suitability of the new model and the four-parameter logistic model for describing nutrient responses. This study indicates the usefulness and wide applicability of the newly introduced, simple, and flexible model as a quantitative approach to characterizing nutrient-response curves. This new mathematical description of nutrient-response data, with useful biological interpretations, has the potential to be used as an alternative approach for estimating nutrient efficiency and requirements. PMID:29161271
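One plausible three-parameter form consistent with the abstract's description, built from the Rayleigh cumulative distribution function; this is an illustrative assumption, and the paper's exact functional form may differ:

```latex
y(x) = y_0 + \left(y_{\max} - y_0\right)\left[1 - \exp\!\left(-\frac{x^2}{2 b^2}\right)\right],
```

where y_0 is the basal response, y_max the asymptotic response, and b the scale of the nutrient input x.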
Statistical aspects of modeling the labor curve.
Zhang, Jun; Troendle, James; Grantz, Katherine L; Reddy, Uma M
2015-06-01
In a recent review by Cohen and Friedman, several statistical questions on modeling labor curves were raised. This article illustrates that asking data to fit a preconceived model or letting a sufficiently flexible model fit observed data is the main difference in principles of statistical modeling between the original Friedman curve and our average labor curve. An evidence-based approach to construct a labor curve and establish normal values should allow the statistical model to fit observed data. In addition, the presence of the deceleration phase in the active phase of an average labor curve was questioned. Forcing a deceleration phase to be part of the labor curve may have artificially raised the speed of progression in the active phase with a particularly large impact on earlier labor between 4 and 6 cm. Finally, any labor curve is illustrative and may not be instructive in managing labor because of variations in individual labor pattern and large errors in measuring cervical dilation. With the tools commonly available, it may be more productive to establish a new partogram that takes the physiology of labor and contemporary obstetric population into account. Copyright © 2015 Elsevier Inc. All rights reserved.
Experimental study of a generic high-speed civil transport
NASA Technical Reports Server (NTRS)
Belton, Pamela S.; Campbell, Richard L.
1992-01-01
An experimental study of a generic high-speed civil transport was conducted in the NASA Langley 8-ft Transonic Pressure Tunnel. The data base was obtained for the purpose of assessing the accuracy of various levels of computational analysis. Two models differing only in wingtip geometry were tested with and without flow-through nacelles. The baseline model has a curved or crescent wingtip shape, while the second model has a more conventional straight wingtip shape. The study was conducted at Mach numbers from 0.30 to 1.19. Force data were obtained on both the straight wingtip model and the curved wingtip model. Only the curved wingtip model was instrumented for measuring pressures. Selected longitudinal, lateral, and directional data are presented for both models. Selected pressure distributions for the curved wingtip model are also presented.
Rodríguez-Álvarez, María Xosé; Roca-Pardiñas, Javier; Cadarso-Suárez, Carmen; Tahoces, Pablo G
2018-03-01
Prior to using a diagnostic test in a routine clinical setting, the rigorous evaluation of its diagnostic accuracy is essential. The receiver-operating characteristic curve is the measure of accuracy most widely used for continuous diagnostic tests. However, the possible impact of extra information about the patient (or even the environment) on diagnostic accuracy also needs to be assessed. In this paper, we focus on an estimator for the covariate-specific receiver-operating characteristic curve based on direct regression modelling and nonparametric smoothing techniques. This approach defines the class of generalised additive models for the receiver-operating characteristic curve. The main aim of the paper is to offer new inferential procedures for testing the effect of covariates on the conditional receiver-operating characteristic curve within the above-mentioned class. Specifically, two different bootstrap-based tests are suggested to check (a) the possible effect of continuous covariates on the receiver-operating characteristic curve and (b) the presence of factor-by-curve interaction terms. The validity of the proposed bootstrap-based procedures is supported by simulations. To facilitate the application of these new procedures in practice, an R-package, known as npROCRegression, is provided and briefly described. Finally, data derived from a computer-aided diagnostic system for the automatic detection of tumour masses in breast cancer is analysed.
Kaur, A; Takhar, P S; Smith, D M; Mann, J E; Brashears, M M
2008-10-01
A fractional differential equations (FDEs)-based theory involving 1- and 2-term equations was developed to predict the nonlinear survival and growth curves of foodborne pathogens. It is interesting to note that the solution of 1-term FDE leads to the Weibull model. Nonlinear regression (Gauss-Newton method) was performed to calculate the parameters of the 1-term and 2-term FDEs. The experimental inactivation data of Salmonella cocktail in ground turkey breast, ground turkey thigh, and pork shoulder; and cocktail of Salmonella, E. coli, and Listeria monocytogenes in ground beef exposed at isothermal cooking conditions of 50 to 66 degrees C were used for validation. To evaluate the performance of 2-term FDE in predicting the growth curves-growth of Salmonella typhimurium, Salmonella Enteritidis, and background flora in ground pork and boneless pork chops; and E. coli O157:H7 in ground beef in the temperature range of 22.2 to 4.4 degrees C were chosen. A program was written in Matlab to predict the model parameters and survival and growth curves. Two-term FDE was more successful in describing the complex shapes of microbial survival and growth curves as compared to the linear and Weibull models. Predicted curves of 2-term FDE had higher magnitudes of R(2) (0.89 to 0.99) and lower magnitudes of root mean square error (0.0182 to 0.5461) for all experimental cases in comparison to the linear and Weibull models. This model was capable of predicting the tails in survival curves, which was not possible using Weibull and linear models. The developed model can be used for other foodborne pathogens in a variety of food products to study the destruction and growth behavior.
Concentric Tube Robot Design and Optimization Based on Task and Anatomical Constraints
Bergeles, Christos; Gosline, Andrew H.; Vasilyev, Nikolay V.; Codd, Patrick J.; del Nido, Pedro J.; Dupont, Pierre E.
2015-01-01
Concentric tube robots are catheter-sized continuum robots that are well suited for minimally invasive surgery inside confined body cavities. These robots are constructed from sets of pre-curved superelastic tubes and are capable of assuming complex 3D curves. The family of 3D curves that the robot can assume depends on the number, curvatures, lengths and stiffnesses of the tubes in its tube set. The robot design problem involves solving for a tube set that will produce the family of curves necessary to perform a surgical procedure. At a minimum, these curves must enable the robot to smoothly extend into the body and to manipulate tools over the desired surgical workspace while respecting anatomical constraints. This paper introduces an optimization framework that utilizes procedure- or patient-specific image-based anatomical models along with surgical workspace requirements to generate robot tube set designs. The algorithm searches for designs that minimize robot length and curvature and for which all paths required for the procedure consist of stable robot configurations. Two mechanics-based kinematic models are used. Initial designs are sought using a model assuming torsional rigidity. These designs are then refined using a torsionally-compliant model. The approach is illustrated with clinically relevant examples from neurosurgery and intracardiac surgery. PMID:26380575
Barbieri, Christopher E; Cha, Eugene K; Chromecki, Thomas F; Dunning, Allison; Lotan, Yair; Svatek, Robert S; Scherr, Douglas S; Karakiewicz, Pierre I; Sun, Maxine; Mazumdar, Madhu; Shariat, Shahrokh F
2012-03-01
• To employ decision curve analysis to determine the impact of nuclear matrix protein 22 (NMP22) on clinical decision making in the detection of bladder cancer using data from a prospective trial. • The study included 1303 patients at risk for bladder cancer who underwent cystoscopy, urine cytology and measurement of urinary NMP22 levels. • We constructed several prediction models to estimate risk of bladder cancer. The base model was generated using patient characteristics (age, gender, race, smoking and haematuria); cytology and NMP22 were added to the base model to determine effects on predictive accuracy. • Clinical net benefit was calculated by summing the benefits and subtracting the harms and weighting these by the threshold probability at which a patient or clinician would opt for cystoscopy. • In all, 72 patients were found to have bladder cancer (5.5%). In univariate analyses, NMP22 was the strongest predictor of bladder cancer presence (predictive accuracy 71.3%), followed by age (67.5%) and cytology (64.3%). • In multivariable prediction models, NMP22 improved the predictive accuracy of the base model by 8.2% (area under the curve 70.2-78.4%) and of the base model plus cytology by 4.2% (area under the curve 75.9-80.1%). • Decision curve analysis revealed that adding NMP22 to other models increased clinical benefit, particularly at higher threshold probabilities. • NMP22 is a strong, independent predictor of bladder cancer. • Addition of NMP22 improves the accuracy of standard predictors by a statistically and clinically significant margin. • Decision curve analysis suggests that integration of NMP22 into clinical decision making helps avoid unnecessary cystoscopies, with minimal increased risk of missing a cancer. © 2011 THE AUTHORS. BJU INTERNATIONAL © 2011 BJU INTERNATIONAL.
Brown, Marshall D.; Zhu, Kehao; Janes, Holly
2016-01-01
The decision curve is a graphical summary recently proposed for assessing the potential clinical impact of risk prediction biomarkers or risk models for recommending treatment or intervention. It was applied recently in an article in Journal of Clinical Oncology to measure the impact of using a genomic risk model for deciding on adjuvant radiation therapy for prostate cancer treated with radical prostatectomy. We illustrate the use of decision curves for evaluating clinical- and biomarker-based models for predicting a man’s risk of prostate cancer, which could be used to guide the decision to biopsy. Decision curves are grounded in a decision-theoretical framework that accounts for both the benefits of intervention and the costs of intervention to a patient who cannot benefit. Decision curves are thus an improvement over purely mathematical measures of performance such as the area under the receiver operating characteristic curve. However, there are challenges in using and interpreting decision curves appropriately. We caution that decision curves cannot be used to identify the optimal risk threshold for recommending intervention. We discuss the use of decision curves for miscalibrated risk models. Finally, we emphasize that a decision curve shows the performance of a risk model in a population in which every patient has the same expected benefit and cost of intervention. If every patient has a personal benefit and cost, then the curves are not useful. If subpopulations have different benefits and costs, subpopulation-specific decision curves should be used. As a companion to this article, we released an R software package called DecisionCurve for making decision curves and related graphics. PMID:27247223
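A minimal sketch of the net-benefit calculation underlying a decision curve, using the standard formula, net benefit = TP/n - (FP/n) * pt/(1 - pt), at each threshold probability pt; the risk scores and data below are toy assumptions, and the released DecisionCurve R package would be the natural tool in practice.

```python
import numpy as np

def net_benefit(risk, outcome, thresholds):
    """Net benefit = TP/n - (FP/n) * pt/(1 - pt) at each threshold probability pt."""
    n = len(outcome)
    nb = []
    for pt in thresholds:
        treat = risk >= pt
        tp = np.sum(treat & (outcome == 1))
        fp = np.sum(treat & (outcome == 0))
        nb.append(tp / n - fp / n * pt / (1.0 - pt))
    return np.array(nb)

rng = np.random.default_rng(0)
outcome = rng.binomial(1, 0.2, 500)  # toy event indicator
risk = np.clip(0.2 + 0.3 * (outcome - 0.2)
               + 0.15 * rng.standard_normal(500), 0.01, 0.99)
curve = net_benefit(risk, outcome, thresholds=np.linspace(0.05, 0.5, 10))
```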
Modeling Patterns of Activities using Activity Curves
Dawadi, Prafulla N.; Cook, Diane J.; Schmitter-Edgecombe, Maureen
2016-01-01
Pervasive computing offers an unprecedented opportunity to unobtrusively monitor behavior and use the large amount of collected data to perform analysis of activity-based behavioral patterns. In this paper, we introduce the notion of an activity curve, which represents an abstraction of an individual’s normal daily routine based on automatically-recognized activities. We propose methods to detect changes in behavioral routines by comparing activity curves and use these changes to analyze the possibility of changes in cognitive or physical health. We demonstrate our model and evaluate our change detection approach using a longitudinal smart home sensor dataset collected from 18 smart homes with older adult residents. Finally, we demonstrate how big data-based pervasive analytics such as activity curve-based change detection can be used to perform functional health assessment. Our evaluation indicates that correlations do exist between behavior and health changes and that these changes can be automatically detected using smart homes, machine learning, and big data-based pervasive analytics. PMID:27346990
Linhart, S. Mike; Nania, Jon F.; Christiansen, Daniel E.; Hutchinson, Kasey J.; Sanders, Curtis L.; Archfield, Stacey A.
2013-01-01
A variety of individuals, from water resource managers to recreational users, need streamflow information for planning and decision making at locations where there are no streamgages. To address this problem, two statistically based methods, the Flow Duration Curve Transfer method and the Flow Anywhere method, were developed for statewide application, whereas two physically based models, the Precipitation-Runoff Modeling System and the Soil and Water Assessment Tool, were developed only for the Cedar River Basin. Observed and estimated streamflows from the two methods and two models were compared for goodness of fit at 13 streamgages modeled in the Cedar River Basin by using Nash-Sutcliffe and percent-bias efficiency values. Based on the median and mean Nash-Sutcliffe values for the 13 streamgages, the Precipitation-Runoff Modeling System and Soil and Water Assessment Tool models appear to have performed similarly and better than the Flow Duration Curve Transfer and Flow Anywhere methods. Based on the median and mean percent-bias values, the Soil and Water Assessment Tool model appears to have generally overestimated daily mean streamflows, whereas the Precipitation-Runoff Modeling System model and the statistical methods appear to have underestimated them. The Flow Duration Curve Transfer method produced the lowest median and mean percent-bias values and appears to perform better than the other models.
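Minimal sketches of the two goodness-of-fit statistics used above; note that percent-bias sign conventions vary, and here positive values indicate overestimation, matching the abstract's usage.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is perfect."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs, sim):
    """Percent bias; with this sign convention positive means overestimation."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

obs = np.array([1.2, 3.4, 2.8, 5.1, 4.0])  # toy daily mean flows
sim = np.array([1.0, 3.9, 2.5, 5.6, 4.4])
print(nash_sutcliffe(obs, sim), percent_bias(obs, sim))
```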
NASA Astrophysics Data System (ADS)
Bartlett, M. S.; Parolari, A. J.; McDonnell, J. J.; Porporato, A.
2016-09-01
Hydrologists and engineers may choose from a range of semidistributed rainfall-runoff models such as VIC, PDM, and TOPMODEL, all of which predict runoff from a distribution of watershed properties. However, these models are not easily compared to event-based data and are missing ready-to-use analytical expressions that are analogous to the SCS-CN method. The SCS-CN method is an event-based model that describes the runoff response with a rainfall-runoff curve that is a function of the cumulative storm rainfall and antecedent wetness condition. Here we develop an event-based probabilistic storage framework and distill semidistributed models into analytical, event-based expressions for describing the rainfall-runoff response. The event-based versions called VICx, PDMx, and TOPMODELx also are extended with a spatial description of the runoff concept of "prethreshold" and "threshold-excess" runoff, which occur, respectively, before and after infiltration exceeds a storage capacity threshold. For total storm rainfall and antecedent wetness conditions, the resulting ready-to-use analytical expressions define the source areas (fraction of the watershed) that produce runoff by each mechanism. They also define the probability density function (PDF) representing the spatial variability of runoff depths that are cumulative values for the storm duration, and the average unit area runoff, which describes the so-called runoff curve. These new event-based semidistributed models and the traditional SCS-CN method are unified by the same general expression for the runoff curve. Since the general runoff curve may incorporate different model distributions, it may ease the way for relating such distributions to land use, climate, topography, ecology, geology, and other characteristics.
Long-term hydrological simulation based on the Soil Conservation Service curve number
NASA Astrophysics Data System (ADS)
Mishra, Surendra Kumar; Singh, Vijay P.
2004-05-01
Presenting a critical review of daily flow simulation models based on the Soil Conservation Service curve number (SCS-CN), this paper introduces a more versatile model based on the modified SCS-CN method, which specializes into seven cases. The proposed model was applied to the Hemavati watershed (area = 600 km2) in India and was found to yield satisfactory results in both calibration and validation. The model conserved monthly and annual runoff volumes satisfactorily. A sensitivity analysis of the model parameters was performed, including the effect of variation in storm duration. Finally, to investigate the model components, all seven variants of the modified version were tested for their suitability.
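For reference, the classical SCS-CN event relation that the modified model generalizes, in a minimal sketch assuming the standard initial-abstraction ratio lambda = 0.2:

```python
def scs_cn_runoff(P, CN, lam=0.2):
    """Direct runoff Q (mm) from storm rainfall P (mm) by the SCS-CN method:
    S = 25400/CN - 254 (mm); Ia = lam*S; Q = (P - Ia)^2 / (P - Ia + S) for P > Ia."""
    S = 25400.0 / CN - 254.0  # potential maximum retention
    Ia = lam * S              # initial abstraction
    return (P - Ia) ** 2 / (P - Ia + S) if P > Ia else 0.0

print(scs_cn_runoff(P=75.0, CN=80))  # ~31 mm of runoff for a 75 mm storm
```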
NASA Astrophysics Data System (ADS)
He, G.; Zhu, H.; Xu, J.; Gao, K.; Zhu, D.
2017-09-01
The bionic study of shape is an important aspect of research on bionic robots, and it cannot be separated from shape modeling and numerical simulation of the bionic object, which are tedious and time-consuming. In order to improve the efficiency of shape bionic design, the feet of animals living in soft soil and swamp environments are taken as bionic objects, and characteristic skeleton curves, section curves, joint rotation variables, position, and other parameters are used to describe the shape and position of the bionic object's sole, toes, and flipper. The geometric model of the bionic object is established by parameterizing the characteristic curves and variables. On this basis, an integration framework for parametric modeling, finite element modeling, dynamic analysis, and post-processing of the sinking process in soil is proposed in this paper. Examples of a bionic ostrich foot and a bionic duck foot are also given. The parametric modeling and integration technique enables rapid, improved design based on the bionic object; it can greatly improve the efficiency and quality of bionic robot-foot design, and it has important practical significance for raising the level of bionic design of robot-foot shape and structure.
Estimating thermal performance curves from repeated field observations
Childress, Evan; Letcher, Benjamin H.
2017-01-01
Estimating thermal performance of organisms is critical for understanding population distributions and dynamics and predicting responses to climate change. Typically, performance curves are estimated using laboratory studies to isolate temperature effects, but other abiotic and biotic factors influence temperature-performance relationships in nature reducing these models' predictive ability. We present a model for estimating thermal performance curves from repeated field observations that includes environmental and individual variation. We fit the model in a Bayesian framework using MCMC sampling, which allowed for estimation of unobserved latent growth while propagating uncertainty. Fitting the model to simulated data varying in sampling design and parameter values demonstrated that the parameter estimates were accurate, precise, and unbiased. Fitting the model to individual growth data from wild trout revealed high out-of-sample predictive ability relative to laboratory-derived models, which produced more biased predictions for field performance. The field-based estimates of thermal maxima were lower than those based on laboratory studies. Under warming temperature scenarios, field-derived performance models predicted stronger declines in body size than laboratory-derived models, suggesting that laboratory-based models may underestimate climate change effects. The presented model estimates true, realized field performance, avoiding assumptions required for applying laboratory-based models to field performance, which should improve estimates of performance under climate change and advance thermal ecology.
NASA Astrophysics Data System (ADS)
He, Shiyuan; Wang, Lifan; Huang, Jianhua Z.
2018-04-01
With growing data from ongoing and future supernova surveys, it is possible to empirically quantify the shapes of SNIa light curves in more detail, and to quantitatively relate the shape parameters with the intrinsic properties of SNIa. Building such relationships is critical in controlling systematic errors associated with supernova cosmology. Based on a collection of well-observed SNIa samples accumulated in the past years, we construct an empirical SNIa light curve model using a statistical method called the functional principal component analysis (FPCA) for sparse and irregularly sampled functional data. Using this method, the entire light curve of an SNIa is represented by a linear combination of principal component functions, and the SNIa is represented by a few numbers called “principal component scores.” These scores are used to establish relations between light curve shapes and physical quantities such as intrinsic color, interstellar dust reddening, spectral line strength, and spectral classes. These relations allow for descriptions of some critical physical quantities based purely on light curve shape parameters. Our study shows that some important spectral feature information is being encoded in the broad band light curves; for instance, we find that the light curve shapes are correlated with the velocity and velocity gradient of the Si II λ6355 line. This is important for supernova surveys (e.g., LSST and WFIRST). Moreover, the FPCA light curve model is used to construct the entire light curve shape, which in turn is used in a functional linear form to adjust intrinsic luminosity when fitting distance models.
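A minimal sketch of the FPCA representation on a dense common grid: each light curve is the mean function plus a few principal component scores times component functions. Plain SVD-based PCA on toy curves is used here; the paper's method additionally handles sparse, irregular sampling, which this sketch does not.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-10.0, 40.0, 120)   # days from peak, common observation grid
n = 60
# toy light curves: a Gaussian-shaped template with random amplitude and width
curves = np.array([(1.0 + 0.2 * rng.standard_normal())
                   * np.exp(-0.5 * (t / (12.0 + 2.0 * rng.standard_normal())) ** 2)
                   for _ in range(n)])

mean_curve = curves.mean(axis=0)
centered = curves - mean_curve
# principal component functions phi_k(t) from the SVD of the centered data matrix
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
pcs = Vt[:2]                         # first two component functions
scores = centered @ pcs.T            # per-curve principal component scores

# reconstruct curve i from its scores: x_i(t) ~ mean(t) + sum_k score_ik * phi_k(t)
recon0 = mean_curve + scores[0] @ pcs
err = np.max(np.abs(recon0 - curves[0]))  # small if two components suffice
```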
Experimental study of a generic high-speed civil transport: Tabulated data
NASA Technical Reports Server (NTRS)
Belton, Pamela S.; Campbell, Richard L.
1992-01-01
An experimental study of a generic high-speed civil transport was conducted in LaRC's 8-Foot Transonic Pressure Tunnel. The data base was obtained for the purpose of assessing the accuracy of various levels of computational analysis. Two models differing only in wing tip geometry were tested with and without flow-through nacelles. The baseline model has a curved or crescent wing tip shape while the second model has a more conventional straight wing tip shape. The study was conducted at Mach numbers from 0.30-1.19. Force data were obtained on both the straight and curved wing tip models. Only the curved wing tip model was instrumented for measuring pressures. Longitudinal and lateral-directional aerodynamic data are presented without analysis in tabulated form. Pressure coefficients for the curved wing tip model are also presented in tabulated form.
NASA Astrophysics Data System (ADS)
Ramasahayam, Veda Krishna Vyas; Diwakar, Anant; Bodi, Kowsik
2017-11-01
To study the flow of high-temperature air in vibrational and chemical equilibrium, accurate models for the thermodynamic state and transport phenomena are required. In the present work, the performance of a state-equation model and two mixing rules for determining equilibrium air thermodynamic and transport properties is compared with that of curve fits. The thermodynamic state model considers 11 species and computes the flow chemistry by an iterative process; the mixing rules considered for viscosity are those of Wilke and Armaly-Sutton. The curve fits of Srinivasan, which are based on Grabau-type transition functions, are chosen for comparison. A two-dimensional Navier-Stokes solver is developed to simulate high-enthalpy flows, with numerical fluxes computed by AUSM+-up. The accuracy of the state-equation model and curve fits for thermodynamic properties is assessed using hypersonic inviscid flow over a circular cylinder. The performance of the mixing rules and curve fits for viscosity is compared using hypersonic laminar boundary layer prediction on a flat plate. It is observed that steady-state solutions from the state-equation model and curve fits match each other. Though the curve fits are significantly faster, the state-equation model is more general and can be adapted to any flow composition.
Inversion of Surface-wave Dispersion Curves due to Low-velocity-layer Models
NASA Astrophysics Data System (ADS)
Shen, C.; Xia, J.; Mi, B.
2016-12-01
A successful inversion relies on exact forward modeling methods. Accurately calculating multi-mode dispersion curves of a given model is a key step in high-frequency surface-wave (Rayleigh-wave and Love-wave) methods. For normal models (shear (S)-wave velocity increasing with depth), the theoretical dispersion curves completely match the dispersion spectrum generated from the wave equation. For models containing a low-velocity layer, however, phase velocities calculated by existing forward-modeling algorithms (e.g., the Thomson-Haskell, Knopoff, and fast vector-transfer algorithms) fail to be consistent with the dispersion spectrum in the high-frequency range. When the corresponding wavelengths are short enough, they approach a value close to the surface-wave velocity of the low-velocity layer beneath the surface layer, rather than that of the surface layer. This phenomenon conflicts with the characteristics of surface waves and results in an erroneous inverted model. By comparing the theoretical dispersion curves with simulated dispersion energy, we propose a direct and essential solution for accurately computing surface-wave phase velocities for low-velocity-layer models. Based on the proposed forward-modeling technique, we can achieve correct inversions for these types of models. Several synthetic data sets prove the effectiveness of our method.
Fong, Youyi; Yu, Xuesong
2016-01-01
Many modern serial dilution assays are based on fluorescence intensity (FI) readouts. We study optimal transformation model choice for fitting five parameter logistic curves (5PL) to FI-based serial dilution assay data. We first develop a generalized least squares-pseudolikelihood type algorithm for fitting heteroscedastic logistic models. Next we show that the 5PL and log 5PL functions can approximate each other well. We then compare four 5PL models with different choices of log transformation and variance modeling through a Monte Carlo study and real data. Our findings are that the optimal choice depends on the intended use of the fitted curves. PMID:27642502
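A minimal sketch of fitting a 5PL curve to FI readouts follows, using one common 5PL parameterization; the paper's log transformations and heteroscedastic variance modeling are omitted here, and the data are illustrative.

```python
# Sketch: fitting a five-parameter logistic (5PL) curve to serial-dilution FI data
# with a common 5PL parameterization (other equivalent forms exist).
import numpy as np
from scipy.optimize import curve_fit

def fpl5(x, a, d, c, b, g):
    # a: asymptote at low x, d: asymptote at high x, c: mid-point,
    # b: slope, g: asymmetry
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

dilutions = np.array([1e1, 1e2, 1e3, 1e4, 1e5, 1e6])
fi = np.array([30000, 25000, 12000, 3000, 600, 200], float)   # illustrative readouts

p0 = [fi.max(), fi.min(), np.median(dilutions), 1.0, 1.0]
params, _ = curve_fit(fpl5, dilutions, fi, p0=p0, maxfev=20000)
print("fitted 5PL parameters:", params)
```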
Long-term predictive capability of erosion models
NASA Technical Reports Server (NTRS)
Veerabhadra, P.; Buckley, D. H.
1983-01-01
A brief overview of long-term cavitation and liquid-impingement erosion and of the modeling methods proposed by different investigators, including the curve-fit approach, is presented. A table was prepared to highlight the number of variables each model needs to compute the erosion-versus-time curves. A power-law relation based on the average erosion rate is suggested, which may solve several modeling problems.
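A minimal sketch of such a power-law description is given below: cumulative erosion E(t) = k·t^n is fitted by log-log linear regression to illustrative erosion-versus-time data.

```python
# Sketch of the suggested power-law approach: fit cumulative erosion E(t) = k * t**n
# from measured erosion-versus-time data via a log-log linear regression.
# Data values are illustrative.
import numpy as np

t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])        # exposure time, h
E = np.array([0.9, 1.7, 3.1, 5.6, 10.2])        # cumulative erosion, mg

n, log_k = np.polyfit(np.log(t), np.log(E), 1)  # slope = exponent n
k = np.exp(log_k)
print(f"E(t) ~ {k:.2f} * t^{n:.2f}")
print("average erosion rate at t = 8 h:", k * 8.0 ** n / 8.0, "mg/h")
```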
A Bayesian spawning habitat suitability model for American shad in southeastern United States rivers
Hightower, Joseph E.; Harris, Julianne E.; Raabe, Joshua K.; Brownell, Prescott; Drew, C. Ashton
2012-01-01
Habitat suitability index models for American shad Alosa sapidissima were developed by Stier and Crance in 1985. These models, which were based on a combination of published information and expert opinion, are often used to make decisions about hydropower dam operations and fish passage. The purpose of this study was to develop updated habitat suitability index models for spawning American shad in the southeastern United States, building on the many field and laboratory studies completed since 1985. We surveyed biologists who had knowledge about American shad spawning grounds, assembled a panel of experts to discuss important habitat variables, and used raw data from published and unpublished studies to develop new habitat suitability curves. The updated curves are based on resource selection functions, which can model habitat selectivity based on use and availability of particular habitats. Using field data collected in eight rivers from Virginia to Florida (Mattaponi, Pamunkey, Roanoke, Tar, Neuse, Cape Fear, Pee Dee, St. Johns), we obtained new curves for temperature, current velocity, and depth that were generally similar to the original models. Our new suitability function for substrate was also similar to the original pattern, except that sand (optimal in the original model) has a very low estimated suitability. The Bayesian approach that we used to develop habitat suitability curves provides an objective framework for updating the model as new studies are completed and for testing the model's applicability in other parts of the species' range.
Signal processing system for electrotherapy applications
NASA Astrophysics Data System (ADS)
Płaza, Mirosław; Szcześniak, Zbigniew
2017-08-01
A signal-processing system for electrotherapeutic applications is proposed in the paper. The system makes it possible to model the curve of threshold human sensitivity to current (Dalziel's curve) over the full medium-frequency range (1 kHz-100 kHz). Tests based on the proposed solution were conducted, and their results were compared with those obtained under the assumptions of the High Tone Power Therapy method and referred to optimum values. The proposed system has high dynamics and precision in mapping the curve of threshold human sensitivity to current and can be used in all methods where threshold curves are modelled.
NASA Astrophysics Data System (ADS)
Chang, Ya-Ting; Chang, Li-Chiu; Chang, Fi-John
2005-04-01
To bridge the gap between academic research and actual operation, we propose an intelligent control system for reservoir operation. The methodology includes two major processes: knowledge acquisition and implementation, and the inference system. In this study, a genetic algorithm (GA) and a fuzzy rule base (FRB) are used to extract knowledge based on the historical inflow data with a design objective function and on the operating rule curves, respectively. The adaptive network-based fuzzy inference system (ANFIS) is then used to implement the knowledge, to create the fuzzy inference system, and to estimate the optimal reservoir operation. To investigate its applicability and practicability, the Shihmen reservoir, Taiwan, is used as a case study. For the purpose of comparison, a simulation of the currently used M-5 operating rule curve is also performed. The results demonstrate that (1) the GA is an efficient way to search for optimal input-output patterns, (2) the FRB can extract the knowledge from the operating rule curves, and (3) the ANFIS models built on different types of knowledge produce much better performance than the traditional M-5 curves in real-time reservoir operation. Moreover, we show that the model can be made more intelligent for reservoir operation if more information (or knowledge) is involved.
Fission yield calculation using toy model based on Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Jubaidah, Kurniadi, Rizal
2015-09-01
The toy model is a new approximation for predicting the fission yield distribution. It assumes the nucleus to be an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate real nuclear properties. In this research, the toy nucleons are influenced only by a central force, and energy entanglement is neglected. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments are called the fission yield. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (Rc), the means of the left and right curves (μL, μR), and the deviations of the left and right curves (σL, σR). The fission yield distribution is analyzed based on Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield distribution probability. In addition, variation in the iteration coefficient only changes the frequency of the fission yields. Monte Carlo simulation of the fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90
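The two-Gaussian picture lends itself to a compact Monte Carlo sketch, shown below with illustrative parameter values (not the paper's): fragment masses are sampled from two intersecting Gaussians and histogrammed into a yield distribution.

```python
# Minimal Monte Carlo sketch of the two-Gaussian picture: sample fragment
# masses from two Gaussians (means muL, muR; widths sigmaL, sigmaR) and
# histogram the resulting yield distribution. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
A = 236                                   # mass number of the fissioning toy nucleus
muL, muR = 96.0, 140.0                    # means of light/heavy fragment curves
sigmaL, sigmaR = 5.0, 5.0                 # widths of the two curves

n_events = 100_000
pick_left = rng.random(n_events) < 0.5    # choose which Gaussian fires
m = np.where(pick_left,
             rng.normal(muL, sigmaL, n_events),
             rng.normal(muR, sigmaR, n_events))
partner = A - m                           # complementary fragment conserves A

masses = np.concatenate([m, partner])
hist, edges = np.histogram(masses, bins=np.arange(60, 181), density=True)
print("most probable fragment mass ~", edges[np.argmax(hist)])
```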
Fault detection and diagnosis of photovoltaic systems
NASA Astrophysics Data System (ADS)
Wu, Xing
The rapid growth of the solar industry over the past several years has expanded the significance of photovoltaic (PV) systems. One of the primary aims of research in building-integrated PV systems is to improve the system's efficiency, availability, and reliability. Although much work has been done on technological design to increase a photovoltaic module's efficiency, there is little research so far on fault diagnosis for PV systems. Faults in a PV system, if not detected, may not only reduce power generation, but also threaten the availability and reliability, effectively the "security," of the whole system. In this paper, first, a circuit-based simulation baseline model of a PV system with maximum power point tracking (MPPT) is developed using MATLAB software. MATLAB is one of the most popular tools for integrating computation, visualization, and programming in an easy-to-use modeling environment. Second, data from a PV system at variable surface temperatures and insolation levels under normal operation are acquired. The developed simulation model of the PV system is then calibrated and improved by comparing modeled I-V and P-V characteristics with measured I-V and P-V characteristics, to make sure the simulated curves are close to the values measured in the experiments. Finally, based on the circuit-based simulation model, PV models of various types of faults are developed by changing conditions or inputs in the MATLAB model, and the I-V and P-V characteristic curves, as well as the time-dependent voltage and current characteristics of the fault modalities, are characterized for each type of fault. These are developed as benchmark I-V, P-V, or prototype transient curves. If a fault occurs in a PV system, polling and comparing the actual measured I-V and P-V characteristic curves with both the normal operational curves and these baseline fault curves will aid in fault diagnosis.
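The final diagnosis step described above can be sketched as a nearest-curve comparison; in the example below the curve shapes, fault classes, and the RMS-deviation matching rule are illustrative assumptions rather than the dissertation's actual implementation.

```python
# Sketch: compare a measured I-V curve against a library of baseline curves
# (normal operation plus fault modes) and report the closest match by RMS deviation.
import numpy as np

v = np.linspace(0.0, 40.0, 50)                       # module voltage grid, V

def iv_curve(isc, voc):
    # crude single-diode-like shape, for illustration only
    i = isc * (1.0 - np.exp((v - voc) / 2.5))
    return np.clip(i, 0.0, None)

library = {
    "normal":          iv_curve(8.0, 38.0),
    "partial_shading": iv_curve(5.5, 37.0),
    "degraded_string": iv_curve(8.0, 30.0),
}

measured = iv_curve(5.6, 37.2) + np.random.default_rng(2).normal(0, 0.05, v.size)

diagnosis = min(library, key=lambda k: np.sqrt(np.mean((library[k] - measured) ** 2)))
print("closest baseline:", diagnosis)
```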
Highway extraction from high resolution aerial photography using a geometric active contour model
NASA Astrophysics Data System (ADS)
Niu, Xutong
Highway extraction and vehicle detection are two of the most important steps in traffic-flow analysis from multi-frame aerial photographs. The traditional method of deriving traffic-flow trajectories relies on manual vehicle counting from a sequence of aerial photographs, which is tedious and time-consuming. This research presents a new framework for semi-automatic highway extraction. The basis of the new framework is an improved geometric active contour (GAC) model, which seeks to minimize an objective function that transforms the propagation of regular curves into an optimization problem. The implementation of curve propagation is based on level set theory. By using an implicit representation of a two-dimensional curve, a level set approach can deal with topological changes naturally, and the output is unaffected by different initial positions of the curve. However, the original GAC model, on which the new model is based, only incorporates boundary information into the curve propagation process, and an error-producing phenomenon called leakage is inevitable wherever there is an uncertain weak edge. In this research, region-based information is added as a constraint to the original GAC model, thereby giving the proposed method the ability to integrate both boundary and region-based information during curve propagation. Adding the region-based constraint eliminates the leakage problem. This dissertation applies the proposed augmented GAC model to the problem of highway extraction from high-resolution aerial photography. First, an optimized stopping criterion is designed and used in the implementation of the GAC model; it effectively saves processing time and computation. Second, a seed-point propagation framework is designed and implemented, incorporating highway extraction, tracking, and linking into one procedure. A seed point is usually placed at an end node of a highway segment close to the boundary of the image, or at a position where blocking may occur, such as at an overpass bridge or near vehicle crowds. These seed points can be automatically propagated throughout the entire highway network. During the process, road center points are also extracted, which introduces a search direction for solving possible blocking problems. This new framework has been successfully applied to highway network extraction from a large orthophoto mosaic; in the process, vehicles on the extracted highway were detected with an 83% success rate.
Flood damage curves for consistent global risk assessments
NASA Astrophysics Data System (ADS)
de Moel, Hans; Huizinga, Jan; Szewczyk, Wojtek
2016-04-01
Assessing the potential damage of flood events is an important component of flood risk management. Direct flood damage is commonly determined using depth-damage curves, which denote the flood damage that would occur at specific water depths per asset or land-use class. Many countries around the world have developed flood damage models using such curves, based on analysis of past flood events and/or on expert judgement. However, such damage curves are not available for all regions, which hampers damage assessments there. Moreover, because different countries employ different methodologies in their damage models, damage assessments cannot be directly compared with each other, also obstructing supra-national flood damage assessments. To address these problems, a globally consistent dataset of depth-damage curves has been developed. This dataset contains damage curves depicting the percentage of damage as a function of water depth, as well as maximum damage values, for a variety of assets and land-use classes (e.g., residential, commercial, agriculture). Based on an extensive literature survey, concave damage curves have been developed for each continent, while differentiation in flood damage between countries is established by determining maximum damage values at the country scale. These maximum damage values are based on construction cost surveys from multinational construction companies, which provide a coherent set of detailed building cost data across dozens of countries. A consistent set of maximum flood damage values for all countries was computed using statistical regressions with socio-economic World Development Indicators from the World Bank. Further, based on insights from the literature survey, guidance is given on how the damage curves and maximum damage values can be adjusted for specific local circumstances, such as urban vs. rural locations or the use of specific building materials. This dataset can be used for consistent supra-national flood damage assessments, and can guide assessments in countries where no damage model is currently available.
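A minimal sketch of how such a dataset is applied follows: the damage fraction is interpolated from a depth-damage curve and scaled by a country-level maximum damage value (all numbers illustrative, not from the dataset).

```python
# Sketch: apply a depth-damage curve by interpolating the damage fraction at a
# given water depth and scaling by a maximum damage value. Values illustrative.
import numpy as np

depth_pts = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 5.0])        # water depth, m
frac_pts  = np.array([0.00, 0.20, 0.40, 0.65, 0.80, 1.00])  # damage fraction

max_damage_per_m2 = 600.0      # e.g. residential maximum damage, EUR/m2 (assumed)
exposed_area_m2 = 120.0

water_depth = 1.4
frac = np.interp(water_depth, depth_pts, frac_pts)
damage = frac * max_damage_per_m2 * exposed_area_m2
print(f"damage fraction {frac:.2f}, estimated damage {damage:,.0f} EUR")
```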
He, Y J; Li, X T; Fan, Z Q; Li, Y L; Cao, K; Sun, Y S; Ouyang, T
2018-01-23
Objective: To construct a dynamic enhanced MRI-based predictive model for early assessment of pathological complete response (pCR) to neoadjuvant therapy in breast cancer, and to evaluate the clinical benefit of the model using decision curves. Methods: From December 2005 to December 2007, 170 patients with breast cancer treated with neoadjuvant therapy were identified, and their MR images before neoadjuvant therapy and at the end of the first cycle of neoadjuvant therapy were collected. A logistic regression model was used to detect independent predictors of pCR and to construct the predictive model accordingly; receiver operating characteristic (ROC) curves and decision curves were then used to evaluate the model. Results: ΔArea(max) and Δslope(max) were independent predictive factors for pCR, with OR = 0.942 (95% CI: 0.918-0.967) and 0.961 (95% CI: 0.940-0.987), respectively. The area under the ROC curve (AUC) for the constructed model was 0.886 (95% CI: 0.820-0.951). The decision curve showed that, for threshold probabilities above 0.4, the predictive model presented increasing net benefit as the threshold probability increased. Conclusions: The constructed predictive model for pCR is of potential clinical value, with an AUC > 0.85. Meanwhile, decision curve analysis indicates that the model has a net benefit of 3 to 8 percent in the likely threshold probability range of 80% to 90%.
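The modeling pipeline (two-predictor logistic regression for pCR, evaluated by ROC AUC) can be sketched as below; the simulated features stand in for ΔArea(max) and Δslope(max), and the coefficients are illustrative, not the paper's.

```python
# Sketch: two-predictor logistic regression for pCR with ROC AUC evaluation,
# on simulated stand-ins for the MRI change features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 170
X = rng.normal(size=(n, 2))                       # [delta_area_max, delta_slope_max]
logit = -1.0 - 1.2 * X[:, 0] - 0.8 * X[:, 1]      # assumed effect directions
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))  # simulated pCR labels

model = LogisticRegression().fit(X, y)
p_hat = model.predict_proba(X)[:, 1]
print("odds ratios:", np.exp(model.coef_).round(3))
print("AUC:", round(roc_auc_score(y, p_hat), 3))
```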
Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves.
Lee, Wen-Chung; Wu, Yun-Chun
2016-01-01
The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than simply update probabilities. Based on decision theory, the authors propose an alternative index, the "average deviation about the probability threshold" (ADAPT). An ADAPT curve (a plot of the ADAPT value against the probability threshold) neatly characterizes the decision-analysis performance of a risk prediction model. Several prediction models can be compared by their ADAPT values at a chosen probability threshold, over a range of plausible threshold values, or by their whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models.
The Predicting Model of E-commerce Site Based on the Ideas of Curve Fitting
NASA Astrophysics Data System (ADS)
Tao, Zhang; Li, Zhang; Dingjun, Chen
On the basis of the idea of the second multiplication curve fitting, the number and scale of Chinese e-commerce sites are analyzed. A preventing-increase model is introduced in this paper, and the model parameters are solved using the Matlab software. The validity of the preventing-increase model is confirmed through numerical experiments, whose results show that the precision of the preventing-increase model is satisfactory.
Ruan, J S; Prasad, P
1995-08-01
A skull-brain finite element model of the human head has been coupled with a multilink rigid-body model of the Hybrid III dummy. The coupled model is intended to represent anatomically a 50th-percentile human, to the extent that the dummy and the skull-brain model represent a human. It has been verified by simulating several human cadaver head impact tests as well as dummy head "impacts" during barrier crashes in an automotive environment. Skull-isostress and brain-isostrain response curves were established based on model calibration against experimental human cadaver tolerance data. The skull-isostress response curve agrees with the JARI Human Head Impact Tolerance Curve for skull fracture. The brain-isostrain response curve predicts a higher G level for concussion than do the JARI concussion curve and the Wayne State Tolerance Curve in the longer-duration range. Barrier crash simulations consist of belted dummies impacting an airbag, a hard and a soft steering wheel hub, and no head contact with vehicle interior components. Head impact force, intracranial pressures and strains, skull stress, and head center-of-gravity acceleration were investigated as injury parameters. The head injury criterion (HIC) was also calculated along with these parameters. Preliminary results of the model simulations under these impact conditions are discussed.
NASA Astrophysics Data System (ADS)
Miranda Guedes, Rui
2018-02-01
Long-term creep of viscoelastic materials is experimentally inferred through accelerated techniques based on the time-temperature superposition principle (TTSP) or the time-stress superposition principle (TSSP). According to these principles, a given property measured for short times at a higher temperature or higher stress level remains the same as that obtained for longer times at a lower temperature or lower stress level, except that the curves are shifted parallel to the horizontal axis to match a master curve. These procedures enable the construction of creep master curves from short-term experimental tests. The Stepped Isostress Method (SSM) is an evolution of the classical TSSP method. The SSM technique achieves a greater reduction in the number of test specimens required to obtain the master curve, since only one specimen is necessary; the classical approach, using creep tests, demands at least one specimen per stress level to produce the set of creep curves to which the TSSP is applied to obtain the master curve. This work proposes an analytical method to process the SSM raw data. The method is validated using numerical simulations that reproduce the SSM tests based on two different viscoelastic models: one representing the viscoelastic behavior of a graphite/epoxy laminate and the other an epoxy-resin-based adhesive.
NASA Astrophysics Data System (ADS)
Ramazani, Ali; Mukherjee, Krishnendu; Prahl, Ulrich; Bleck, Wolfgang
2012-10-01
The flow behavior of dual-phase (DP) steels is modeled within the finite-element method (FEM) framework at the microscale, considering the effect of the microstructure through the representative volume element (RVE) approach. Two-dimensional RVEs were created from microstructures of experimentally obtained DP steels with various ferrite grain sizes. The flow behavior of the single phases was modeled through the dislocation-based work-hardening approach. The volume change during the austenite-to-martensite transformation was modeled, and the resulting prestrained areas in the ferrite were considered to be the storage place of transformation-induced, geometrically necessary dislocations (GNDs). Flow curves of DP steels with varying ferrite grain sizes but constant martensite fractions were obtained from the literature. The flow curves of simulations that take the GNDs into account agree better with the experimental flow curves than do predictions without the GNDs. The experimental yield and flow stresses obeyed the Hall-Petch relationship, and the simulations predicted this as well.
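For the single-phase constitutive input, dislocation-based work hardening is often written in a Kocks-Mecking form; the sketch below integrates such a law with illustrative constants, and is an assumption about the general approach rather than the authors' calibrated model.

```python
# Sketch of a dislocation-density-based flow curve for a single phase in
# Kocks-Mecking form: sigma = sigma0 + alpha*M*G*b*sqrt(rho),
# drho/deps = M*(k1*sqrt(rho) - k2*rho). Constants are illustrative.
import numpy as np

sigma0 = 150e6        # friction stress, Pa
alpha, M = 0.3, 3.0   # interaction constant, Taylor factor
G, b = 80e9, 2.5e-10  # shear modulus (Pa), Burgers vector (m)
k1, k2 = 1e8, 10.0    # storage and dynamic-recovery coefficients (assumed)

deps = 1e-4
eps = np.arange(0.0, 0.2, deps)
rho = np.empty_like(eps)
rho[0] = 1e12         # initial dislocation density, 1/m^2
for i in range(1, eps.size):
    # explicit Euler integration of the dislocation-density evolution
    rho[i] = rho[i-1] + M * (k1 * np.sqrt(rho[i-1]) - k2 * rho[i-1]) * deps

sigma = sigma0 + alpha * M * G * b * np.sqrt(rho)   # true stress, Pa
print(f"flow stress at 5% strain: {sigma[eps >= 0.05][0] / 1e6:.0f} MPa")
```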
NASA Astrophysics Data System (ADS)
Nomoto, Ken'ichi; Tolstov, Alexey; Sorokina, Elena; Blinnikov, Sergei; Bersten, Melina; Suzuki, Tomoharu
2017-11-01
The physical origin of Type I (hydrogen-less) superluminous supernovae (SLSNe-I), whose luminosities are 10 to 500 times higher than those of normal core-collapse supernovae, remains unknown. Thanks to their brightness, SLSNe-I would be useful probes of the distant Universe. As the power source of the light curves of SLSNe-I, radioactive decay, magnetars, and circumstellar interaction have been proposed, although no definitive conclusions have been reached yet. Since most light curve studies have been based on simplified semi-analytic models, we have constructed multi-color light curve models by means of detailed radiation-hydrodynamical calculations for stars of various masses, including very massive ones, and large amounts of mass loss. We compare the rising time, peak luminosity, width, and decline rate of the model light curves with observations of SLSNe-I and obtain constraints on their progenitors and explosion mechanisms. We pay particular attention to the recently reported double peaks of the light curves. We discuss how to discriminate among the three models, the relevant model parameters, their evolutionary origins, and implications for the early evolution of the Universe.
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1993-01-01
The objective of this study is to benchmark a four-engine clustered-nozzle base flowfield with a computational fluid dynamics (CFD) model. The CFD model is a three-dimensional, pressure-based, viscous flow formulation. An adaptive upwind scheme is employed for the spatial discretization; it is based on second- and fourth-order central differencing with adaptive artificial dissipation. Qualitative base flow features such as the reverse jet, wall jet, recompression shock, and plume-plume impingement have been captured. The computed quantitative flow properties, such as the radial base pressure distribution, model centerline Mach number and static pressure variation, and base pressure characteristic curve, agreed reasonably well with the measurements. A parametric study of the effects of grid resolution, turbulence model, inlet boundary condition, and difference scheme for the convective terms has been performed. The results showed that grid resolution has a strong influence on the accuracy of the base flowfield prediction.
Comparison of power curve monitoring methods
NASA Astrophysics Data System (ADS)
Cambron, Philippe; Masson, Christian; Tahan, Antoine; Torres, David; Pelletier, Francis
2017-11-01
Performance monitoring is an important aspect of operating wind farms. This can be done through power curve monitoring (PCM) of wind turbines (WT). In past years, important work has been conducted on PCM, and various methodologies have been proposed, each with interesting results. However, it is difficult to compare these methods because they were developed on their respective data sets. The objective of the present work is to compare some of the proposed PCM methods using common data sets. The metric used to compare the PCM methods is the time needed to detect a change in the power curve. Two power curve models are covered to establish the effect the model type has on the monitoring outcome, and each model was tested with two control charts. Other methodologies and metrics proposed in the literature for power curve monitoring, such as areas under the power curve and the use of statistical copulas, are also covered. Results demonstrate that model-based PCM methods are more reliable at detecting a performance change than the other methodologies and that the effectiveness of the control chart depends on the type of shift observed.
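A minimal sketch of a model-based PCM method follows: a binned reference power curve is fitted from healthy data, and an EWMA control chart on the residuals flags a performance shift (the turbine curve, noise levels, and chart constants are illustrative).

```python
# Sketch: binned reference power curve + EWMA control chart on residuals.
import numpy as np

rng = np.random.default_rng(4)

def power(ws):  # toy turbine power curve, kW
    return np.clip(2000.0 / (1.0 + np.exp(-(ws - 9.0))), 0.0, 2000.0)

# reference period: fit a binned power curve from healthy data
ws_ref = rng.uniform(3, 15, 2000)
p_ref = power(ws_ref) + rng.normal(0, 40, ws_ref.size)
bins = np.arange(3, 16)
idx = np.digitize(ws_ref, bins)
curve = np.array([p_ref[idx == k].mean() for k in range(1, bins.size)])

# monitoring period: a 3% underperformance appears
ws_new = rng.uniform(3, 15, 500)
p_new = 0.97 * power(ws_new) + rng.normal(0, 40, ws_new.size)
expected = curve[np.clip(np.digitize(ws_new, bins) - 1, 0, curve.size - 1)]
resid = p_new - expected

lam, sigma = 0.1, 40.0
limit = 3 * sigma * np.sqrt(lam / (2 - lam))   # steady-state EWMA control limit
z, alarm_at = 0.0, None
for i, r in enumerate(resid):
    z = lam * r + (1 - lam) * z                # EWMA of residuals
    if abs(z) > limit and alarm_at is None:
        alarm_at = i
print("EWMA alarm at sample:", alarm_at)
```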
NASA Technical Reports Server (NTRS)
Mukkamala, R.; Cohen, R. J.; Mark, R. G.
2002-01-01
Guyton developed a popular approach for understanding the factors responsible for cardiac output (CO) regulation in which 1) the heart-lung unit and systemic circulation are independently characterized via CO and venous return (VR) curves, and 2) average CO and right atrial pressure (RAP) of the intact circulation are predicted by graphically intersecting the curves. However, this approach is virtually impossible to verify experimentally. We theoretically evaluated the approach with respect to a nonlinear, computational model of the pulsatile heart and circulation. We developed two sets of open circulation models to generate CO and VR curves, differing by the manner in which average RAP was varied. One set applied constant RAPs, while the other set applied pulsatile RAPs. Accurate prediction of intact, average CO and RAP was achieved only by intersecting the CO and VR curves generated with pulsatile RAPs because of the pulsatility and nonlinearity (e.g., systemic venous collapse) of the intact model. The CO and VR curves generated with pulsatile RAPs were also practically independent. This theoretical study therefore supports the validity of Guyton's graphical analysis.
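The graphical analysis itself reduces to intersecting two curves; a sketch with illustrative textbook-style curve shapes (not the study's computational model) is given below.

```python
# Sketch of Guyton's graphical analysis: intersect a cardiac-output curve
# CO(RAP) with a venous-return curve VR(RAP) to find the operating point.
# Curve forms and constants are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq

def co(rap):
    # Starling-like saturating cardiac-output curve, L/min
    return 10.0 / (1.0 + np.exp(-(rap - 2.0)))

def vr(rap, msfp=7.0, rvr=1.0):
    # linear venous return, zero at the mean systemic filling pressure
    return max((msfp - rap) / rvr, 0.0)

rap_star = brentq(lambda r: co(r) - vr(r), -2.0, 7.0)
print(f"operating point: RAP ~ {rap_star:.2f} mmHg, CO ~ {co(rap_star):.2f} L/min")
```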
NASA Astrophysics Data System (ADS)
Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia
2017-09-01
The implementation of a surface-based Monte Carlo simulation technique for estimating the oxygen saturation (SaO2) calibration curve is demonstrated in this paper. Generally, the calibration curve is estimated either empirically, using animals as experimental subjects, or derived from mathematical equations. However, determining the calibration curve using animals is time-consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field due to their capability to reproduce real tissue behavior. The mathematical relationship between the optical density (OD) and the optical density ratio (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. Optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. Simulation results for the optical density ratio at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method for extended calibration curve studies using other wavelength pairs.
An efficient solid modeling system based on a hand-held 3D laser scan device
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming
2014-12-01
Hand-held 3D laser scanners sold on the market are appealing for their portability and convenience, but they are expensive, and developing such a system from cheap devices using the same principles as the commercial systems is impossible. In this paper, a simple hand-held 3D laser scanner is developed from cheap devices based on a volume reconstruction method. Unlike conventional laser scanners, which collect a point cloud of the entire object surface, the proposed method scans only a few key profile curves on the surface. A planar section curve network can be generated from these profile curves to construct a volume model of the object. The details of the design are presented and illustrated with the example of a complex-shaped object.
Sulcal set optimization for cortical surface registration.
Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M
2010-04-15
Flat-mapping-based cortical surface registration constrained by manually traced sulcal curves has been widely used for inter-subject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time-consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimating an optimal subset of size N_C from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N_C curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure, leading to optimal use of manual labeling effort for registration. To minimize the error metric, we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates the registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N_C sulci that jointly minimize the estimated error variance of the unconstrained curves conditioned on the N_C constraint curves. The optimal subsets of sulci are presented, and the estimated and actual registration errors for these subsets are computed.
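The selection criterion can be sketched directly from the multivariate Gaussian model: score a candidate constraint set C by the total conditional variance of the unconstrained curves given C, trace(S_uu - S_uc S_cc^{-1} S_cu). The covariance below is synthetic, and exhaustive search stands in for whatever search strategy the paper employs.

```python
# Sketch: choose the constraint subset minimizing the total conditional
# variance of the remaining curves under a joint Gaussian error model.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)
N, Nc = 8, 3
A = rng.normal(size=(N, N))
S = A @ A.T + N * np.eye(N)          # synthetic positive-definite error covariance

def conditional_trace(S, C):
    U = [i for i in range(S.shape[0]) if i not in C]
    Suu = S[np.ix_(U, U)]
    Suc = S[np.ix_(U, C)]
    Scc = S[np.ix_(C, C)]
    # trace of the Gaussian conditional covariance of U given C
    return np.trace(Suu - Suc @ np.linalg.solve(Scc, Suc.T))

best = min(combinations(range(N), Nc), key=lambda C: conditional_trace(S, list(C)))
print("optimal constraint subset:", best)
```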
Mixing-controlled reactive transport on travel times in heterogeneous media
NASA Astrophysics Data System (ADS)
Luo, J.; Cirpka, O.
2008-05-01
Modeling mixing-controlled reactive transport using traditional spatial discretization of the domain requires identifying the spatial distributions of hydraulic and reactive parameters including mixing-related quantities such as dispersivities and kinetic mass-transfer coefficients. In most applications, breakthrough curves of conservative and reactive compounds are measured at only a few locations and models are calibrated by matching these breakthrough curves, which is an ill posed inverse problem. By contrast, travel-time based transport models avoid costly aquifer characterization. By considering breakthrough curves measured on different scales, one can distinguish between mixing, which is a prerequisite for reactions, and spreading, which per se does not foster reactions. In the travel-time based framework, the breakthrough curve of a solute crossing an observation plane, or ending in a well, is interpreted as the weighted average of concentrations in an ensemble of non-interacting streamtubes, each of which is characterized by a distinct travel-time value. Mixing is described by longitudinal dispersion and/or kinetic mass transfer along individual streamtubes, whereas spreading is characterized by the distribution of travel times which also determines the weights associated to each stream tube. Key issues in using the travel-time based framework include the description of mixing mechanisms and the estimation of the travel-time distribution. In this work, we account for both apparent longitudinal dispersion and kinetic mass transfer as mixing mechanisms, thus generalizing the stochastic-convective model with or without inter-phase mass transfer and the advective-dispersive streamtube model. We present a nonparametric approach of determining the travel-time distribution, given a breakthrough curve integrated over an observation plane and estimated mixing parameters. The latter approach is superior to fitting parametric models in cases where the true travel-time distribution exhibits multiple peaks or long tails. It is demonstrated that there is freedom for the combinations of mixing parameters and travel-time distributions to fit conservative breakthrough curves and describe the tailing. Reactive transport cases with a bimolecular instantaneous irreversible reaction and a dual Michaelis-Menten problem demonstrate that the mixing introduced by local dispersion and mass transfer may be described by apparent mean mass transfer with coefficients evaluated by local breakthrough curves.
Energy transmission through a double-wall curved stiffened panel using Green's theorem
NASA Astrophysics Data System (ADS)
Ghosh, Subha; Bhattacharya, Partha
2015-04-01
It is common practice in the aerospace and automobile industries to use double-wall panels as fuselage skins or in window panels to improve acoustic insulation. However, the scientific community has yet to develop a reliable prediction method based on a suitable vibro-acoustic model for sound transmission through a curved double-wall panel. In this quest, the present work delves into the modeling of energy transmission through a double-wall curved panel and subsequently studies the radiation of sound power into the free field from the curved panel in the low- to mid-frequency range. In the developed model, which simulates a stiffened aircraft fuselage configuration, the outer wall is provided with longitudinal stiffeners. A modal expansion theory based on Green's theorem is implemented to model the energy transmission through an acoustically coupled double-wall curved panel, and an elemental radiator approach is implemented to calculate the energy radiated from the curved surface into the free field. The developed model is first validated against various available numerical models. It is observed in the present study that the radius of curvature of the surface has a prominent effect on the sound power radiated into the free field. The effect of the thickness of the air gap between the two curved surfaces on the sound power radiation is also noted.
GPS/DR Error Estimation for Autonomous Vehicle Localization.
Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In
2015-08-21
Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level.
NASA Astrophysics Data System (ADS)
Kreyca, J. F.; Falahati, A.; Kozeschnik, E.
2016-03-01
For industry, the mechanical properties of a material in the form of flow curves are essential input data for finite-element simulations. Current practice is to obtain flow curves experimentally and to apply fitting procedures to obtain constitutive equations that describe the material response to external loading as a function of temperature and strain rate. Unfortunately, the experimental procedure for characterizing flow curves is complex and expensive, which is why the prediction of flow curves by computer modelling is becoming increasingly important. In the present work, we introduce a state-parameter-based model that is capable of predicting the flow curves of an A6061 aluminium alloy in different heat-treatment conditions. The model is implemented in the thermo-kinetic software package MatCalc and takes into account precipitation kinetics, subgrain formation, dynamic recovery by spontaneous annihilation, and dislocation climb. To validate the simulation results, a series of compression tests is performed on the thermo-mechanical simulator Gleeble 1500.
A six-parameter Iwan model and its application
NASA Astrophysics Data System (ADS)
Li, Yikun; Hao, Zhiming
2016-02-01
The Iwan model is a practical tool for describing the constitutive behavior of joints. In this paper, a six-parameter Iwan model based on a truncated power-law distribution with two Dirac delta functions is proposed, which gives a more comprehensive description of joints than previous Iwan models. Its analytical expressions, including the backbone curve, unloading curves, and energy dissipation, are deduced, and parameter identification procedures and the discretization method are provided. A model application based on Segalman et al.'s experimental work with bolted joints is carried out, and the simulation results for different numbers of Jenkins elements are discussed. The results indicate that the six-parameter Iwan model can be used to accurately reproduce the experimental phenomena of joints.
Nonlinear Curve-Fitting Program
NASA Technical Reports Server (NTRS)
Everhart, Joel L.; Badavi, Forooz F.
1989-01-01
A nonlinear optimization algorithm helps find the best-fit curve. The Nonlinear Curve Fitting Program, NLINEAR, is an interactive curve-fitting routine based on a quadratic expansion of the chi-squared (χ²) statistic. It utilizes a nonlinear optimization algorithm that calculates the best statistically weighted values of the parameters of the fitting function so that χ² is minimized. It provides the user with such statistical information as the goodness of fit and the estimated parameter values that produce the highest degree of correlation between the experimental data and the mathematical model. Written in FORTRAN 77.
Can contaminant transport models predict breakthrough?
Peng, Wei-Shyuan; Hampton, Duane R.; Konikow, Leonard F.; Kambham, Kiran; Benegar, Jeffery J.
2000-01-01
A solute breakthrough curve measured during a two-well tracer test was successfully predicted in 1986 using specialized contaminant transport models. Water was injected into a confined, unconsolidated sand aquifer and pumped out 125 feet (38.3 m) away at the same steady rate. The injected water was spiked with bromide for over three days; the outflow concentration was monitored for a month. Based on previous tests, the horizontal hydraulic conductivity of the thick aquifer varied by a factor of seven among 12 layers. Assuming stratified flow with small dispersivities, two research groups accurately predicted breakthrough with three-dimensional (12-layer) models using curvilinear elements following the arc-shaped flowlines in this test. Can contaminant transport models commonly used in industry, that use rectangular blocks, also reproduce this breakthrough curve? The two-well test was simulated with four MODFLOW-based models, MT3D (FD and HMOC options), MODFLOWT, MOC3D, and MODFLOW-SURFACT. Using the same 12 layers and small dispersivity used in the successful 1986 simulations, these models fit almost as accurately as the models using curvilinear blocks. Subtle variations in the curves illustrate differences among the codes. Sensitivities of the results to number and size of grid blocks, number of layers, boundary conditions, and values of dispersivity and porosity are briefly presented. The fit between calculated and measured breakthrough curves degenerated as the number of layers and/or grid blocks decreased, reflecting a loss of model predictive power as the level of characterization lessened. Therefore, the breakthrough curve for most field sites can be predicted only qualitatively due to limited characterization of the hydrogeology and contaminant source strength.
NASA Astrophysics Data System (ADS)
He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno
2018-03-01
This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multidata set optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas are used to identify the starting and end dates of snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparably to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins where other calibration data sets than discharge are often not available or very costly to obtain.
A Novel Uncertainty Framework for Improving Discharge Data Quality Using Hydraulic Modelling.
NASA Astrophysics Data System (ADS)
Mansanarez, V.; Westerberg, I.; Lyon, S. W.; Lam, N.
2017-12-01
Flood risk assessments rely on accurate discharge records. Establishing a reliable stage-discharge (SD) rating curve for calculating discharge from stage at a gauging station normally takes years of data collection. Estimation of high flows is particularly difficult, as high flows occur rarely and are often practically difficult to gauge. Hydraulically modelled rating curves can be derived from as few as two concurrent stage-discharge and water-surface slope measurements at different flow conditions, which means that a reliable rating curve can potentially be derived much faster than a traditional rating curve based on numerous stage-discharge gaugings. We introduce an uncertainty framework that uses hydraulic modelling to develop SD rating curves and estimate their uncertainties. The proposed framework incorporates information from both the hydraulic configuration (bed slope, roughness, vegetation) and the stage-discharge observation data (gaugings), and provides a direct estimation of the hydraulic configuration (slope, bed roughness, and vegetation roughness). Discharge time series are estimated by propagating stage records through the posterior rating curve results. We applied this novel method to two Swedish hydrometric stations, accounting for uncertainties in the gaugings for the hydraulic model. Results from these applications were compared to discharge measurements and official discharge estimations, and a sensitivity analysis was performed. We focused our analyses on high-flow uncertainty and the factors that could reduce it, in particular which data uncertainties are most important and at what flow conditions the gaugings should preferably be taken.
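For orientation, the classical relation that such hydraulically informed frameworks build on is the power-law rating curve Q = a(h − h0)^b; a sketch of fitting it to a handful of gaugings follows, with illustrative values and h0 fixed for simplicity (the paper estimates full posterior uncertainties instead of a single best fit).

```python
# Sketch: fit a power-law stage-discharge rating curve Q = a * (h - h0)^b
# to a few gaugings, with the cease-to-flow stage h0 assumed known.
import numpy as np
from scipy.optimize import curve_fit

h = np.array([0.42, 0.55, 0.80, 1.10, 1.60])    # stage, m (illustrative)
Q = np.array([0.8, 1.6, 4.0, 8.5, 19.0])        # gauged discharge, m^3/s
h0 = 0.30                                       # assumed cease-to-flow stage, m

def rating(h, a, b):
    return a * (h - h0) ** b

(a, b), _ = curve_fit(rating, h, Q, p0=[10.0, 1.7])
print(f"Q = {a:.1f} * (h - {h0}) ^ {b:.2f}")
print("discharge at h = 1.3 m:", round(rating(1.3, a, b), 1), "m3/s")
```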
The Chaotic Light Curves of Accreting Black Holes
NASA Technical Reports Server (NTRS)
Kazanas, Demosthenes
2007-01-01
We present model light curves for accreting Black Hole Candidates (BHC) based on a recently developed model of these sources. According to this model, the observed light curves and aperiodic variability of BHC are due to a series of soft-photon injections at random (Poisson) intervals and the stochastic nature of the Comptonization process that converts these soft photons into the observed high-energy radiation. The additional assumption of our model is that the Comptonization process takes place in an extended but non-uniform hot plasma corona surrounding the compact object. We compute the corresponding power spectral densities (PSD), autocorrelation functions, time skewness of the light curves, and time lags between the light curves of the sources at different photon energies, and compare our results with observations. Our model reproduces the observed light curves well, in that it provides good fits to their overall morphology (as manifest in the autocorrelation and time skewness) and also to their PSDs and time lags, by producing most of the variability power at time scales of a few seconds or longer, while at the same time allowing for shots of a few msec in duration, in accordance with observation. We suggest that refinement of this type of model, along with spectral and phase-lag information, can be used to probe the structure of this class of high-energy sources.
NASA Astrophysics Data System (ADS)
WANG, J.
2017-12-01
In stream water quality control, the total maximum daily load (TMDL) program is very effective. However, the load duration curves (LDC) of TMDL are difficult to establish in data-scarce watersheds, where no hydrological stations or long-term continuous hydrological records are available. Moreover, although point and non-point pollutant sources can be distinguished easily with the aid of LDC, where a pollutant comes from and where it will be transported in the watershed cannot be traced by LDC. To find the best management practices (BMPs) for pollutants in a watershed and to overcome this limitation of LDC, we propose to develop LDC based on the distributed hydrological model SWAT for water quality management in data-scarce river basins. In this study, the SWAT model was first established with the scarce hydrological data. Long-term daily flows were then generated with the established SWAT model and rainfall data from the adjacent weather station, and a flow duration curve (FDC) was developed from the generated daily flows. Considering the goals of water quality management, LDC for different pollutants can then be obtained from the FDC. With the monitored water quality data and the LDC, the water quality problems caused by point or non-point source pollutants in different seasons can be ascertained. Finally, the SWAT model was employed again to trace the spatial distribution and the origin of the pollutants, i.e., the agricultural practices and/or other human activities from which they come. A case study was conducted in the Jian-jiang River, a tributary of the Yangtze River, in Duyun City, Guizhou Province. Results indicate that this method can achieve TMDL-based water quality management and identify suitable BMPs for reducing pollutants in a watershed.
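The FDC-to-LDC step can be sketched compactly: rank the model-generated daily flows into a flow duration curve, then multiply by an allowable concentration to obtain the load duration curve (the flows and the standard below are illustrative stand-ins for SWAT output).

```python
# Sketch: flow duration curve (FDC) from daily flows, then load duration
# curve (LDC) as allowable load = flow * standard concentration.
import numpy as np

rng = np.random.default_rng(6)
daily_flow = rng.lognormal(mean=1.0, sigma=0.8, size=3650)   # m^3/s, stand-in data

flows = np.sort(daily_flow)[::-1]                            # descending
exceedance = np.arange(1, flows.size + 1) / (flows.size + 1) # exceedance probability

std_conc = 10.0                                              # allowable concentration, mg/L
# load (kg/day) = Q (m^3/s) * conc (mg/L) * 86.4 (unit conversion factor)
for p in (0.1, 0.5, 0.9):
    q = np.interp(p, exceedance, flows)
    print(f"exceedance {p:.0%}: flow {q:6.2f} m3/s, allowable load {q * std_conc * 86.4:8.0f} kg/day")
```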
An extended CFD model to predict the pumping curve in low pressure plasma etch chamber
NASA Astrophysics Data System (ADS)
Zhou, Ning; Wu, Yuanhao; Han, Wenbin; Pan, Shaowu
2014-12-01
A continuum-based CFD model is extended with a slip-wall approximation and a rarefaction effect on viscosity, in an attempt to predict the pumping flow characteristics in low-pressure plasma etch chambers. The flow regime inside the chamber ranges from slip flow (Kn ≈ 0.01) up to free molecular flow (Kn = 10). The momentum accommodation coefficient and the parameters for the Kn-modified viscosity are first calibrated against one set of measured pumping curves. The validity of the calibrated CFD model is then demonstrated by comparison with additional pumping curves measured in chambers of different geometry configurations. A more detailed comparison against a DSMC model for flow conductance over slits with contraction and expansion sections is also discussed.
Mixed-order phase transition in a minimal, diffusion-based spin model.
Fronczak, Agata; Fronczak, Piotr
2016-07-01
In this paper we exactly solve, within the grand canonical ensemble, a minimal spin model with a hybrid phase transition. We call the model diffusion-based because its Hamiltonian can be recovered from a simple dynamic procedure, which can be seen as an equilibrium statistical mechanics representation of a biased random walk. We outline the derivation of the phase diagram of the model, in which the triple point has the hallmarks of the hybrid transition: a discontinuity in the average magnetization and algebraically diverging susceptibilities. At this point, two second-order transition curves meet in equilibrium with the first-order curve, resulting in a prototypical mixed-order behavior.
Bayesian Multiscale Modeling of Closed Curves in Point Clouds
Gu, Kelvin; Pati, Debdeep; Dunson, David B.
2014-01-01
Modeling object boundaries based on image or point cloud data is frequently necessary in medical and scientific applications ranging from detecting tumor contours for targeted radiation therapy, to the classification of organisms based on their structural information. In low-contrast images or sparse and noisy point clouds, there is often insufficient data to recover local segments of the boundary in isolation. Thus, it becomes critical to model the entire boundary in the form of a closed curve. To achieve this, we develop a Bayesian hierarchical model that expresses highly diverse 2D objects in the form of closed curves. The model is based on a novel multiscale deformation process. By relating multiple objects through a hierarchical formulation, we can successfully recover missing boundaries by borrowing structural information from similar objects at the appropriate scale. Furthermore, the model’s latent parameters help interpret the population, indicating dimensions of significant structural variability and also specifying a ‘central curve’ that summarizes the collection. Theoretical properties of our prior are studied in specific cases and efficient Markov chain Monte Carlo methods are developed, evaluated through simulation examples and applied to panorex teeth images for modeling teeth contours and also to a brain tumor contour detection problem. PMID:25544786
Analysis of mixed model in gear transmission based on ADAMS
NASA Astrophysics Data System (ADS)
Li, Xiufeng; Wang, Yabin
2012-09-01
The traditional methods for simulating mechanical gear drives are the gear-pair method and the solid-to-solid contact method. The former has higher solution efficiency but lower accuracy; the latter usually obtains more precise results, but the calculation process is complex and does not converge easily. Most current research focuses on the description of geometric models and the definition of boundary conditions, but neither addresses these problems fundamentally. To improve simulation efficiency while ensuring accurate results, a mixed-model method is presented that uses gear tooth profiles in place of the solid gear to simulate gear movement. In the modeling process, the solid models of the mechanism are first built in SolidWorks; then the point coordinates of the gear outline curves are collected using the SolidWorks API and fitted curves are created in ADAMS from these coordinates; next, the positions of the fitted curves are adjusted according to the position of the contact area; finally, the loading conditions, boundary conditions and simulation parameters are defined. The method provides gear shape information through the tooth profile curves, simulates the meshing process through curve-to-curve contact, and obtains mass and inertia data from the solid gear models. The simulation process combines the two models to complete the gear drive analysis. To verify the validity of the method, both theoretical derivation and numerical simulation of a runaway escapement are conducted. The results show that the computational efficiency of the mixed-model method is 1.4 times that of the traditional solid-to-solid contact method, while the simulation results are closer to theoretical calculations. Consequently, the mixed-model method has high application value for studying the dynamics of gear mechanisms.
NASA Astrophysics Data System (ADS)
Kumar, J.; Jain, A.; Srivastava, R.
2005-12-01
The identification of pollution sources in aquifers is an important area of research, not only for hydrologists but also for local and Federal agencies and defense organizations. Once data in the form of pollutant concentration measurements at observation wells become known, it is important to identify the polluting industry in order to implement punitive or remedial measures. Traditionally, hydrologists have relied on conceptual methods for the identification of groundwater pollution sources. Identifying groundwater pollution sources using conceptual methods requires a thorough understanding of groundwater flow and contaminant transport processes, and inverse modeling procedures that are highly complex and difficult to implement. Recently, soft computing techniques, such as artificial neural networks (ANNs) and genetic algorithms, have provided an attractive and easy-to-implement alternative for solving complex problems efficiently. Some researchers have used ANNs for the identification of pollution sources in aquifers. A major problem with most previous studies using ANNs has been the large size of the neural networks needed to model the inverse problem. The breakthrough curve at an observation well may consist of hundreds of concentration measurements, and presenting all of them to the input layer of an ANN not only results in very large networks but also requires large amounts of training and testing data to develop the ANN models. This paper presents the results of a study aimed at using certain characteristics of the breakthrough curves together with ANNs for determining the distance of the pollution source from a given observation well. Two different neural network models are developed that differ in the manner of characterizing the breakthrough curves. The first ANN model uses five parameters, similar to the synthetic unit hydrograph parameters, to characterize the breakthrough curves: peak concentration, time to peak concentration, the widths of the breakthrough curve at 50% and 75% of the peak concentration, and the time base of the breakthrough curve. The second ANN model employs only the first four parameters, leaving out the time base. The measurement of a breakthrough curve at an observation well involves very high costs in sample collection at suitable time intervals and in analysis for various contaminants. The receding portions of breakthrough curves are normally very long, and excluding the time base from modeling would result in considerable cost savings. Feed-forward multi-layer perceptron (MLP) neural networks trained using the back-propagation algorithm are employed in this study. The ANN models for the two approaches were developed using simulated data generated for conservative pollutant transport through a homogeneous aquifer. A new approach for ANN training using back-propagation is employed that considers two different error statistics to prevent over-training and under-training of the ANNs. The preliminary results indicate that the ANNs are able to identify the location of the pollution source very efficiently from both methods of breakthrough curve characterization.
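A minimal sketch of the approach described above, assuming scikit-learn as a stand-in for the paper's custom back-propagation training: extract the five breakthrough-curve parameters and regress source distance on them with a small feed-forward MLP. The feature-extraction details (e.g. the 1%-of-peak cutoff used here for the time base) are our assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def curve_features(t, c):
    """Five summary parameters of a breakthrough curve c(t)."""
    t, c = np.asarray(t), np.asarray(c)
    i = np.argmax(c)
    cp, tp = c[i], t[i]
    def width(frac):
        above = t[c >= frac * cp]
        return above[-1] - above[0]
    supp = t[c > 0.01 * cp]                 # crude time base: span above 1% of peak
    return [cp, tp, width(0.5), width(0.75), supp[-1] - supp[0]]

# X: rows of the five features from simulated breakthrough curves; y: source distances
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000))
# model.fit(X_train, y_train)
# d_hat = model.predict([curve_features(t_obs, c_obs)])
```

The four-parameter variant simply drops the last feature, trading a small loss of information for the cost savings noted above.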
Eutectic melting in the MgO-SiO2 system and its implication to Earth's lower mantle evolution
NASA Astrophysics Data System (ADS)
Baron, M. A.; Lord, O. T.; Myhill, R.; Thomson, A.; Wang, W.; Tronnes, R. G.; Walter, M. J.
2017-12-01
Eutectic melting curves in the system MgO-SiO2 have been experimentally studied at lower mantle pressures using laser-heated diamond anvil cell (LH-DAC) techniques. We investigated eutectic melting of bridgmanite plus periclase in the MgO-MgSiO3 binary and of bridgmanite plus stishovite in the MgSiO3-SiO2 sub-system, as the simplest models of natural peridotite and basalt, respectively. Eutectic melting was detected on the basis of thermal perturbations (i.e. a melting plateau) during the experiments, as well as post-experimental textural and chemical analyses of the recovered samples. We also performed a suite of sub-solidus experiments in order to bracket the eutectic melting experiments. The melting curve of the model basalt occurs at lower temperatures and has a shallower dT/dP slope and slightly less curvature than the model peridotite melting curve. Overall, the melting temperatures detected in this study are in good agreement with previous experiments and ab initio simulations at 25 GPa (Liebske and Frost, 2012; de Koker et al., 2013). However, at higher pressures the measured eutectic melting curves are systematically lower in temperature than curves extrapolated on the basis of thermodynamic modelling of low-pressure experimental data, and than those calculated from atomistic simulations. In turn, when compared with previously published solidus curves obtained for natural basalt and peridotite (e.g. Fiquet et al., 2010; Andrault et al., 2011; Nomura et al., 2014; Hirose et al., 1999; Andrault et al., 2014; Pradhan et al., 2015), the melting curves from this study are higher, although the difference in temperature is less significant than previously thought. Based on a comparison of the curvature of the model peridotite eutectic relative to an MgSiO3 melt adiabat, we infer that crystallization in a global magma ocean would begin at ~100 GPa rather than at the bottom of the mantle, allowing for an early basal melt layer. The model peridotite melting curve lies 500 K above the mantle geotherm at the core-mantle boundary, indicating that it will not be molten. The model basalt melting curve intersects the geotherm at the base of the mantle, and partial melting of subducted oceanic crust is therefore expected.
The Regulus occultation light curve and the real atmosphere of Venus
NASA Technical Reports Server (NTRS)
Veverka, J.; Wasserman, L.
1974-01-01
An inversion of the light curve observed during the July 7, 1959, occultation of Regulus by Venus leads to the conclusion that the light curve cannot be reconciled with models of the Venus atmosphere based on spacecraft observations. The event occurred in daylight and, under the consequently difficult observing conditions, it seems likely that the Regulus occultation light curve is marred by systematic errors in spite of the competence of the observers involved.
Greenland, S
1996-03-15
This paper presents an approach to back-projection (back-calculation) of human immunodeficiency virus (HIV) person-year infection rates in regional subgroups, based on combining a log-linear model for subgroup differences with a penalized spline model for trends. The penalized spline approach allows flexible trend estimation but requires far fewer parameters than fully non-parametric smoothers, thus saving parameters that can be used in estimating subgroup effects. Use of a reasonable prior curve to construct the penalty function minimizes the degree of smoothing needed beyond model specification. The approach is illustrated in an application to acquired immunodeficiency syndrome (AIDS) surveillance data from Los Angeles County.
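A minimal sketch of a penalized spline trend fit in the spirit described above, using a truncated-line basis with a ridge penalty on the knot coefficients; the basis, penalty and variable names are illustrative, not Greenland's exact specification (which builds the penalty from a prior curve).

```python
import numpy as np

def penalized_spline_fit(x, y, knots, lam):
    """Fit a trend with basis [1, x, (x - k)_+ for each knot k] and a
    ridge penalty lam applied to the knot coefficients only (a minimal P-spline)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - k, 0.0) for k in knots])
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))   # do not penalize intercept/slope
    beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
    return X @ beta, beta
```

Larger lam shrinks the fit toward a straight line; a penalty built around a prior curve, as in the paper, would shrink toward that curve instead.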
A Model for Hydraulic Properties Based on Angular Pores with Lognormal Size Distribution
NASA Astrophysics Data System (ADS)
Durner, W.; Diamantopoulos, E.
2014-12-01
Soil water retention and unsaturated hydraulic conductivity curves are mandatory for modeling water flow in soils. A common approach is to measure a few points of the water retention curve and to calculate the hydraulic conductivity curve by assuming that the soil can be represented as a bundle of capillary tubes. Both curves are then used to predict water flow at larger spatial scales. However, the predictive power of these curves is often very limited. This is easily illustrated by measuring the soil hydraulic properties (SHPs) in a drainage experiment and then using these properties to predict water flow during imbibition. Further complications arise from the incomplete wetting of water on the solid matrix, which results in finite contact angles at the solid-water-air interfaces. To address these problems, we present a physically based model for hysteretic SHPs built on bundles of angular pores. Hysteresis for individual pores is caused by (i) different snap-off pressures during filling and emptying of single angular pores and (ii) different advancing and receding contact angles for fluids that are not perfectly wetting. We derive a model of hydraulic conductivity as a function of contact angle by assuming flow perpendicular to pore cross sections, and present closed-form expressions for both the sample-scale water retention and hydraulic conductivity functions by assuming a log-normal statistical distribution of pore size. We tested the new model against drainage and imbibition experiments on various sandy materials, conducted with liquids of differing wettability. The model described both imbibition and drainage experiments very well, assuming a unique pore size distribution for the sample and a zero contact angle for the perfectly wetting liquid. Eventually, we see the possibility of relating the particle size distribution to a model that describes the SHPs.
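As a hedged illustration of how a log-normal pore-size distribution yields a closed-form retention function, here is the Kosugi-type expression in Python; the authors' angular-pore model with snap-off pressures and contact-angle hysteresis will differ in detail, so this shows only the log-normal ingredient.

```python
import numpy as np
from scipy.special import erfc

def retention_lognormal(h, h_m, sigma):
    """Effective saturation S_e(h) for a log-normal pore-size distribution
    (Kosugi-type closed form): S_e = 0.5 * erfc(ln(h/h_m) / (sqrt(2)*sigma)).
    h: suction head (> 0); h_m: median suction; sigma: log-spread of pore sizes."""
    h = np.asarray(h, dtype=float)
    return 0.5 * erfc(np.log(h / h_m) / (np.sqrt(2.0) * sigma))
```

Shifting h_m between drainage and imbibition branches is one simple way such a closed form can express hysteresis.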
A Simulation Study of Methods for Selecting Subgroup-Specific Doses in Phase I Trials
Morita, Satoshi; Thall, Peter F.; Takeda, Kentaro
2016-01-01
Summary Patient heterogeneity may complicate dose-finding in phase I clinical trials if the dose-toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method (O’Quigley, et al., 1990) based on a hierarchical Bayesian dose-toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup-specific dose selection and safety rules. A simulation study is presented that includes comparison of this method to three alternative approaches, based on non-hierarchical models, that make different types of assumptions about within-subgroup dose-toxicity curves. The simulations show that the hierarchical model-based method is recommended in settings where the dose-toxicity curves are exchangeable between subgroups. We present practical guidelines for application, and provide computer programs for trial simulation and conduct. PMID:28111916
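For orientation, a minimal sketch of the non-hierarchical, single-group continual reassessment method that the hierarchical model generalizes: a one-parameter power model p_j(a) = skeleton_j ** exp(a) with a normal prior on a, posterior computed on a grid. The prior variance and all names are illustrative, not the paper's specification.

```python
import numpy as np

def crm_posterior_tox(skeleton, doses_given, tox, grid=np.linspace(-3, 3, 601)):
    """Posterior-mean toxicity probability at each dose under the power-model CRM.

    skeleton: prior guesses of toxicity per dose; doses_given: dose index per
    patient; tox: 0/1 toxicity outcome per patient."""
    prior = np.exp(-0.5 * grid**2 / 2.0)                     # N(0, var=2), unnormalized
    p_obs = np.array([[skeleton[d] ** np.exp(a) for d in doses_given] for a in grid])
    lik = np.prod(np.where(np.array(tox) == 1, p_obs, 1.0 - p_obs), axis=1)
    w = prior * lik
    w /= w.sum()
    p_all = np.array([[s ** np.exp(a) for s in skeleton] for a in grid])
    return w @ p_all                                         # posterior-mean tox per dose

# Usage (illustrative): recommend the dose whose posterior toxicity is closest
# to the target, e.g. np.argmin(np.abs(crm_posterior_tox(...) - 0.25))
```

The hierarchical extension in the paper replaces the single parameter a with subgroup-specific parameters that share a common prior, borrowing strength across exchangeable subgroups.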
Conduction and rectification in NbOx- and NiO-based metal-insulator-metal diodes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osgood, Richard M.; Giardini, Stephen; Carlson, Joel
2016-09-01
Conduction and rectification in nanoantenna-coupled NbOx- and NiO-based metal-insulator-metal (MIM) diodes ('nanorectennas') are studied by comparing new theoretical predictions with the measured response of nanorectenna arrays. A new quantum mechanical model is reported and agrees with measurements of current-voltage (I-V) curves, over 10 orders of magnitude in current density, from [NbOx(native)-Nb2O5]- and NiO-based samples with oxide thicknesses in the range of 5-36 nm. The model, which introduces new physics and features, including temperature, electron effective mass, and image potential effects using the pseudobarrier technique, improves upon widely used earlier models, calculates the MIM diode's I-V curve, and quantitatively predicts the rectification responsivity of high frequency voltages generated in a coupled nanoantenna array by visible/near-infrared light. The model applies both at higher frequencies, when high-energy photons are incident, and at lower frequencies, when the formula for classical rectification, involving derivatives of the I-V curve, may be used. The rectified low-frequency direct current is well predicted by this work's model, but not by fitting the experimentally measured I-V curve with a polynomial or by using the older Simmons model (as shown herein). By fitting the measured I-V curves with our model, the barrier heights in Nb-(NbOx(native)-Nb2O5)-Pt and Ni-NiO-Ti/Ag diodes are found to be 0.41/0.77 and 0.38/0.39 eV, respectively, similar to literature reports, but with effective mass much lower than the free space value. The NbOx(native)-Nb2O5 dielectric properties improve, and the effective Pt-Nb2O5 barrier height increases, as the oxide thickness increases. An observation of a direct current of ~4 nA for normally incident, focused 514 nm continuous wave laser beams is reported, similar in magnitude to recent reports. This measured direct current is compared to the prediction for rectified direct current, given by the rectification responsivity calculated from the I-V curve times the input power.
Starspot detection and properties
NASA Astrophysics Data System (ADS)
Savanov, I. S.
2013-07-01
I review the currently available techniques for starspot detection, including one-dimensional spot modelling of photometric light curves. Special attention is paid to the modelling of photospheric activity based on the high-precision light curves obtained with the space missions MOST, CoRoT, and Kepler. Physical spot parameters (temperature, sizes, and variability time scales, including short-term activity cycles) are discussed.
S-curve networks and an approximate method for estimating degree distributions of complex networks
NASA Astrophysics Data System (ADS)
Guo, Jin-Li
2010-12-01
In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Based on statistics of China's IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model using an S-curve (logistic curve). The growing trend of IPv4 addresses in China is forecasted, providing reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, we propose a finite network model with bulk growth, called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. We develop an approximate method to predict the growth dynamics of the individual nodes and use it to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with the simulations, obeying an approximately power-law form. This method can overcome a shortcoming of the Barabási-Albert method commonly used in current network research.
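A minimal sketch of the S-curve (logistic) forecasting step, assuming SciPy; the data arrays and starting values are placeholders, not the paper's IPv4 statistics.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """S-curve: carrying capacity K, growth rate r, inflection time t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# t: observation times (e.g. years); n: cumulative address counts (placeholders)
# popt, _ = curve_fit(logistic, t, n, p0=[2.0 * n.max(), 0.5, t.mean()])
# forecast = logistic(t_future, *popt)
```

The fitted K is the finite growth limit that motivates the S-curve network model described above.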
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Brandon C.; Becker, Andrew C.; Sobolewska, Malgosia
2014-06-10
We present the use of continuous-time autoregressive moving average (CARMA) models as a method for estimating the variability features of a light curve, and in particular its power spectral density (PSD). CARMA models fully account for irregular sampling and measurement errors, making them valuable for quantifying variability, forecasting and interpolating light curves, and variability-based classification. We show that the PSD of a CARMA model can be expressed as a sum of Lorentzian functions, which makes them extremely flexible and able to model a broad range of PSDs. We present the likelihood function for light curves sampled from CARMA processes, placing them on a statistically rigorous foundation, and we present a Bayesian method to infer the probability distribution of the PSD given the measured light curve. Because calculation of the likelihood function scales linearly with the number of data points, CARMA modeling scales to current and future massive time-domain data sets. We conclude by applying our CARMA modeling approach to light curves for an X-ray binary, two active galactic nuclei, a long-period variable star, and an RR Lyrae star in order to illustrate their use, applicability, and interpretation.
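The Lorentzian-sum structure follows from the rational form of the CARMA power spectrum. A minimal sketch, with coefficients given in increasing powers (a common convention that may differ from the paper's normalization):

```python
import numpy as np

def carma_psd(freqs, alpha, beta, sigma2):
    """PSD of a CARMA(p, q) process:
    P(f) = sigma2 * |beta(2*pi*i*f)|**2 / |alpha(2*pi*i*f)|**2,
    where alpha, beta are polynomial coefficients in increasing powers
    (alpha[0] + alpha[1]*s + ...)."""
    s = 2j * np.pi * np.asarray(freqs, dtype=float)
    num = np.abs(np.polyval(list(beta)[::-1], s)) ** 2    # polyval wants highest power first
    den = np.abs(np.polyval(list(alpha)[::-1], s)) ** 2
    return sigma2 * num / den
```

Partial-fraction expansion of this rational function over the roots of the denominator is what produces the sum of Lorentzians noted above.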
Babu, K Suresh; Kumar, A Nanda; Kommi, Pradeep Babu; Krishnan, P Hari; Kumar, M Senthil; Sabapathy, R Senkutvan; Kumar, V Vijay
2017-08-01
To date, many orthodontists correct malocclusion based on the patient's aesthetic concerns and fail to correct the compensatory curves. This is due to limited insight into the relationships among the compensatory curves and their correlation with treatment prognosis. The purpose of this study was to evaluate the correlation between the curve of Spee, the curve of Monson and the curve of Wilson, their influence on dentoskeletal morphology and their contribution to occlusal stability. This study included 104 non-orthodontic models. The study casts were subdivided into two groups: Group I consisted of 52 non-orthodontic models with Class I molar relationship and Group II consisted of 52 non-orthodontic models with Class II molar relationship. The curve of Spee was measured with a digital vernier caliper, the curve of Monson was estimated using specially made spheres (7-inch, 8-inch and 9-inch) and the curve of Wilson was evaluated using Cone Beam Computed Tomography (CBCT). The mean values of the curve of Spee for Group I and Group II were 1.844 mm and 3.188 mm, respectively. For the curve of Monson, the mean values for Group I and Group II were 7.65 inches and 7.40 inches. The mean angles of the curve of Wilson for Group I and Group II were 12.05 and 16.49 degrees. The results showed a positive correlation between the curve of Spee and the curve of Wilson, no correlation between the curve of Monson and the curve of Wilson, and no correlation between the curve of Spee and the curve of Monson; Pearson correlation coefficient analysis confirmed these results. The data found in this study can be applied clinically to the diagnosis and treatment planning of Class I and Class II malocclusion patients.
The Use of Artificial Neural Networks for Forecasting the Electric Demand of Stand-Alone Consumers
NASA Astrophysics Data System (ADS)
Ivanin, O. A.; Direktor, L. B.
2018-05-01
The problem of short-term forecasting of electric power demand of stand-alone consumers (small inhabited localities) situated outside centralized power supply areas is considered. The basic approaches to modeling the electric power demand depending on the forecasting time frame and the problems set, as well as the specific features of such modeling, are described. The advantages and disadvantages of the methods used for the short-term forecast of the electric demand are indicated, and difficulties involved in the solution of the problem are outlined. The basic principles of arranging artificial neural networks are set forth; it is also shown that the proposed method is preferable when the input information necessary for prediction is lacking or incomplete. The selection of the parameters that should be included into the list of the input data for modeling the electric power demand of residential areas using artificial neural networks is validated. The structure of a neural network is proposed for solving the problem of modeling the electric power demand of residential areas. The specific features of generation of the training dataset are outlined. The results of test modeling of daily electric demand curves for some settlements of Kamchatka and Yakutia based on known actual electric demand curves are provided. The reliability of the test modeling has been validated. A high value of the deviation of the modeled curve from the reference curve obtained in one of the four reference calculations is explained. The input data and the predicted power demand curves for the rural settlement of Kuokuiskii Nasleg are provided. The power demand curves were modeled for four characteristic days of the year, and they can be used in the future for designing a power supply system for the settlement. To enhance the accuracy of the method, a series of measures based on specific features of a neural network's functioning are proposed.
NASA Astrophysics Data System (ADS)
Uznir, U.; Anton, F.; Suhaibah, A.; Rahman, A. A.; Mioc, D.
2013-09-01
The advantages of three-dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, computer games, etc. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using web standards. However, these 3D city models consume much more storage compared to two-dimensional (2D) spatial data, since they involve extra geometrical and topological information together with semantic data. Without a proper spatial data clustering method and a corresponding spatial data access method, retrieving portions of, and especially searching, these 3D city models will not be done optimally. Even though current developments are based on an open data model adopted by the Open Geospatial Consortium (OGC) called CityGML, its XML-based structure makes it challenging to cluster the 3D urban objects. In this research, we propose a data constellation technique based on space-filling curves (3D Hilbert curves) for 3D city model data representation. Unlike previous methods, which try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings, in this research we extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested using a CityGML dataset of 1,000 building blocks and the results are presented in this paper. Implementing space-filling curves in 3D city modeling improves data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a subinterval of the [0, 1] interval to the corresponding portion of the d-dimensional Hilbert curve, preserves the Lebesgue measure and is Lipschitz continuous. Depending on the application, several alternatives are possible for clustering spatial data in the third dimension compared to clustering in 2D.
Analytical Problems and Suggestions in the Analysis of Behavioral Economic Demand Curves.
Yu, Jihnhee; Liu, Liu; Collins, R Lorraine; Vincent, Paula C; Epstein, Leonard H
2014-01-01
Behavioral economic demand curves (Hursh, Raslear, Shurtleff, Bauman, & Simmons, 1988) are an innovative approach to characterizing the relationship between consumption of a substance and its price. In this article, we investigate common analytical issues in the use of behavioral economic demand curves that can cause inconsistent interpretations, and we provide methodological suggestions to address them. We first demonstrate that log transformation with different added values for handling zeros changes model parameter estimates dramatically. Second, demand curves are often analyzed using an overparameterized model that results in an inefficient use of the available data and a lack of assessment of the variability among individuals. To address these issues, we apply a nonlinear mixed effects model based on multivariate error structures that has not previously been used to analyze behavioral economic demand curves in the literature. We also propose analytical formulas for the relevant standard errors of derived values such as Pmax, Omax, and elasticity. The proposed model stabilizes the derived values regardless of the added increment used and provides substantially smaller standard errors. We illustrate the data analysis procedure using data from a relative reinforcement efficacy study of simulated marijuana purchasing.
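For context, a hedged sketch of fitting one widely used demand equation (the Hursh-Silberberg exponential model) in Python; the article itself fits a nonlinear mixed effects model with a multivariate error structure, which this fixed-effects version does not capture. Variable names and starting values are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

K = 2.0  # range constant, often held fixed across data sets

def exp_demand(C, logQ0, alpha):
    """Hursh-Silberberg exponential demand: log10 consumption vs. unit price C."""
    Q0 = 10.0 ** logQ0
    return logQ0 + K * (np.exp(-alpha * Q0 * C) - 1.0)

# C: prices; y: log10 consumption at each price (nonzero observations)
# (logQ0, alpha), _ = curve_fit(exp_demand, C, y, p0=[1.0, 0.01])
# Pmax can then be located numerically as the price where the point-slope
# (elasticity) of the fitted curve equals -1, and Omax as price * consumption there.
```

Because this model is fit on log consumption, zeros still require an added increment, which is exactly the sensitivity the article documents and the mixed effects formulation is meant to stabilize.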
NASA Astrophysics Data System (ADS)
de Ruiter, Marleen; Ward, Philip; Daniell, James; Aerts, Jeroen
2017-04-01
In a cross-discipline study, an extensive literature review was conducted to increase the understanding of vulnerability indicators used in both earthquake and flood vulnerability assessments, and to provide insights into potential improvements of both. The study identifies and compares indicators used to quantitatively assess earthquake and flood vulnerability, and discusses their respective differences and similarities. Indicators are categorized into physical and social categories, and further subdivided (when possible) into measurable and comparable indicators. Physical vulnerability indicators are differentiated by exposed assets such as buildings and infrastructure. Social indicators are grouped into subcategories such as demographics, economics and awareness. Next, two different vulnerability model types that use these indicators are described: index-based and curve-based vulnerability models. A selection of these models (e.g. HAZUS) is described and compared on several characteristics such as temporal and spatial aspects. It appears that earthquake vulnerability methods are traditionally strongly developed towards physical attributes at the object scale and used in vulnerability curve models, whereas flood vulnerability studies focus more on indicators applied at aggregated land-use scales. Flood risk studies could be improved using approaches from earthquake studies, such as incorporating more detailed lifeline and building indicators, and developing object-based vulnerability curve assessments of physical vulnerability, for example by defining flood vulnerability curves based on building material. Related to this is the incorporation of building occupation patterns based on time of day (at 2 a.m. most people will be at home, while at 2 p.m. most people will be in the office). Earthquake assessments could learn from flood studies when it comes to the refined selection of social vulnerability indicators. Based on the lessons obtained in this study, we recommend that future studies further explore cross-hazard approaches.
Structural Acoustic Physics Based Modeling of Curved Composite Shells
2017-09-19
Results show that the finite element computational models accurately match analytical calculations, and that the composite material studied in this...products.
Subject terms: Finite Element Analysis, Structural Acoustics, Fiber-Reinforced Composites, Physics-Based Modeling.
A Multi-year Multi-passband CCD Photometric Study of the W UMa Binary EQ Tauri
NASA Astrophysics Data System (ADS)
Alton, K. B.
2009-12-01
A revised ephemeris and updated orbital period for EQ Tau have been determined from newly acquired (2007-2009) CCD-derived photometric data. A Roche-type model based on the Wilson-Devinney code produced simultaneous theoretical fits of light curve data in three passbands by invoking cold spots on the primary component. These new model fits, along with similar light curve data for EQ Tau collected during the previous six seasons (2000-2006), provided a rare opportunity to follow the seasonal appearance of star spots on a W UMa binary system over nine consecutive years. Fixed values for q, Ω1,2, T1, T2, and i, based upon the mean of eleven separately determined model fits produced for this system, are hereafter proposed for future light curve modeling of EQ Tau. With the exception of the 2001 season, all other light curves produced since then required a spotted solution to address the flux asymmetry exhibited by this binary system at Max I and Max II. At least one cold spot on the primary appears in seven out of twelve light curves for EQ Tau produced over the last nine years, whereas in six instances two cold spots on the primary star were invoked to improve the model fit. Solutions using a hot spot were less common and involved positioning a single spot on the primary constituent during the 2001-2002, 2002-2003, and 2005-2006 seasons.
del Moral, F; Vázquez, J A; Ferrero, J J; Willisch, P; Ramírez, R D; Teijeiro, A; López Medina, A; Andrade, B; Vázquez, J; Salvador, F; Medal, D; Salgado, M; Muñoz, V
2009-09-01
Modern radiotherapy uses complex treatments that necessitate more complex quality assurance procedures. As a continuous medium, GafChromic EBT films offer suitable features for such verification. However, their sensitometric curve is not fully understood in terms of classical theoretical models. In fact, measured optical densities and those predicted by the classical models differ significantly, and this difference increases systematically with wider dose ranges. Thus, achieving the accuracy required for intensity-modulated radiotherapy (IMRT) by classical methods is not possible, precluding their use. As a result, experimental parametrizations, such as polynomial fits, are replacing phenomenological expressions in modern investigations. This article focuses on identifying new theoretical ways to describe sensitometric curves and on evaluating the quality of fit for experimental data based on four proposed models. A complete mathematical formalism, starting with a geometrical version of the classical theory, is used to develop new expressions for the sensitometric curves. General results from percolation theory are also used. A flat-bed-scanner-based method was chosen for the film analysis. Different tests were performed, such as consistency of the numeric results for the proposed model and double examination using data from independent researchers. Results show that the percolation-theory-based model provides the best theoretical explanation for the sensitometric behavior of GafChromic films. The different sizes of the active centers, or monomer crystals, of the film are the basis of this model, allowing acquisition of information about the internal structure of the films. Values for the mean size of the active centers were obtained in accordance with technical specifications. In this model, the dynamics of the interaction between the active centers of GafChromic film and radiation is also characterized by means of its interaction cross-section value. The percolation model fulfills the accuracy requirements for quality-control procedures when large dose ranges are used and offers a physical explanation for the film response.
The potential of artificial aging for modelling of natural aging processes of ballpoint ink.
Weyermann, Céline; Spengler, Bernhard
2008-08-25
Artificial aging has been used to reproduce natural aging processes at an accelerated pace. Questioned documents were exposed to light or high temperature in a well-defined manner in order to simulate an increased age. This may be used to study the aging processes or to date documents by reproducing their aging curve. Ink was studied in particular because it is deposited on the paper when a document, such as a contract, is produced. Once on the paper, aging processes start through degradation of dyes, drying of solvents and polymerisation of resins. Modelling of dye and solvent aging was attempted. These processes, however, follow complex pathways influenced by many factors, which can be classified into three major groups: ink composition, paper type and storage conditions. The influence of these factors is such that different aging states can be obtained at an identical point in time. Storage conditions in particular are difficult to simulate, as they depend on environmental conditions (e.g. intensity and dose of light, temperature, air flow, humidity) and cannot be controlled in the natural aging of questioned documents. The problem therefore lies more in the variety of different conditions a questioned document might be exposed to during its natural aging than in the simulation of such conditions in the laboratory. Nevertheless, precise modelling of natural aging curves based on artificial aging curves is obtained when performed on the same paper and ink. A standard model for aging processes of ink on paper is therefore presented, based on a fit of aging curves to a power law of solvent concentration as a function of time. A mathematical transformation of artificial aging curves into modelled natural aging curves results in excellent overlap with data from real natural aging processes.
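A minimal sketch of the power-law fit described above, assuming SciPy; variable names are placeholders. The fitted artificial-aging curve would then be transformed onto the natural time axis for the same ink and paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def aging_power_law(t, a, b):
    """Solvent concentration as a power law of document age t (t > 0);
    b is typically negative as the solvent dries."""
    return a * np.power(t, b)

# t_art: artificial-aging times; c_art: measured solvent levels (placeholders)
# (a, b), _ = curve_fit(aging_power_law, t_art, c_art, p0=[c_art[0], -0.5])
```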
Composing chaotic music from the letter m
NASA Astrophysics Data System (ADS)
Sotiropoulos, Anastasios D.
Chaotic music is composed from a proposed iterative map depicting the letter m, relating the pitch, duration and loudness of successive steps. Each of the two curves of the letter m is based on the classical logistic map. Thus, the generating map is xn+1 = r xn(1/2 - xn) for xn between 0 and 1/2, defining the first curve, and xn+1 = r (xn - 1/2)(1 - xn) for xn between 1/2 and 1, representing the second curve. The parameter r, which determines the height(s) of the letter m, varies from 2 to 16, the latter value ensuring fully developed chaotic solutions for the whole letter m; r = 8 yields fully chaotic solutions only for its first curve. The m-model yields fixed points, bifurcation points and chaotic regions for each separate curve, as well as values of the parameter r greater than 8 which produce inter-fixed points, inter-bifurcation points and inter-chaotic regions from the interplay of the two curves. Based on this, music is composed by mapping the m-recurrence model solutions onto actual notes. The resulting musical score strongly depends on the sequence of notes chosen by the composer to define the musical range corresponding to the range of the chaotic mathematical solutions x from 0 to 1. Here, two musical ranges are used: one is the middle chromatic scale and the other is the seven-octave range. At the composer's will and, for aesthetics, within the same composition, notes can be the outcome of different values of r and/or shifted to any octave. Compositions with endings of non-repeating note patterns result from values of r in the m-model that do not produce bifurcations. Scores of chaotic music composed from the m-model and the classical logistic model are presented.
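Since the generating map is fully specified in the text, a short Python sketch can reproduce it; the mapping of x onto one chromatic octave is one of the composer's choices described above, and the note list, starting value and step count are illustrative.

```python
def m_map(x, r):
    """One iteration of the letter-m map: two logistic-like humps on [0, 1]."""
    if x <= 0.5:
        return r * x * (0.5 - x)          # first curve of the m
    return r * (x - 0.5) * (1.0 - x)      # second curve of the m

def compose(x0=0.3, r=16.0, steps=32):
    """Map the chaotic orbit onto the middle chromatic scale."""
    notes = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    x, score = x0, []
    for _ in range(steps):
        x = m_map(x, r)
        score.append(notes[min(int(x * 12), 11)])   # partition [0, 1] into 12 pitches
    return score

print(compose())
```

Note that r = 16 keeps the orbit in [0, 1], since each hump peaks at r/16; the same orbit could equally drive duration and loudness, as the text describes.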
4963 Kanroku: Asteroid with a possible precession of rotation axis
NASA Astrophysics Data System (ADS)
Sokova, Iraida A.; Marchini, Alessandro; Franco, Lorenzo; Papini, Riccardo; Salvaggio, Fabio; Palmas, Teodora; Sokov, Eugene N.; Garlitz, Joe; Knight, Carl R.; Bretton, Marc
2018-04-01
Based on photometric observations of 4963 Kanroku as part of a campaign to measure its light-curve, changes of the light-curve profile have been detected. These changes are of a periodic nature, i.e. the profiles change with a detected period P = 16.4032 h. Based on simulations of the shape of the asteroid and using the observational data, we make the assumption that such changes of the asteroid's light-curve could be caused by a precession force acting on its axis of rotation. When the 4963 Kanroku light-curve is simulated taking into account the detected precession and the parameters of the asteroid's shape, the modeled light-curves are in good agreement with those obtained from the observation campaign. Thus, the detected precession force may indicate a possible satellite of the asteroid 4963 Kanroku.
NASA Astrophysics Data System (ADS)
Jough, Fooad Karimi Ghaleh; Şensoy, Serhan
2016-12-01
Different performance levels may be obtained for sideway collapse evaluation of steel moment frames depending on the procedure used to handle uncertainties. In this article, the process of representing modelling uncertainties, record-to-record (RTR) variations and cognitive uncertainties for moment-resisting steel frames of various heights is discussed in detail. RTR uncertainty is treated through incremental dynamic analysis (IDA), modelling uncertainties are considered through backbone curves and hysteresis loops of components, and cognitive uncertainty is represented at three levels of material quality. IDA is used to evaluate RTR uncertainty based on strong ground motion records selected by the k-means algorithm, which is favoured over Monte Carlo selection due to its time-saving appeal. Analytical equations of the response surface method are obtained from the IDA results by the Cuckoo algorithm, which predicts the mean and standard deviation of the collapse fragility curve. The Takagi-Sugeno-Kang model is used to represent material quality based on the response surface coefficients. Finally, collapse fragility curves incorporating the various sources of uncertainty mentioned are derived through a large number of material quality values and meta variables inferred by the Takagi-Sugeno-Kang fuzzy model based on the response surface method coefficients. It is concluded that a better risk management strategy in countries where material quality control is weak is to account for cognitive uncertainties in fragility curves and the mean annual frequency.
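For reference, the collapse fragility curve whose mean and dispersion the response-surface equations predict is conventionally lognormal; a minimal sketch (parameter names are generic, not the article's notation):

```python
import numpy as np
from scipy.stats import norm

def collapse_fragility(im, theta, beta):
    """Lognormal collapse fragility: P(collapse | IM = im) with median
    capacity theta and logarithmic standard deviation beta, e.g. as
    estimated from IDA results."""
    return norm.cdf(np.log(np.asarray(im, dtype=float) / theta) / beta)

# Usage (illustrative): probability of collapse at Sa = 1.0 g
# p = collapse_fragility(1.0, theta=1.8, beta=0.45)
```

Widening beta to absorb modelling and cognitive uncertainty, as the article advocates, flattens the curve and raises collapse probabilities at low intensities.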
A new CFD based non-invasive method for functional diagnosis of coronary stenosis.
Xie, Xinzhou; Zheng, Minwen; Wen, Didi; Li, Yabing; Xie, Songyun
2018-03-22
Accurate functional diagnosis of coronary stenosis is vital for decision making in coronary revascularization. With recent advances in computational fluid dynamics (CFD), fractional flow reserve (FFR) can be derived non-invasively from coronary computed tomography angiography images (FFRCT) for functional measurement of stenosis. However, the accuracy of FFRCT is limited due to the approximate modeling of maximal hyperemia conditions. To overcome this problem, a new CFD-based non-invasive method is proposed. Instead of modeling the maximal hyperemia condition, a series of boundary conditions is specified and the simulated results are combined to provide a pressure-flow curve for a stenosis. Functional diagnosis of the stenosis is then assessed based on parameters derived from the obtained pressure-flow curve. The proposed method is applied to both idealized and patient-specific models, and validated against invasive FFR in six patients. Results show that additional hemodynamic information about the flow resistance of a stenosis is provided, which cannot be obtained directly from anatomical information. Parameters derived from the simulated pressure-flow curve show linear and significant correlations with invasive FFR (r > 0.95, P < 0.05). The proposed method can assess flow resistance via parameters derived from the pressure-flow curve without modeling the maximal hyperemia condition, which is a promising new approach for non-invasive functional assessment of coronary stenosis.
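A hedged sketch of one common way to parameterize the simulated pressure-flow relation of a stenosis (a viscous term plus an expansion-loss term); the paper does not specify this exact form, so the quadratic model and all names are our assumptions.

```python
import numpy as np

def fit_pressure_flow(Q, dP):
    """Least-squares fit of dP = a*Q + b*Q**2 to (flow, pressure-drop) pairs
    obtained from CFD runs under a series of boundary conditions; a and b
    quantify the viscous and expansion-loss resistances of the stenosis."""
    Q, dP = np.asarray(Q, float), np.asarray(dP, float)
    A = np.column_stack([Q, Q**2])
    (a, b), *_ = np.linalg.lstsq(A, dP, rcond=None)
    return a, b
```

The fitted coefficients summarize the stenosis hemodynamics across flow states, which is the kind of curve-derived parameter the correlation with invasive FFR is reported for.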
NASA Astrophysics Data System (ADS)
Dabiri, M.; Ghafouri, M.; Rohani Raftar, H. R.; Björk, T.
2018-03-01
Methods to estimate the strain-life curve, divided into three categories (simple approximations, artificial neural network-based approaches and continuum damage mechanics models), were examined, and their accuracy was assessed in the strain-life evaluation of a direct-quenched high-strength steel. All the prediction methods claim to be able to perform low-cycle fatigue analysis using available or easily obtainable material properties, thus eliminating the need for costly and time-consuming fatigue tests. The simple approximations were able to estimate the strain-life curve with satisfactory accuracy using only monotonic properties. The tested neural network-based model, although yielding acceptable results for the material in question, was found to be overly sensitive to the data sets used for training and showed inconsistency in the estimation of fatigue life and fatigue properties. The studied continuum damage-based model was able to produce a curve detecting the early stages of crack initiation, but it requires more experimental data for calibration than the simple approximations. As a result of the different theories underlying the analyzed methods, the approaches have different strengths and weaknesses. However, the group of parametric equations categorized as simple approximations was found to be the easiest for practical use, their applicability having already been verified for a broad range of materials.
NASA Astrophysics Data System (ADS)
Morlot, Thomas; Perret, Christian; Favre, Anne-Catherine; Jalbert, Jonathan
2014-09-01
A rating curve is used to indirectly estimate the discharge in rivers based on water level measurements. The discharge values obtained from a rating curve include uncertainties related to the direct stage-discharge measurements (gaugings) used to build the curve, the quality of fit of the curve to these measurements, and the constant changes in river bed morphology. Moreover, the uncertainty of discharges estimated from a rating curve increases with the "age" of the rating curve, so the level of uncertainty at a given point in time is particularly difficult to assess. A "dynamic" method has been developed to compute rating curves while calculating the associated uncertainties, thus making it possible to regenerate streamflow data with uncertainty estimates. The method is based on historical gaugings at hydrometric stations: a rating curve is computed for each gauging and a model of the uncertainty is fitted for each of them. The uncertainty model takes into account the uncertainty in the measurement of the water level, the quality of fit of the curve, the uncertainty of the gaugings, and the increase in the uncertainty of discharge estimates with the age of the rating curve, computed with a variographic analysis (Jalbert et al., 2011). The presented dynamic method can answer important questions in the field of hydrometry such as "How many gaugings a year are required to produce streamflow data with an average uncertainty of X%?" and "When and in what range of flow rates should these gaugings be carried out?". The Rocherousse hydrometric station (France, Haute-Durance watershed, 946 km2) is used as an example throughout the paper; other stations are used to illustrate certain points.
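A minimal sketch of fitting the standard power-law rating curve to gaugings, assuming SciPy; the dynamic method described above additionally refits after each gauging and attaches the variogram-based uncertainty model, which is not shown here.

```python
import numpy as np
from scipy.optimize import curve_fit

def rating_curve(h, a, h0, b):
    """Power-law rating curve Q = a * (h - h0)**b, with h0 the cease-to-flow stage."""
    return a * np.clip(h - h0, 1e-9, None) ** b   # clip keeps the base positive

# h_g, q_g: stage and discharge from historical gaugings (placeholders)
# popt, _ = curve_fit(rating_curve, h_g, q_g, p0=[1.0, h_g.min() - 0.1, 1.6])
# q_hat = rating_curve(h_new, *popt)
```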
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther
2015-01-01
Several key capabilities have been identified by the aerospace community as lacking in the material models for composite materials currently available within commercial transient dynamic finite element codes such as LS-DYNA. Some of the specific desired features that have been identified include the incorporation of both plasticity and damage within the material model, the capability of using the material model to analyze the response of both three-dimensional solid elements and two-dimensional shell elements, and the ability to simulate the response of composites with a variety of architectures, including laminates, weaves and braids. In addition, a need has been expressed for a material model that utilizes tabulated, experimentally based input to define the evolution of plasticity and damage, as opposed to utilizing discrete input parameters (such as modulus and strength) and analytical functions based on curve fitting. To begin to address these needs, an orthotropic macroscopic plasticity-based model suitable for implementation within LS-DYNA has been developed. Specifically, the Tsai-Wu composite failure model has been generalized and extended to a strain-hardening-based orthotropic plasticity model with a non-associative flow rule. The coefficients in the yield function are determined based on tabulated stress-strain curves in the various normal and shear directions, along with selected off-axis curves. Rate dependence is incorporated into the yield function by using a series of tabulated input curves, each at a different constant strain rate. The non-associative flow rule is used to compute the evolution of the effective plastic strain. Systematic procedures have been developed to determine the values of the various coefficients in the yield function and the flow rule based on the tabulated input data. An algorithm based on the radial return method has been developed to facilitate the numerical implementation of the material model. This paper presents in detail the development of the orthotropic plasticity model and the procedures used to obtain the required material parameters. Methods in which a combination of actual testing and selective numerical testing can be combined to yield the appropriate input data for the model will be described. A specific laminated polymer matrix composite is examined to demonstrate the application of the model.
The Kepler Light Curve of V344 LYR: Constraining the Thermal-Viscous Limit Cycle Instability
NASA Technical Reports Server (NTRS)
Cannizzo, J. K.; Still, M. D.; Howell, S. B.; Wood, M. A.; Smale, A. P.
2010-01-01
We present time-dependent modeling, based on the accretion disk limit cycle model, of a 90 d light curve of the short-period SU UMa-type dwarf nova V344 Lyr taken by Kepler. The unprecedented precision and cadence (1 minute) far surpass what is generally available for long-term light curves. The data encompass a superoutburst preceded by three normal (i.e., short) outbursts and followed by two normal outbursts. The main decay of the superoutburst is nearly perfectly exponential, decaying at a rate of approximately 12 d/mag, while the much more rapid decays of the normal outbursts exhibit a faster-than-exponential shape. We show that the standard limit cycle model can account for the light curve, without the need for either the thermal-tidal instability or enhanced mass transfer.
Videodensitometric Methods for Cardiac Output Measurements
NASA Astrophysics Data System (ADS)
Mischi, Massimo; Kalker, Ton; Korsten, Erik
2003-12-01
Cardiac output is often measured by indicator dilution techniques, usually based on dye or cold saline injections. The development of more stable ultrasound contrast agents (UCA) is leading to new noninvasive indicator dilution methods. However, several problems concerning the interpretation of dilution curves as detected by ultrasound transducers have arisen. This paper presents a method for blood flow measurements based on UCA dilution. Dilution curves are determined by real-time densitometric analysis of the video output of an ultrasound scanner and are automatically fitted by the Local Density Random Walk model. A new fitting algorithm based on multiple linear regression is developed. Calibration, that is, the relation between videodensity and UCA concentration, is modelled by in vitro experimentation. The flow measurement system is validated by in vitro perfusion of SonoVue contrast agent. The results show an accurate dilution curve fit and flow estimation, with determination coefficients larger than 0.95 and 0.99, respectively.
A Method for Formulizing Disaster Evacuation Demand Curves Based on SI Model
Song, Yulei; Yan, Xuedong
2016-01-01
The prediction of evacuation demand curves is a crucial step in making disaster evacuation plans, and it directly affects the performance of the evacuation. In this paper, we discuss the factors influencing individual evacuation decision making (whether and when to leave) and summarize them into four kinds: individual characteristics, social influence, geographic location, and warning degree. Viewing decision making as a social contagion, a method based on the Susceptible-Infective (SI) model is proposed to formulize disaster evacuation demand curves, addressing both social influence and the effects of the other factors. The disaster event of the "Tianjin Explosions" is used as a case study to illustrate the modeling results influenced by the four factors and to perform sensitivity analyses of the key parameters of the model. Some interesting phenomena are found and discussed, which is meaningful for authorities making specific evacuation plans. For example, due to the lower social influence in isolated communities, extra actions might be taken to accelerate the evacuation process in those communities. PMID:27735875
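A minimal sketch of the SI-type contagion mechanism applied to an evacuation demand curve, with the "infected" compartment read as the cumulative share of households that have decided to leave; the parameter values and the simple Euler integration are our choices, not the paper's calibration.

```python
import numpy as np

def si_evacuation_curve(N, beta, I0=1.0, t_end=72.0, dt=0.1):
    """Cumulative evacuation fraction over time from a basic SI model:
    dI/dt = beta * S * I / N with S = N - I, i.e. the decision to leave
    spreads through social contact. Returns (times, fraction evacuated)."""
    steps = int(t_end / dt)
    I = np.empty(steps + 1)
    I[0] = I0
    for k in range(steps):
        I[k + 1] = I[k] + dt * beta * (N - I[k]) * I[k] / N
    return np.linspace(0.0, t_end, steps + 1), I / N

# Usage (illustrative): 10,000 households, contact rate 0.4 per hour
# t, frac = si_evacuation_curve(N=10_000, beta=0.4)
```

A smaller beta, as in the isolated communities mentioned above, stretches the S-shaped demand curve and delays the time by which most households have left.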
NASA Technical Reports Server (NTRS)
Brinson, Thomas E.; Kopasakis, George
2004-01-01
The Controls and Dynamics Technology Branch at NASA Glenn Research Center is interested in combining a solid oxide fuel cell (SOFC) with a gas turbine engine. A detailed engine model currently exists in the Matlab/Simulink environment. The idea is to incorporate a SOFC model within the turbine engine simulation and observe the hybrid system's performance. The fuel cell will be heated to its appropriate operating condition by the engine's combustor. Once the fuel cell is operating at its steady-state temperature, the gas burner will back down slowly until the engine is fully operating on the hot gases exhausted from the SOFC. The SOFC code is based on a steady-state model developed by the U.S. Department of Energy (DOE). In its current form, the DOE SOFC model exists in Microsoft Excel and uses Visual Basic to create an I-V (current-voltage) profile. For this project's application, the main issue with that model is that the gas path flow and fuel flow temperatures are used as input parameters instead of outputs. The objective is to create a SOFC model, based on the DOE model, that takes the fuel cell's flow rates as inputs and outputs the temperatures of the flow streams, thereby creating a temperature profile as a function of fuel flow rate. This will be done by applying the First Law of Thermodynamics for a flow system to the fuel cell. Validation of this model will be done in two steps. First, for a given flow rate, the exit stream temperature will be calculated and compared to the DOE SOFC temperature as a point comparison. Next, an I-V curve and a temperature curve will be generated, and the I-V curve will be compared with the DOE SOFC I-V curve. Matching I-V curves will suggest validation of the temperature curve, because voltage is a function of temperature. Once the temperature profile is created and validated, the model will be placed into the turbine engine simulation for system analysis.
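A hedged sketch of the First Law step described above, estimating the exit stream temperature from a steady-flow energy balance on the fuel cell; every variable name and the utilization/efficiency treatment here are illustrative assumptions, not the DOE model's formulation.

```python
def sofc_exit_temperature(T_in, m_dot, cp, n_dot_fuel, LHV, eta_el, U_f=0.85):
    """Steady-flow First Law estimate of the gas stream exit temperature (K).

    Heat rejected to the gas stream = fuel energy consumed minus electrical
    work extracted:
        q_dot = n_dot_fuel * U_f * LHV * (1 - eta_el)
        T_out = T_in + q_dot / (m_dot * cp)
    T_in: inlet temperature (K); m_dot: gas mass flow (kg/s); cp: specific
    heat (J/kg-K); n_dot_fuel: fuel molar flow (mol/s); LHV: lower heating
    value (J/mol); eta_el: electrical efficiency; U_f: fuel utilization.
    """
    q_dot = n_dot_fuel * U_f * LHV * (1.0 - eta_el)   # W delivered to the gas stream
    return T_in + q_dot / (m_dot * cp)
```

Sweeping the fuel flow rate through this balance yields the temperature-versus-fuel-flow profile the project calls for.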
NASA Astrophysics Data System (ADS)
Liu, Yi; Zhang, He; Liu, Siwei; Lin, Fuchang
2018-05-01
The J-A (Jiles-Atherton) model is widely used to describe the magnetization characteristics of magnetic cores in a low-frequency alternating field. However, this model is deficient in the quantitative analysis of the eddy current loss and residual loss in a high-frequency magnetic field. Based on the decomposition of magnetization intensity, an inverse J-A model is established which uses magnetic flux density B as an input variable. Static and dynamic core losses under high frequency excitation are separated based on the inverse J-A model. Optimized parameters of the inverse J-A model are obtained based on particle swarm optimization. The platform for the pulsed magnetization characteristic test is designed and constructed. The hysteresis curves of ferrite and Fe-based nanocrystalline cores at high magnetization rates are measured. The simulated and measured hysteresis curves are presented and compared. It is found that the inverse J-A model can be used to describe the magnetization characteristics at high magnetization rates and to separate the static loss and dynamic loss accurately.
A simplified model for glass formation
NASA Technical Reports Server (NTRS)
Uhlmann, D. R.; Onorato, P. I. K.; Scherer, G. W.
1979-01-01
A simplified model of glass formation based on the formal theory of transformation kinetics is presented, which describes the critical cooling rates implied by the occurrence of glassy or partly crystalline bodies. In addition, an approach based on the nose of the time-temperature-transformation (TTT) curve as an extremum in temperature and time has provided a relatively simple relation between the activation energy for viscous flow in the undercooled region and the temperature of the nose of the TTT curve. Using this relation together with the simplified model, it now seems possible to predict cooling rates using only the liquidus temperature, glass transition temperature, and heat of fusion.
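A hedged sketch of the nose-method estimate implied above: the critical cooling rate follows from the liquidus temperature and the coordinates of the TTT-curve nose (the function name and example values are illustrative):

    # Nose-of-the-TTT-curve estimate of the critical cooling rate.
    def critical_cooling_rate(t_liquidus_k, t_nose_k, time_nose_s):
        """R_c ~ (T_L - T_N) / t_N, the usual nose-method approximation."""
        return (t_liquidus_k - t_nose_k) / time_nose_s

    # e.g., liquidus at 1400 K and a nose at 1000 K reached after 10 s
    print(critical_cooling_rate(1400.0, 1000.0, 10.0))  # -> 40.0 K/s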
Financial model calibration using consistency hints.
Abu-Mostafa, Y S
2001-01-01
We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to Japanese Yen swaps market and US dollar yield market.
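An illustrative sketch, not the paper's exact algorithm, of the central idea: the curve-fitting loss is augmented with a consistency-hint penalty based on the Kullback-Leibler distance (the function names, the mean-square fit term, and the fixed weight are assumptions; the paper balances weights via canonical errors):

    # Curve-fitting error augmented with a KL-based consistency hint.
    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        """KL distance between two discrete distributions p and q."""
        p = np.asarray(p, dtype=float) + eps
        q = np.asarray(q, dtype=float) + eps
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log(p / q)))

    def total_error(fit_residuals, p_model, p_target, weight=1.0):
        """Plain fit error plus a weighted hint error; minimizing this
        pushes the parameters toward consistency, not just fit."""
        fit = float(np.mean(np.square(fit_residuals)))
        return fit + weight * kl_divergence(p_model, p_target)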
A Physiologically Based Kinetic Model of Rat and Mouse Gestation: Disposition of a Weak Acid
A physiologically based toxicokinetic model of gestation in the rat and mouse has been developed. The model is superimposed on the normal growth curve for nonpregnant females. It describes the entire gestation period including organogenesis. The model consists of uterus, mammary tiss...
On a framework for generating PoD curves assisted by numerical simulations
NASA Astrophysics Data System (ADS)
Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan; Rajagopal, Prabhu; Kumar, Anish; Rao, Purnachandra B.; Tamanna, Jayakumar
2015-03-01
The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.
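For context, a standard signal-response ("a-hat versus a") PoD formulation is sketched below; the Bayesian, simulation-assisted procedure described above is richer, so this is only the log-linear baseline such frameworks build on (the threshold and data values are made up):

    # Log-linear signal-response PoD: ln(a_hat) = b0 + b1*ln(a) + e.
    import numpy as np
    from scipy.stats import norm

    def fit_pod(a, a_hat, threshold):
        """Returns PoD(a) = Phi((b0 + b1*ln(a) - ln(threshold)) / tau),
        with e ~ N(0, tau^2) the regression scatter."""
        x, y = np.log(a), np.log(a_hat)
        b1, b0 = np.polyfit(x, y, 1)
        tau = np.std(y - (b0 + b1 * x), ddof=2)
        return lambda a_q: norm.cdf((b0 + b1 * np.log(a_q) - np.log(threshold)) / tau)

    pod = fit_pod(a=np.array([1.0, 2.0, 3.0, 4.0, 5.0]),
                  a_hat=np.array([0.8, 2.1, 2.9, 4.2, 5.1]),
                  threshold=1.5)
    print(pod(3.0))  # probability of detecting a defect of size 3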
IGMtransmission: Transmission curve computation
NASA Astrophysics Data System (ADS)
Harrison, Christopher M.; Meiksin, Avery; Stock, David
2015-04-01
IGMtransmission is a Java graphical user interface that implements Monte Carlo simulations to compute the corrections to colors of high-redshift galaxies due to intergalactic attenuation based on current models of the Intergalactic Medium. The effects of absorption due to neutral hydrogen are considered, with particular attention to the stochastic effects of Lyman Limit Systems. Attenuation curves are produced, as well as colors for a wide range of filter responses and model galaxy spectra. Photometric filters are included for the Hubble Space Telescope, the Keck telescope, the Mt. Palomar 200-inch, the SUBARU telescope and UKIRT; alternative filter response curves and spectra may be readily uploaded.
Kinematic Methods of Designing Free Form Shells
NASA Astrophysics Data System (ADS)
Korotkiy, V. A.; Khmarova, L. I.
2017-11-01
The geometrical shell model is formed according to the set requirements expressed through surface parameters. The shell is modelled using the kinematic method, in which the shell is formed as a continuous one-parameter set of curves. The authors offer a kinematic method based on the use of second-order curves with variable eccentricity as the form-making element. Additional guiding ruled surfaces are used to control the form of the designed surface. The authors developed a software application that plots a second-order curve specified by an arbitrary set of five coplanar points and tangents.
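A hedged sketch of the underlying construction: a second-order curve (conic) is determined by five coplanar points and can be recovered as the null space of a 5x6 design matrix; the tool described above also honours tangent constraints, which are omitted here:

    # Conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 through 5 points.
    import numpy as np

    def conic_through_points(pts):
        """pts: 5x2 array; returns coefficients (a, b, c, d, e, f)."""
        x, y = pts[:, 0], pts[:, 1]
        m = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
        _, _, vt = np.linalg.svd(m)
        return vt[-1]  # right singular vector of the smallest singular value

    coef = conic_through_points(
        np.array([[0, 1], [1, 0], [0, -1], [-1, 0], [0.7, 0.7]]))

Sweeping one defining element (in the paper's terms, the eccentricity) then generates the one-parameter family of curves from which the shell surface is assembled.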
NASA Astrophysics Data System (ADS)
Milecki, Andrzej; Pelic, Marcin
2016-10-01
This paper presents the results of applying a new method for modelling piezo bender actuators. A special hysteresis simulation model was developed and is presented. The model is based on a geometrical deformation of the main hysteresis loop. The piezoelectric effect is described and the history of hysteresis modelling is briefly reviewed. First, a simple model of the main loop is proposed. Then, a geometrical description of the non-saturated hysteresis is presented and its modelling method is introduced. The modelling makes use of functions describing the geometrical shape of the two main hysteresis curves, which can be defined theoretically or obtained by measurement. These main curves are stored in memory and transformed geometrically to obtain the minor curves. The model was prepared in the Matlab-Simulink software but can be easily implemented in any programming language and applied in an on-line controller. In comparison to other known simulation methods, the one presented in this paper is easy to understand and uses simple arithmetical equations, allowing the inverse hysteresis model to be obtained quickly. The inverse model was then used to compensate the non-saturated hysteresis of the piezo bender actuator, and those results are also presented in the paper.
Localized Principal Component Analysis based Curve Evolution: A Divide and Conquer Approach
Appia, Vikram; Ganapathy, Balaji; Yezzi, Anthony; Faber, Tracy
2014-01-01
We propose a novel localized principal component analysis (PCA) based curve evolution approach which evolves the segmenting curve semi-locally within various target regions (divisions) in an image and then combines these locally accurate segmentation curves to obtain a global segmentation. The training data for our approach consists of training shapes and associated auxiliary (target) masks. The masks indicate the various regions of the shape exhibiting highly correlated variations locally which may be rather independent of the variations in the distant parts of the global shape. Thus, in a sense, we are clustering the variations exhibited in the training data set. We then use a parametric model to implicitly represent each localized segmentation curve as a combination of the local shape priors obtained by representing the training shapes and the masks as a collection of signed distance functions. We also propose a parametric model to combine the locally evolved segmentation curves into a single hybrid (global) segmentation. Finally, we combine the evolution of these semilocal and global parameters to minimize an objective energy function. The resulting algorithm thus provides a globally accurate solution, which retains the local variations in shape. We present some results to illustrate how our approach performs better than the traditional approach with fully global PCA. PMID:25520901
Zhang, Lei; Feng, Xiao; Wang, Xin; Liu, Changyong
2014-01-01
The nitrogen-containing austenitic stainless steel 316LN has been chosen as the material for the main pipes of third-generation nuclear power plants, one of their key components. In this research, a constitutive model of this nitrogen-containing austenitic stainless steel is developed. True stress-true strain curves obtained from isothermal hot compression tests over a wide range of temperatures (900-1250°C) and strain rates (10^-3 to 10 s^-1) were employed to study the dynamic deformation behavior of, and recrystallization in, 316LN steel. The constitutive model is developed through multiple linear regressions performed on the experimental data, based on an Arrhenius-type equation and Zener-Hollomon theory. The influence of strain was incorporated in the constitutive equation by considering the effect of strain on the various material constants. The reliability and accuracy of the model are verified through comparison of predicted and experimental flow stress curves. Possible reasons for deviation are also discussed based on the characteristics of the modeling process. PMID:25375345
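A hedged sketch of the Arrhenius/Zener-Hollomon flow-stress relation referred to above; the constants are placeholders, not the fitted 316LN values (which the paper obtains by multiple linear regression, folding the strain dependence into the material constants):

    # sigma = (1/alpha) * asinh((Z/A)**(1/n)), Z = rate * exp(Q/(R*T)).
    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def flow_stress(strain_rate, temp_k, q, a_const, alpha, n):
        """Flow stress from the hyperbolic-sine Arrhenius equation;
        with alpha in 1/MPa the result is in MPa."""
        z = strain_rate * np.exp(q / (R * temp_k))  # Zener-Hollomon parameter
        return np.arcsinh((z / a_const) ** (1.0 / n)) / alpha

    # placeholder constants, for illustration only
    print(flow_stress(1.0, 1373.0, q=4.5e5, a_const=1.0e16, alpha=0.012, n=4.8))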
A Dirichlet process model for classifying and forecasting epidemic curves.
Nsoesie, Elaine O; Leman, Scotland C; Marathe, Madhav V
2014-01-09
A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997-2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with a higher accuracy compared to epidemics with lower R values. Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC, the methods' performance was comparable. Although RF requires less computational time compared to the DP model, the algorithm is fully supervised implying that epidemic curves different from those previously observed will always be misclassified. In contrast, the DP model can be unsupervised, semi-supervised or fully supervised. Since both methods have their relative merits, an approach that uses both RF and the DP model could be beneficial.
Three-dimensional simulation of human teeth and its application in dental education and research.
Koopaie, Maryam; Kolahdouz, Sajad
2016-01-01
Background: A comprehensive database comprising the geometry and properties of human teeth is needed for dentistry education and dental research. The aim of this study was to create a three-dimensional model of human teeth to improve dental E-learning and dental research. Methods: In this study, cross-sectional images were used to build the three-dimensional model of the teeth. CT-Scan images were used in the first method. The spacing between the cross-sectional images was about 200 to 500 micrometers. The hard tissue margin was detected in each image with Matlab (R2009b) as image processing software. The images were transferred to the Solidworks 2015 software. The tooth border curve was fitted with B-spline curves using a least-squares curve fitting algorithm. After transferring all curves for each tooth to Solidworks, the surface was created based on the surface fitting technique. This surface was meshed in Meshlab-v132 software, and optimization of the surface was done based on the remeshing technique. The mechanical properties of the teeth were applied to the dental model. Results: This study presented a methodology for communication between CT-Scan images and finite element and training software, through which modeling and simulation of the teeth were performed. In this study, cross-sectional images were used for modeling. According to the findings, the cost and time were reduced compared to other studies. Conclusion: The three-dimensional model method presented in this study facilitated the learning of dental students and dentists. Based on the three-dimensional model proposed in this study, designing and manufacturing implants and dental prostheses are possible.
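A minimal sketch of the boundary-fitting step: a least-squares cubic B-spline through extracted contour points, shown here on synthetic data (a real pipeline would pass in the hard-tissue margin points extracted in Matlab):

    # Least-squares cubic B-spline fit to a closed contour.
    import numpy as np
    from scipy.interpolate import splprep, splev

    theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
    x = 5.0 * np.cos(theta) + 0.2 * np.random.randn(60)  # stand-in contour
    y = 8.0 * np.sin(theta) + 0.2 * np.random.randn(60)

    tck, _ = splprep([x, y], s=1.0, per=True, k=3)  # smoothed, closed, cubic
    xs, ys = splev(np.linspace(0, 1, 400), tck)     # resampled smooth boundary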
A Bézier-Spline-based Model for the Simulation of Hysteresis in Variably Saturated Soil
NASA Astrophysics Data System (ADS)
Cremer, Clemens; Peche, Aaron; Thiele, Luisa-Bianca; Graf, Thomas; Neuweiler, Insa
2017-04-01
Most transient variably saturated flow models neglect hysteresis in the p_c-S relationship (Beven, 2012). Such models tend to inadequately represent the matrix potential and saturation distribution, and thereby, when simulating flow and transport processes, fluid and solute fluxes might be overestimated (Russo et al., 1989). In this study, we present a simple, computationally efficient, and easily applicable model that adequately describes hysteresis in the p_c-S relationship for variably saturated flow. This model can be seen as an extension of the existing play-type model (Beliaev and Hassanizadeh, 2001), in which scanning curves are simplified as vertical lines between the main imbibition and main drainage curves. In our model, we use continuous linear and Bézier-spline-based functions. We show a successful validation of the model by numerically reproducing a physical experiment by Gillham, Klute and Heermann (1976) describing primary drainage and imbibition in a vertical soil column. With a deviation of 3%, the simple Bézier-spline-based model performs significantly better than the play-type approach, which deviates by 30% from the experimental results. Finally, we discuss the realization of physical experiments to extend the model to secondary scanning curves and to determine scanning curve steepness. References: Beven, K.J. (2012). Rainfall-Runoff Modelling: The Primer. John Wiley and Sons. Russo, D., Jury, W.A., & Butters, G.L. (1989). Numerical analysis of solute transport during transient irrigation: 1. The effect of hysteresis and profile heterogeneity. Water Resources Research, 25(10), 2109-2118. https://doi.org/10.1029/WR025i010p02109. Beliaev, A.Y., & Hassanizadeh, S.M. (2001). A theoretical model of hysteresis and dynamic effects in the capillary relation for two-phase flow in porous media. Transport in Porous Media, 43, 487. doi:10.1023/A:1010736108256. Gillham, R., Klute, A., & Heermann, D. (1976). Hydraulic properties of a porous medium: Measurement and empirical representation. Soil Science Society of America Journal, 40(2), 203-207.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boissonnade, A; Hossain, Q; Kimball, J
Since the mid-1980s, assessment of the wind and tornado risks at Department of Energy (DOE) high and moderate hazard facilities has been based on the straight wind/tornado hazard curves given in UCRL-53526 (Coats, 1985). These curves were developed using a methodology that utilized a model developed by McDonald for severe winds at sub-tornado wind speeds and a separate model developed by Fujita for tornado wind speeds. For DOE sites not covered in UCRL-53526, wind and tornado hazard assessments are based on the criteria outlined in DOE-STD-1023-95 (DOE, 1996), utilizing the methodology in UCRL-53526. Subsequent to the publication of UCRL-53526, in a study sponsored by the Nuclear Regulatory Commission (NRC), the Pacific Northwest Laboratory developed tornado wind hazard curves for the contiguous United States, NUREG/CR-4461 (Ramsdell, 1986). Because of the different modeling assumptions and underlying data used to develop the tornado wind information, the wind speeds at specified exceedance levels at a given location based on the methodology in UCRL-53526 differ from those based on the methodology in NUREG/CR-4461. In 1997, Lawrence Livermore National Laboratory (LLNL) was funded by the DOE to review the current methodologies for characterizing tornado wind hazards and to develop a state-of-the-art wind/tornado characterization methodology based on probabilistic hazard assessment techniques and current historical wind data. This report describes the process of developing the methodology and the database of relevant tornado information needed to implement the methodology. It also presents the tornado wind hazard curves obtained from the application of the method to DOE sites throughout the contiguous United States.
Gómez, N N; Venette, R C; Gould, J R; Winograd, D F
2009-02-01
Predictions of survivorship are critical to quantify the probability of establishment by an alien invasive species, but survival curves rarely distinguish between the effects of temperature on development versus senescence. We report chronological and physiological age-based survival curves for a potentially invasive noctuid, recently described as Copitarsia corruda Pogue & Simmons, collected from Peru and reared on asparagus at six constant temperatures between 9.7 and 34.5 degrees C. Copitarsia spp. are not known to occur in the United States but are routinely intercepted at ports of entry. Chronological age survival curves differ significantly among temperatures. Survivorship at early ages after hatch is greatest at lower temperatures and declines as temperature increases. Mean longevity was 220 (+/-13 SEM) days at 9.7 degrees C. Physiological age survival curves constructed with the developmental base temperature (7.2 degrees C) did not correspond to those constructed with the senescence base temperature (5.9 degrees C). A single degree-day survival curve with an appropriate temperature threshold based on senescence adequately describes survivorship under non-stress temperature conditions (5.9-24.9 degrees C).
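A hedged sketch of the physiological-age transform behind such curves: chronological time is converted to accumulated degree-days above a base temperature, here the senescence base of 5.9 degrees C quoted above (the function and data are illustrative):

    # Accumulated degree-days above a senescence base temperature.
    def degree_days(daily_mean_temps_c, base_c=5.9):
        """Sum of max(0, T - base) over daily mean temperatures."""
        return sum(max(0.0, t - base_c) for t in daily_mean_temps_c)

    # ten days at a constant 20 C accumulate 141 degree-days above 5.9 C
    print(degree_days([20.0] * 10))

Plotting survivorship against this accumulated quantity, rather than against days, is what collapses curves measured at different rearing temperatures onto a single degree-day survival curve.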
A new method for the automatic interpretation of Schlumberger and Wenner sounding curves
Zohdy, A.A.R.
1989-01-01
A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples.
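A schematic of the iterative adjustment at the heart of such methods, greatly simplified: one layer per digitized point, with resistivities corrected by the ratio of observed to computed apparent resistivity. Here forward_model is a hypothetical stand-in for a proper Schlumberger/Wenner forward solver (e.g., a linear-filter implementation), which is not supplied:

    # Simplified Zohdy-style iteration (forward solver supplied by caller).
    import numpy as np

    def invert(spacings, rho_obs, forward_model, depth_shift=0.5, n_iter=30):
        depths = depth_shift * np.asarray(spacings)  # shifted spacings -> depths
        rho = np.array(rho_obs, dtype=float)         # start from adjusted apparent resistivities
        for _ in range(n_iter):
            rho_calc = forward_model(depths, rho, spacings)
            rho *= np.asarray(rho_obs) / rho_calc    # multiplicative correction
        return depths, rho

One layer per digitized point and purely multiplicative updates are what let such a scheme run without an initial guess of the number of layers.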
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solares, Santiago D.
This study introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretation of AFM experiments is relatively limited. In the proposed Q3D model, the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tapping-mode imaging, for both of which the force curves exhibit the expected features. Lastly, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other control schemes in order to aid in the interpretation of AFM experiments.
NASA Astrophysics Data System (ADS)
Yu, Zhijing; Ma, Kai; Wang, Zhijun; Wu, Jun; Wang, Tao; Zhuge, Jingchang
2018-03-01
A blade is one of the most important components of an aircraft engine. Due to its high manufacturing cost, it is indispensable to develop methods for repairing damaged blades. In order to obtain a surface model of the blades, this paper proposes a modeling method using speckle patterns based on a virtual stereo vision system. First, the blades are sprayed evenly to create random speckle patterns, and point clouds of the blade surfaces are calculated from the speckle patterns using the virtual stereo vision system. Second, boundary points are obtained with step lengths varied according to curvature and are fitted with a cubic B-spline curve to obtain a blade surface envelope. Finally, the surface model of the blades is established from the envelope curves and the point clouds. Experimental results show that the resulting surface model of aircraft engine blades is fair and accurate.
Interaction Analysis of Longevity Interventions Using Survival Curves.
Nowak, Stefan; Neidhart, Johannes; Szendro, Ivan G; Rzezonka, Jonas; Marathe, Rahul; Krug, Joachim
2018-01-06
A long-standing problem in ageing research is to understand how different factors contributing to longevity should be expected to act in combination under the assumption that they are independent. Standard interaction analysis compares the extension of mean lifespan achieved by a combination of interventions to the prediction under an additive or multiplicative null model, but neither model is fundamentally justified. Moreover, the target of longevity interventions is not mean life span but the entire survival curve. Here we formulate a mathematical approach for predicting the survival curve resulting from a combination of two independent interventions based on the survival curves of the individual treatments, and quantify interaction between interventions as the deviation from this prediction. We test the method on a published data set comprising survival curves for all combinations of four different longevity interventions in Caenorhabditis elegans. We find that interactions are generally weak even when the standard analysis indicates otherwise.
Blough, M M; Waggener, R G; Payne, W H; Terry, J A
1998-09-01
A model for calculating mammographic spectra independent of measured data and fitting parameters is presented. This model is based on first principles. Spectra were calculated using various target and filter combinations such as molybdenum/molybdenum, molybdenum/rhodium, rhodium/rhodium, and tungsten/aluminum. Once the spectra were calculated, attenuation curves were calculated and compared to measured attenuation curves. The attenuation curves were calculated and measured using aluminum alloy 1100 or high purity aluminum filtration. Percent differences were computed between the measured and calculated attenuation curves resulting in an average of 5.21% difference for tungsten/aluminum, 2.26% for molybdenum/molybdenum, 3.35% for rhodium/rhodium, and 3.18% for molybdenum/rhodium. Calculated spectra were also compared to measured spectra from the Food and Drug Administration [Fewell and Shuping, Handbook of Mammographic X-ray Spectra (U.S. Government Printing Office, Washington, D.C., 1979)] and a comparison will also be presented.
A population model for a long-lived, resprouting chaparral shrub: Adenostoma fasciculatum
Stohlgren, Thomas J.; Rundel, Philip W.
1986-01-01
Extensive stands of Adenostoma fasciculatum H.&A. (chamise) in the chaparral of California are periodically rejuvenated by fire. A population model based on size-specific demographic characteristics (thinning and fire-caused mortality) was developed to generate probable age distributions within size classes and survivorship curves for typical stands. The model was modified to assess the long term effects of different mortality rates on age distributions. Under observed mean mortality rates (28.7%), model output suggests some shrubs can survive more than 23 fires. A 10% increase in mortality rate by size class slightly shortened the survivorship curve, while a 10% decrease in mortality rate by size class greatly elongated the curve. This approach may be applicable to other long-lived plant species with complex life histories.
Virus Neutralisation: New Insights from Kinetic Neutralisation Curves
Magnus, Carsten
2013-01-01
Antibodies binding to the surface of virions can lead to virus neutralisation. Different theories have been proposed to determine the number of antibodies that must bind to a virion for neutralisation. Early models are based on chemical binding kinetics; applying these models leads to very low estimates of the number of antibodies needed for neutralisation. In contrast, according to the more conceptual approach of stoichiometries in virology, a much higher number of antibodies is required for virus neutralisation. Here, we combine chemical binding kinetics with (virological) stoichiometries to better explain virus neutralisation by antibody binding. This framework is in agreement with published data on the neutralisation of the human immunodeficiency virus. Knowing antibody reaction constants, our model allows us to estimate stoichiometrical parameters from kinetic neutralisation curves. In addition, we can identify important parameters that will make further analysis of kinetic neutralisation curves more valuable in the context of estimating stoichiometries. Our model gives a more subtle explanation of kinetic neutralisation curves in terms of single-hit and multi-hit kinetics. PMID:23468602
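A hedged sketch of how binding kinetics and a multi-hit stoichiometry can be combined, much simpler than, but in the spirit of, the framework above: each antibody site is occupied independently with a kinetically determined probability, and a virion counts as neutralised once a threshold number of sites is bound (all parameter values are illustrative):

    # Two-state binding kinetics feeding a multi-hit neutralisation rule.
    import numpy as np
    from scipy.stats import binom

    def site_occupancy(t, conc, k_on, k_off):
        """p(t) for A + S <-> AS with antibody in excess."""
        k_obs = k_on * conc + k_off
        return (k_on * conc / k_obs) * (1.0 - np.exp(-k_obs * t))

    def frac_neutralised(t, conc, k_on, k_off, n_sites, n_hit):
        """P(at least n_hit of n_sites bound) at time t."""
        p = site_occupancy(t, conc, k_on, k_off)
        return binom.sf(n_hit - 1, n_sites, p)

    print(frac_neutralised(t=600.0, conc=1e-8, k_on=1e5, k_off=1e-3,
                           n_sites=42, n_hit=19))

Setting n_hit = 1 recovers single-hit kinetics, so the same expression spans the single-hit and multi-hit readings of kinetic neutralisation curves.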
Beyond Rating Curves: Time Series Models for in-Stream Turbidity Prediction
NASA Astrophysics Data System (ADS)
Wang, L.; Mukundan, R.; Zion, M.; Pierson, D. C.
2012-12-01
The New York City Department of Environmental Protection (DEP) manages New York City's water supply, which comprises over 20 reservoirs and supplies over 1 billion gallons of water per day to more than 9 million customers. DEP's "West of Hudson" reservoirs located in the Catskill Mountains are unfiltered per a renewable filtration avoidance determination granted by the EPA. While water quality is usually pristine, high volume storm events occasionally cause the reservoirs to become highly turbid. A logical strategy for turbidity control is to temporarily remove the turbid reservoirs from service. While effective in limiting delivery of turbid water and reducing the need for in-reservoir alum flocculation, this strategy runs the risk of negatively impacting water supply reliability. Thus, it is advantageous for DEP to understand how long a particular turbidity event will affect its system. In order to understand the duration, intensity, and total load of a turbidity event, predictions of future in-stream turbidity values are important. Traditionally, turbidity predictions have been carried out by applying streamflow observations/forecasts to a flow-turbidity rating curve. However, predictions from rating curves are often inaccurate due to inter- and intra-event variability in flow-turbidity relationships. Predictions can be improved by applying an autoregressive moving average (ARMA) time series model in combination with a traditional rating curve. Since 2003, DEP and the Upstate Freshwater Institute have compiled a relatively consistent set of 15-minute turbidity observations at various locations on Esopus Creek above Ashokan Reservoir. Using daily averages of these data and streamflow observations at nearby USGS gauges, flow-turbidity rating curves were developed via linear regression. Time series analysis revealed that the linear regression residuals may be represented using an ARMA(1,2) process. Based on this information, flow-turbidity regressions with ARMA(1,2) errors were fit to the observations. Preliminary model validation exercises at a 30-day forecast horizon show that the ARMA error models generally improve the predictive skill of the linear regression rating curves. Skill seems to vary based on the ambient hydrologic conditions at the onset of the forecast. For example, ARMA error model forecasts issued before a high flow/turbidity event do not show significant improvements over the rating curve approach. However, ARMA error model forecasts issued during the "falling limb" of the hydrograph are significantly more accurate than rating curves for both single day and accumulated event predictions. In order to assist in reservoir operations decisions associated with turbidity events and general water supply reliability, DEP has initiated design of an Operations Support Tool (OST). OST integrates a reservoir operations model with 2D hydrodynamic water quality models and a database compiling near-real-time data sources and hydrologic forecasts. Currently, OST uses conventional flow-turbidity rating curves and hydrologic forecasts for predictive turbidity inputs. Given the improvements in predictive skill over traditional rating curves, the ARMA error models are currently being evaluated as an addition to DEP's Operations Support Tool.
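A hedged sketch of the rating-curve-with-ARMA(1,2)-errors idea using statsmodels' regression-with-SARIMA-errors machinery; the flow and turbidity series below are synthetic stand-ins, and a real forecast would supply forecast flows as the future exog rather than the recycled tail used here:

    # Log-log rating curve with ARMA(1,2) errors via SARIMAX.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(0)
    log_q = pd.Series(np.log(50 + 40 * rng.random(365)))    # stand-in daily flow
    log_turb = 0.8 * log_q - 1.0 + rng.normal(0, 0.3, 365)  # stand-in turbidity

    model = SARIMAX(log_turb, exog=log_q, order=(1, 0, 2), trend="c")
    fit = model.fit(disp=False)
    future_q = log_q[-30:].to_numpy().reshape(-1, 1)        # stand-in future flows
    forecast = fit.get_forecast(steps=30, exog=future_q)
    print(forecast.predicted_mean.head())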
Modeling streamflow from coupled airborne laser scanning and acoustic Doppler current profiler data
Norris, Lam; Kean, Jason W.; Lyon, Steve
2016-01-01
The rating curve enables the translation of water depth into stream discharge through a reference cross-section. This study investigates coupling national-scale airborne laser scanning (ALS) and acoustic Doppler current profiler (ADCP) bathymetric survey data for generating stream rating curves. A digital terrain model was defined from these data and applied in a physically based 1-D hydraulic model to generate rating curves for a regularly monitored location in northern Sweden. Analysis of the ALS data showed that overestimation of the streambank elevation could be adjusted with a root mean square error (RMSE) block adjustment using a higher-accuracy manual topographic survey. The results of our study demonstrate that the rating curve generated from the vertically corrected ALS data combined with ADCP data had lower errors (RMSE = 0.79 m^3/s) than the empirical rating curve (RMSE = 1.13 m^3/s) when compared to streamflow measurements. We consider these findings encouraging, as hydrometric agencies can potentially leverage national-scale ALS and ADCP instrumentation to reduce the cost and effort required for establishing and maintaining rating curves at gauging station sites similar to the Röån River.
DeSmitt, Holly J; Domire, Zachary J
2016-12-01
Biomechanical models are sensitive to the choice of model parameters. Therefore, determination of accurate subject specific model parameters is important. One approach to generate these parameters is to optimize the values such that the model output will match experimentally measured strength curves. This approach is attractive as it is inexpensive and should provide an excellent match to experimentally measured strength. However, given the problem of muscle redundancy, it is not clear that this approach generates accurate individual muscle forces. The purpose of this investigation is to evaluate this approach using simulated data to enable a direct comparison. It is hypothesized that the optimization approach will be able to recreate accurate muscle model parameters when information from measurable parameters is given. A model of isometric knee extension was developed to simulate a strength curve across a range of knee angles. In order to realistically recreate experimentally measured strength, random noise was added to the modeled strength. Parameters were solved for using a genetic search algorithm. When noise was added to the measurements the strength curve was reasonably recreated. However, the individual muscle model parameters and force curves were far less accurate. Based upon this examination, it is clear that very different sets of model parameters can recreate similar strength curves. Therefore, experimental variation in strength measurements has a significant influence on the results. Given the difficulty in accurately recreating individual muscle parameters, it may be more appropriate to perform simulations with lumped actuators representing similar muscles.
Conduction and rectification in NbOx- and NiO-based metal-insulator-metal diodes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osgood, Richard M., E-mail: richard.m.osgood.civ@mail.mil; Giardini, Stephen; Carlson, Joel
2016-09-15
Conduction and rectification in nanoantenna-coupled NbOx- and NiO-based metal-insulator-metal (MIM) diodes ("nanorectennas") are studied by comparing new theoretical predictions with the measured response of nanorectenna arrays. A new quantum mechanical model is reported and agrees with measurements of current-voltage (I-V) curves, over 10 orders of magnitude in current density, from [NbOx(native)-Nb2O5]- and NiO-based samples with oxide thicknesses in the range of 5-36 nm. The model, which introduces new physics and features, including temperature, electron effective mass, and image potential effects using the pseudobarrier technique, improves upon widely used earlier models, calculates the MIM diode's I-V curve, and quantitatively predicts the rectification responsivity for high-frequency voltages generated in a coupled nanoantenna array by visible/near-infrared light. The model applies both at higher frequencies, when high-energy photons are incident, and at lower frequencies, when the formula for classical rectification, involving derivatives of the I-V curve, may be used. The rectified low-frequency direct current is well predicted by this work's model, but not by fitting the experimentally measured I-V curve with a polynomial or by using the older Simmons model (as shown herein). By fitting the measured I-V curves with our model, the barrier heights in Nb-(NbOx(native)-Nb2O5)-Pt and Ni-NiO-Ti/Ag diodes are found to be 0.41/0.77 and 0.38/0.39 eV, respectively, similar to literature reports, but with effective masses much lower than the free-space value. The NbOx(native)-Nb2O5 dielectric properties improve, and the effective Pt-Nb2O5 barrier height increases, as the oxide thickness increases. A direct current of ~4 nA is reported for normally incident, focused 514 nm continuous-wave laser beams, similar in magnitude to recent reports. This measured direct current is compared to the prediction for the rectified direct current, given by the rectification responsivity, calculated from the I-V curve, times the input power.
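A hedged numerical illustration of the classical (low-frequency) rectification responsivity mentioned above, beta = I''(V) / (2 I'(V)), evaluated on a toy exponential diode characteristic rather than the quantum mechanical model of the paper:

    # Classical rectification responsivity from finite-difference derivatives.
    import numpy as np

    def responsivity(v, i):
        """beta(V) = I''(V) / (2 I'(V)) from an I-V curve."""
        di = np.gradient(i, v)
        d2i = np.gradient(di, v)
        return d2i / (2.0 * di)

    v = np.linspace(-0.2, 0.6, 200)
    i = 1e-9 * (np.exp(v / 0.05) - 1.0)  # toy diode curve, 50 mV scale
    print(responsivity(v, i)[150])       # ~ 1/(2*0.05) = 10 per volt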
Bayesian hierarchical functional data analysis via contaminated informative priors.
Scarpa, Bruno; Dunson, David B
2009-09-01
A variety of flexible approaches have been proposed for functional data analysis, allowing both the mean curve and the distribution about the mean to be unknown. Such methods are most useful when there is limited prior information. Motivated by applications to modeling of temperature curves in the menstrual cycle, this article proposes a flexible approach for incorporating prior information in semiparametric Bayesian analyses of hierarchical functional data. The proposed approach is based on specifying the distribution of functions as a mixture of a parametric hierarchical model and a nonparametric contamination. The parametric component is chosen based on prior knowledge, while the contamination is characterized as a functional Dirichlet process. In the motivating application, the contamination component allows unanticipated curve shapes in unhealthy menstrual cycles. Methods are developed for posterior computation, and the approach is applied to data from a European fecundability study.
A Software Tool for the Rapid Analysis of the Sintering Behavior of Particulate Bodies
2017-11-01
[Fragment: the tool analyzes data bounded by a region that the user selects via cross hairs; planned plot-analysis features include more complicated curve fitting and modeling functions. Cited: German, R.M., "Grain growth behavior of tungsten heavy alloys based on the master sintering curve concept," Metallurgical and Materials Transactions A.]
The X-Ray Light Curve of the Very Luminous Supernova SN 1978K in NGC 1313
NASA Astrophysics Data System (ADS)
Schlegel, Eric M.; Petre, R.; Colbert, E. J. M.
1996-01-01
We present the 0.5-2.0 keV light curve of the X-ray luminous supernova SN 1978K in NGC 1313, based on six ROSAT observations spanning 1990 July to 1994 July. SN 1978K is one of a few supernovae or supernova remnants that are very luminous (~10^39-10^40 ergs s^-1) in the X-ray, optical, and radio bands, and the first, at a supernova age of 10-20 yr, for which sufficient data exist to create an X-ray light curve. The X-ray flux is approximately constant over the 4 yr sampled by our observations, which were obtained 12-16 yr after the initial explosion. Three models exist to explain the large X-ray luminosity: pulsar input, a reverse shock running back into the expanding debris of the supernova, and the outgoing shock crushing cloudlets in the debris field. Based upon calculations of Chevalier & Fransson, a pulsar cannot provide sufficient energy to produce the soft X-ray luminosity. Based upon the models and the light curve to date, it is not possible to discern the evolutionary phase of the supernova.
Hydrologic impacts of climate change and urbanization in Las Vegas Wash Watershed, Nevada
In this study, a cell-based model for the Las Vegas Wash (LVW) Watershed in Clark County, Nevada, was developed by combining traditional hydrologic modeling methods (Thornthwaite's water balance model and the Soil Conservation Service's Curve Number method) with the pixel-base...
Simplified gas sensor model based on AlGaN/GaN heterostructure Schottky diode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Subhashis, E-mail: subhashis.ds@gmail.com; Majumdar, S.; Kumar, R.
2015-08-28
Physics-based modeling of an AlGaN/GaN heterostructure Schottky diode gas sensor has been investigated for high sensitivity and linearity of the device. Here the surface and heterointerface properties are greatly exploited; in particular, the dependence of the two-dimensional electron gas (2DEG) upon the surface charges is utilized. The Schottky diode was simulated in a Technology Computer Aided Design (TCAD) tool and I-V curves were generated; from these curves, a 76% response was recorded in the presence of 500 ppm gas at a bias voltage of 0.95 V.
New limb-darkening coefficients for modeling binary star light curves
NASA Technical Reports Server (NTRS)
Van Hamme, W.
1993-01-01
We present monochromatic, passband-specific, and bolometric limb-darkening coefficients for a linear law as well as for nonlinear logarithmic and square-root limb-darkening laws. These coefficients, including the bolometric ones, are needed when modeling binary star light curves with the latest version of the Wilson-Devinney light curve program. We base our calculations on the most recent ATLAS stellar atmosphere models for solar chemical composition stars with a wide range of effective temperatures and surface gravities. We examine how well various limb-darkening approximations represent the variation of the emerging specific intensity across a stellar surface as computed according to the model. For binary star light curve modeling purposes, we propose the use of a logarithmic or a square-root law. We design our tables in such a manner that the relative quality of either law with respect to another can be easily compared. Since the computation of bolometric limb-darkening coefficients first requires monochromatic coefficients, we also offer tables of these coefficients (at 1221 wavelength values between 9.09 nm and 160 micrometers) and tables of passband-specific coefficients for commonly used photometric filters.
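The three laws referred to above, written as functions of mu = cos(theta) with I(mu)/I(1) normalization; the coefficient values in the example are placeholders, not entries from the paper's tables:

    # Linear, logarithmic, and square-root limb-darkening laws.
    import numpy as np

    def linear(mu, x):
        return 1.0 - x * (1.0 - mu)

    def logarithmic(mu, x, y):
        return 1.0 - x * (1.0 - mu) - y * mu * np.log(mu)

    def square_root(mu, x, y):
        return 1.0 - x * (1.0 - mu) - y * (1.0 - np.sqrt(mu))

    mu = np.linspace(0.01, 1.0, 5)  # avoid mu = 0 in the logarithmic law
    print(linear(mu, 0.6))
    print(logarithmic(mu, 0.6, 0.2))
    print(square_root(mu, 0.3, 0.4))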
NASA Astrophysics Data System (ADS)
von Paris, P.; Gratier, P.; Bordé, P.; Selsis, F.
2016-03-01
Context. Basic atmospheric properties, such as albedo and heat redistribution between the day- and nightsides, have been inferred for a number of planets using observations of secondary eclipses and thermal phase curves. Optical phase curves have not yet been used to constrain these atmospheric properties consistently. Aims. We model previously published phase curves of CoRoT-1b, TrES-2b, and HAT-P-7b, and infer albedos and recirculation efficiencies. These are then compared to previous estimates based on secondary eclipse data. Methods. We use a physically consistent model to construct optical phase curves that takes Lambertian reflection, thermal emission, ellipsoidal variations, and Doppler boosting into account. Results. CoRoT-1b shows a non-negligible scattering albedo (0.11 < A_S < 0.3 at 95% confidence) as well as small day-night temperature contrasts, indicative of moderate to high redistribution of energy between the dayside and nightside. These values are contrary to previous secondary eclipse and phase curve analyses. In the case of HAT-P-7b, model results suggest a relatively high scattering albedo (A_S ≈ 0.3). This confirms previous phase curve analysis; however, it is in slight contradiction with values inferred from secondary eclipse data. For TrES-2b, both approaches yield very similar estimates of albedo and heat recirculation. Discrepancies between recirculation and albedo values inferred from secondary eclipses and from optical phase curves might hint that optical and IR observations probe different atmospheric layers, and hence different temperatures.
Comparison of BRDF-Predicted and Observed Light Curves of GEO Satellites
NASA Astrophysics Data System (ADS)
Ceniceros, A.; Dao, P.; Gaylor, D.; Rast, R.; Anderson, J.; Pinon, E., III
Although the amount of light received by ground-based sensors from Resident Space Objects (RSOs) in geostationary orbit (GEO) is small, information can still be extracted in the form of light curves (temporal brightness, or apparent magnitude). Previous research has shown promising results in determining RSO characteristics such as shape, size, reflectivity, and attitude by processing simulated light curve data with various estimation algorithms. These simulated light curves have been produced using one of several existing analytic Bidirectional Reflectance Distribution Function (BRDF) models. These BRDF models have generally come from researchers in computer graphics and machine vision and have not been shown to be realistic for telescope observations of RSOs in GEO. While BRDFs have been used for space situational awareness (SSA) analysis and characterization, there is a lack of research on the validation of BRDFs against real data. In this paper, we compared telescope data provided by the Air Force Research Laboratory (AFRL) with predicted light curves from the Ashikhmin-Premoze BRDF and two additional popular illumination models, Ashikhmin-Shirley and Cook-Torrance. We computed predicted light curves based on two-line element sets (TLEs), a shape model, an attitude profile, the observing ground station location, the observation time, and the BRDF. The predicted light curves were then compared with the AFRL telescope data. The selected BRDFs produced accurate apparent magnitude trends and behavior, but uncertainties due to the lack of attitude information and deficiencies in our satellite model prevented a better match to the real data. The current findings present a foundation for ample future research.
Teich, Andrew F; Qian, Ning
2010-03-01
Orientation adaptation and perceptual learning change orientation tuning curves of V1 cells. Adaptation shifts tuning curve peaks away from the adapted orientation, reduces tuning curve slopes near the adapted orientation, and increases the responses on the far flank of tuning curves. Learning an orientation discrimination task increases tuning curve slopes near the trained orientation. These changes have been explained previously in a recurrent model (RM) of orientation selectivity. However, the RM generates only complex cells when they are well tuned, so that there is currently no model of orientation plasticity for simple cells. In addition, some feedforward models, such as the modified feedforward model (MFM), also contain recurrent cortical excitation, and it is unknown whether they can explain plasticity. Here, we compare plasticity in the MFM, which simulates simple cells, and a recent modification of the RM (MRM), which displays a continuum of simple-to-complex characteristics. Both pre- and postsynaptic-based modifications of the recurrent and feedforward connections in the models are investigated. The MRM can account for all the learning- and adaptation-induced plasticity, for both simple and complex cells, while the MFM cannot. The key features from the MRM required for explaining plasticity are broadly tuned feedforward inputs and sharpening by a Mexican hat intracortical interaction profile. The mere presence of recurrent cortical interactions in feedforward models like the MFM is insufficient; such models have more rigid tuning curves. We predict that the plastic properties must be absent for cells whose orientation tuning arises from a feedforward mechanism.
Quantifying and Reducing Curve-Fitting Uncertainty in Isc
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campanelli, Mark; Duck, Benjamin; Emery, Keith
2015-06-14
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
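A minimal sketch of the localized straight-line estimate of Isc with a regression-based uncertainty; the evidence-based window selection described above is not reproduced, and the example points are made up:

    # OLS straight-line fit of I-V points near short circuit.
    import numpy as np

    def isc_from_window(v, i):
        """Fit i = b0 + b1*v; Isc is the intercept b0 (value at V = 0),
        returned with its standard error from the regression."""
        a = np.column_stack([np.ones_like(v), v])
        coef, res, _, _ = np.linalg.lstsq(a, i, rcond=None)
        sigma2 = res[0] / (len(v) - 2)          # residual variance
        cov = sigma2 * np.linalg.inv(a.T @ a)   # parameter covariance
        return coef[0], np.sqrt(cov[0, 0])      # (Isc, standard uncertainty)

    v = np.array([0.00, 0.05, 0.10, 0.15, 0.20])
    i = np.array([5.02, 5.01, 5.00, 4.99, 4.97])
    print(isc_from_window(v, i))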
Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campanelli, Mark; Duck, Benjamin; Emery, Keith
Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data pointsmore » can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.« less
Solares, Santiago D.
2015-11-26
This study introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretationmore » of AFM experiments is relatively limited. In the proposed Q3D model the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tappingmode imaging, for both of which the force curves exhibit the expected features. Lastly, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other controls schemes in order to aid in the interpretation of AFM experiments.« less
Solares, Santiago D
2015-01-01
This paper introduces a quasi-3-dimensional (Q3D) viscoelastic model and software tool for use in atomic force microscopy (AFM) simulations. The model is based on a 2-dimensional array of standard linear solid (SLS) model elements. The well-known 1-dimensional SLS model is a textbook example in viscoelastic theory but is relatively new in AFM simulation. It is the simplest model that offers a qualitatively correct description of the most fundamental viscoelastic behaviors, namely stress relaxation and creep. However, this simple model does not reflect the correct curvature in the repulsive portion of the force curve, so its application in the quantitative interpretation of AFM experiments is relatively limited. In the proposed Q3D model the use of an array of SLS elements leads to force curves that have the typical upward curvature in the repulsive region, while still offering a very low computational cost. Furthermore, the use of a multidimensional model allows for the study of AFM tips having non-ideal geometries, which can be extremely useful in practice. Examples of typical force curves are provided for single- and multifrequency tapping-mode imaging, for both of which the force curves exhibit the expected features. Finally, a software tool to simulate amplitude and phase spectroscopy curves is provided, which can be easily modified to implement other control schemes in order to aid in the interpretation of AFM experiments.
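As a worked illustration of the building block named above, here is a minimal sketch of the 1-D standard linear solid (Zener) response, with assumed parameter values; the Q3D model arrays many such elements in two dimensions.

```python
# Minimal sketch, assuming illustrative parameter values: stress relaxation and
# creep of the 1-D standard linear solid (Zener) element.
import numpy as np

E1, E2, eta = 1.0e6, 4.0e6, 2.0e6          # Pa, Pa, Pa*s (hypothetical)
t = np.linspace(0.0, 10.0, 500)            # s

tau_relax = eta / E2                        # relaxation time of the Maxwell arm
E_t = E1 + E2 * np.exp(-t / tau_relax)      # relaxation modulus, E0 -> E_inf

tau_creep = eta * (E1 + E2) / (E1 * E2)     # retardation time
J_t = 1.0/E1 - (1.0/E1 - 1.0/(E1 + E2)) * np.exp(-t / tau_creep)  # creep compliance

print(E_t[0], E_t[-1])   # starts at E1 + E2, relaxes toward E1
```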
Numerical integration techniques for curved-element discretizations of molecule-solvent interfaces.
Bardhan, Jaydeep P; Altman, Michael D; Willis, David J; Lippow, Shaun M; Tidor, Bruce; White, Jacob K
2007-07-07
Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, here methods were developed to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work [J. Comput.-Aided Mol. Des. 9, 149 (1995)], two classes of curved elements were defined that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. Numerical integration techniques are presented that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, a set of calculations is presented that compares the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute-solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that the methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online as supplemental material.
NASA Astrophysics Data System (ADS)
Massoudieh, A.; Dentz, M.; Le Borgne, T.
2017-12-01
In heterogeneous media, the velocity distribution and the spatial correlation structure of velocity for solute particles determine the breakthrough curves and how they evolve as one moves away from the solute source. The ability to predict such evolution can help relate the spatio-statistical hydraulic properties of the media to the transport behavior and travel time distributions. While commonly used non-local transport models such as anomalous dispersion and classical continuous time random walk (CTRW) can reproduce breakthrough curves successfully by adjusting the model parameter values, they lack the ability to relate model parameters to the spatio-statistical properties of the media. This in turn limits the transferability of these models. In the research to be presented, we express the concentration or flux of solutes as a distribution over their velocity. We then derive an integrodifferential equation that governs the evolution of the particle distribution over velocity at given times and locations for a particle ensemble, based on a presumed velocity correlation structure and an ergodic cross-sectional velocity distribution. This way, the spatial evolution of breakthrough curves away from the source is predicted based on the cross-sectional velocity distribution and the connectivity, which is expressed by the velocity transition probability density. The transition probability is specified via a copula function that can help construct a joint distribution with a given correlation and given marginal velocities. Using this approach, we analyze the breakthrough curves depending on the velocity distribution and correlation properties. The model shows how the solute transport behavior evolves from ballistic transport at small spatial scales to Fickian dispersion at large length scales relative to the velocity correlation length.
On the reconstruction of the surface structure of the spotted stars
NASA Astrophysics Data System (ADS)
Kolbin, A. I.; Shimansky, V. V.; Sakhibullin, N. A.
2013-07-01
We have developed and tested a light-curve inversion technique for photometric mapping of spotted stars. The surface of a spotted star is partitioned into small area elements, over which a search is carried out for the intensity distribution providing the best agreement between the observed and model light curves within a specified uncertainty. We have tested mapping techniques based on the use of both a single light curve and several light curves obtained in different photometric bands. Surface reconstruction artifacts due to the ill-posed nature of the problem have been identified.
NASA Astrophysics Data System (ADS)
Perrault, Matthieu; Gueguen, Philippe; Aldea, Alexandru; Demetriu, Sorin
2013-12-01
The lack of knowledge when modelling existing buildings leads to significant variability in fragility curves for single or grouped existing buildings. This study aims to investigate the uncertainties of fragility curves, with special consideration of the single-building sigma. Experimental data and simplified models are applied to the BRD tower in Bucharest, Romania, an RC building with permanent instrumentation. A three-step methodology is applied: (1) adjustment of a linear MDOF model for experimental modal analysis using a Timoshenko beam model and based on Anderson's criteria, (2) computation of the structure's response to a large set of accelerograms simulated by SIMQKE software, considering twelve ground motion parameters as intensity measurements (IM), and (3) construction of the fragility curves by comparing numerical interstory drift with the threshold criteria provided by the Hazus methodology for the slight damage state. By introducing experimental data into the model, uncertainty is reduced to 0.02 considering Sd(f1) as the seismic intensity IM, and the uncertainty related to the model is assessed at 0.03. These values must be compared with the total uncertainty value of around 0.7 provided by the Hazus methodology.
Awareness, persuasion, and adoption: Enriching the Bass model
NASA Astrophysics Data System (ADS)
Colapinto, Cinzia; Sartori, Elena; Tolotti, Marco
2014-02-01
In the context of diffusion of innovations, we propose a probabilistic model based on interacting populations connected through new communication channels. The potential adopters are heterogeneous in the connectivity levels and in their taste for innovation. The proposed framework can model the different stages of the adoption dynamics. In particular, the adoption curve is the result of a micro-founded decision process following the awareness phase. Eventually, we recover stylized facts pointed out by the extant literature in the field, such as delayed adoptions and non-monotonic adoption curves.
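For reference, the classical Bass diffusion curve that this model enriches can be integrated in a few lines; the coefficients below are illustrative, not taken from the paper.

```python
# Minimal sketch, assuming illustrative coefficients: the classical Bass
# adoption curve dN/dt = (p + q*N/m) * (m - N).
import numpy as np

p, q, m = 0.03, 0.38, 1.0        # innovation, imitation, market potential
dt, T = 0.01, 20.0
N = np.zeros(int(T / dt))
for k in range(N.size - 1):
    N[k + 1] = N[k] + dt * (p + q * N[k] / m) * (m - N[k])  # Euler step

t_peak = np.argmax(np.diff(N)) * dt          # time of peak adoption rate
print(f"peak adoption rate near t = {t_peak:.2f}")
```

The micro-founded awareness/persuasion dynamics described in the abstract can produce curves that deviate from this smooth S-shape, including delayed and non-monotonic adoption.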
NASA Astrophysics Data System (ADS)
Eric, H.
1982-12-01
The liquidus curves of the Sn-Te and Sn-SnS systems were evaluated by the regular associated solution model (RAS). The main assumption of this theory is the existence of species A, B and associated complexes AB in the liquid phase. Thermodynamic properties of the binary A-B system are derived by ternary regular solution equations. Calculations based on this model for the Sn-Te and Sn-SnS systems are in agreement with published data.
Uncertainty estimation with bias-correction for flow series based on rating curve
NASA Astrophysics Data System (ADS)
Shao, Quanxi; Lerat, Julien; Podger, Geoff; Dutta, Dushmanta
2014-03-01
Streamflow discharge constitutes one of the fundamental data sets required to perform water balance studies and develop hydrological models. A rating curve, designed based on a series of concurrent stage and discharge measurements at a gauging location, provides a way to generate complete discharge time series with a reasonable quality if sufficient measurement points are available. However, the associated uncertainty is frequently not available even though it has a significant impact on hydrological modelling. In this paper, we identify the discrepancy of the hydrographers' rating curves used to derive the historical discharge data series and propose a modification by bias correction, which, like the traditional rating curve, also takes the form of a power function. In order to obtain the uncertainty estimation, we propose a further both-side Box-Cox transformation to bring the regression residuals as close to the normal distribution as possible, so that a proper uncertainty can be attached to the whole discharge series in the ensemble generation. We demonstrate the proposed method by applying it to the gauging stations in the Flinders and Gilbert rivers in north-west Queensland, Australia.
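A minimal sketch of the two ingredients named in the abstract, a power-function rating curve and a both-side Box-Cox transformation, is given below on synthetic gaugings; the cease-to-flow stage h0 and all parameter values are assumptions, not the paper's.

```python
# Minimal sketch on synthetic data: fit a power-function rating curve
# Q = a*(h - h0)^b, then apply one Box-Cox lambda to both sides so the
# regression residuals are pulled toward normality. h0 and all values assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
h0 = 0.2                                        # assumed cease-to-flow stage (m)
h = np.sort(rng.uniform(0.5, 3.0, 60))          # gauged stages (m)
Q = 12.0 * (h - h0)**1.8 * rng.lognormal(0.0, 0.08, h.size)  # gauged discharges

b, log_a = np.polyfit(np.log(h - h0), np.log(Q), 1)   # log-log power-law fit
Q_fit = np.exp(log_a) * (h - h0)**b

Q_bc, lam = stats.boxcox(Q)                     # both-side transform, shared lambda
resid = Q_bc - stats.boxcox(Q_fit, lmbda=lam)   # residuals to check for normality
print(f"a = {np.exp(log_a):.2f}, b = {b:.2f}, Box-Cox lambda = {lam:.2f}")
```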
Model-Based IN SITU Parameter Estimation of Ultrasonic Guided Waves in AN Isotropic Plate
NASA Astrophysics Data System (ADS)
Hall, James S.; Michaels, Jennifer E.
2010-02-01
Most ultrasonic systems employing guided waves for flaw detection require information such as dispersion curves, transducer locations, and expected propagation loss. Degraded system performance may result if assumed parameter values do not accurately reflect the actual environment. By characterizing the propagating environment in situ at the time of test, potentially erroneous a priori estimates are avoided and performance of ultrasonic guided wave systems can be improved. A four-part model-based algorithm is described, building on previous work, in which an assumed propagation model describes the received signals and the model parameters are estimated from them. The approach extends that work by demonstrating the ability to estimate parameters for the case of single-mode propagation. Performance is demonstrated on signals obtained from theoretical dispersion curves, finite element modeling, and experimental data.
A Multilevel Latent Growth Curve Approach to Predicting Student Proficiency
ERIC Educational Resources Information Center
Choi, Kilchan; Goldschmidt, Pete
2012-01-01
Value-added models and growth-based accountability aim to evaluate school's performance based on student growth in learning. The current focus is on linking the results from value-added models to the ones from growth-based accountability systems including Adequate Yearly Progress decisions mandated by No Child Left Behind. We present a new…
Simple calculation of ab initio melting curves: Application to aluminum.
Robert, Grégory; Legrand, Philippe; Arnault, Philippe; Desbiens, Nicolas; Clérouin, Jean
2015-03-01
We present a simple, fast, and promising method to compute the melting curves of materials with ab initio molecular dynamics. It is based on the two-phase thermodynamic model of Lin et al. [J. Chem. Phys. 119, 11792 (2003)] and its improved version given by Desjarlais [Phys. Rev. E 88, 062145 (2013)]. In this model, the velocity autocorrelation function is utilized to calculate the contribution of the nuclei motion to the entropy of the solid and liquid phases. It is then possible to find the thermodynamic conditions of equal Gibbs free energy between these phases, defining the melting curve. The first benchmark on the face-centered cubic melting curve of aluminum from 0 to 300 GPa demonstrates how to obtain an accuracy of 5%-10%, comparable to the most sophisticated methods, for a much lower computational cost.
Evaluation of a Stochastic Inactivation Model for Heat-Activated Spores of Bacillus spp. ▿
Corradini, Maria G.; Normand, Mark D.; Eisenberg, Murray; Peleg, Micha
2010-01-01
Heat activates the dormant spores of certain Bacillus spp., which is reflected in the “activation shoulder” in their survival curves. At the same time, heat also inactivates the already active and just activated spores, as well as those still dormant. A stochastic model based on progressively changing probabilities of activation and inactivation can describe this phenomenon. The model is presented in a fully probabilistic discrete form for individual and small groups of spores and as a semicontinuous deterministic model for large spore populations. The same underlying algorithm applies to both isothermal and dynamic heat treatments. Its construction does not require the assumption of the activation and inactivation kinetics or knowledge of their biophysical and biochemical mechanisms. A simplified version of the semicontinuous model was used to simulate survival curves with the activation shoulder that are reminiscent of experimental curves reported in the literature. The model is not intended to replace current models to predict dynamic inactivation but only to offer a conceptual alternative to their interpretation. Nevertheless, by linking the survival curve's shape to probabilities of events at the individual spore level, the model explains, and can be used to simulate, the irregular activation and survival patterns of individual and small groups of spores, which might be involved in food poisoning and spoilage. PMID:20453137
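The discrete probabilistic form of such a model is easy to simulate. The sketch below assumes constant per-step activation and inactivation probabilities, whereas the paper's model lets them change progressively with the heat treatment, so it is only a simplified illustration.

```python
# Minimal sketch, assuming constant per-step probabilities: stochastic
# activation/inactivation of individual spores, producing an activation shoulder.
import numpy as np

rng = np.random.default_rng(2)
n_spores, n_steps = 10_000, 120
p_act, p_inact = 0.06, 0.04                 # assumed per-step probabilities

DORMANT, ACTIVE, DEAD = 0, 1, 2
state = np.full(n_spores, DORMANT)
active_counts = []
for _ in range(n_steps):
    activate = (state == DORMANT) & (rng.random(n_spores) < p_act)
    kill = (state != DEAD) & (rng.random(n_spores) < p_inact)  # heat kills all states
    state[activate] = ACTIVE
    state[kill] = DEAD
    active_counts.append(int(np.count_nonzero(state == ACTIVE)))

# The active count first rises (the activation shoulder), then decays
print(max(active_counts), active_counts[-1])
```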
Modeling of a Robust Confidence Band for the Power Curve of a Wind Turbine.
Hernandez, Wilmar; Méndez, Alfredo; Maldonado-Correa, Jorge L; Balleteros, Francisco
2016-12-07
Having an accurate model of the power curve of a wind turbine allows us to better monitor its operation and plan storage capacity. Since wind speed and direction are highly stochastic, the forecasting of the power generated by the wind turbine is of the same nature as well. In this paper, a method for obtaining a robust confidence band containing the power curve of a wind turbine under test conditions is presented. Here, the confidence band is bounded by two curves which are estimated using parametric statistical inference techniques. However, the observations that are used for carrying out the statistical analysis are obtained by using the binning method, and in each bin, the outliers are eliminated by using a censorship process based on robust statistical techniques. Then, the observations that are not outliers are divided into observation sets. Finally, both the power curve of the wind turbine and the two curves that define the robust confidence band are estimated using each of the previously mentioned observation sets.
Modeling of a Robust Confidence Band for the Power Curve of a Wind Turbine
Hernandez, Wilmar; Méndez, Alfredo; Maldonado-Correa, Jorge L.; Balleteros, Francisco
2016-01-01
Having an accurate model of the power curve of a wind turbine allows us to better monitor its operation and plan storage capacity. Since wind speed and direction are highly stochastic, the forecasting of the power generated by the wind turbine is of the same nature as well. In this paper, a method for obtaining a robust confidence band containing the power curve of a wind turbine under test conditions is presented. Here, the confidence band is bounded by two curves which are estimated using parametric statistical inference techniques. However, the observations that are used for carrying out the statistical analysis are obtained by using the binning method, and in each bin, the outliers are eliminated by using a censorship process based on robust statistical techniques. Then, the observations that are not outliers are divided into observation sets. Finally, both the power curve of the wind turbine and the two curves that define the robust confidence band are estimated using each of the previously mentioned observation sets. PMID:27941604
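A minimal sketch of the binning-plus-censorship step described in both records is shown below on synthetic data; the 3-MAD cutoff and the quantile band are crude stand-ins for the paper's parametric inference.

```python
# Minimal sketch, assuming synthetic data: binning with a robust censorship
# step. Points beyond 3 scaled MADs of the bin median are discarded before the
# power curve and a band are estimated.
import numpy as np

rng = np.random.default_rng(3)
wind = rng.uniform(0.0, 25.0, 5000)                       # m/s
power = 2000.0 * np.clip((wind / 12.0)**3, 0.0, 1.0) + rng.normal(0.0, 60.0, wind.size)

centers, curve, band_lo, band_hi = [], [], [], []
for a in np.arange(0.0, 25.0, 0.5):                       # 0.5 m/s bins
    p = power[(wind >= a) & (wind < a + 0.5)]
    if p.size < 10:
        continue
    med = np.median(p)
    mad = 1.4826 * np.median(np.abs(p - med))              # robust sigma estimate
    kept = p[np.abs(p - med) <= 3.0 * mad]                 # censor outliers
    centers.append(a + 0.25)
    curve.append(kept.mean())
    band_lo.append(np.quantile(kept, 0.025))
    band_hi.append(np.quantile(kept, 0.975))
print(len(centers), "bins estimated")
```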
Cross-country transferability of multi-variable damage models
NASA Astrophysics Data System (ADS)
Wagenaar, Dennis; Lüdtke, Stefan; Kreibich, Heidi; Bouwer, Laurens
2017-04-01
Flood damage assessment is often done with simple damage curves based only on flood water depth. Additionally, damage models are often transferred in space and time, e.g. from region to region or from one flood event to another. Validation has shown that depth-damage curve estimates are associated with high uncertainties, particularly when applied in regions outside the area where the data for curve development was collected. Recently, progress has been made with multi-variable damage models created with data-mining techniques, i.e. Bayesian Networks and random forest. However, it is still unknown to what extent and under which conditions model transfers are possible and reliable. Model validations in different countries will provide valuable insights into the transferability of multi-variable damage models. In this study we compare multi-variable models developed on basis of flood damage datasets from Germany as well as from The Netherlands. Data from several German floods was collected using computer aided telephone interviews. Data from the 1993 Meuse flood in the Netherlands is available, based on compensations paid by the government. The Bayesian network and random forest based models are applied and validated in both countries on basis of the individual datasets. A major challenge was the harmonization of the variables between both datasets due to factors like differences in variable definitions, and regional and temporal differences in flood hazard and exposure characteristics. Results of model validations and comparisons in both countries are discussed, particularly in respect to encountered challenges and possible solutions for an improvement of model transferability.
Kuesten, Carla; Bi, Jian
2018-06-03
Conventional drivers-of-liking analysis was extended with a time dimension into temporal drivers of liking (TDOL), based on functional data analysis methodology and non-additive models for multiple-attribute time-intensity (MATI) data. The non-additive models, which consider both direct effects and interaction effects of attributes on consumer overall liking, include the Choquet integral and fuzzy measures from multi-criteria decision-making, and linear regression based on variance decomposition. Dynamics of TDOL, i.e., the derivatives of the relative-importance functional curves, were also explored. The well-established R packages 'fda', 'kappalab' and 'relaimpo' were used in the paper for developing TDOL. Applied use of these methods shows that the relative importance of MATI curves offers insights for understanding the temporal aspects of consumer liking for fruit chews.
NASA Astrophysics Data System (ADS)
Engeland, K.; Steinsland, I.; Petersen-Øverleir, A.; Johansen, S.
2012-04-01
The aim of this study is to assess the uncertainties in streamflow simulations when uncertainties in both observed inputs (precipitation and temperature) and streamflow observations used in the calibration of the hydrological model are explicitly accounted for. To achieve this goal we applied the elevation distributed HBV model operating on daily time steps to a small high-elevation catchment in southern Norway where the seasonal snow cover is important. The uncertainties in precipitation inputs were quantified using conditional simulation. This procedure accounts for the uncertainty related to the density of the precipitation network, but neglects uncertainties related to measurement bias/errors and possible elevation gradients in precipitation. The uncertainties in temperature inputs were quantified using a Bayesian temperature interpolation procedure where the temperature lapse rate is re-estimated every day. The uncertainty in the lapse rate was accounted for whereas the sampling uncertainty related to network density was neglected. For every day a random sample of precipitation and temperature inputs was drawn to be applied as inputs to the hydrologic model. The uncertainties in observed streamflow were assessed based on the uncertainties in the rating curve model. A Bayesian procedure was applied to estimate the probability for rating curve models with 1 to 3 segments and the uncertainties in their parameters. This method neglects uncertainties related to errors in observed water levels. Note that one rating curve was drawn to make one realisation of a whole time series of streamflow; thus, the rating curve errors lead to a systematic bias in the streamflow observations. All these uncertainty sources were linked together in both calibration and evaluation of the hydrologic model using a DREAM based MCMC routine. Effects of having less information (e.g. missing one streamflow measurement for defining the rating curve or missing one precipitation station) were also investigated.
The dark matter distribution of NGC 5921
NASA Astrophysics Data System (ADS)
Ali, Israa Abdulqasim Mohammed; Hashim, Norsiah; Abidin, Zamri Zainal
2018-04-01
We used the neutral atomic hydrogen data of the Very Large Array for the spiral galaxy NGC 5921, with z = 0.0045 at a distance of 22.4 Mpc, to investigate the nature of dark matter. The investigation was based on two theories, namely, dark matter and Modified Newtonian Dynamics (MOND). We presented the kinematic analysis of the rotation curve with two models of dark matter, namely, the Burkert and NFW profiles. The results revealed that the NFW halo model can reproduce the observed rotation curve, with χ²_red ≈ 1, while the Burkert model is unable to fit the observation data. Therefore, the dark matter density profile of NGC 5921 can be presented as a cuspy halo. We also investigated the observed rotation curve of NGC 5921 with MOND, under possible assumptions about the baryonic matter and distance. We note that MOND is still incapable of reproducing the observed rotation curve of the galaxy.
Fractal continuum model for tracer transport in a porous medium.
Herrera-Hernández, E C; Coronado, M; Hernández-Coronado, H
2013-12-01
A model based on the fractal continuum approach is proposed to describe tracer transport in fractal porous media. The original approach has been extended to treat tracer transport and to include systems with radial and uniform flow, which are cases of interest in geoscience. The models involve advection due to the fluid motion in the fractal continuum and dispersion whose mathematical expression is taken from percolation theory. The resulting advective-dispersive equations are numerically solved for continuous and for pulse tracer injection. The tracer profile and the tracer breakthrough curve are evaluated and analyzed in terms of the fractal parameters. It has been found in this work that anomalous transport frequently appears, and a condition on the fractal parameter values to predict when sub- or superdiffusion might be expected has been obtained. The fingerprints of fractality on the tracer breakthrough curve in the explored parameter window consist of an early tracer breakthrough and long tail curves for the spherical and uniform flow cases, and symmetric short tailed curves for the radial flow case.
A 3-D enlarged cell technique (ECT) for elastic wave modelling of a curved free surface
NASA Astrophysics Data System (ADS)
Wei, Songlin; Zhou, Jianyang; Zhuang, Mingwei; Liu, Qing Huo
2016-09-01
The conventional finite-difference time-domain (FDTD) method for elastic waves suffers from the staircasing error when applied to model a curved free surface because of its structured grid. In this work, an improved, stable and accurate 3-D FDTD method for elastic wave modelling on a curved free surface is developed based on the finite volume method and enlarged cell technique (ECT). To achieve a sufficiently accurate implementation, a finite volume scheme is applied to the curved free surface to remove the staircasing error; in the meantime, to achieve the same stability as the FDTD method without reducing the time step increment, the ECT is introduced to preserve the solution stability by enlarging small irregular cells into adjacent cells under the condition of conservation of force. This method is verified by several 3-D numerical examples. Results show that the method is stable at the Courant stability limit for a regular FDTD grid, and has much higher accuracy than the conventional FDTD method.
A high throughput MATLAB program for automated force-curve processing using the AdG polymer model.
O'Connor, Samantha; Gaddis, Rebecca; Anderson, Evan; Camesano, Terri A; Burnham, Nancy A
2015-02-01
Research in understanding biofilm formation is dependent on accurate and representative measurements of the steric forces related to polymer brushes on bacterial surfaces. A MATLAB program to analyze force curves from an AFM efficiently, accurately, and with minimal user bias has been developed. The analysis is based on a modified version of the Alexander and de Gennes (AdG) polymer model, which is a function of equilibrium polymer brush length, probe radius, temperature, separation distance, and a density variable. Automating the analysis reduces the amount of time required to process 100 force curves from several days to less than 2 min. The use of this program to crop and fit force curves to the AdG model will allow researchers to ensure proper processing of large amounts of experimental data and reduce the time required for analysis and comparison of data, thereby enabling higher quality results in a shorter period of time.
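The published MATLAB program is not reproduced here, but the core fitting step can be sketched. The snippet below fits a commonly used simplified AdG-type steric brush expression with SciPy; the exponential functional form, the tip radius R, and all parameter values are illustrative assumptions, not the paper's modified model.

```python
# Minimal sketch, assuming a simplified AdG-type steric brush force
# F(D) = 50*kB*T*R*L0*Gamma^1.5*exp(-2*pi*D/L0), fitted to a force curve.
import numpy as np
from scipy.optimize import curve_fit

kB, T, R = 1.380649e-23, 298.0, 20e-9        # J/K, K, assumed tip radius (m)

def adg_force(D, L0, gamma):
    """Steric force (N) vs. separation D (m); L0 = brush length, gamma = density."""
    return 50.0 * kB * T * R * L0 * gamma**1.5 * np.exp(-2.0 * np.pi * D / L0)

rng = np.random.default_rng(4)
D = np.linspace(1e-9, 100e-9, 200)                         # separations (m)
F = adg_force(D, 40e-9, 1.0e17) * (1.0 + rng.normal(0.0, 0.05, D.size))

popt, _ = curve_fit(adg_force, D, F, p0=[30e-9, 5e16])
print(f"L0 = {popt[0]*1e9:.1f} nm, Gamma = {popt[1]:.2e} m^-2")
```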
Can low-resolution airborne laser scanning data be used to model stream rating curves?
Lyon, Steve; Nathanson, Marcus; Lam, Norris; Dahlke, Helen; Rutzinger, Martin; Kean, Jason W.; Laudon, Hjalmar
2015-01-01
This pilot study explores the potential of using low-resolution (0.2 points/m²) airborne laser scanning (ALS)-derived elevation data to model stream rating curves. Rating curves, which allow the functional translation of stream water depth into discharge and are thus integral to water resource monitoring efforts, were modeled using a physics-based approach that captures basic geometric measurements to establish flow resistance due to implicit channel roughness. We tested synthetically thinned high-resolution (more than 2 points/m²) ALS data as a proxy for low-resolution data at a point density equivalent to that obtained within most national-scale ALS strategies. Our results show that the errors incurred due to the effect of low-resolution versus high-resolution ALS data were less than those due to flow measurement and empirical rating curve fitting uncertainties. As such, although there likely are scale and technical limitations to consider, it is theoretically possible to generate rating curves in a river network from ALS data of the resolution anticipated within national-scale ALS schemes (at least for rivers with relatively simple geometries). This is promising, since generating rating curves from ALS scans would greatly enhance our ability to monitor streamflow by simplifying the overall effort required.
NASA Astrophysics Data System (ADS)
Graur, Or; Zurek, David R.; Rest, Armin; Seitenzahl, Ivo R.; Shappee, Benjamin J.; Fisher, Robert; Guillochon, James; Shara, Michael M.; Riess, Adam G.
2018-06-01
The late-time light curves of Type Ia supernovae (SNe Ia), observed >900 days after explosion, present the possibility of a new diagnostic for SN Ia progenitor and explosion models. First, however, we must discover what physical process (or processes) leads to the slow-down of the light curve relative to a pure 56Co decay, as observed in SNe 2011fe, 2012cg, and 2014J. We present Hubble Space Telescope observations of SN 2015F, taken ≈600–1040 days past maximum light. Unlike those of the three other SNe Ia, the light curve of SN 2015F remains consistent with being powered solely by the radioactive decay of 56Co. We fit the light curves of these four SNe Ia in a consistent manner and measure possible correlations between the light-curve stretch—a proxy for the intrinsic luminosity of the SN—and the parameters of the physical model used in the fit. We propose a new, late-time Phillips-like correlation between the stretch of the SNe and the shape of their late-time light curves, which we parameterize as the difference between their pseudo-bolometric luminosities at 600 and 900 days: ΔL900 = log(L600/L900). Our analysis is based on only four SNe, so a larger sample is required to test the validity of this correlation. If true, this model-independent correlation provides a new way to test which physical process lies behind the slow-down of SN Ia light curves >900 days after explosion, and, ultimately, fresh constraints on the various SN Ia progenitor and explosion models.
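The proposed shape parameter is simple to evaluate for the baseline case. For a light curve powered purely by 56Co decay (e-folding time 111.3 days), the behavior SN 2015F remains consistent with, ΔL900 follows directly:

```python
# Minimal sketch: dL900 = log10(L600/L900) for a purely 56Co-powered decline.
import numpy as np

tau_co56 = 111.3                                  # days, 56Co decay timescale
L600 = np.exp(-600.0 / tau_co56)                  # relative luminosity at 600 d
L900 = np.exp(-900.0 / tau_co56)                  # relative luminosity at 900 d
print(np.log10(L600 / L900))                      # 300/(111.3*ln 10) ~ 1.17 dex
```

Any slow-down mechanism reduces the measured value below this pure-decay benchmark, which is what makes the parameter diagnostic.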
A technique for measuring the quality of an elliptically bent pentaerythritol [PET(002)] crystal
Haugh, M. J.; Jacoby, K. D.; Barrios, M. A.; ...
2016-08-23
Here, we present a technique for determining the X-ray spectral quality from each region of an elliptically curved PET(002) crystal. The investigative technique utilizes the shape of the crystal rocking curve which changes significantly as the radius of curvature changes. This unique quality information enables the spectroscopist to verify where in the spectral range that the spectrometer performance is satisfactory and where there are regions that would show spectral distortion. A collection of rocking curve measurements for elliptically curved PET(002) has been built up in our X-ray laboratory. The multi-lamellar model from the XOP software has been used as a guide and corrections were applied to the model based upon measurements. But, the measurement of RI at small radius of curvature shows an anomalous behavior; the multi-lamellar model fails to show this behavior. The effect of this anomalous RI behavior on an X-ray spectrometer calibration is calculated. It is compared to the multi-lamellar model calculation which is completely inadequate for predicting RI for this range of curvature and spectral energies.
A technique for measuring the quality of an elliptically bent pentaerythritol [PET(002)] crystal
NASA Astrophysics Data System (ADS)
Haugh, M. J.; Jacoby, K. D.; Barrios, M. A.; Thorn, D.; Emig, J. A.; Schneider, M. B.
2016-11-01
We present a technique for determining the X-ray spectral quality from each region of an elliptically curved PET(002) crystal. The investigative technique utilizes the shape of the crystal rocking curve which changes significantly as the radius of curvature changes. This unique quality information enables the spectroscopist to verify where in the spectral range that the spectrometer performance is satisfactory and where there are regions that would show spectral distortion. A collection of rocking curve measurements for elliptically curved PET(002) has been built up in our X-ray laboratory. The multi-lamellar model from the XOP software has been used as a guide and corrections were applied to the model based upon measurements. But, the measurement of RI at small radius of curvature shows an anomalous behavior; the multi-lamellar model fails to show this behavior. The effect of this anomalous RI behavior on an X-ray spectrometer calibration is calculated. It is compared to the multi-lamellar model calculation which is completely inadequate for predicting RI for this range of curvature and spectral energies.
A Dirichlet process model for classifying and forecasting epidemic curves
2014-01-01
Background A forecast can be defined as an endeavor to quantitatively estimate a future event or probabilities assigned to a future occurrence. Forecasting stochastic processes such as epidemics is challenging since there are several biological, behavioral, and environmental factors that influence the number of cases observed at each point during an epidemic. However, accurate forecasts of epidemics would impact timely and effective implementation of public health interventions. In this study, we introduce a Dirichlet process (DP) model for classifying and forecasting influenza epidemic curves. Methods The DP model is a nonparametric Bayesian approach that enables the matching of current influenza activity to simulated and historical patterns, identifies epidemic curves different from those observed in the past and enables prediction of the expected epidemic peak time. The method was validated using simulated influenza epidemics from an individual-based model and the accuracy was compared to that of the tree-based classification technique, Random Forest (RF), which has been shown to achieve high accuracy in the early prediction of epidemic curves using a classification approach. We also applied the method to forecasting influenza outbreaks in the United States from 1997–2013 using influenza-like illness (ILI) data from the Centers for Disease Control and Prevention (CDC). Results We made the following observations. First, the DP model performed as well as RF in identifying several of the simulated epidemics. Second, the DP model correctly forecasted the peak time several days in advance for most of the simulated epidemics. Third, the accuracy of identifying epidemics different from those already observed improved with additional data, as expected. Fourth, both methods correctly classified epidemics with higher reproduction numbers (R) with a higher accuracy compared to epidemics with lower R values. Lastly, in the classification of seasonal influenza epidemics based on ILI data from the CDC, the methods’ performance was comparable. Conclusions Although RF requires less computational time compared to the DP model, the algorithm is fully supervised implying that epidemic curves different from those previously observed will always be misclassified. In contrast, the DP model can be unsupervised, semi-supervised or fully supervised. Since both methods have their relative merits, an approach that uses both RF and the DP model could be beneficial. PMID:24405642
Empirical constraints on closure temperatures from a single diffusion coefficient
NASA Astrophysics Data System (ADS)
Lee, J. K. W.
The elucidation of thermal histories by geochronological and isotopic means is based fundamentally on solid-state diffusion and the concept of closure temperatures. Because diffusion is thermally activated, an analytical solution for the true closure temperature (Tc*) can only be obtained if the diffusion coefficient D of the diffusion process is measured at two or more different temperatures. If the diffusion coefficient is known at only one temperature, however, Tc* cannot be calculated analytically because there exists an infinite number of possible (apparent) closure temperatures (Tc) which can be generated by this single datum. By introducing further empirical constraints to limit the range of possible closure temperatures, however, mathematical analysis of a modified form of the closure temperature equation shows that it is possible to make both qualitative and quantitative estimates of Tc* given knowledge of only one diffusion coefficient DM measured at one temperature TM. Qualitative constraints on the true closure temperature Tc* are obtained from the shapes of curves on a graph of apparent closure temperature (Tc) vs. activation energy E, in which each curve is based on a single diffusion coefficient measurement DM at temperature TM. Using a realistic range of E, the concavity of the curve shows whether TM is less than, approximately equal to, or greater than Tc*. Quantitative estimates are obtained by considering two dimensionless parameters [
Classifying low flow hydrological regimes at a regional scale
NASA Astrophysics Data System (ADS)
Kirkby, M. J.; Gallart, F.; Kjeldsen, T. R.; Irvine, B. J.; Froebrich, J.; Lo Porto, A.; de Girolamo, A.; Mirage Team
2011-12-01
The paper uses a simple water balance model that partitions the precipitation between actual evapotranspiration, quick flow and delayed flow, and has sufficient complexity to capture the essence of climate and vegetation controls on this partitioning. Using this model, monthly flow duration curves have been constructed from climate data across Europe to address the relative frequency of ecologically critical low flow stages in semi-arid rivers, when flow commonly persists only in disconnected pools in the river bed. The hydrological model is based on a dynamic partitioning of precipitation to estimate water available for evapotranspiration and plant growth and for residual runoff. The duration curve for monthly flows has then been analysed to give an estimate of bankfull flow based on recurrence interval. Arguing from observed ratios of cross-sectional areas at flood and low flows, hydraulic geometry suggests that disconnected flow under "pool" conditions is approximately 0.1% of bankfull flow. Flow duration curves define a measure of bankfull discharge on the basis of frequency. The corresponding frequency for pools is then read from the duration curve, using this (0.1%) ratio to estimate pool discharge from bank full discharge. The flow duration curve then provides an estimate of the frequency of poorly connected pool conditions, corresponding to this discharge, that constrain survival of river-dwelling arthropods and fish. The methodology has here been applied across Europe at 15 km resolution, and the potential is demonstrated for applying the methodology under alternative climatic scenarios.
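The chain from duration curve to pool frequency can be sketched numerically. The flows and the 2%-exceedance definition of bankfull below are assumptions for illustration; only the 0.1%-of-bankfull pool ratio comes from the text above.

```python
# Minimal sketch on synthetic flows: read the frequency of "pool" conditions
# off a monthly flow duration curve, taking pool discharge as 0.1% of a
# frequency-defined bankfull discharge.
import numpy as np

rng = np.random.default_rng(5)
q = rng.lognormal(mean=1.0, sigma=1.4, size=1200)        # synthetic monthly flows

q_desc = np.sort(q)[::-1]                                 # descending flows
exceed = (np.arange(q_desc.size) + 1.0) / (q_desc.size + 1.0)  # exceedance prob.

q_bankfull = np.interp(0.02, exceed, q_desc)     # flow exceeded in 2% of months
q_pool = 1.0e-3 * q_bankfull                     # pools carry ~0.1% of bankfull
freq_pool = float(np.mean(q <= q_pool))          # fraction of months at/below pools
print(f"bankfull = {q_bankfull:.1f}, pool flow = {q_pool:.4f}, "
      f"pool frequency = {freq_pool:.3f}")
```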
2008-10-01
the standard model characterization procedure is based on creep and recovery tests, where loading and unloading occur at a fast rate of 1.0 MPa/s... σ − g[ε], and on dg̊[ε]/dε = E, where g̊ is defined as the equilibrium stress g[ε] for extremely fast loading. For this case, the stress-strain curves... [Figure 2.10: Stress-strain curve schematic, showing stress vs. strain at slow, medium, and fast strain rates, with plastic flow fully established.]
Nanomechanical properties of phospholipid microbubbles.
Buchner Santos, Evelyn; Morris, Julia K; Glynos, Emmanouil; Sboros, Vassilis; Koutsos, Vasileios
2012-04-03
This study uses atomic force microscopy (AFM) force-deformation (F-Δ) curves to investigate for the first time the Young's modulus of a phospholipid microbubble (MB) ultrasound contrast agent. The stiffness of the MBs was calculated from the gradient of the F-Δ curves, and the Young's modulus of the MB shell was calculated by employing two different mechanical models based on the Reissner and elastic membrane theories. We found that the relatively soft phospholipid-based MBs behave inherently differently to stiffer, polymer-based MBs [Glynos, E.; Koutsos, V.; McDicken, W. N.; Moran, C. M.; Pye, S. D.; Ross, J. A.; Sboros, V. Langmuir 2009, 25(13), 7514-7522] and that elastic membrane theory is the most appropriate of the models tested for evaluating the Young's modulus of the phospholipid shell, agreeing with values available for living cell membranes, supported lipid bilayers, and synthetic phospholipid vesicles. Furthermore, we show that AFM F-Δ curves in combination with a suitable mechanical model can assess the shell properties of phospholipid MBs. The "effective" Young's modulus of the whole bubble was also calculated by analysis using Hertz theory. This analysis yielded values which are in agreement with results from studies which used Hertz theory to analyze similar systems such as cells.
Feasibility of Rapid Multitracer PET Tumor Imaging
NASA Astrophysics Data System (ADS)
Kadrmas, D. J.; Rust, T. C.
2005-10-01
Positron emission tomography (PET) can characterize different aspects of tumor physiology using various tracers. PET scans are usually performed using only one tracer since there is no explicit signal for distinguishing multiple tracers. We tested the feasibility of rapidly imaging multiple PET tracers using dynamic imaging techniques, where the signals from each tracer are separated based upon differences in tracer half-life, kinetics, and distribution. Time-activity curve populations for FDG, acetate, ATSM, and PTSM were simulated using appropriate compartment models, and noisy dual-tracer curves were computed by shifting and adding the single-tracer curves. Single-tracer components were then estimated from dual-tracer data using two methods: principal component analysis (PCA)-based fits of single-tracer components to multitracer data, and parallel multitracer compartment models estimating single-tracer rate parameters from multitracer time-activity curves. The PCA analysis found that there is information content present for separating multitracer data, and that tracer separability depends upon tracer kinetics, injection order and timing. Multitracer compartment modeling recovered rate parameters for individual tracers with good accuracy but somewhat higher statistical uncertainty than single-tracer results when the injection delay was >10 min. These approaches to processing rapid multitracer PET data may potentially provide a new tool for characterizing multiple aspects of tumor physiology in vivo.
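The shift-and-add construction of the dual-tracer curves is straightforward to reproduce. The sketch below uses toy single-exponential kinetics rather than the full compartment models; the rate constants, half-lives, and the 10-minute injection delay are assumptions.

```python
# Minimal sketch, assuming toy kinetics: shift-and-add construction of a
# dual-tracer time-activity curve from two single-tracer curves.
import numpy as np

t = np.arange(0.0, 60.0, 0.5)                    # minutes

def tac(t, k_in, k_out, half_life):
    """Toy time-activity curve: saturating uptake modulated by physical decay."""
    tt = np.maximum(t, 0.0)
    return k_in * (1.0 - np.exp(-k_out * tt)) * np.exp(-np.log(2.0) * tt / half_life)

delay = 10.0                                     # injection delay of tracer 2 (min)
tracer1 = tac(t, 1.0, 0.15, 109.8)               # e.g. an 18F-labeled tracer
tracer2 = np.where(t >= delay, tac(t - delay, 0.7, 0.4, 20.4), 0.0)  # e.g. 11C
dual = tracer1 + tracer2                         # the measured multitracer signal
print(dual[:5])
```

The separation problem described in the abstract is the inverse of this construction: recovering `tracer1` and `tracer2` from `dual` using differences in half-life, kinetics, and injection timing.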
On high-pressure melting of tantalum
NASA Astrophysics Data System (ADS)
Luo, Sheng-Nian; Swift, Damian C.
2007-01-01
The issues related to high-pressure melting of Ta are discussed within the context of diamond-anvil cell (DAC) and shock wave experiments, theoretical calculations and common melting models. The discrepancies between the extrapolations of the DAC melting curve and the melting point inferred from shock wave experiments cannot be reconciled either by superheating or by a solid-solid phase transition. The failure to reproduce the low-pressure DAC melting curve by melting models such as dislocation-mediated melting and the Lindemann law, and molecular dynamics and quantum mechanics-based calculations, undermines their predictions at moderate and high pressures. Despite claims to the contrary, the melting curve of Ta (as well as Mo and W) remains inconclusive at high pressures.
Saturation of the junction voltage in GaN-based laser diodes
NASA Astrophysics Data System (ADS)
Feng, M. X.; Liu, J. P.; Zhang, S. M.; Liu, Z. S.; Jiang, D. S.; Li, Z. C.; Wang, F.; Li, D. Y.; Zhang, L. Q.; Wang, H.; Yang, H.
2013-05-01
Saturation of the junction voltage in GaN-based laser diodes (LDs) is studied. It is found that there is a bump above the lasing transition in the I(dV/dI)-I curve, instead of a dip as observed for GaAs-based LDs. The bump in the I(dV/dI)-I curve moves to higher currents along with the lasing threshold. A model considering ambipolar conduction and electron overflow into the p-AlGaN cladding layer due to poor carrier confinement in the active region is used to explain the anomaly. The characteristic temperature of the GaN-based LD is obtained by fitting threshold currents determined from I(dV/dI)-I curves. Moreover, it is found that GaN-based LDs show characteristics with a nonlinear series resistance, which may be due to the electron overflow into the p-AlGaN cladding layer and the enhanced activation of Mg acceptors.
NASA Astrophysics Data System (ADS)
Kock, B. E.
2008-12-01
The increased availability and understanding of agent-based modeling technology and techniques provides a unique opportunity for water resources modelers, allowing them to go beyond traditional behavioral approaches from neoclassical economics, and add rich cognition to social-hydrological models. Agent-based models provide for an individual focus, and the easier and more realistic incorporation of learning, memory and other mechanisms for increased cognitive sophistication. We are in an age of global change impacting complex water resources systems, and social responses are increasingly recognized as fundamentally adaptive and emergent. In consideration of this, water resources models and modelers need to better address social dynamics in a manner beyond the capabilities of neoclassical economics theory and practice. However, going beyond the unitary curve requires unique levels of engagement with stakeholders, both to elicit the richer knowledge necessary for structuring and parameterizing agent-based models, but also to make sure such models are appropriately used. With the aim of encouraging epistemological and methodological convergence in the agent-based modeling of water resources, we have developed a water resources-specific cognitive model and an associated collaborative modeling process. Our cognitive model emphasizes efficiency in architecture and operation, and capacity to adapt to different application contexts. We describe a current application of this cognitive model and modeling process in the Arkansas Basin of Colorado. In particular, we highlight the potential benefits of, and challenges to, using more sophisticated cognitive models in agent-based water resources models.
In-plane nuclear field formation investigated in single self-assembled quantum dots
NASA Astrophysics Data System (ADS)
Yamamoto, S.; Matsusaki, R.; Kaji, R.; Adachi, S.
2018-02-01
We studied the formation mechanism of the in-plane nuclear field in single self-assembled In0.75Al0.25As /Al0.3Ga0.7As quantum dots. The Hanle curves with an anomalously large width and hysteretic behavior at the critical transverse magnetic field were observed in many single quantum dots grown in the same sample. In order to explain the anomalies in the Hanle curve indicating the formation of a large nuclear field perpendicular to the photo-injected electron spin polarization, we propose a new model based on the current phenomenological model for dynamic nuclear spin polarization. The model includes the effects of the nuclear quadrupole interaction and the sign inversion between in-plane and out-of-plane components of nuclear g factors, and the model calculations reproduce successfully the characteristics of the observed anomalies in the Hanle curves.
Karimi, Mohammad Taghi; Ebrahimi, Mohammad Hossein; Mohammadi, Ali; McGarry, Anthony
2017-03-01
Scoliosis is a lateral curvature of the normally straight vertical line of the spine, and the curvature can be moderate to severe. Different treatments can be used based on the severity and the age of the subject, but the most common treatment for this disease is an orthosis. In orthosis design, the force arrangement can be varied, from transverse loads to vertical loads or a combination of them. However, it is not well established how orthoses control the scoliotic curve and how maximum correction is achieved for given force configurations and magnitudes. Therefore, this study aimed to determine the effect of various load configurations and magnitudes on curve correction in a subject with degenerative scoliosis. A scoliotic subject participated in this study. A CT scan of the subject was used to produce a 3D model of the spine. The 3D model of the spine was produced with Mimics software, and the finite element analysis and the deformation of the scoliotic curve of the spine under seven different forces in three different conditions were determined with ABAQUS software. The Cobb angle of the scoliotic curve decreased significantly when the forces were applied. In each condition, depending on the forces, different corrections were achieved. It can be concluded that the force configurations mentioned in this study are effective in decreasing the scoliotic curve. Although this is a case study, the method can be applied to a large number of subjects to predict the correction of the scoliotic curve before orthotic treatment. Moreover, the method and its outputs can be compared with clinical findings.
Anstey, Chris M
2005-06-01
Currently, three strong ion models exist for the determination of plasma pH. Mathematically, they vary in their treatment of weak acids, and this study was designed to determine whether any significant differences exist in the simulated performance of these models. The models were subjected to a "metabolic" stress either in the form of variable strong ion difference and fixed weak acid effect, or vice versa, and compared over the range 25 ≤ PCO2 ≤ 135 Torr. The predictive equations for each model were iteratively solved for pH at each PCO2 step, and the results were plotted as a series of log(PCO2)-pH titration curves. The results were analyzed for linearity by using ordinary least squares regression and for collinearity by using correlation. In every case, the results revealed a linear relationship between log(PCO2) and pH over the range 6.8 ≤ pH ≤ 7.8, and no significant difference between the curve predictions under metabolic stress. The curves were statistically collinear. Ultimately, their clinical utility will be determined both by acceptance of the strong ion framework for describing acid-base physiology and by the ease of measurement of the independent model parameters.
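A minimal sketch of how such a titration curve is generated follows: a simplified Stewart-style charge balance is solved for [H+] at each PCO2 step. The constants and the SID/Atot values below are representative textbook-style assumptions, not the parameters of the three models compared in the study.

```python
# Minimal sketch, assuming representative constants: a simplified Stewart-style
# charge balance solved for [H+] at each PCO2, tracing a log(PCO2)-pH curve.
import numpy as np
from scipy.optimize import brentq

Kw   = 4.4e-14      # water ion product at 37 C, (Eq/L)^2
K1   = 7.94e-7      # apparent carbonic acid K1' (pK1' = 6.1), mol/L
S    = 0.0307e-3    # CO2 solubility, mol/L per Torr
K2   = 6.0e-11      # assumed second dissociation for the small carbonate term
Ka   = 3.0e-7       # nonvolatile weak acid dissociation, mol/L
Atot = 17.2e-3      # total nonvolatile weak acid, mol/L
SID  = 42.0e-3      # strong ion difference, Eq/L

def charge_balance(H, pco2):
    hco3 = K1 * S * pco2 / H
    co3 = K2 * hco3 / H
    a_minus = Ka * Atot / (Ka + H)
    return SID + H - Kw / H - hco3 - 2.0 * co3 - a_minus

for pco2 in (25.0, 40.0, 60.0, 90.0, 135.0):
    H = brentq(charge_balance, 1e-9, 1e-6, args=(pco2,))
    print(f"PCO2 = {pco2:5.1f} Torr  ->  pH = {-np.log10(H):.3f}")
```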
Linking the Climate and Thermal Phase Curve of 55 Cancri e
NASA Astrophysics Data System (ADS)
Hammond, Mark; Pierrehumbert, Raymond T.
2017-11-01
The thermal phase curve of 55 Cancri e is the first measurement of the temperature distribution of a tidally locked super-Earth, but raises a number of puzzling questions about the planet’s climate. The phase curve has a high amplitude and peak offset, suggesting that it has a significant eastward hot-spot shift as well as a large day-night temperature contrast. We use a general circulation model to simulate potential climates, and investigate the relation between bulk atmospheric composition and the magnitude of these seemingly contradictory features. We confirm that theoretical models of tidally locked circulation are consistent with our numerical model of 55 Cnc e, and rule out certain atmospheric compositions based on their thermodynamic properties. Our best-fitting atmosphere has a significant hot-spot shift and day-night contrast, although these are not as large as the observed phase curve. We discuss possible physical processes that could explain the observations, and show that night-side cloud formation from species such as SiO from a day-side magma ocean could potentially increase the phase curve amplitude and explain the observations. We conclude that the observations could be explained by an optically thick atmosphere with a low mean molecular weight, a surface pressure of several bars, and a strong eastward circulation, with night-side cloud formation a possible explanation for the difference between our model and the observations.
A cost-performance model for ground-based optical communications receiving telescopes
NASA Technical Reports Server (NTRS)
Lesh, J. R.; Robinson, D. L.
1986-01-01
An analytical cost-performance model for a ground-based optical communications receiving telescope is presented. The model considers costs of existing telescopes as a function of diameter and field of view. This, coupled with communication performance as a function of receiver diameter and field of view, yields the appropriate telescope cost versus communication performance curve.
N.N. Gómez; R.C. Venette; J.R. Gould; D.F. Winograd
2009-01-01
Predictions of survivorship are critical to quantify the probability of establishment by an alien invasive species, but survival curves rarely distinguish between the effects of temperature on development versus senescence. We report chronological and physiological age-based survival curves for a potentially invasive noctuid, recently described as Copitarsia...
Techniques for estimating magnitude and frequency of floods on streams in Indiana
Glatfelter, D.R.
1984-01-01
A rainfall-runoff model was used to synthesize long-term peak data at 11 gaged locations on small streams. Flood-frequency curves developed from the long-term synthetic data were combined with curves based on short-term observed data to provide weighted estimates of flood magnitude and frequency at the rainfall-runoff stations.
The application of depletion curves for parameterization of subgrid variability of snow
C. H. Luce; D. G. Tarboton
2004-01-01
Parameterization of subgrid-scale variability in snow accumulation and melt is important for improvements in distributed snowmelt modelling. We have taken the approach of using depletion curves that relate fractional snowcovered area to element-average snow water equivalent to parameterize the effect of snowpack heterogeneity within a physically based mass and energy...
NASA Technical Reports Server (NTRS)
Koschny, D.; Gritsevich, M.; Barentsen, G.
2011-01-01
Different authors have produced models for the physical properties of meteoroids based on the shape of a meteor's light curve, typically from short observing campaigns. We here analyze the height profiles and light curves of approximately 200 double-station meteors from the Leonids and Perseids using data from the Virtual Meteor Observatory, to demonstrate that with this web-based meteor database it is possible to analyze very large datasets from different authors in a consistent way. We compute the average begin-point, maximum-luminosity, and end heights for Perseids and Leonids. We also compute the skew of the light curve, usually called the F-parameter. The results compare well with other authors' data. We display the average light curve in a novel way to assess the light curve shape in addition to using the F-parameter. While the Perseids show a peaked light curve, the average Leonid light curve has a flatter peak. This indicates that the particle distribution of Leonid meteors can be described by a Gaussian distribution; the Perseids can be described with a power law. The skew for Leonids is smaller than for Perseids, indicating that the Leonids are more fragile than the Perseids.
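For reference, the F-parameter is the light-curve skew expressed through heights, F = (H_beg − H_max)/(H_beg − H_end), as commonly defined in the meteor literature, so F = 0.5 corresponds to a symmetric curve. A minimal sketch with hypothetical heights:

```python
# Minimal sketch: light-curve skew (F-parameter) from begin, peak, and end
# heights; F = 0.5 means the peak sits midway between begin and end heights.
def f_parameter(h_begin: float, h_max: float, h_end: float) -> float:
    return (h_begin - h_max) / (h_begin - h_end)

# Hypothetical meteor heights in km
print(f_parameter(h_begin=112.0, h_max=103.0, h_end=92.0))  # -> 0.45
```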
Academic Medicine's Critical Role in the "Third Curve" of Health Care.
Paz, Harold L
2016-05-01
Over the last several years, the health care landscape has changed at an unprecedented rate due to new economic and regulatory forces ushered in by the Affordable Care Act and the introduction of innovative technologies, such as personalized medicine, that are poised to open the door to consumer-driven health care. Tremendous pressure exists on academic health centers to rapidly evolve clinically while not abandoning their unique academic mission. The convergence of personalized medicine, new digital technologies, and changes in health professionals' scope of practice alongside new payment structures will accelerate the move to a patient-centered health system. In this Commentary, the author argues that these new tools and resources must be embraced to improve the health of patients. With the traditional, fee-for-service model of care as "Curve I" and the post-Flexner era of population-based medicine as "Curve II," the author identifies the emergence of "Curve III," which is characterized by patient-centered, consumer-directed models of care. As the old models of health care undergo transition and the impact of technology and analytics grow, future practitioners must be trained to embrace this change and function effectively in the "third curve" of consumer-driven health care.
Caccamo, M; Ferguson, J D; Veerkamp, R F; Schadt, I; Petriglieri, R; Azzaro, G; Pozzebon, A; Licitra, G
2014-01-01
As part of a larger project aiming to develop management evaluation tools based on results from test-day (TD) models, the objective of this study was to examine the effect of physical composition of total mixed rations (TMR) tested quarterly from March 2006 through December 2008 on milk, fat, and protein yield curves for 25 herds in Ragusa, Sicily. A random regression sire-maternal grandsire model was used to estimate variance components for milk, fat, and protein yields fitted on a full data set, including 241,153 TD records from 9,809 animals in 42 herds recorded from 1995 through 2008. The model included parity, age at calving, year at calving, and stage of pregnancy as fixed effects. Random effects were herd × test date, sire and maternal grandsire additive genetic effect, and permanent environmental effect modeled using third-order Legendre polynomials. Model fitting was carried out using ASREML. Afterward, for the 25 herds involved in the study, 9 particle size classes were defined based on the proportions of TMR particles on the top (19-mm) and middle (8-mm) screen of the Penn State Particle Separator. Subsequently, the model with estimated variance components was used to examine the influence of TMR particle size class on milk, fat, and protein yield curves. An interaction was included with the particle size class and days in milk. The effect of the TMR particle size class was modeled using a ninth-order Legendre polynomial. Lactation curves were predicted from the model while controlling for TMR chemical composition (crude protein content of 15.5%, neutral detergent fiber of 40.7%, and starch of 19.7% for all classes), to have pure estimates of particle distribution not confounded by nutrient content of TMR. We found little effect of class of particle proportions on milk yield and fat yield curves. Protein yield was greater for sieve classes with 10.4 to 17.4% of TMR particles retained on the top (19-mm) sieve. Optimal distributions different from those recommended may reflect regional differences based on climate and types and quality of forages fed.
Sabry, A H; W Hasan, W Z; Ab Kadir, M Z A; Radzi, M A M; Shafie, S
2018-01-01
The power system always has several variations in its profile due to random load changes or environmental effects, such as device switching, that generate further transients. Thus, an accurate mathematical model is important because most system parameters vary with time. Curve modeling of power generation is a significant tool for evaluating system performance, monitoring and forecasting. Several numerical techniques compete to fit the curves of empirical data such as wind, solar, and demand power rates. This paper proposes a new modified methodology, presented as a parametric technique, to determine the system's modeling equations based on the Bode plot equations and the vector fitting (VF) algorithm by fitting the experimental data points. The modification is derived from the familiar VF algorithm as a robust numerical method. This development increases the application range of the VF algorithm for modeling, not only in the frequency domain but for all power curves. Four case studies are addressed and compared with several common methods. From the minimal RMSE, the results show clear improvements in data fitting over other methods. The most powerful features of this method are its ability to model irregular or randomly shaped data and its applicability to any algorithm that estimates models from frequency-domain data to provide a state-space or transfer-function representation.
NASA Astrophysics Data System (ADS)
Mandel, Kaisey; Kirshner, R. P.; Narayan, G.; Wood-Vasey, W. M.; Friedman, A. S.; Hicken, M.
2010-01-01
I have constructed a comprehensive statistical model for Type Ia supernova light curves spanning optical through near infrared data simultaneously. The near infrared light curves are found to be excellent standard candles (sigma(MH) = 0.11 +/- 0.03 mag) that are less vulnerable to systematic error from dust extinction, a major confounding factor for cosmological studies. A hierarchical statistical framework coherently incorporates multiple sources of randomness and uncertainty, including photometric error, intrinsic supernova light curve variations and correlations, dust extinction and reddening, and peculiar velocity dispersion and distances, for probabilistic inference with Type Ia SN light curves. Inferences are drawn from the full probability density over individual supernovae and the SN Ia and dust populations, conditioned on a dataset of SN Ia light curves and redshifts. To compute probabilistic inferences with hierarchical models, I have developed BayeSN, a Markov chain Monte Carlo algorithm based on Gibbs sampling. This code explores and samples the global probability density of parameters describing individual supernovae and the population. I have applied this hierarchical model to optical and near infrared data of over 100 nearby Type Ia SN from PAIRITEL, the CfA3 sample, and the literature. Using this statistical model, I find that SN with optical and NIR data have a smaller residual scatter in the Hubble diagram than SN with only optical data. The continued study of Type Ia SN in the near infrared will be important for improving their utility as precise and accurate cosmological distance indicators.
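As a drastically simplified sketch of the Gibbs sampling idea behind such a hierarchical model (not the actual BayeSN code), the fragment below alternates conditional draws for a normal hierarchy with known measurement errors; the priors, parameter names, and data values are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs(y, s2, n_iter=5000, a0=1.0, b0=0.01):
    """Gibbs sampler for y_i ~ N(M + d_i, s2_i), d_i ~ N(0, tau2),
    with a flat prior on M and an inverse-gamma(a0, b0) prior on tau2."""
    n = len(y)
    M, tau2, d = y.mean(), y.var() + 1e-4, np.zeros(n)
    chain = np.empty((n_iter, 2))
    for it in range(n_iter):
        # d_i | rest: precision-weighted normal draw
        v = 1.0 / (1.0 / tau2 + 1.0 / s2)
        d = rng.normal(v * (y - M) / s2, np.sqrt(v))
        # M | rest: flat prior, Gaussian likelihood
        w = 1.0 / s2
        M = rng.normal(np.sum(w * (y - d)) / w.sum(), np.sqrt(1.0 / w.sum()))
        # tau2 | rest: conjugate inverse-gamma draw
        tau2 = 1.0 / rng.gamma(a0 + 0.5 * n, 1.0 / (b0 + 0.5 * np.dot(d, d)))
        chain[it] = M, tau2
    return chain

# Synthetic 'absolute magnitudes': 0.15 mag intrinsic scatter plus
# 0.05 mag measurement error (values invented for illustration).
y = rng.normal(-19.3, 0.15, size=50) + rng.normal(0.0, 0.05, size=50)
chain = gibbs(y, s2=np.full(50, 0.05**2))
print(chain[1000:].mean(axis=0))   # posterior means of M and tau2
```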
How are flood risk estimates affected by the choice of return-periods?
NASA Astrophysics Data System (ADS)
Ward, P. J.; de Moel, H.; Aerts, J. C. J. H.
2011-12-01
Flood management is increasingly adopting a risk-based approach, whereby flood risk is the product of the probability and consequences of flooding. One of the most common approaches in flood risk assessment is to estimate the damage that would occur for floods of several exceedance probabilities (or return periods), to plot these on an exceedance probability-loss curve (risk curve) and to estimate risk as the area under the curve. However, there is little insight into how the selection of the return periods (which ones and how many) used to calculate risk actually affects the final risk calculation. To gain such insights, we developed and validated an inundation model capable of rapidly simulating inundation extent and depth, and dynamically coupled this to an existing damage model. The method was applied to a section of the River Meuse in the southeast of the Netherlands. First, we estimated risk based on a risk curve using yearly return periods from 2 to 10,000 yr (€34 million p.a.). We found that the overall risk is greatly affected by the number of return periods used to construct the risk curve, with over-estimations of annual risk between 33% and 100% when only three return periods are used. In addition, binary assumptions on dike failure can have a large effect (a factor-two difference) on risk estimates. Also, the minimum and maximum return periods considered in the curve affect the risk estimate considerably. The results suggest that more research is needed to develop relatively simple inundation models that can be used to produce large numbers of inundation maps, complementary to more complex 2-D/3-D hydrodynamic models. They also suggest that research into flood risk could benefit from paying more attention to the damage caused by relatively high-probability floods.
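The sensitivity described above is easy to reproduce. With risk defined as the area under the exceedance probability-loss curve, the sketch below integrates a damage set over dense and sparse sets of return periods; the damage figures are invented, not from the Meuse case study.

```python
import numpy as np

def annual_risk(return_periods, damages):
    """Expected annual damage as the area under the exceedance
    probability-loss (risk) curve, via trapezoidal integration."""
    p = 1.0 / np.asarray(return_periods, float)   # exceedance probabilities
    order = np.argsort(p)                         # integrate over increasing p
    return np.trapz(np.asarray(damages, float)[order], p[order])

# Hypothetical damages (million EUR) for a set of return periods.
T = np.array([2, 5, 10, 50, 100, 250, 1000, 10000])
D = np.array([0, 5, 40, 300, 600, 1200, 2500, 4000])

print(annual_risk(T, D))                          # many return periods
print(annual_risk(T[[1, 4, 7]], D[[1, 4, 7]]))    # only three: biased estimate
```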
NASA Astrophysics Data System (ADS)
Novac, D.; Pantelimon, D.; Popescu, E.
2010-08-01
The Index Tests have been used for many years to obtain the optimized cam correlation between wicket gates and runner blades for double-regulated turbines (Kaplan, bulb). The cam is based on homologous model tests and is verified by site measurements, as model tests generally do not reproduce the exact intake configuration. Index tests are also of considerable importance for checking the relative efficiency curve of all types of turbines and can demonstrate whether the prototype efficiency curve at plant conditions has the shape expected from the test of the homologous model. During the index test measurements the influence of all losses at multiple points of turbine operation can be assessed. This publication gives an overview of the index tests made after modernization of large bulb units at Iron Gates II, Romania. These field tests, together with the comparative, fully homologous tests for the new hydraulic shape of the runner blades, have confirmed the smooth operational behavior and the guaranteed performance. Over the whole guaranteed operating range for H = 8 m, the characteristic of the Kaplan curve (the envelope of the propeller curves) agreed very well with the predicted efficiency curve from the hydraulic prototype hill chart. The new cam correlations have been determined for different heads and implemented in the governor, normally based on model tests. The guaranteed maximum turbine output for H = 7.8 m is specified as 32.5 MW. The maximum measured turbine output during the index tests on cam operation was 35.704 MW at a net head of 7.836 m, corresponding to 35.458 MW for the specified head H = 7.8 m. All these important improvements ensure a significant increase of annual energy production without any change to the civil construction and without increasing the runner diameter. The possibility of increasing the turbine rated output is also evident.
Mo, Shaobo; Dai, Weixing; Xiang, Wenqiang; Li, Qingguo; Wang, Renjie; Cai, Guoxiang
2018-05-03
The objective of this study was to summarize the clinicopathological and molecular features of synchronous colorectal peritoneal metastases (CPM). We then combined the clinical and pathological variables associated with synchronous CPM into a nomogram and confirmed its utility using decision curve analysis. Synchronous metastatic colorectal cancer (mCRC) patients who received primary tumor resection and underwent KRAS, NRAS, and BRAF gene mutation detection at our center from January 2014 to September 2015 were included in this retrospective study. An analysis was performed to investigate the clinicopathological and molecular features for independent risk factors of synchronous CPM and to subsequently develop a nomogram for synchronous CPM based on multivariate logistic regression. Model performance was quantified in terms of calibration and discrimination, and we studied the utility of the nomogram using decision curve analysis. In total, 226 patients were diagnosed with synchronous mCRC, of whom 50 (22.1%) presented with CPM. After uni- and multivariate analysis, a nomogram was built based on tumor site, histological type, age, and T4 status. The model had good discrimination, with an area under the curve (AUC) of 0.777 (95% CI 0.703-0.850), and adequate calibration. By decision curve analysis, the model was shown to be relevant between thresholds of 0.10 and 0.66. Synchronous CPM is more likely in patients aged ≤60 years and in those with right-sided primary lesions, signet ring cell cancer or T4 stage. This is the first nomogram to predict synchronous CPM; to ensure generalizability, the model needs to be externally validated. Copyright © 2018 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.
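A minimal sketch of the modeling step, assuming simulated binary predictors in place of the real cohort: a logistic model on the four nomogram variables, scored by the area under the ROC curve. The coefficients and prevalence below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical binary predictors mirroring the nomogram's inputs:
# age <= 60, right-sided tumor, signet ring cell histology, T4 stage.
n = 226
X = rng.integers(0, 2, size=(n, 4)).astype(float)
logit = -2.5 + X @ np.array([0.8, 0.9, 1.2, 1.0])   # invented effect sizes
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
prob = model.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(y, prob))   # discrimination, cf. the reported 0.777
```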
ONODA, Tomoaki; YAMAMOTO, Ryuta; SAWAMURA, Kyohei; MURASE, Harutaka; NAMBO, Yasuo; INOUE, Yoshinobu; MATSUI, Akira; MIYAKE, Takeshi; HIRAI, Nobuhiro
2014-01-01
We propose an approach for estimating individual growth curves based on the birthday information of Japanese Thoroughbred horses, with consideration of the seasonal compensatory growth that is a typical characteristic of seasonally breeding animals. The compensatory growth patterns appear only during the winter and spring seasons in the life of growing horses, and the meeting point between winter and spring depends on the birthday of each horse. We previously developed new growth curve equations for Japanese Thoroughbreds adjusting for compensatory growth. Based on these equations, a parameter denoting the birthday information was added to model the individual growth curves for each horse by shifting the meeting points in the compensatory growth periods. A total of 5,594 and 5,680 body weight and age measurements of Thoroughbred colts and fillies, respectively, and 3,770 withers height and age measurements of both sexes were used in the analyses. The results of the predicted error difference and the Akaike Information Criterion showed that the individual growth curves using birthday information fitted the body weight and withers height data better than those without it. The individual growth curve for each horse would be a useful tool for the feeding management of young Japanese Thoroughbreds during compensatory growth periods. PMID:25013356
Laplacian scale-space behavior of planar curve corners.
Zhang, Xiaohong; Qu, Ying; Yang, Dan; Wang, Hongxing; Kymer, Jeff
2015-11-01
Scale-space behavior of corners is important for developing an efficient corner detection algorithm. In this paper, we analyze scale-space behavior on a planar curve with the Laplacian of Gaussian (LoG) operator, which constructs the Laplacian scale space (LSS). The analytical expression of the Laplacian scale-space map (LSS map) is obtained, demonstrating the Laplacian scale-space behavior of planar curve corners based on a newly defined unified corner model. With this formula, several properties of Laplacian scale-space behavior are summarized. Although the LSS demonstrates some similarities to the curvature scale space (CSS), there are still differences. First, no new extreme points are generated in the LSS. Second, the behavior of the different cases of the corner model is consistent and simple, which makes it easy to trace a corner through scale space. Finally, the behavior of the LSS is verified in an experiment on a digital curve.
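A minimal sketch of an LSS map for a 1-D curve signal: `gaussian_filter1d` with `order=2` convolves with the second derivative of a Gaussian, i.e. a 1-D LoG response. The step-in-orientation corner and the scale normalization are illustrative assumptions, not the paper's exact corner model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def laplacian_scale_space(signal, sigmas):
    """Stack of scale-normalized LoG responses of a 1-D curve signal
    (e.g. tangent orientation along arc length); rows index sigma."""
    # order=2 convolves with the second derivative of a Gaussian.
    return np.vstack([gaussian_filter1d(signal, s, order=2) * s**2
                      for s in sigmas])

# A corner modeled as a step in tangent orientation along the curve.
theta = np.concatenate([np.zeros(200), np.full(200, np.pi / 3)])
lss = laplacian_scale_space(theta, sigmas=np.linspace(1, 40, 60))
# The extremum of each row stays near the corner as scale grows, and no
# new extrema appear, consistent with the behavior described above.
peaks = np.abs(lss).argmax(axis=1)
print(peaks[:5], peaks[-5:])
```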
Semi-empirical master curve concept describing the rate capability of lithium insertion electrodes
NASA Astrophysics Data System (ADS)
Heubner, C.; Seeba, J.; Liebmann, T.; Nickol, A.; Börner, S.; Fritsch, M.; Nikolowski, K.; Wolter, M.; Schneider, M.; Michaelis, A.
2018-03-01
A simple semi-empirical master curve concept describing the rate capability of porous insertion electrodes for lithium-ion batteries is proposed. The model is based on the evaluation of the time constants of lithium diffusion in the liquid electrolyte and the solid active material. This theoretical approach is successfully verified by comprehensive experimental investigations of the rate capability of a large number of porous insertion electrodes with various active materials and design parameters. It turns out that the rate capability of all investigated electrodes follows a simple master curve governed by the time constant of the rate-limiting process. We demonstrate that the master curve concept can be used to determine optimum design criteria meeting specific requirements in terms of maximum gravimetric capacity for a desired rate capability. The model further reveals practical limits of the electrode design, confirming the empirically well-known and inevitable trade-off between energy and power density.
Corrected confidence bands for functional data using principal components.
Goldsmith, J; Greven, S; Crainiceanu, C
2013-03-01
Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.
ODE/IM correspondence and the Argyres-Douglas theory
NASA Astrophysics Data System (ADS)
Ito, Katsushi; Shu, Hongfei
2017-08-01
We study the quantum spectral curve of the Argyres-Douglas theories in the Nekrasov-Shatashvili limit of the Omega-background. Using the ODE/IM correspondence we investigate the quantum integrable model corresponding to the quantum spectral curve. We show that the models for the A_{2N}-type theories are the non-unitary coset models (A_1)_1 × (A_1)_L/(A_1)_{L+1} at the fractional level L = 2/(2N+1) − 2, which appear in the study of the 4d/2d correspondence of N = 2 superconformal field theories. Based on the WKB analysis, we clarify the relation between the Y-functions and the quantum periods and study the exact Bohr-Sommerfeld quantization condition for the quantum periods. We also discuss the quantum spectral curves for the D- and E-type theories.
A UNIVERSAL DECLINE LAW OF CLASSICAL NOVAE. IV. V838 HER (1991): A VERY MASSIVE WHITE DWARF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kato, Mariko; Hachisu, Izumi; Cassatella, Angelo, E-mail: mariko@educ.cc.keio.ac.j, E-mail: hachisu@ea.c.u-tokyo.ac.j, E-mail: cassatella@fis.uniroma3.i
2009-10-20
We present a unified model of optical and ultraviolet (UV) light curves for one of the fastest classical novae, V838 Herculis (Nova Herculis 1991), and estimate its white dwarf (WD) mass. Based on an optically thick wind theory of nova outbursts, we model the optical light curves with free-free emission and the UV 1455 Å light curves with blackbody emission. Our models of a 1.35 ± 0.02 M_sun WD simultaneously reproduce the optical and UV 1455 Å observations. The mass lost by the wind is ΔM_wind ≈ 2 × 10^-6 M_sun. We provide new determinations of the reddening, E(B − V) = 0.53 ± 0.05, and of the distance, 2.7 ± 0.5 kpc.
Crispin, Alexander; Strahwald, Brigitte; Cheney, Catherine; Mansmann, Ulrich
2018-06-04
Quality control, benchmarking, and pay for performance (P4P) require valid indicators and statistical models allowing adjustment for differences in the risk profiles of the patient populations of the respective institutions. Using hospital remuneration data for measuring quality and modelling patient risks has been criticized by clinicians. Here we explore the potential of prediction models for 30- and 90-day mortality after colorectal cancer surgery based on routine data. Full census of a major statutory health insurer. Surgical departments throughout the Federal Republic of Germany. 4283 and 4124 insurants with major surgery for treatment of colorectal cancer during 2013 and 2014, respectively. Age, sex, primary and secondary diagnoses as well as tumor locations as recorded in the hospital remuneration data according to §301 SGB V. 30- and 90-day mortality. Elixhauser comorbidities, Charlson conditions, and Charlson scores were generated from the ICD-10 diagnoses. Multivariable prediction models were developed using a penalized logistic regression approach (logistic ridge regression) in a derivation set (patients treated in 2013). Calibration and discrimination of the models were assessed in an internal validation sample (patients treated in 2014) using calibration curves, Brier scores, receiver operating characteristic (ROC) curves and the areas under the ROC curves (AUC). The 30- and 90-day mortality rates in the derivation set were 5.7% and 8.4%, respectively; the corresponding values in the validation sample were 5.9% and, once more, 8.4%. Models based on Elixhauser comorbidities exhibited the highest discriminatory power, with AUC values of 0.804 (95% CI: 0.776-0.832) and 0.805 (95% CI: 0.782-0.828) for 30- and 90-day mortality. The Brier scores for these models were 0.050 (95% CI: 0.044-0.056) and 0.067 (95% CI: 0.060-0.074), similar to the models based on Charlson conditions. Regardless of the model, low predicted probabilities were well calibrated, while higher predicted values tended to be overestimates. The reasonable results regarding discrimination and calibration notwithstanding, models based on hospital remuneration data may not be helpful for P4P, since routine data do not offer information on a wide range of quality indicators more useful than mortality. As an alternative, models based on clinical registries may allow a wider, more valid perspective. © Georg Thieme Verlag KG Stuttgart · New York.
Updated Intensity - Duration - Frequency Curves Under Different Future Climate Scenarios
NASA Astrophysics Data System (ADS)
Ragno, E.; AghaKouchak, A.
2016-12-01
Current infrastructure design procedures rely on the use of Intensity - Duration - Frequency (IDF) curves retrieved under the assumption of temporal stationarity, meaning that occurrences of extreme events are expected to be time invariant. However, numerous studies have observed more severe extreme events over time; hence, the stationarity assumption for extreme-value analysis may not be appropriate in a warming climate. This issue raises concerns regarding the safety and resilience of existing and future infrastructure. Here we employ historical and projected (RCP 8.5) CMIP5 runs to investigate the IDF curves of 14 urban areas across the United States. We first statistically assess changes in precipitation extremes using an energy-based test for equal distributions. Then, through a Bayesian inference approach for stationary and non-stationary extreme value analysis, we provide updated IDF curves based on climate model projections. This presentation summarizes the projected changes in the statistics of extremes. We show that, based on CMIP5 simulations, extreme precipitation events in some urban areas can be 20% more severe in the future, even when projected annual mean precipitation is expected to remain similar to the ground-based climatology.
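As a simplified illustration of the updating step (a stationary maximum-likelihood GEV fit rather than the authors' Bayesian non-stationary analysis), the sketch below fits annual maxima and computes return levels; the 20% intensification factor is an invented stand-in for a climate projection.

```python
import numpy as np
from scipy.stats import genextreme

def return_levels(annual_maxima, return_periods):
    """Fit a stationary GEV to annual precipitation maxima and
    compute return levels for the requested return periods."""
    shape, loc, scale = genextreme.fit(annual_maxima)
    return genextreme.ppf(1.0 - 1.0 / np.asarray(return_periods, float),
                          shape, loc=loc, scale=scale)

rng = np.random.default_rng(2)
historical = genextreme.rvs(-0.1, loc=50, scale=15, size=60, random_state=rng)
projected = 1.2 * historical   # crude 20% intensification, cf. the text above
for label, x in [("historical", historical), ("projected", projected)]:
    print(label, return_levels(x, [2, 10, 50, 100]).round(1))
```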
Structure of S-shaped growth in innovation diffusion
NASA Astrophysics Data System (ADS)
Shimogawa, Shinsuke; Shinno, Miyuki; Saito, Hiroshi
2012-05-01
A basic question on innovation diffusion is why the growth curve of the adopter population in a large society is often S-shaped. From macroscopic, microscopic, and mesoscopic viewpoints, the growth of the adopter population is observed as the growth curve, individual adoptions, and differences among individual adoptions, respectively. The S shape can be explained if an empirical model of the growth curve can be deduced from models of microscopic and mesoscopic structures. However, even the structure of the growth curve has not been revealed yet, because long-term extrapolations by proposed models of S-shaped curves are unstable and it has been very difficult to predict the long-term growth and the final adopter population. This paper studies S-shaped growth from the viewpoint of social regularities. Simple methods for analyzing power laws enable us to extract the structure of the growth curve directly from the growth data of recent basic telecommunication services. This empirical model of the growth curve is singular at the inflection point and is a logarithmic function of time after this point, which explains the unstable extrapolations obtained using previously proposed models and the difficulty of predicting the final adopter population. Because the empirical S curve can be expressed in terms of two power laws of the regularity found in the social performances of individuals, we propose the hypothesis that the S shape represents the heterogeneity of the adopter population, with the heterogeneity parameter distributed under the regularity in social performances of individuals. This hypothesis is powerful enough to yield models of the microscopic and mesoscopic structures. In the microscopic model, each potential adopter adopts the innovation when the information accumulated by learning about the innovation exceeds a threshold. The accumulation rate of information is heterogeneous across the adopter population, whereas the threshold is a constant, which is the opposite of previously proposed models. In the mesoscopic model, flows of innovation information incoming to individuals are organized as dimorphic and partially clustered. These microscopic and mesoscopic models yield the empirical model of the S curve and explain the S shape as representing the regularities of information flows generated through social self-organization. To demonstrate the validity and importance of the hypothesis, the models of the three structural levels are applied to reveal the mechanism determining and differentiating diffusion speeds. The empirical model of S curves implies that the coefficient of variation of the flow rates determines the diffusion speed for later adopters. Based on this property, a model describing the inside of information flow clusters can be given, which provides a formula interconnecting the diffusion speed, cluster populations, and a network-topological parameter of the flow clusters. For two recent basic telecommunication services in Japan, the formula represents the variety of speeds in different areas and enables us to explain speed gaps between urban and rural areas and between the two services. Furthermore, the formula provides a method to estimate the final adopter population.
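The instability of long-term extrapolation noted above can be reproduced in a few lines: fitting a classic logistic S-curve to adoption data truncated near the inflection point yields estimates of the final adopter population K that swing widely. The data here are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Classic S-curve; K is the final adopter population."""
    return K / (1.0 + np.exp(-r * (t - t0)))

rng = np.random.default_rng(3)
t = np.arange(30.0)
truth = logistic(t, 1e6, 0.4, 15.0)
observed = truth * (1.0 + 0.03 * rng.standard_normal(t.size))  # 3% noise

for cutoff in (12, 18, 24, 30):   # truncate before/after the inflection
    p, _ = curve_fit(logistic, t[:cutoff], observed[:cutoff],
                     p0=[2e5, 0.3, 10.0], maxfev=50000)
    print(f"fit through t={cutoff:2d}: estimated K = {p[0]:9.3g}")
```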
NASA Astrophysics Data System (ADS)
Uysal, G.; Sensoy, A.; Yavuz, O.; Sorman, A. A.; Gezgin, T.
2012-04-01
Effective management of a controlled reservoir system that involves multiple and sometimes conflicting objectives is a complex problem, especially in real-time operations. Yuvacık Dam Reservoir, located in the Marmara region of Turkey and built to supply an annual demand of 142 hm3 of water for the city of Kocaeli, requires such a complex management strategy because of its relatively small (51 hm3) effective capacity. The drainage basin is fed by both rainfall and snowmelt, since the elevation ranges between 80 and 1548 m. Excess water must be stored behind the radial gates between February and May for sustainability, especially for the summer and autumn periods. Moreover, the physical conditions of the downstream channel constrain spillway releases to at most 100 m3/s, although the spillway is large enough to handle major floods. This situation makes short-term release decisions a challenging task. Long-term water supply curves, based on historical inflows and annual water demand, are in conflict with flood regulation (control) levels, based on flood attenuation and routing curves, for this reservoir. A guide curve, generated from both water supply and flood control of the downstream channel, generally corresponds to the upper elevation of the conservation pool in a reservoir simulation. However, current operation sometimes necessitates exceeding this target elevation. Since guide curves can be developed as functions of external variables, the water potential of the basin can serve as an indicator of current conditions for deciding on further strategies. Besides, releases with respect to the guide curve are managed and restricted by user-defined rules. Although the managers operate the reservoir under several variable conditions and predictions, a simulation model using a variable guide curve is still urgently needed to test alternatives quickly. To that end, using HEC-ResSim, several variable guide curves are defined to meet the requirements, taking inflow, elevation, precipitation and snow water equivalent into consideration, in order to propose alternative simulations as a decision support system. The releases are then subjected to the user-defined rules, and previous-year reservoir simulations are compared with observed reservoir levels and releases. Hypothetical flood scenarios are tested for different storm event timing and sizing. Numerical weather prediction data from Mesoscale Model 5 (MM5) can be used for temperature and precipitation forecasts that form the inputs of a hydrological model. The estimated flows can then be used for real-time short-term decisions in reservoir simulation based on the variable guide curve and user-defined rules.
Stepwise kinetic equilibrium models of quantitative polymerase chain reaction.
Cobbs, Gary
2012-08-16
Numerous models for interpreting quantitative PCR (qPCR) data are present in the recent literature. The most commonly used models assume that amplification in qPCR is exponential and fit an exponential model with a constant rate of increase to a selected part of the curve. Kinetic theory may be used to model the annealing phase and does not assume constant efficiency of amplification. Mechanistic models describing the annealing phase with kinetic theory offer the most potential for accurate interpretation of qPCR data; even so, they have not been thoroughly investigated and are rarely used for interpretation of qPCR data. New results for kinetic modeling of qPCR are presented. Two models are presented in which the efficiency of amplification is based on equilibrium solutions for the annealing phase of the qPCR process. Model 1 assumes that annealing of complementary target strands and annealing of target and primers are both reversible reactions that reach a dynamic equilibrium. Model 2 assumes all annealing reactions are nonreversible and equilibrium is static. Both models include the effect of primer concentration during the annealing phase. Analytic formulae are given for the equilibrium values of all single- and double-stranded molecules at the end of the annealing step. The equilibrium values are then used in a stepwise method to describe the whole qPCR process. Rate constants of the kinetic models are the same for solutions that are identical except for possibly having different initial target concentrations. qPCR curves from such solutions are thus analyzed by simultaneous non-linear curve fitting, with the same rate-constant values applying to all curves and each curve having a unique value for the initial target concentration. The models were fit to two data sets for which the true initial target concentrations are known. Both models give better fits to observed qPCR data than other kinetic models in the literature, and they give better estimates of initial target concentration. Model 1 was found to be slightly more robust than model 2, giving better estimates of initial target concentration when parameters were estimated from qPCR curves with very different initial target concentrations. Both models may be used to estimate the initial absolute concentration of target sequence when a standard curve is not available. It is argued that the kinetic approach to modeling and interpreting quantitative PCR data has the potential to give more precise estimates of the true initial target concentrations than other methods currently used for the analysis of qPCR data. The two models presented here give a unified model of the qPCR process in that they explain the shape of the qPCR curve for a wide variety of initial target concentrations.
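The sketch below is a toy stepwise model in the spirit of the approach described, not the paper's equilibrium solution: per-cycle efficiency falls as primers deplete, producing the familiar exponential-then-plateau qPCR curve and threshold cycles that shift with initial target concentration. All rate-like constants are invented.

```python
import numpy as np

def qpcr_curve(t0, p0=5e12, k=5e11, n_cycles=40):
    """Toy stepwise qPCR model: per-cycle efficiency falls as primers
    deplete, so amplification is exponential early and plateaus late."""
    target, primer, curve = float(t0), float(p0), []
    for _ in range(n_cycles):
        eff = primer / (primer + k)      # fraction of targets copied
        new = eff * target
        target += new
        primer = max(primer - new, 0.0)
        curve.append(target)
    return np.array(curve)

for t0 in (1e3, 1e5, 1e7):
    ct = int(np.argmax(qpcr_curve(t0) > 1e11))  # threshold-crossing cycle
    print(f"initial copies {t0:.0e} -> Ct ~ {ct}")
```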
A Cellular Automata-based Model for Simulating Restitution Property in a Single Heart Cell.
Sabzpoushan, Seyed Hojjat; Pourhasanzade, Fateme
2011-01-01
Ventricular fibrillation is the cause of most sudden cardiac deaths. Restitution is one of the specific properties of the ventricular cell, and recent findings have clearly demonstrated the correlation between the slope of the restitution curve and ventricular fibrillation. Modeling cellular restitution is therefore highly important. A cellular automaton is a powerful tool for simulating complex phenomena in a simple language: it is a lattice of cells in which the behavior of each cell is determined by the behavior of its neighboring cells together with the automaton rule. In this paper, a simple model is presented for simulating the property of restitution in a single cardiac cell using cellular automata. First, two state variables, action potential and recovery, are introduced into the automaton model. Second, the automaton rule is determined, and the recovery variable is then defined in such a way that restitution develops. To evaluate the proposed model, the restitution curve generated in our study is compared with restitution curves from published experimental findings. Our findings indicate that the presented model is not only capable of simulating restitution in a cardiac cell, but also capable of regulating the restitution curve.
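For orientation, the fragment below iterates a generic restitution map (an exponential APD-vs-DI curve, not the authors' automaton rule): when the restitution slope exceeds one at short diastolic intervals, pacing produces APD alternans, the instability linked to fibrillation. All constants are illustrative.

```python
import numpy as np

def apd_restitution(di, apd_max=300.0, a=150.0, tau=40.0):
    """Restitution curve: action potential duration (ms) as a
    function of the preceding diastolic interval (ms)."""
    return apd_max - a * np.exp(-di / tau)

def pace(cycle_length, n_beats=50, apd0=200.0):
    """Iterate APD_{n+1} = f(DI_n) with DI_n = CL - APD_n."""
    apd, trace = apd0, []
    for _ in range(n_beats):
        di = max(cycle_length - apd, 1.0)
        apd = apd_restitution(di)
        trace.append(apd)
    return np.array(trace)

# Steep restitution (slope > 1 at short DI) produces APD alternans.
print(pace(300.0)[-6:].round(1))   # fast pacing: alternating long/short APD
print(pace(450.0)[-6:].round(1))   # slow pacing: APD settles to a fixed point
```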
Modelling of creep curves of Ni3Ge single crystals
NASA Astrophysics Data System (ADS)
Starenchenko, V. A.; Starenchenko, S. V.; Pantyukhova, O. D.; Solov'eva, Yu V.
2015-01-01
In this paper a creep model of alloys with the L12 superstructure is presented. The model is based on the idea of superposing mechanisms connected with different elementary deformation processes. Some of these are specific to the ordered L12 structure (anomalous mechanisms); others are typical of pure metals with the fcc structure (normal mechanisms): the accumulation of thermal APBs through the intersection of moving dislocations; the formation of APB tubes; the multiplication of superdislocations; the movement of single dislocations; the accumulation of point defects, such as vacancies and interstitial atoms; and the accumulation of APBs at the climb of edge dislocations. The model takes into account the experimental observations of the wetting of antiphase boundaries and the emergence of the disordered phase within the ordered phase. Calculations of the creep curves are performed under different conditions. The model describes different kinds of creep curves and demonstrates the important role of deformation superlocalization in producing inverse creep. The experimental and theoretical results agree rather well.
Bayesian Inference and Application of Robust Growth Curve Models Using Student's "t" Distribution
ERIC Educational Resources Information Center
Zhang, Zhiyong; Lai, Keke; Lu, Zhenqiu; Tong, Xin
2013-01-01
Despite the widespread popularity of growth curve analysis, few studies have investigated robust growth curve models. In this article, the "t" distribution is applied to model heavy-tailed data and contaminated normal data with outliers for growth curve analysis. The derived robust growth curve models are estimated through Bayesian…
Experimental constraints on melting temperatures in the MgO-SiO2 system at lower mantle pressures
NASA Astrophysics Data System (ADS)
Baron, Marzena A.; Lord, Oliver T.; Myhill, Robert; Thomson, Andrew R.; Wang, Weiwei; Trønnes, Reidar G.; Walter, Michael J.
2017-08-01
Eutectic melting curves in the system MgO-SiO2 have been experimentally determined at lower mantle pressures using laser-heated diamond anvil cell (LH-DAC) techniques. We investigated eutectic melting of bridgmanite plus periclase in the MgO-MgSiO3 binary, and melting of bridgmanite plus stishovite in the MgSiO3-SiO2 binary, as analogues for natural peridotite and basalt, respectively. The melting curve of model basalt occurs at lower temperatures, has a shallower dT / dP slope and slightly less curvature than the model peridotitic melting curve. Overall, melting temperatures detected in this study are in good agreement with previous experiments and ab initio simulations at ∼25 GPa (Liebske and Frost, 2012; de Koker et al., 2013). However, at higher pressures the measured eutectic melting curves are systematically lower in temperature than curves extrapolated on the basis of thermodynamic modelling of low-pressure experimental data, and those calculated from atomistic simulations. We find that our data are inconsistent with previously computed melting temperatures and melt thermodynamic properties of the SiO2 endmember, and indicate a maximum in short-range ordering in MgO-SiO2 melts close to Mg2SiO4 composition. The curvature of the model peridotite eutectic relative to an MgSiO3 melt adiabat indicates that crystallization in a global magma ocean would begin at ∼100 GPa rather than at the bottom of the mantle, allowing for an early basal melt layer. The model peridotite melting curve lies ∼ 500 K above the mantle geotherm at the core-mantle boundary, indicating that it will not be molten unless the addition of other components reduces the solidus sufficiently. The model basalt melting curve intersects the geotherm at the base of the mantle, and partial melting of subducted oceanic crust is expected.
The effect of the inner-hair-cell mediated transduction on the shape of neural tuning curves
NASA Astrophysics Data System (ADS)
Altoè, Alessandro; Pulkki, Ville; Verhulst, Sarah
2018-05-01
The inner hair cells of the mammalian cochlea transform the vibrations of their stereocilia into releases of neurotransmitter at the ribbon synapses, thereby controlling the activity of the afferent auditory fibers. The mechanical-to-neural transduction is a highly nonlinear process and it introduces differences between the frequency-tuning of the stereocilia and that of the afferent fibers. Using a computational model of the inner hair cell that is based on in vitro data, we estimated that smaller vibrations of the stereocilia are necessary to drive the afferent fibers above threshold at low (≤0.5 kHz) than at high (≥4 kHz) driving frequencies. In the base of the cochlea, the transduction process affects the low-frequency tails of neural tuning curves. In particular, it introduces differences between the frequency-tuning of the stereocilia and that of the auditory fibers resembling those between basilar membrane velocity and auditory fibers tuning curves in the chinchilla base. For units with a characteristic frequency between 1 and 4 kHz, the transduction process yields shallower neural than stereocilia tuning curves as the characteristic frequency decreases. This study proposes that transduction contributes to the progressive broadening of neural tuning curves from the base to the apex.
Bayesian inference of Calibration curves: application to archaeomagnetism
NASA Astrophysics Data System (ADS)
Lanos, P.
2003-04-01
The range of errors that occur at different stages of the archaeomagnetic calibration process is modelled using a Bayesian hierarchical model. The archaeomagnetic data obtained from archaeological structures such as hearths, kilns or sets of bricks and tiles exhibit considerable experimental errors and are typically more or less well dated by archaeological context, history or chronometric methods (14C, TL, dendrochronology, etc.). They can also be associated with stratigraphic observations which provide prior relative chronological information. The modelling we describe in this paper allows all these observations, on materials from a given period, to be linked together, and the use of penalized maximum likelihood for smoothing univariate, spherical or three-dimensional time series data allows representation of the secular variation of the geomagnetic field over time. The smooth curve we obtain (which takes the form of a penalized natural cubic spline) adapts to the effects of variability in the density of reference points over time. Since our model takes account of all the known errors in the archaeomagnetic calibration process, we are able to obtain a functional highest-posterior-density envelope on the new curve. With this new posterior estimate of the curve available to us, the Bayesian statistical framework then allows us to estimate the calendar dates of undated archaeological features (such as kilns) based on one, two or three geomagnetic parameters (inclination, declination and/or intensity). Date estimates are presented in much the same way as those that arise from radiocarbon dating. To illustrate the model and inference methods used, we present results based on German archaeomagnetic data recently published by a German team.
NASA Astrophysics Data System (ADS)
Yuste, S. B.; Abad, E.; Baumgaertner, A.
2016-07-01
We address the problem of diffusion on a comb whose teeth display varying lengths. Specifically, the length ℓ of each tooth is drawn from a probability distribution displaying power law behavior at large ℓ ,P (ℓ ) ˜ℓ-(1 +α ) (α >0 ). To start with, we focus on the computation of the anomalous diffusion coefficient for the subdiffusive motion along the backbone. This quantity is subsequently used as an input to compute concentration recovery curves mimicking fluorescence recovery after photobleaching experiments in comblike geometries such as spiny dendrites. Our method is based on the mean-field description provided by the well-tested continuous time random-walk approach for the random-comb model, and the obtained analytical result for the diffusion coefficient is confirmed by numerical simulations of a random walk with finite steps in time and space along the backbone and the teeth. We subsequently incorporate retardation effects arising from binding-unbinding kinetics into our model and obtain a scaling law characterizing the corresponding change in the diffusion coefficient. Finally, we show that recovery curves obtained with the help of the analytical expression for the anomalous diffusion coefficient cannot be fitted perfectly by a model based on scaled Brownian motion, i.e., a standard diffusion equation with a time-dependent diffusion coefficient. However, differences between the exact curves and such fits are small, thereby providing justification for the practical use of models relying on scaled Brownian motion as a fitting procedure for recovery curves arising from particle diffusion in comblike systems.
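A direct way to check the subdiffusive backbone motion is brute-force simulation. The sketch below walks on a comb with power-law-distributed tooth lengths and estimates the effective MSD exponent; the step rules and parameters are illustrative, not the paper's CTRW calculation.

```python
import numpy as np

rng = np.random.default_rng(4)

def comb_walk(n_steps, alpha=0.5, n_sites=4001):
    """Walk on a comb whose tooth lengths have a power-law tail
    P(l) ~ l^-(1+alpha); returns backbone positions over time."""
    teeth = np.ceil(rng.pareto(alpha, n_sites)).astype(int)
    x, y = n_sites // 2, 0            # backbone site, height on tooth
    traj = np.empty(n_steps, int)
    for t in range(n_steps):
        if y == 0:
            move = rng.integers(4)    # left, right, up the tooth, wait
            if move == 0:
                x -= 1
            elif move == 1:
                x += 1
            elif move == 2:
                y = 1
        else:
            # Unbiased walk on the tooth, reflecting at the tip.
            y += 1 if (rng.random() < 0.5 and y < teeth[x]) else -1
        traj[t] = x
    return traj

walks = np.stack([comb_walk(20000) for _ in range(50)])
msd = ((walks - walks[:, :1]) ** 2).mean(axis=0)
t1, t2 = 2000, 8000
# Subdiffusion along the backbone: MSD ~ t^gamma with gamma < 1.
print("effective exponent:", np.log(msd[t2] / msd[t1]) / np.log(t2 / t1))
```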
Vajuvalli, Nithin N; Nayak, Krupa N; Geethanath, Sairam
2014-01-01
Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is widely used in the diagnosis of cancer and is also a promising tool for monitoring tumor response to treatment. The Tofts model has become a standard for the analysis of DCE-MRI, but the curve fitting employed with the Tofts equation to obtain the pharmacokinetic (PK) parameters is time-consuming for high-resolution scans. The current work demonstrates a frequency-domain approach applied to the standard Tofts equation to speed up this curve fitting. The results show that, using the frequency-domain approach, curve fitting is computationally more efficient than with the time-domain approach.
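A sketch of the frequency-domain idea under the standard Tofts model (the paper's exact formulation may differ): the tissue curve is Ktrans times the convolution of the plasma input with exp(-kep·t), which an FFT evaluates in O(n log n) per iteration of the fit rather than O(n²). The input function and rate values are invented.

```python
import numpy as np
from scipy.signal import fftconvolve

def tofts(t, cp, ktrans, kep):
    """Standard Tofts model, Ct(t) = Ktrans * (Cp convolved with
    exp(-kep t)), evaluated with an FFT-based convolution."""
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    return ktrans * fftconvolve(cp, kernel)[: t.size] * dt

# Hypothetical arterial input function (a simple gamma-variate shape).
t = np.linspace(0.0, 300.0, 600)                    # seconds
cp = (t / 30.0) ** 2 * np.exp(-t / 30.0)
ct = tofts(t, cp, ktrans=0.12 / 60, kep=0.5 / 60)   # per-second rates

# Repeated model evaluations inside voxel-wise curve fitting are the
# bottleneck, so the cheaper convolution dominates the speed-up.
print(ct.max())
```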
NASA Astrophysics Data System (ADS)
Kolbin, A. I.; Shimansky, V. V.
2014-04-01
We developed a code for imaging the surfaces of spotted stars by a set of circular spots with a uniform temperature distribution. The flux from the spotted surface is computed by partitioning the spots into elementary areas. The code takes into account the passing of spots behind the visible stellar limb, limb darkening, and overlapping of spots. Modeling of light curves includes the use of recent results of the theory of stellar atmospheres needed to take into account the temperature dependence of flux intensity and limb darkening coefficients. The search for spot parameters is based on the analysis of several light curves obtained in different photometric bands. We test our technique by applying it to HII 1883.
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1986-01-01
Computational predictions of turbulent flow in sharply curved 180 degree turn around ducts are presented. The CNS2D computer code is used to solve the equations of motion for two-dimensional incompressible flows transformed to a nonorthogonal body-fitted coordinate system. This procedure incorporates the pressure velocity correction algorithm SIMPLE-C to iteratively solve a discretized form of the transformed equations. A multiple scale turbulence model based on simplified spectral partitioning is employed to obtain closure. Flow field predictions utilizing the multiple scale model are compared to features predicted by the traditional single scale k-epsilon model. Tuning parameter sensitivities of the multiple scale model applied to turn around duct flows are also determined. In addition, a wall function approach based on a wall law suitable for incompressible turbulent boundary layers under strong adverse pressure gradients is tested. Turn around duct flow characteristics utilizing this modified wall law are presented and compared to results based on a standard wall treatment.
NASA Astrophysics Data System (ADS)
Huang, Q. Z.; Hsu, S. Y.; Li, M. H.
2016-12-01
Long-term streamflow prediction is important not only for estimating the water storage of a reservoir but also for surface water intakes, which supply domestic use, agriculture, and industry. Climatological forecasts of streamflow have traditionally been used to calculate the exceedance probability curve of streamflow for water resource management. In this study we propose a stochastic approach to predict the exceedance probability curve of long-term streamflow using the seasonal weather outlook from the Central Weather Bureau (CWB), Taiwan. The approach incorporates a statistical downscaling weather generator and a catchment-scale hydrological model to convert the monthly outlook into daily rainfall and temperature series and to simulate the streamflow based on the outlook information. Moreover, we apply Bayes' theorem to derive a method for calculating the exceedance probability curve of the reservoir inflow from the seasonal weather outlook and its imperfection. The results show that our approach can produce exceedance probability curves that reflect the three-month weather outlook and its accuracy. We also show how improvement of the weather outlook affects the predicted exceedance probability curves of the streamflow. Our approach should be useful for seasonal planning and management of water resources and the associated risk assessment.
Extracting the information of coastline shape and its multiple representations
NASA Astrophysics Data System (ADS)
Liu, Ying; Li, Shujun; Tian, Zhen; Chen, Huirong
2007-06-01
Based on a study of coastlines, a new approach to multiple representation is put forward in this paper. The approach simulates the way humans think when generalizing: building appropriate mathematical models, describing the coastline graphically, and extracting various kinds of coastline shape information. Coastline generalization is then carried out automatically based on knowledge rules and arithmetic operators. By building a Douglas binary tree of the curve to represent the coastline shape information, the shape character of the coastline can be revealed both microscopically and macroscopically. The extracted coastline information includes the local characteristic points and their orientation, the curve structure, and the topological traits; the curve structure can be divided into single curves and curve clusters. By specifying the knowledge rules for coastline generalization, the generalization scale, and the shape parameter, the automatic coastline generalization model is finally established. The multiple-scale representation method presented in this paper has several strong points: it follows the human mode of thinking and preserves the natural character of the curve prototype, and the binary tree structure can control the similarity of the coastline, avoid self-intersection, and maintain consistent topological relationships.
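The binary tree referred to above is the recursion tree of Douglas-Peucker line simplification. A compact sketch, run on a toy coastline rather than real chart data:

```python
import numpy as np

def douglas_peucker(points, tol):
    """Recursive Douglas-Peucker simplification; the recursion tree
    over split points forms the 'curve Douglas binary tree'."""
    points = np.asarray(points, float)
    if len(points) < 3:
        return points
    a, b = points[0], points[-1]
    ab = b - a
    rel = points - a
    # Perpendicular distance of every point to the chord a-b.
    d = np.abs(ab[0] * rel[:, 1] - ab[1] * rel[:, 0]) / np.linalg.norm(ab)
    i = int(d.argmax())
    if d[i] <= tol:
        return np.vstack([a, b])      # subtree collapses to its chord
    left = douglas_peucker(points[: i + 1], tol)
    right = douglas_peucker(points[i:], tol)
    return np.vstack([left[:-1], right])

theta = np.linspace(0.0, np.pi, 400)
coast = np.c_[theta, 0.2 * np.sin(5 * theta) + 0.02 * np.sin(40 * theta)]
print(len(coast), "->", len(douglas_peucker(coast, 0.01)), "points")
```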
Asymmetry in Determinants of Running Speed During Curved Sprinting.
Ishimura, Kazuhiro; Sakurai, Shinji
2016-08-01
This study investigates the potential asymmetries between inside and outside legs in determinants of curved running speed. To test these asymmetries, a deterministic model of curved running speed was constructed based on components of step length and frequency, including the distances and times of different step phases, takeoff speed and angle, velocities in different directions, and relative height of the runner's center of gravity. Eighteen athletes sprinted 60 m on the curved path of a 400-m track; trials were recorded using a motion-capture system. The variables were calculated following the deterministic model. The average speeds were identical between the 2 sides; however, the step length and frequency were asymmetric. In straight sprinting, there is a trade-off relationship between the step length and frequency; however, such a trade-off relationship was not observed in each step of curved sprinting in this study. Asymmetric vertical velocity at takeoff resulted in an asymmetric flight distance and time. The runners changed the running direction significantly during the outside foot stance because of the asymmetric centripetal force. Moreover, the outside leg had a larger tangential force and shorter stance time. These asymmetries between legs indicated the outside leg plays an important role in curved sprinting.
Kondo, M; Nagao, Y; Mahbub, M H; Tanabe, T; Tanizawa, Y
2018-04-29
To identify factors predicting early postpartum glucose intolerance in Japanese women with gestational diabetes mellitus, using decision-curve analysis. A retrospective cohort study was performed. The participants were 123 Japanese women with gestational diabetes who underwent 75-g oral glucose tolerance tests at 8-12 weeks after delivery. They were divided into a glucose intolerance and a normal glucose tolerance group based on postpartum oral glucose tolerance test results. Analysis of the pregnancy oral glucose tolerance test results showed predictive factors for postpartum glucose intolerance. We also evaluated the clinical usefulness of the prediction model based on decision-curve analysis. Of 123 women, 78 (63.4%) had normoglycaemia and 45 (36.6%) had glucose intolerance. Multivariable logistic regression analysis showed insulinogenic index/fasting immunoreactive insulin and summation of glucose levels, assessed during pregnancy oral glucose tolerance tests (total glucose), to be independent risk factors for postpartum glucose intolerance. Evaluating the regression models, the best discrimination (area under the curve 0.725) was obtained using the basic model (i.e. age, family history of diabetes, BMI ≥25 kg/m 2 and use of insulin during pregnancy) plus insulinogenic index/fasting immunoreactive insulin <1.1. Decision-curve analysis showed that combining insulinogenic index/fasting immunoreactive insulin <1.1 with basic clinical information resulted in superior net benefits for prediction of postpartum glucose intolerance. Insulinogenic index/fasting immunoreactive insulin calculated using oral glucose tolerance test results during pregnancy is potentially useful for predicting early postpartum glucose intolerance in Japanese women with gestational diabetes. © 2018 Diabetes UK.
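For reference, the net benefit at threshold probability pt is TP/n − (FP/n) × pt/(1 − pt). The sketch below computes a decision curve for simulated risks; the prevalence and model quality are invented, roughly echoing the 36.6% intolerance rate reported above.

```python
import numpy as np

def net_benefit(y, prob, thresholds):
    """Decision-curve analysis: net benefit of treating patients whose
    predicted risk exceeds each threshold probability pt."""
    y = np.asarray(y, bool)
    n = y.size
    out = []
    for pt in thresholds:
        treat = prob >= pt
        tp = np.sum(treat & y)
        fp = np.sum(treat & ~y)
        out.append(tp / n - fp / n * pt / (1.0 - pt))
    return np.array(out)

rng = np.random.default_rng(5)
y = rng.random(123) < 0.37            # simulated glucose intolerance labels
prob = np.clip(0.37 + 0.25 * (y - 0.5) + 0.15 * rng.standard_normal(123),
               0.01, 0.99)            # simulated predicted risks

pts = np.arange(0.05, 0.70, 0.05)
for pt, nb in zip(pts, net_benefit(y, prob, pts)):
    print(f"pt={pt:.2f}  net benefit={nb:+.3f}")
```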
Numerical Integration Techniques for Curved-Element Discretizations of Molecule–Solvent Interfaces
Bardhan, Jaydeep P.; Altman, Michael D.; Willis, David J.; Lippow, Shaun M.; Tidor, Bruce; White, Jacob K.
2012-01-01
Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, we have developed methods to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work (J. Comp.-Aid. Mol. Des. 9:149-159, 1995), we define two classes of curved elements that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. We then present numerical integration techniques that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, we present a set of calculations that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute–solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved approximations with increasing discretization and associated increases in computational resources. The results clearly demonstrate that our methods for approximate integration on an exact geometry are far more accurate than exact integration on an approximate geometry. A MATLAB implementation of the presented integration methods and sample data files containing curved-element discretizations of several small molecules are available online at http://web.mit.edu/tidor. PMID:17627358
NASA Astrophysics Data System (ADS)
Priyadarshini, Lakshmi
Frequently transported packaged goods are prone to damage from impact, jolting or vibration in transit. Fragile goods, for example glass, ceramics and porcelain, are susceptible to mechanical stresses, so ancillary materials such as cushions play an important role within a package. In this work, an analytical model of a 3D cellular structure is established based on the Kelvin model and a lattice structure. The research provides a comparative study between a 3D-printed Kelvin unit structure and a 3D-printed lattice structure. The comparison is based on parameters defining cushion performance, such as cushion creep, indentation, and cushion curve analysis. The application of 3D printing here is rapid prototyping, where the study indicates which model delivers the better form of energy absorption; 3D-printed foam is shown to be a cost-effective prototyping approach. The research also investigates the selection of material for the 3D printing process: because cushion development demands a flexible material, three-dimensional printing with a material having elastomeric properties is required. Further, the cushion design concept is based on the Kelvin model structure and the lattice structure. The analytical solution provides the cushion curve analysis with respect to the results observed when a load is applied to the cushion. The results are reported on the basis of attenuation and amplification curves.
NASA Astrophysics Data System (ADS)
Zhengang, Lu; Hongyang, Yu; Xi, Yang
2017-05-01
The Modular Multilevel Converter (MMC) is one of the most attractive topologies of recent years for medium- or high-voltage industrial applications, such as high-voltage dc transmission (HVDC) and medium-voltage variable-speed motor drives. The wide adoption of MMCs in industry is mainly due to their flexible expandability, transformer-less configuration, common dc bus, high reliability from redundancy, and so on. However, as the number of submodules in an MMC grows, testing the MMC controller costs more time and effort. Hardware-in-the-loop (HIL) testing based on a real-time simulator can save much of the time and money incurred by MMC tests, and owing to its flexibility, HIL testing is becoming increasingly popular in industry. The MMC modeling method remains an important issue for MMC HIL tests. Specifically, the VSC model should realistically reflect the nonlinear device switching characteristics, switching and conduction losses, tailing current, and diode reverse recovery behaviour of a realistic converter. In this paper, a half-bridge MMC modeling method with embedded IGBT switching characteristic curves is proposed. The method is based on look-up of the switching curves and calculation of a sample circuit, and it is simple to implement. Based on the proposed method, an FPGA real-time simulation is carried out with a 200 ns sample time. The real-time simulation results show that the proposed method is correct.
Rethinking non-inferiority: a practical trial design for optimising treatment duration.
Quartagno, Matteo; Walker, A Sarah; Carpenter, James R; Phillips, Patrick Pj; Parmar, Mahesh Kb
2018-06-01
Background: Trials to identify the minimal effective treatment duration are needed in different therapeutic areas, including bacterial infections, tuberculosis and hepatitis C. However, standard non-inferiority designs have several limitations, including arbitrariness of non-inferiority margins, choice of research arms and very large sample sizes. Methods: We recast the problem of finding an appropriate non-inferior treatment duration in terms of modelling the entire duration-response curve within a pre-specified range. We propose a multi-arm randomised trial design, allocating patients to different treatment durations. We use fractional polynomials and spline-based methods to flexibly model the duration-response curve. We call this a 'Durations design'. We compare different methods in terms of a scaled version of the area between true and estimated prediction curves. We evaluate sensitivity to key design parameters, including sample size, number and position of arms. Results: A total sample size of ~500 patients divided into a moderate number of equidistant arms (5-7) is sufficient to estimate the duration-response curve within a 5% error margin in 95% of the simulations. Fractional polynomials provide similar or better results than spline-based methods in most scenarios. Conclusion: Our proposed practical randomised trial 'Durations design' shows promising performance in the estimation of the duration-response curve; subject to a pending careful investigation of its inferential properties, it provides a potential alternative to standard non-inferiority designs, avoiding many of their limitations, and yet being fairly robust to different possible duration-response curves. The trial outcome is the whole duration-response curve, which may be used by clinicians and policymakers to make informed decisions, facilitating a move away from a forced binary hypothesis testing paradigm.
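A minimal sketch of the fractional-polynomial step, assuming arm-level response proportions at each randomised duration (invented data; only distinct-power FP2 pairs are searched here, and the repeated-power variant with a log term is omitted for brevity):

```python
import itertools
import numpy as np

durations = np.array([8, 10, 12, 14, 16, 18, 20], dtype=float)   # weeks, 7 arms
response = np.array([0.62, 0.74, 0.82, 0.87, 0.90, 0.91, 0.92])  # proportion cured

powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]     # conventional FP power set

def fp_term(d, p):
    # power 0 denotes log(d) in the fractional-polynomial convention
    return np.log(d) if p == 0 else d**p

best = None
for p1, p2 in itertools.combinations(powers, 2):
    X = np.column_stack([np.ones_like(durations),
                         fp_term(durations, p1), fp_term(durations, p2)])
    beta, *_ = np.linalg.lstsq(X, response, rcond=None)
    rss = np.sum((X @ beta - response) ** 2)
    if best is None or rss < best[0]:
        best = (rss, (p1, p2), beta)

print("best FP2 powers:", best[1])
```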
On the Limitations of Breakthrough Curve Analysis in Fixed-Bed Adsorption
NASA Technical Reports Server (NTRS)
Knox, James C.; Ebner, Armin D.; LeVan, M. Douglas; Coker, Robert F.; Ritter, James A.
2016-01-01
This work examined in detail the a priori prediction of the axial dispersion coefficient from available correlations, versus obtaining it (along with mass transfer information) from experimental breakthrough data, and the consequences that may arise when doing so with a 1-D axially dispersed plug flow model and its associated Danckwerts outlet boundary condition. These consequences mainly included the potential for erroneous extraction of the axial dispersion coefficient and/or the LDF mass transfer coefficient from experimental data, especially when non-plug-flow conditions prevailed in the bed. Two adsorbate/adsorbent cases were considered, carbon dioxide and water vapor in zeolite 5A, because both experimentally exhibited significant non-plug-flow behavior, and the water-zeolite 5A system exhibited unusual concentration front sharpening that destroyed the expected constant pattern behavior (CPB) when modeled with the 1-D axially dispersed plug flow model. Overall, this work showed that it is possible to extract accurate mass transfer and dispersion information from experimental breakthrough curves using a 1-D axially dispersed plug flow model when the curves are measured both inside and outside the bed. To ensure the extracted information is accurate, the inside-the-bed breakthrough curves and their derivatives from the model were plotted to confirm whether or not the adsorbate/adsorbent system was exhibiting CPB or any concentration front sharpening near the bed exit. Even when concentration front sharpening was occurring in the water-zeolite 5A system, it was still possible to use the experimental inside- and outside-the-bed breakthrough curves to extract fundamental mass transfer and dispersion information with the 1-D axially dispersed plug flow model, based on the systematic methodology developed in this work.
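A hedged sketch of the model class discussed here: a 1-D axially dispersed plug flow model with linear-driving-force (LDF) mass transfer, a linear isotherm, and Danckwerts boundary conditions, solved by the method of lines. This is a generic illustration with placeholder parameters, not the authors' code or the CO2/H2O-zeolite 5A systems.

```python
import numpy as np
from scipy.integrate import solve_ivp

L, n = 0.1, 100                    # bed length (m), grid points
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
v, D = 0.05, 5e-5                  # interstitial velocity (m/s), dispersion (m^2/s)
k_ldf, K, F = 0.05, 500.0, 1.0     # LDF coeff (1/s), linear isotherm slope, phase ratio
c_in = 1.0                         # feed concentration (normalized)

def rhs(t, y):
    c, q = y[:n], y[n:]
    dq = k_ldf * (K * c - q)                       # LDF uptake rate
    # ghost nodes enforcing the Danckwerts conditions:
    # inlet flux continuity  v*c_in = v*c - D*dc/dx  ->  dc/dx|0 = v*(c0 - c_in)/D
    g_in = c[0] - dx * v * (c[0] - c_in) / D
    # outlet zero gradient   dc/dx|L = 0
    g_out = c[-2]
    ce = np.concatenate([[g_in], c, [g_out]])
    transport = (D * (ce[2:] - 2.0 * ce[1:-1] + ce[:-2]) / dx**2
                 - v * (ce[2:] - ce[:-2]) / (2.0 * dx))
    return np.concatenate([transport - F * dq, dq])

sol = solve_ivp(rhs, (0.0, 2000.0), np.zeros(2 * n), method="BDF", rtol=1e-6)
breakthrough = sol.y[n - 1]        # outlet concentration history c(L, t)
```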
NASA Astrophysics Data System (ADS)
Golsanami, Naser; Kadkhodaie-Ilkhchi, Ali; Erfani, Amir
2015-01-01
Capillary pressure curves are important data for reservoir rock typing, analyzing pore throat distribution, determining height above the free water level, and reservoir simulation. Laboratory experiments provide accurate data; however, they are expensive, time-consuming, and discontinuous through the reservoir intervals. The current study focuses on synthesizing artificial capillary pressure (Pc) curves from seismic attributes with the use of artificial intelligence systems including Artificial Neural Networks (ANNs), Fuzzy Logic (FL) and Adaptive Neuro-Fuzzy Inference Systems (ANFISs). The synthetic capillary pressure curves were achieved by estimating pressure values at six mercury saturation points. These points correspond to mercury-filled pore volumes of core samples (Hg-saturation) at 5%, 20%, 35%, 65%, 80%, and 90% saturation. To predict the synthetic Pc curve at each saturation point, various FL, ANFIS and ANN models were constructed; the neural network models differed in their training algorithms. Based on the performance function, the most accurate models were selected as the final solvers for the prediction process at each of the above-mentioned mercury saturation points. The constructed models were then tested at six depth points of the studied well that were unseen by the models. The results show that the fuzzy logic and neuro-fuzzy models were not capable of making reliable estimations, while the predictions from the ANN models were satisfactorily reliable, with good agreement between the laboratory-derived and synthetic capillary pressure curves. Finally, a 3D seismic cube was captured, for which the required attributes were extracted, and the capillary pressure cube was estimated using the developed models. In the next step, the synthesized Pc cube was compared with the seismic cube and an acceptable correspondence was observed.
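A minimal sketch of the ANN branch of this workflow using scikit-learn as a stand-in (the paper's network architectures, training algorithms, and attribute sets are not specified here; all data below are synthetic):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))            # 60 depth samples x 5 seismic attributes (synthetic)
sat_points = [5, 20, 35, 65, 80, 90]    # Hg-saturation points (%)
Pc = rng.lognormal(mean=2, sigma=0.5, size=(60, len(sat_points)))  # synthetic lab Pc

models = {}
for j, s in enumerate(sat_points):
    # one model per saturation point, mirroring the paper's design
    net = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                     random_state=0))
    net.fit(X, Pc[:, j])
    models[s] = net

# synthetic Pc curve for a new (unseen) depth point
x_new = rng.normal(size=(1, 5))
curve = {s: models[s].predict(x_new)[0] for s in sat_points}
```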
Mathematical modeling improves EC50 estimations from classical dose-response curves.
Nyman, Elin; Lindgren, Isa; Lövfors, William; Lundengård, Karin; Cervin, Ida; Sjöström, Theresia Arbring; Altimiras, Jordi; Cedersund, Gunnar
2015-03-01
The β-adrenergic response is impaired in failing hearts. When studying β-adrenergic function in vitro, the half-maximal effective concentration (EC50) is an important measure of ligand response. We previously measured the in vitro contraction force response of chicken heart tissue to increasing concentrations of adrenaline, and observed a decreasing response at high concentrations. The classical interpretation of such data is to assume a maximal response before the decrease, and to fit a sigmoid curve to the remaining data to determine EC50. Instead, we have applied a mathematical modeling approach to interpret the full dose-response curve in a new way. The developed model predicts a non-steady state caused by the short resting time between increased concentrations of agonist, which affects the dose-response characterization. Therefore, an improved estimate of EC50 may be calculated using steady-state simulations of the model. The model-based estimation of EC50 is further refined using additional time-resolved data to decrease the uncertainty of the prediction. The resulting model-based EC50 (180-525 nm) is higher than the classically interpreted EC50 (46-191 nm). Mathematical modeling thus makes it possible to re-interpret previously obtained datasets, and to make accurate estimates of EC50 even when steady-state measurements are not experimentally feasible. The mathematical models described here have been submitted to the JWS Online Cellular Systems Modelling Database, and may be accessed at http://jjj.bio.vu.nl/database/nyman. © 2015 FEBS.
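For reference, a sketch of the classical sigmoid interpretation that the model-based estimate is compared against: fit a four-parameter Hill curve and read off EC50 (illustrative data, not the chicken-heart measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n):
    """Four-parameter logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

conc = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5])    # agonist concentration (M)
resp = np.array([0.05, 0.18, 0.55, 0.92, 1.00])    # normalized contraction force

p0 = [0.0, 1.0, 1e-7, 1.0]                         # initial guesses
popt, _ = curve_fit(hill, conc, resp, p0=p0, maxfev=10000)
print(f"classical EC50 estimate: {popt[2]:.3g} M")
```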
Discrete Gust Model for Launch Vehicle Assessments
NASA Technical Reports Server (NTRS)
Leahy, Frank B.
2008-01-01
Analysis of space vehicle responses to atmospheric wind gusts during flight is important in the establishment of vehicle design structural requirements and operational capability. Typically, wind gust models are either of a spectral type, determined by a random process having a wide range of wavelengths, or of a discrete type, having a single gust of predetermined magnitude and shape. Classical discrete models used by NASA during the Apollo and Space Shuttle Programs included a 9 m/sec quasi-square-wave gust with variable wavelength from 60 to 300 m. A later study derived a discrete gust model from a military specification (MIL-SPEC) document that used a "1-cosine" shape. The MIL-SPEC document contains a curve of non-dimensional gust magnitude as a function of non-dimensional gust half-wavelength based on the Dryden spectral model, but fails to list the equation necessary to reproduce the curve. Therefore, previous studies could only estimate a value of gust magnitude from the curve, or attempt to fit a function to it. This paper presents the development of the MIL-SPEC curve, and provides the information necessary to calculate discrete gust magnitudes as a function of both gust half-wavelength and the desired probability level of exceeding a specified gust magnitude.
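A sketch of the standard "1-cosine" discrete gust shape referred to above, with illustrative magnitude and half-wavelength; the paper's actual contribution, the equation linking non-dimensional magnitude to half-wavelength and exceedance probability, is not reproduced here:

```python
import numpy as np

def one_minus_cosine_gust(x, v_m, d_m):
    """Gust velocity profile: rises from 0 to v_m over penetration 2*d_m."""
    x = np.asarray(x, dtype=float)
    v = 0.5 * v_m * (1.0 - np.cos(np.pi * x / d_m))
    return np.where((x >= 0) & (x <= 2 * d_m), v, 0.0)

x = np.linspace(0.0, 400.0, 401)                  # penetration distance (m)
profile = one_minus_cosine_gust(x, v_m=9.0, d_m=150.0)  # peak at x = d_m
```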
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghomi, Pooyan Shirvani; Zinchenko, Yuriy
2014-08-15
Purpose: To compare methods of incorporating Dose Volume Histogram (DVH) curves into treatment planning optimization. Method: The performance of three methods, namely, the conventional Mixed Integer Programming (MIP) model, a convex moment-based constrained optimization approach, and an unconstrained convex moment-based penalty approach, is compared using anonymized data of a prostate cancer patient. Three plans were generated using the corresponding optimization models. Four Organs at Risk (OARs) and one Tumor were involved in the treatment planning. The OARs and Tumor were discretized into a total of 50,221 voxels, and the number of beamlets was 943. We used the commercially available optimization software Gurobi and Matlab to solve the models. Plan comparison was done by recording the model runtime, followed by visual inspection of the resulting dose volume histograms. Conclusion: We demonstrate the effectiveness of the moment-based approaches in replicating the set of prescribed DVH curves. The unconstrained convex moment-based penalty approach is concluded to have the greatest potential to reduce the computational effort and holds a promise of substantial computational speed-up.
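A schematic sketch of two ingredients being compared here: evaluating a DVH from a voxel dose vector, and a simple moment-based penalty that pushes the moments of a candidate dose distribution toward those of the prescribed one (an illustration of the idea, not the Gurobi/Matlab models of the study):

```python
import numpy as np

def dvh(dose, grid):
    """Fraction of structure volume receiving at least each dose level."""
    dose = np.asarray(dose)
    return np.array([(dose >= d).mean() for d in grid])

def moment_penalty(dose, dose_ref, orders=(1, 2, 3)):
    # simple surrogate for matching the reference DVH: match dose moments
    return sum((np.mean(dose**p) - np.mean(dose_ref**p))**2 for p in orders)

rng = np.random.default_rng(1)
prescribed = rng.gamma(shape=20.0, scale=1.0, size=5000)   # reference dose (Gy)
candidate = prescribed + rng.normal(0.0, 0.5, size=5000)   # plan under evaluation

grid = np.linspace(0.0, 40.0, 81)
curve = dvh(candidate, grid)
print(moment_penalty(candidate, prescribed))
```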
Biswas, Kaushik; Shukla, Yash; Desjarlais, Andre Omer; ...
2018-04-17
This article presents combined measurements of fatty acid-based organic PCM products and numerical simulations to evaluate the energy benefits of adding a PCM layer to an exterior wall. The thermal storage characteristics of the PCM were measured using a heat flow meter apparatus (HFMA). The PCM characterization is based on a recent ASTM International standard test method, ASTM C1784. The PCM samples were subjected to step changes in temperature and allowed to stabilize at each temperature. By measuring the heat absorbed or released by the PCM, the temperature-dependent enthalpy functions for melting and freezing were determined. Here, the simulations were done using a previously-validated two-dimensional (2D) wall model containing a PCM layer and incorporating the HFMA-measured enthalpy functions. The wall model was modified to include the hysteresis phenomenon observed in PCMs, which is reflected in different melting and freezing temperatures of the PCM. Simulations were done with a single enthalpy curve based on the PCM melting tests, with both melting and freezing enthalpy curves, and with different degrees of hysteresis between the melting and freezing curves. Significant differences were observed between the thermal performances of the modeled wall with the PCM layer under the different scenarios.
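A minimal sketch, with placeholder enthalpy values, of how separate melting and freezing enthalpy curves can represent hysteresis in such a wall model (the study's HFMA-measured functions are not reproduced):

```python
import numpy as np

T_pts = np.array([15.0, 20.0, 23.0, 26.0, 30.0])        # temperature (C)
h_melt = np.array([0.0, 20.0, 120.0, 210.0, 230.0])     # enthalpy (kJ/kg), melting
h_freeze = np.array([0.0, 60.0, 170.0, 215.0, 230.0])   # enthalpy (kJ/kg), freezing

def enthalpy(T, heating):
    """Pick the melting or freezing curve depending on temperature direction,
    the simplest hysteresis rule used in such wall models."""
    curve = h_melt if heating else h_freeze
    return np.interp(T, T_pts, curve)

# effective heat capacity between two time steps of a wall simulation
T_old, T_new = 21.0, 22.5
heating = T_new > T_old
dh = enthalpy(T_new, heating) - enthalpy(T_old, heating)
c_eff = dh / (T_new - T_old)    # kJ/(kg K)
```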
From Experiment to Theory: What Can We Learn from Growth Curves?
Kareva, Irina; Karev, Georgy
2018-01-01
Finding an appropriate functional form to describe population growth based on key properties of a described system allows making justified predictions about future population development. This information can be of vital importance in all areas of research, ranging from cell growth to global demography. Here, we use this connection between theory and observation to pose the following question: what can we infer about intrinsic properties of a population (i.e., degree of heterogeneity, or dependence on external resources) based on which growth function best fits its growth dynamics? We investigate several nonstandard classes of multi-phase growth curves that capture different stages of population growth; these models include hyperbolic-exponential, exponential-linear, and exponential-linear-saturation growth patterns. The constructed models account explicitly for the process of natural selection within inhomogeneous populations. Based on the underlying hypotheses for each of the models, we identify whether a population best fit by a particular curve is more likely to be homogeneous or heterogeneous, to grow in a density-dependent or frequency-dependent manner, and whether it depends on external resources during any or all stages of its development. We apply these predictions to cancer cell growth and demographic data obtained from the literature. Our theory, if confirmed, can provide an additional biomarker and a predictive tool to complement experimental research.
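A minimal sketch of fitting two growth classes of this kind to the same dataset and comparing residuals; the functional forms and data are illustrative stand-ins for the paper's models:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_linear(t, r, a, t0):
    """Exponential phase transitioning continuously to linear growth at t0."""
    return np.where(t < t0, a * np.exp(r * t),
                    a * np.exp(r * t0) * (1.0 + r * (t - t0)))

def logistic(t, K, r, t_mid):
    return K / (1.0 + np.exp(-r * (t - t_mid)))

t = np.linspace(0, 20, 40)
rng = np.random.default_rng(2)
data = exp_linear(t, 0.4, 1.0, 8.0) * rng.lognormal(0, 0.05, t.size)

for f, p0 in [(exp_linear, [0.3, 1.0, 5.0]), (logistic, [60.0, 0.5, 8.0])]:
    p, _ = curve_fit(f, t, data, p0=p0, maxfev=20000)
    rss = np.sum((f(t, *p) - data) ** 2)
    print(f.__name__, "RSS =", round(rss, 2))   # which class fits best?
```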
Lorenz curves in a new science-funding model
NASA Astrophysics Data System (ADS)
Huang, Ding-wei
2017-12-01
We propose an agent-based model to theoretically and systematically explore the implications of a new approach to fund science, which has been suggested recently by J. Bollen et al. We introduce various parameters and examine their effects. The concentration of funding is shown by the Lorenz curve and the Gini coefficient. In this model, all scientists are treated equally and follow the well-intended regulations. All scientists give a fixed ratio of their funding to others. The fixed ratio becomes an upper bound for the Gini coefficient. We observe two distinct regimes in the parameter space: valley and plateau. In the valley regime, the fluidity of funding is significant. The Lorenz curve is smooth. The Gini coefficient is well below the upper bound. The funding distribution is the desired result. In the plateau regime, the cumulative advantage is significant. The Lorenz curve has a sharp turn. The Gini coefficient saturates to the upper bound. The undue concentration of funding happens swiftly. The funding distribution is the undesired result, in which a minority of scientists take the majority of funding. Phase transitions between these two regimes are discussed.
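A sketch of the two summary statistics used in this analysis, computed for a synthetic heavy-tailed funding distribution rather than the model's output:

```python
import numpy as np

def lorenz_curve(funding):
    f = np.sort(np.asarray(funding, dtype=float))
    cum = np.cumsum(f) / f.sum()
    return np.insert(cum, 0, 0.0)          # curve starts at (0, 0)

def gini(funding):
    L = lorenz_curve(funding)
    p = np.linspace(0.0, 1.0, L.size)
    # twice the area between the equality line and the Lorenz curve
    return 1.0 - 2.0 * np.trapz(L, p)

rng = np.random.default_rng(3)
funding = rng.pareto(a=2.0, size=1000)     # heavy tail: "plateau regime"-like
print("Gini =", round(gini(funding), 3))
```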
Sample Skewness as a Statistical Measurement of Neuronal Tuning Sharpness
Samonds, Jason M.; Potetz, Brian R.; Lee, Tai Sing
2014-01-01
We propose using the statistical measurement of the sample skewness of the distribution of mean firing rates of a tuning curve to quantify sharpness of tuning. For some features, like binocular disparity, tuning curves are best described by relatively complex and sometimes diverse functions, making it difficult to quantify sharpness with a single function and parameter. Skewness provides a robust nonparametric measure of tuning curve sharpness that is invariant with respect to the mean and variance of the tuning curve and is straightforward to apply to a wide range of tuning, including simple orientation tuning curves and complex object tuning curves that often cannot even be described parametrically. Because skewness does not depend on a specific model or function of tuning, it is especially appealing for cases of sharpening where recurrent interactions among neurons produce sharper tuning curves that deviate in a complex manner from the feedforward function of tuning. Since tuning curves for all neurons are not typically well described by a single parametric function, this model independence additionally allows skewness to be applied to all recorded neurons, maximizing the statistical power of a set of data. We also compare skewness with other nonparametric measures of tuning curve sharpness and selectivity. Compared to the other nonparametric measures tested, skewness is best used for capturing the sharpness of multimodal tuning curves defined by narrow peaks (maxima) and broad valleys (minima). Finally, we provide a more formal definition of sharpness using a shape-based information gain measure, and we show that skewness is correlated with this definition. PMID:24555451
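A minimal sketch of the proposed measure on two synthetic tuning curves, using scipy's sample skewness as the statistic:

```python
import numpy as np
from scipy.stats import skew

theta = np.linspace(-90, 90, 37)                    # stimulus values (deg)
sharp = 5 + 60 * np.exp(-theta**2 / (2 * 10**2))    # narrow peak, broad valley
broad = 5 + 60 * np.exp(-theta**2 / (2 * 45**2))    # shallow, wide tuning

# Skewness of the firing-rate distribution: invariant to the mean and
# variance of the curve, and requires no parametric model of tuning shape.
print("sharp tuning skewness:", round(skew(sharp), 2))
print("broad tuning skewness:", round(skew(broad), 2))
```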
Sperlich, Alexander; Werner, Arne; Genz, Arne; Amy, Gary; Worch, Eckhard; Jekel, Martin
2005-03-01
Breakthrough curves (BTC) for the adsorption of arsenate and salicylic acid onto granulated ferric hydroxide (GFH) in fixed-bed adsorbers were experimentally determined and modeled using the homogeneous surface diffusion model (HSDM). The input parameters for the HSDM, the Freundlich isotherm constants and the mass transfer coefficients for film and surface diffusion, were experimentally determined. The BTC for salicylic acid revealed a shape typical of trace organic compound adsorption onto activated carbon, and model results agreed well with the experimental curves. Unlike salicylic acid, arsenate BTCs showed a non-ideal shape, leveling off at c/c0 of approximately 0.6. Model results based on the experimentally derived parameters over-predicted the point of arsenic breakthrough for all simulated curves, lab-scale or full-scale, and were unable to capture the shape of the curve. The use of a much lower surface diffusion coefficient D(S) for modeling led to an improved fit of the later stages of the BTC shape, pointing to a time-dependent D(S). The mechanism for this time dependence is still unknown. Surface precipitation, which would interfere with the determination of the Freundlich constants and D(S), was discussed as a possible removal mechanism for arsenate besides pure adsorption. Rapid small-scale column tests (RSSCT) proved to be a powerful experimental alternative to the modeling procedure for arsenic.
Methods to assess an exercise intervention trial based on 3-level functional data.
Li, Haocheng; Kozey Keadle, Sarah; Staudenmayer, John; Assaad, Houssein; Huang, Jianhua Z; Carroll, Raymond J
2015-10-01
Motivated by data recording the effects of an exercise intervention on subjects' physical activity over time, we develop a model to assess the effects of a treatment when the data are functional with 3 levels (subjects, weeks and days in our application) and possibly incomplete. We develop a model with 3-level mean structure effects, all stratified by treatment, and subject random effects, including a general subject effect and nested effects for the 3 levels. The mean and random structures are specified as smooth curves measured at various time points. The association structure of the 3-level data is induced through the random curves, which are summarized using a few important principal components. We use penalized splines to model the mean curves and the principal component curves, and cast the proposed model into a mixed effects model framework for model fitting, prediction and inference. We develop an algorithm to fit the model iteratively with the Expectation/Conditional Maximization Either (ECME) version of the EM algorithm and eigenvalue decompositions. Selection of the number of principal components and handling of incomplete data are incorporated into the algorithm. The performance of the Wald-type hypothesis test is also discussed. The method is applied to the physical activity data and evaluated empirically by a simulation study. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Ultrasonic velocity profiling rheometry based on a widened circular Couette flow
NASA Astrophysics Data System (ADS)
Shiratori, Takahisa; Tasaka, Yuji; Oishi, Yoshihiko; Murai, Yuichi
2015-08-01
We propose a new rheometry technique for characterizing the rheological properties of fluids. The technique produces flow curves, which represent the relationship between the fluid shear rate and shear stress. Flow curves are obtained by measuring the circumferential velocity distribution of tested fluids in a circular Couette system, using an ultrasonic velocity profiling technique. By adopting a widened gap between the concentric cylinders, a designed range of shear rate is obtained, so that velocity profile measurement along a single line directly yields flow curves. To reduce the effect of ultrasonic noise on the resultant flow curves, several fitting functions and variable transforms are examined to best approximate the velocity profile without introducing a priori rheological models. Silicone oil, polyacrylamide solution, and yogurt were used to evaluate the applicability of this technique. These substances were purposely targeted as examples of Newtonian fluids, shear-thinning fluids, and opaque fluids with unknown rheological properties, respectively. We find that fourth-order Chebyshev polynomials provide the most accurate representation of flow curves in the context of model-free rheometry enabled by ultrasonic velocity profiling.
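A hedged sketch of the model-free step described above: fit a fourth-order Chebyshev polynomial to a measured azimuthal velocity profile u(r) and convert it to a flow curve via the Couette shear rate du/dr - u/r, paired with the torque-derived stress tau(r) = tau_i * (R_i/r)^2. The profile data and the inner-wall stress value are synthetic assumptions, not the paper's measurements.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

R_i, R_o = 0.02, 0.08                  # inner/outer cylinder radii (m), wide gap
r = np.linspace(R_i, R_o, 50)
# synthetic Newtonian-like Couette profile (stand-in for UVP measurements)
u = 0.5 * R_i * (R_o**2 / r - r) / (R_o**2 / R_i - R_i)

coeffs = C.chebfit(r, u, deg=4)        # fourth-order Chebyshev fit
dudr = C.chebval(r, C.chebder(coeffs))
gamma_dot = dudr - C.chebval(r, coeffs) / r    # shear rate across the gap

tau_i = 1.0                            # inner-wall shear stress (Pa), assumed known
tau = tau_i * (R_i / r) ** 2           # stress distribution from the torque balance
flow_curve = np.column_stack([np.abs(gamma_dot), tau])
```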
Disentangling sampling and ecological explanations underlying species-area relationships
Cam, E.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Alpizar-Jara, R.; Flather, C.H.
2002-01-01
We used a probabilistic approach to address the influence of sampling artifacts on the form of species-area relationships (SARs). We developed a model in which the increase in observed species richness is a function of sampling effort exclusively. We assumed that effort depends on the area sampled, and we generated species-area curves under that model. These curves can look realistic. We then generated SARs from avian data, comparing SARs based on counts with those based on richness estimates. We used an approach to estimation of species richness that accounts for species detection probability and, hence, for variation in sampling effort. The slopes of SARs based on counts are steeper than those of curves based on estimates of richness, indicating that the former partly reflect failure to account for species detection probability. SARs based on estimates reflect ecological processes exclusively, not sampling processes. This approach permits investigation of ecologically relevant hypotheses. The slope of SARs is not influenced by the slope of the relationship between habitat diversity and area. In situations in which not all of the species are detected during sampling sessions, approaches to estimation of species richness integrating species detection probability should be used to investigate the rate of increase in species richness with area.
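A toy sketch of the slope comparison: fit the power-law SAR S = c * A**z in log-log space to count-based and estimate-based richness, where counts under-detect species more strongly in small areas (all numbers synthetic):

```python
import numpy as np

area = np.logspace(0, 4, 20)                 # sampled areas
rng = np.random.default_rng(4)
z_true = 0.15
richness_est = 30 * area**z_true * rng.lognormal(0, 0.05, area.size)
# counts miss species (detection probability < 1), more so at small areas,
# which steepens the apparent species-area slope
richness_count = richness_est * (0.6 + 0.3 * (area / area.max())**0.1)

def sar_slope(S, A):
    z, _ = np.polyfit(np.log(A), np.log(S), 1)
    return z

print("z from counts:   ", round(sar_slope(richness_count, area), 3))
print("z from estimates:", round(sar_slope(richness_est, area), 3))
```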
Pawlikowski, Marek; Jankowski, Krzysztof; Skalski, Konstanty
2018-05-30
A new constitutive model for human trabecular bone is presented. As the model is based on indentation tests performed on single trabeculae, it is formulated at the microscale. The constitutive law takes into account the non-linear viscoelasticity of the tissue. The elastic response is described by the hyperelastic Mooney-Rivlin model, while the viscoelastic effects are considered by means of the hereditary integral, in which stress depends on both time and strain. The material constants in the constitutive equation are identified on the basis of stress relaxation tests and indentation tests using a curve-fitting procedure. The constitutive model is implemented into the finite element package Abaqus® by means of a UMAT subroutine. The curve-fitting error is low, and the viscoelastic behaviour of the tissue predicted by the proposed constitutive model corresponds well to the realistic response of trabecular bone. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Li, Xiaogai; von Holst, Hans; Kleiven, Svein
2013-01-01
A 3D finite element (FE) model has been developed to study the mean intracranial pressure (ICP) response during constant-rate infusion using linear poroelasticity. Because of the uncertainties in the poroelastic constants for brain tissue, the influence of each of the main parameters on the transient ICP infusion curve was studied. As a prerequisite for transient analysis, steady-state simulations were performed first. The simulated steady-state pressure distribution in the brain tissue for a normal cerebrospinal fluid (CSF) circulation system showed good correlation with experiments from the literature. Furthermore, steady-state ICP closely followed the infusion experiments at different infusion rates. The verified steady-state models then served as a baseline for the subsequent transient models. For transient analysis, the simulated ICP shows a similar tendency to that found in the experiments; however, different values of the poroelastic constants have a significant effect on the infusion curve. The influence of the main poroelastic parameters, including the Biot coefficient α, Skempton coefficient B, drained Young's modulus E, Poisson's ratio ν, permeability κ, CSF absorption conductance C(b) and external venous pressure p(b), was studied to investigate the influence on the pressure response. It was found that the value of the specific storage term S(ε) is the dominant factor that influences the infusion curve, and the drained Young's modulus E was identified as the second most influential parameter after S(ε). Based on the simulated infusion curves from the FE model, an artificial neural network (ANN) was used to find an optimised parameter set that best fit the experimental curve. The infusion curves from both the FE simulation and the ANN confirmed the limitation of linear poroelasticity in modelling transient constant-rate infusion.
Dynamic Analysis of Recalescence Process and Interface Growth of Eutectic Fe82B17Si1 Alloy
NASA Astrophysics Data System (ADS)
Fan, Y.; Liu, A. M.; Chen, Z.; Li, P. Z.; Zhang, C. H.
2018-03-01
By employing the glass fluxing technique in combination with cyclical superheating, the microstructural evolution of the undercooled Fe82B17Si1 alloy over the obtained undercooling range was studied. With increasing undercooling, the cooling curves showed a transition from one recalescence to two recalescences, and then back to one recalescence. The two types of cooling curves were fitted by the break equation and the Johnson-Mehl-Avrami-Kolmogorov model. Based on the cooling curves at different undercoolings, the recalescence rate was calculated using the multi-logistic growth model and the Boettinger-Coriel-Trivedi model. Both the recalescence features and the interface growth kinetics of the eutectic Fe82B17Si1 alloy were explored. The fitting results were consistent with the microstructural evolution characterized by TEM (SAED), SEM and XRD. Finally, the relationship between the microstructure and hardness was also investigated.
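For the curve-fitting step, a minimal sketch of fitting the Johnson-Mehl-Avrami-Kolmogorov (JMAK) equation, X(t) = 1 - exp(-k * t**n), to a transformed-fraction history (synthetic data, not the Fe82B17Si1 measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def jmak(t, k, n):
    """JMAK transformed fraction as a function of time."""
    return 1.0 - np.exp(-k * t**n)

t = np.linspace(0.01, 2.0, 40)                 # time after nucleation (s)
rng = np.random.default_rng(5)
X = np.clip(jmak(t, 1.5, 2.5) + rng.normal(0, 0.01, t.size), 0, 1)

(k, n), _ = curve_fit(jmak, t, X, p0=[1.0, 2.0], bounds=(0, np.inf))
print(f"k = {k:.2f}, Avrami exponent n = {n:.2f}")
```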
Roca-Pardiñas, Javier; Cadarso-Suárez, Carmen; Pardo-Vazquez, Jose L; Leboran, Victor; Molenberghs, Geert; Faes, Christel; Acuña, Carlos
2011-06-30
It is well established that neural activity is stochastically modulated over time. Therefore, direct comparisons across experimental conditions and determination of change points or maximum firing rates are not straightforward. This study sought to compare temporal firing probability curves that may vary across groups defined by different experimental conditions. Odds-ratio (OR) curves were used as a measure of comparison, and the main goal was to provide a global test to detect significant differences of such curves through the study of their derivatives. An algorithm is proposed that enables ORs based on generalized additive models, including factor-by-curve-type interactions to be flexibly estimated. Bootstrap methods were used to draw inferences from the derivatives curves, and binning techniques were applied to speed up computation in the estimation and testing processes. A simulation study was conducted to assess the validity of these bootstrap-based tests. This methodology was applied to study premotor ventral cortex neural activity associated with decision-making. The proposed statistical procedures proved very useful in revealing the neural activity correlates of decision-making in a visual discrimination task. Copyright © 2011 John Wiley & Sons, Ltd.
A Multi-Year Light Curve of Scorpius X-1 Based on CGRO BATSE Spectroscopy Detector Observations
NASA Technical Reports Server (NTRS)
McNamara, B. J.; Harrison, T. E.; Mason, P. A.; Templeton, M.; Heikkila, C. W.; Buckley, T.; Galvan, E.; Silva, A.; Harmon, B. A.
1998-01-01
A multi-year light curve of the low mass X-ray binary, Scorpius X-1, is constructed based on the Compton Gamma-ray Observatory (CGRO) Burst and Transient Source Experiment (BATSE) Spectroscopy Detector (SD) data in the nominal energy range of 10-20 keV. A detailed discussion is given of the reduction process of the BATSE/SD data. Corrections to the SD measurements are made for off-axis pointings, spectral and bandpass changes, and differences in the eight SD sensitivities. The resulting 4.4 year Sco X-1 SD light curve is characterized in terms of the time scales over which various types of emission changes occur. This light curve is then compared with Sco X-1 light curves obtained by Ariel 5, the BATSE Large Area Detectors (LADs), and the RXTE all-sky monitor (ASM). Coincident temporal coverage by the BATSE/SD and RXTE/ASM allows a direct comparison of the behavior of Sco X-1 over a range of high energies to be made. These light curves are then used to discuss model constraints on the Sco X-1 system.
Survival curves of Listeria monocytogenes in chorizos modeled with artificial neural networks.
Hajmeer, M; Basheer, I; Cliver, D O
2006-09-01
Using artificial neural networks (ANNs), a highly accurate model was developed to simulate survival curves of Listeria monocytogenes in chorizos as affected by the initial water activity (a(w0)) of the sausage formulation and the temperature (T) and air inflow velocity (F) at which the sausages are stored. The ANN-based survival model (R(2)=0.970) outperformed the regression-based cubic model (R(2)=0.851), and as such was used to derive other models (using regression) that allow prediction of the times needed to drop the count by 1, 2, 3, and 4 logs (i.e., nD-values, n=1, 2, 3, 4). The nD-value regression models almost perfectly predicted the various times derived from a number of simulated survival curves exhibiting a wide variety of operating conditions (R(2)=0.990-0.995). The nD-values were found to decrease with decreasing a(w0) and with increasing T and F. The influence of a(w0) on nD-values seems to become more significant above some critical value of a(w0), below which the variation is negligible (0.93 for the 1D-value, 0.90 for the 2D-value, and <0.85 for the 3D- and 4D-values).
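A sketch of the nD-value extraction step: interpolate the time at which a survival curve reaches an n-log reduction (the curve below is a simple synthetic stand-in for the ANN-simulated curves):

```python
import numpy as np

t = np.linspace(0, 60, 121)                 # storage time (days)
log_reduction = 0.002 * t**2 + 0.02 * t     # synthetic log10(N0/N) curve

def nD_value(n, t, log_red):
    """Interpolated time at which the count has dropped by n logs.
    log_red must be monotonically increasing."""
    return float(np.interp(n, log_red, t))

for n in (1, 2, 3, 4):
    print(f"{n}D-value = {nD_value(n, t, log_reduction):.1f} days")
```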
Development of p-y curves of laterally loaded piles in cohesionless soil.
Khari, Mahdy; Kassim, Khairul Anuar; Adnan, Azlan
2014-01-01
Research on damage to structures supported by deep foundations has been quite intensive in the past decade. Kinematic interaction in soil-pile systems is evaluated based on the p-y curve approach. Existing p-y curves consider the effects of relative density on soil-pile interaction in sandy soil, but the influence of the roughness of the pile wall surface on p-y curves has not been sufficiently emphasized. The presented study was performed to develop a series of p-y curves for single piles through comprehensive experimental investigations. Modification factors were studied, namely, the effects of relative density and of the roughness of the pile wall surface. The model tests were subjected to lateral load in Johor Bahru sand. The new p-y curves were evaluated based on the experimental data and were compared to the existing p-y curves. The soil-pile reaction across the range of relative densities (30% to 75%) increased by 40-95% for a smooth pile at small displacements, and by 90% at large displacements. For the rough pile, the ratio of dense to loose soil-pile reaction ranged from 2.0 to 3.0 from small to large displacements. Direct comparison of the developed p-y curves shows significant differences in magnitude and shape from the existing load-transfer curves. Good agreement with the experimental and design studies demonstrates the multidisciplinary applications of the present method.
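For context, a sketch of the widely used API-style hyperbolic-tangent p-y relation for sand, one of the existing load-transfer curve families against which new experimental curves are typically compared (parameter values are illustrative, and this is not the formulation developed in the study):

```python
import numpy as np

def py_sand(y, z, p_u, k, A=0.9):
    """Soil reaction p (kN/m) at depth z (m) for lateral displacement y (m).

    p = A * p_u * tanh(k * z * y / (A * p_u)), with p_u the ultimate
    resistance (kN/m), k the initial modulus of subgrade reaction (kN/m^3),
    and A an empirical loading factor.
    """
    return A * p_u * np.tanh(k * z * y / (A * p_u))

y = np.linspace(0.0, 0.05, 50)                 # lateral pile displacement (m)
p = py_sand(y, z=3.0, p_u=200.0, k=15000.0)    # illustrative parameters
```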
NASA Astrophysics Data System (ADS)
Jin, Dakai; Lu, Jia; Zhang, Xiaoliu; Chen, Cheng; Bai, ErWei; Saha, Punam K.
2017-03-01
Osteoporosis is associated with increased fracture risk. Recent advancement in the area of in vivo imaging allows segmentation of trabecular bone (TB) microstructures, which is a known key determinant of bone strength and fracture risk. An accurate biomechanical modelling of TB micro-architecture provides a comprehensive summary measure of bone strength and fracture risk. In this paper, a new direct TB biomechanical modelling method using nonlinear manifold-based volumetric reconstruction of the trabecular network is presented. It is accomplished in two sequential modules. The first module reconstructs a nonlinear manifold-based volumetric representation of TB networks from three-dimensional digital images. Specifically, it starts with the fuzzy digital segmentation of a TB network, and computes its surface and curve skeletons. An individual trabecula is identified as a topological segment in the curve skeleton. Using geometric analysis, smoothing and optimization techniques, the algorithm generates smooth, curved, and continuous representations of individual trabeculae glued at their junctions. Also, the method generates a geometrically consistent TB volume at junctions. In the second module, a direct computational biomechanical stress-strain analysis is applied on the reconstructed TB volume to predict mechanical measures. The accuracy of the method was examined using micro-CT imaging of cadaveric distal tibia specimens (N = 12). A high linear correlation (r = 0.95) between TB volume computed using the new manifold-modelling algorithm and that directly derived from the voxel-based micro-CT images was observed. Young's modulus (YM) was computed using direct mechanical analysis on the TB manifold model over a cubical volume of interest (VOI), and its correlation with the YM computed using micro-CT based conventional finite-element analysis over the same VOI was examined. A moderate linear correlation (r = 0.77) was observed between the two YM measures. These preliminary results show the accuracy of the new nonlinear manifold modelling algorithm for TB, and demonstrate the feasibility of a new direct mechanical stress-strain analysis on a nonlinear manifold model of a highly complex biological structure.
NASA Astrophysics Data System (ADS)
Joiner, D. A.; Stevenson, D. E.; Panoff, R. M.
2000-12-01
The Computational Science Reference Desk is an online tool designed to provide educators in math, physics, astronomy, biology, chemistry, and engineering with information on how to use computational science to enhance inquiry-based learning in the undergraduate and pre-college classroom. The Reference Desk features a showcase of original content exploration activities, including lesson plans and background materials; a catalog of websites which contain models, lesson plans, software, and instructional resources; and a forum to allow educators to communicate their ideas. Many of the recent advances in astronomy rely on the use of computer simulation, and tools are being developed by CSERD to allow students to experiment with some of the models that have guided scientific discovery. One of these models allows students to study how scientists use spectral information to determine the makeup of the interstellar medium by modeling the interstellar extinction curve using spherical grains of silicate, amorphous carbon, or graphite. Students can directly compare their model to the average interstellar extinction curve, and experiment with how small changes in their model alter the shape of the interstellar extinction curve. A simpler model allows students to visualize spatial relationships between the Earth, Moon, and Sun to understand the cause of the phases of the Moon. A report on the usefulness of these models in two classes, the Computational Astrophysics workshop at The Shodor Education Foundation and the Conceptual Astronomy class at the University of North Carolina at Greensboro, will be presented.
NASA Astrophysics Data System (ADS)
Mohymont, B.; Demarée, G. R.; Faka, D. N.
2004-05-01
The establishment of Intensity-Duration-Frequency (IDF) curves for precipitation remains a powerful tool in the risk analysis of natural hazards. Indeed, the IDF-curves allow for the estimation of the return period of an observed rainfall event or, conversely, of the rainfall amount corresponding to a given return period for different aggregation times. There is a high need for IDF-curves in the tropical region of Central Africa, but unfortunately adequate long-term data sets are frequently unavailable. The present paper assesses IDF-curves for precipitation for three stations in Central Africa. More physically based models for the IDF-curves are proposed. The methodology used here has been advanced by Koutsoyiannis et al. (1998), and an inter-station and inter-technique comparison is carried out. The IDF-curves for tropical Central Africa are an interesting tool to be used in sewer system design to combat the frequently occurring inundations in semi-urbanized and urbanized areas of the Kinshasa megapolis.
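A sketch of the general IDF form advanced by Koutsoyiannis et al. (1998), i(d, T) = a(T) / (d + theta)**eta, here with a simple power-law return-period term a(T) = lam * T**kappa; all parameter values are illustrative, not the fitted values for the Central African stations:

```python
import numpy as np

def idf_intensity(d_hours, T_years, lam=40.0, kappa=0.2, theta=0.1, eta=0.75):
    """Rainfall intensity (mm/h) for duration d (hours) and return period T (years)."""
    return lam * T_years**kappa / (d_hours + theta)**eta

durations = np.array([0.25, 0.5, 1, 2, 6, 12, 24])   # aggregation times (hours)
for T in (2, 10, 50):
    print(T, "yr:", np.round(idf_intensity(durations, T), 1))
```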
Hysteretic behavior of stage-discharge relationships in urban streams
NASA Astrophysics Data System (ADS)
Miller, A. J.; Lindner, G. A.
2009-12-01
Reliable stage-discharge relationships or rating curves are of critical importance for accurate calculation of streamflow and maintenance of long-term flow records. Urban streams offer particular challenges for the maintenance of accurate rating curves. It is often difficult or impossible to collect direct discharge measurements at high flows, many of which are generated by short-duration high-intensity summer thunderstorms, both because of dangerous conditions in the channel and because the stream rises and falls so rapidly that field crews cannot reach sites in time, and sometimes cannot make measurements rapidly enough to keep pace with changing water levels even when they are on site during a storm. Work in urban streams in the Baltimore metropolitan area has shown that projection of rating curves beyond the range of measured flows can lead to overestimation of flood peaks by as much as 100%, and these errors can only be corrected when adequate field data are available to support modeling efforts. Even moderate flows that are above safe wading depth and velocity may best be estimated using hydraulic models. Current research for NSF CNH project 0709659 includes the application of 2-d depth-averaged hydraulic models to match existing rating curves over a range of low to moderate flows and to extend rating curves for higher flows, based on field collection of high-water marks. Although it is generally assumed that stage-discharge relationships are single-valued, we find that modeling results in small urban streams often generate hysteretic relationships, with higher discharges on the rising limb of the hydrograph than on the falling limb. The difference between discharges for the same stage on the rising and falling limbs can be on the order of 20-30%, even for in-channel flows that are less than 1 m deep. As safety considerations dictate that it is preferable to make direct discharge measurements on the falling limb of the hydrograph, the higher direct measurements used in many rating curves probably have been collected on the falling limb and therefore may not capture the correct stage-discharge relationship for the rising limb. In some cases, model results selected only from the falling limb are able to match the existing rating curve very closely. Although hysteresis may be explained with reference to the innate properties of the flood wave, other factors also lead to hysteretic behavior. Downstream constrictions and obstructions associated with urban infrastructure may cause substantial backwater effects, particularly during flood flows. Flood conditions at tributary confluences also can exert a controlling influence upstream. Based on our results, we recommend developing separate rating curves for the rising and falling limbs at some sites, and developing a range of modeling scenarios for characterizing the range of potential uncertainty.
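A minimal sketch of that recommendation: fit separate power-law rating curves Q = a * (Z - Z0)**b to rising-limb and falling-limb observations (synthetic data with roughly 25% hysteresis):

```python
import numpy as np
from scipy.optimize import curve_fit

def rating(Z, a, b, Z0):
    return a * np.clip(Z - Z0, 1e-6, None) ** b

Z = np.linspace(0.3, 1.0, 25)                          # stage (m)
rng = np.random.default_rng(6)
Q_fall = rating(Z, 12.0, 1.8, 0.1) * rng.lognormal(0, 0.03, Z.size)
Q_rise = 1.25 * Q_fall                                 # higher discharge, rising limb

p_rise, _ = curve_fit(rating, Z, Q_rise, p0=[10.0, 1.5, 0.05])
p_fall, _ = curve_fit(rating, Z, Q_fall, p0=[10.0, 1.5, 0.05])
```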
Using long-term datasets to study exotic plant invasions on rangelands in the western United States
C. Morris; L. R. Morris; A. J. Leffler; C. D. Holifield Collins; A. D. Forman; M. A. Weltz; S. G. Kitchen
2013-01-01
Invasions by exotic species are generally described using a logistic growth curve divided into three phases: introduction, expansion and saturation. This model is constructed primarily from regional studies of plant invasions based on historical records and herbarium samples. The goal of this study is to compare invasion curves at the local scale to the logistic growth...
Fractal active contour model for segmenting the boundary of man-made target in nature scenes
NASA Astrophysics Data System (ADS)
Li, Min; Tang, Yandong; Wang, Lidi; Shi, Zelin
2006-02-01
In this paper, a novel geometric active contour model based on the fractal dimension feature is presented to extract the boundary of man-made targets in natural scenes. In order to suppress natural clutter, an adaptive weighting function is defined using the fractal dimension feature. The weighting function is then introduced into the geodesic active contour model to detect the boundary of the man-made target. A curve driven by the proposed model can evolve gradually from its initial position to the boundary of the man-made target without being disturbed by natural clutter, even if the initial curve is far away from the true boundary. Experimental results validate the effectiveness and feasibility of the model.
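A sketch of the fractal-dimension feature underlying such a weighting function: a box-counting dimension estimate for a binary map (the contour evolution itself is not reproduced, and the input is a random stand-in image):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate fractal dimension as the slope of log N(s) vs log(1/s)."""
    counts = []
    n = mask.shape[0]
    for s in sizes:
        # count boxes of side s containing at least one foreground pixel
        m = mask[:n - n % s, :n - n % s]
        boxed = m.reshape(m.shape[0] // s, s, -1, s).any(axis=(1, 3))
        counts.append(boxed.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(12)
mask = rng.random((128, 128)) < 0.05    # stand-in for a texture/edge map
print(box_counting_dimension(mask))
```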
Mabrouk, Rostom; Dubeau, François; Bentabet, Layachi
2013-01-01
Kinetic modeling of metabolic and physiologic cardiac processes in small animals requires an input function (IF) and tissue time-activity curves (TACs). In this paper, we present a mathematical method based on independent component analysis (ICA) to extract the IF and the myocardium's TACs directly from dynamic positron emission tomography (PET) images. The method assumes a super-Gaussian distribution model for the blood activity and a sub-Gaussian distribution model for the tissue activity. Our approach was applied to 22 PET measurement sets of small animals, obtained with the three most frequently used cardiac radiotracers, namely fluorodeoxyglucose ((18)F-FDG), [(13)N]-ammonia, and [(11)C]-acetate. Our study was extended to human PET measurements obtained with the Rubidium-82 ((82)Rb) radiotracer. The resolved mathematical IF values compare favorably to those derived from curves extracted from regions of interest (ROI), suggesting that the procedure presents a reliable alternative to serial blood sampling for small-animal cardiac PET studies.
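A hedged sketch of the unmixing step, using scikit-learn's FastICA on synthetic two-component voxel TACs; assigning components by the super-/sub-Gaussian distinction mirrors the distribution assumptions above:

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 60, 40)                 # frame mid-times (min)
blood = t * np.exp(-t / 3.0)               # sharp early peak (IF-like)
tissue = 1.0 - np.exp(-t / 15.0)           # slow uptake (tissue-like)

rng = np.random.default_rng(7)
mix = rng.uniform(0.2, 1.0, size=(500, 2)) # 500 voxels, 2 sources
X = mix @ np.vstack([blood, tissue])       # voxel time-activity curves
X = X + rng.normal(0, 0.01, X.shape)

ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X.T)                 # shape (frames, 2): recovered time courses
# The component whose amplitude distribution is more super-Gaussian
# (sharper peak) would be taken as the blood input function.
```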
NASA Astrophysics Data System (ADS)
Fakhari, Abbas; Bolster, Diogo
2017-04-01
We introduce a simple and efficient lattice Boltzmann method for immiscible multiphase flows, capable of handling large density and viscosity contrasts. The model is based on a diffuse-interface phase-field approach. Within this context we propose a new algorithm for specifying the three-phase contact angle on curved boundaries within the framework of structured Cartesian grids. The proposed method has superior computational accuracy compared with the common approach of approximating curved boundaries with stair cases. We test the model by applying it to four benchmark problems: (i) wetting and dewetting of a droplet on a flat surface and (ii) on a cylindrical surface, (iii) multiphase flow past a circular cylinder at an intermediate Reynolds number, and (iv) a droplet falling on hydrophilic and superhydrophobic circular cylinders under differing conditions. Where available, our results show good agreement with analytical solutions and/or existing experimental data, highlighting strengths of this new approach.
A trace map comparison algorithm for the discrete fracture network models of rock masses
NASA Astrophysics Data System (ADS)
Han, Shuai; Wang, Gang; Li, Mingchao
2018-06-01
Discrete fracture networks (DFN) are widely used to build refined geological models. However, validating whether a refined model matches reality is a crucial problem, as it determines whether the model can be used for analysis. Current validation methods include numerical validation and graphical validation. Graphical validation, which aims to estimate the similarity between a simulated trace map and the real trace map by visual observation, is subjective. In this paper, an algorithm for the graphical validation of DFN is set up. Four main indicators, including total gray, gray grade curve, characteristic direction, and gray density distribution curve, are presented to assess the similarity between two trace maps. A modified Radon transform and a loop cosine similarity are presented, based on the Radon transform and cosine similarity, respectively. In addition, the use of Bézier curves to reduce the edge effect is described. Finally, a case study shows that the new algorithm can effectively distinguish which simulated trace map is more similar to the real trace map.
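A sketch of the simplest similarity ingredient named above: compare gray-level histograms ("gray grade" signatures) of two trace maps by cosine similarity. The paper's modified Radon transform and loop cosine similarity are not reproduced, and the maps below are random stand-ins:

```python
import numpy as np

def gray_grade_curve(img, bins=32):
    """Histogram of pixel intensities, used as a gray-level signature."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return hist.astype(float)

def cosine_similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(8)
real_map = (rng.random((256, 256)) < 0.08).astype(float)   # stand-in trace maps
sim_map = (rng.random((256, 256)) < 0.10).astype(float)
print(cosine_similarity(gray_grade_curve(real_map), gray_grade_curve(sim_map)))
```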
NASA Astrophysics Data System (ADS)
Paris, Adrien; André Garambois, Pierre; Calmant, Stéphane; Paiva, Rodrigo; Walter, Collischonn; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Bonnet, Marie-Paule; Seyler, Frédérique; Monnier, Jérôme
2016-04-01
Estimating river discharge for ungauged river reaches from satellite measurements is not straightforward, given the nonlinearity of flow behavior with respect to measurable and non-measurable hydraulic parameters. In fact, current satellite datasets do not give access to key parameters such as river bed topography and roughness. A unique set of almost one thousand altimetry-based rating curves was built by fitting ENVISAT and Jason-2 water stages to discharges obtained from the MGB-IPH rainfall-runoff model in the Amazon basin. These rated discharges were successfully validated against simulated discharges (Ens = 0.70) and in-situ discharges (Ens = 0.71) and are not mission-dependent. The rating curve is written Q = a(Z-Z0)^b * sqrt(S), with Z the water surface elevation and S its slope gained from satellite altimetry, a and b the power-law coefficient and exponent, and Z0 the river bed elevation such that Q(Z0) = 0. For several river reaches in the Amazon basin where ADCP measurements are available, the Z0 values are fairly well validated, with a relative error lower than 10%. The present contribution aims at relating the identifiability and the physical meaning of a, b and Z0 under various hydraulic and geomorphologic conditions. Synthetic river bathymetries sampling a wide range of rivers and inflow discharges are used to perform twin experiments. A shallow water model is run to generate synthetic satellite observations, and rating curve parameters are then determined for each river section with an MCMC algorithm. The twin experiments show that a rating curve formulation including the water surface slope, i.e., closer to the Manning equation form, improves parameter identifiability. The compensation between parameters is limited, especially for reaches with little water surface variability. Rating curve parameters are analyzed for riffles and pools, for small to large rivers, and for different river slopes and cross-section shapes. The river bed elevation Z0 is systematically well identified, with relative errors on the order of a few percent. Eventually, these altimetry-based rating curves provide morphological parameters of river reaches that can be used as inputs to hydraulic models, and a priori information that could be useful for SWOT inversion algorithms.
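A minimal sketch of fitting the rating-curve form above to stage/slope/discharge triplets; the paper uses an MCMC algorithm, whereas this illustration uses least squares on synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

def rating(X, a, b, Z0):
    """Q = a * (Z - Z0)**b * sqrt(S), clipped to keep the base positive."""
    Z, S = X
    return a * np.clip(Z - Z0, 1e-6, None) ** b * np.sqrt(S)

rng = np.random.default_rng(9)
Z = rng.uniform(10.0, 18.0, 80)              # water surface elevation (m)
S = rng.uniform(2e-5, 8e-5, 80)              # water surface slope (m/m)
Q = rating((Z, S), 150.0, 1.6, 8.0) * rng.lognormal(0, 0.05, Z.size)

(a, b, Z0), _ = curve_fit(rating, (Z, S), Q, p0=[100.0, 1.5, 9.0])
print(f"a = {a:.0f}, b = {b:.2f}, river bed elevation Z0 = {Z0:.2f} m")
```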
Growth standard charts for monitoring bodyweight in dogs of different sizes.
Salt, Carina; Morris, Penelope J; German, Alexander J; Wilson, Derek; Lund, Elizabeth M; Cole, Tim J; Butterwick, Richard F
2017-01-01
Limited information is available on what constitutes optimal growth in dogs. The primary aim of this study was to develop evidence-based growth standards for dogs, using retrospective analysis of bodyweight and age data from >6 million young dogs attending a large corporate network of primary care veterinary hospitals across the USA. Electronic medical records were used to generate bodyweight data from immature client-owned dogs, that were healthy and had remained in ideal body condition throughout the first 3 years of life. Growth centile curves were constructed using Generalised Additive Models for Location, Shape and Scale. Curves were displayed graphically as centile charts covering the age range 12 weeks to 2 years. Over 100 growth charts were modelled, specific to different combinations of breed, sex and neuter status. Neutering before 37 weeks was associated with a slight upward shift in growth trajectory, whilst neutering after 37 weeks was associated with a slight downward shift in growth trajectory. However, these shifts were small in comparison to inter-individual variability amongst dogs, suggesting that separate curves for neutered dogs were not needed. Five bodyweight categories were created to cover breeds up to 40kg, using both visual assessment and hierarchical cluster analysis of breed-specific growth curves. For 20/24 of the individual breed centile curves, agreement with curves for the corresponding bodyweight categories was good. For the remaining 4 breed curves, occasional deviation across centile lines was observed, but overall agreement was acceptable. This suggested that growth could be described using size categories rather than requiring curves for specific breeds. In the current study, a series of evidence-based growth standards have been developed to facilitate charting of bodyweight in healthy dogs. Additional studies are required to validate these standards and create a clinical tool for growth monitoring in pet dogs.
Ground-Based Telescope Parametric Cost Model
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Rowell, Ginger Holmes
2004-01-01
A parametric cost model for ground-based telescopes is developed using multi-variable statistical analysis. The model includes both engineering and performance parameters. While diameter continues to be the dominant cost driver, other significant factors include primary mirror radius of curvature and diffraction-limited wavelength. The model includes an explicit factor for primary mirror segmentation and/or duplication (i.e., multi-telescope phased-array systems). Additionally, single-variable models based on aperture diameter are derived. This analysis indicates that recent mirror technology advances have indeed reduced the historical telescope cost curve.
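As a sketch of the single-variable case, a power-law cost model cost = k * D**x fitted as a line in log-log space (the data points are invented for illustration, not the study's telescope database):

```python
import numpy as np

D = np.array([2.4, 3.5, 4.2, 6.5, 8.1, 10.0])   # aperture diameter (m)
cost = np.array([12, 30, 45, 130, 230, 400])    # cost (arbitrary units)

x, log_k = np.polyfit(np.log(D), np.log(cost), 1)
print(f"cost ~ D**{x:.2f}")    # the exponent summarizes diameter-driven scaling
```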
Using the weighted area under the net benefit curve for decision curve analysis.
Talluri, Rajesh; Shete, Sanjay
2016-07-18
Risk prediction models have been proposed for various diseases and are being improved as new predictors are identified. A major challenge is to determine whether the newly discovered predictors improve risk prediction. Decision curve analysis has been proposed as an alternative to the area under the curve and net reclassification index to evaluate the performance of prediction models in clinical scenarios. The decision curve computed using the net benefit can evaluate the predictive performance of risk models at a given threshold probability or over a range of threshold probabilities. However, when the decision curves for 2 competing models cross in the range of interest, it is difficult to identify the best model, as there is no readily available summary measure for evaluating the predictive performance. The key deterrent to using simple measures such as the area under the net benefit curve is the assumption that the threshold probabilities are uniformly distributed among patients. We propose a novel measure for performing decision curve analysis. The approach estimates the distribution of threshold probabilities without the need for additional data. Using the estimated distribution of threshold probabilities, the weighted area under the net benefit curve serves as the summary measure to compare risk prediction models in a range of interest. We compared 3 different approaches: the standard method, the area under the net benefit curve, and the weighted area under the net benefit curve. Type 1 error and power comparisons demonstrate that the weighted area under the net benefit curve has higher power compared to the other methods. Several simulation studies are presented to demonstrate the improvement in model comparison using the weighted area under the net benefit curve compared to the standard method. The proposed measure improves decision curve analysis by using the weighted area under the curve and thereby improves the power of the decision curve analysis to compare risk prediction models in a clinical scenario.
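The net benefit in decision curve analysis has a standard closed form, NB(t) = TP/n - (FP/n) * t/(1-t). The sketch below computes it together with a weighted area under the net benefit curve; since the paper's estimator for the threshold distribution is not reproduced here, the weight is an assumed, user-supplied density.

```python
import numpy as np
from scipy import stats

def net_benefit(y, risk, t):
    """Standard decision-curve net benefit at threshold probability t."""
    treat = risk >= t
    n = y.size
    tp = np.sum(treat & (y == 1)) / n
    fp = np.sum(treat & (y == 0)) / n
    return tp - fp * t / (1.0 - t)

def weighted_area_nb(y, risk, t_grid, weight_pdf):
    """Weighted area under the net benefit curve over a range of interest.
    weight_pdf is an assumed density of threshold probabilities; the paper
    estimates this distribution from the data, which is not reproduced here."""
    nb = np.array([net_benefit(y, risk, t) for t in t_grid])
    w = weight_pdf(t_grid)
    w = w / np.trapz(w, t_grid)          # normalise over the range of interest
    return np.trapz(nb * w, t_grid)

# Toy usage with a Beta-shaped threshold distribution (all data synthetic)
rng = np.random.default_rng(1)
y = rng.binomial(1, 0.2, 1000)
risk = np.clip(0.2 + 0.25 * (y - 0.2) + rng.normal(0, 0.15, 1000), 0.01, 0.99)
t_grid = np.linspace(0.05, 0.5, 46)
wanb = weighted_area_nb(y, risk, t_grid, stats.beta(2, 8).pdf)
```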
Intensity - Duration - Frequency Curves for U.S. Cities in a Warming Climate
NASA Astrophysics Data System (ADS)
Ragno, Elisa; AghaKouchak, Amir; Love, Charlotte; Vahedifard, Farshid; Cheng, Linyin; Lima, Carlos
2017-04-01
Current infrastructure design procedures rely on the use of Intensity-Duration-Frequency (IDF) curves retrieved under the assumption of temporal stationarity, meaning that occurrences of extreme events are expected to be time invariant. However, numerous studies have observed more severe extreme events over time. Hence, the stationarity assumption for extreme analysis may not be appropriate in a warming climate. This issue raises concerns regarding the safety and resilience of infrastructure and natural slopes. Here we employ daily precipitation data from historical and projected (RCP 8.5) CMIP5 runs to investigate IDF curves of 14 urban areas across the United States. We first statistically assess changes in precipitation extremes using an energy-based test for equal distributions. Then, through a Bayesian inference approach for stationary and non-stationary extreme value analysis, we provide updated IDF curves based on future climate model projections. We show that, based on CMIP5 simulations, U.S. cities may experience extreme precipitation events up to 20% more intense and twice as frequent, relative to historical records, despite the expectation of unchanged annual mean precipitation.
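As a simplified, hedged illustration of how return levels behind IDF curves come out of annual maxima, the sketch below fits a stationary GEV with scipy and inverts it at standard return periods; the study itself uses Bayesian inference and non-stationary models, which this does not reproduce. Note scipy's shape convention c = -xi relative to the usual GEV parameterization.

```python
import numpy as np
from scipy import stats

# Hypothetical annual-maximum daily precipitation series (mm)
rng = np.random.default_rng(2)
annual_max = stats.genextreme.rvs(c=-0.1, loc=40, scale=12, size=60,
                                  random_state=rng)

# Stationary GEV fit (maximum likelihood)
c, loc, scale = stats.genextreme.fit(annual_max)

# Return levels: the quantile exceeded once every T years on average
T = np.array([2.0, 10.0, 25.0, 50.0, 100.0])
return_levels = stats.genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
```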
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Ben; He, Feng; Ouyang, Jiting, E-mail: jtouyang@bit.edu.cn
2015-12-15
Simulation work is very important for understanding the formation of self-organized discharge patterns. Previous studies have adapted various models from other systems to simulate discharge patterns, but most of these models are complicated and time-consuming. In this paper, we introduce a convenient phenomenological dynamic model based on the basic dynamic process of glow discharge and the voltage transfer curve (VTC) to study the dielectric barrier glow discharge (DBGD) pattern. The VTC is an important characteristic of DBGD, which plots the change of wall voltage after a discharge as a function of the initial total gap voltage. In the modeling, the combined effect of the discharge conditions is included in the VTC, and the activation-inhibition effect is expressed by a spatial interaction term. Besides, the model reduces the dimensionality of the system by considering only the integrated effect of current flow. All of this greatly facilitates the construction of the model. Numerical simulations turn out to be in good accordance with our previous fluid modeling and experimental results.
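A heavily hedged toy version of such a VTC-based map is sketched below: the wall voltage on a 1-D lattice is updated each half-cycle through an assumed piecewise-linear VTC plus a diffusive coupling standing in for the activation-inhibition term. All functional forms and constants are invented for illustration and are not the authors' model.

```python
import numpy as np

def vtc(v_gap, v_th=1.0, slope=0.8):
    """Toy voltage transfer curve: wall-voltage change after one discharge
    as a function of the initial total gap voltage (no discharge below v_th)."""
    over = np.abs(v_gap) - v_th
    return np.where(over > 0, slope * over * np.sign(v_gap), 0.0)

def half_cycle(wall, v_applied, k=0.1):
    """One discharge half-cycle on a 1-D lattice: VTC update plus a
    diffusive spatial-interaction term (activation-inhibition stand-in)."""
    gap = v_applied - wall
    lap = np.roll(wall, 1) + np.roll(wall, -1) - 2 * wall
    return wall + vtc(gap) + k * lap

wall = 0.01 * np.random.default_rng(3).standard_normal(256)
for cycle in range(200):            # alternate the polarity of the applied voltage
    wall = half_cycle(wall, +1.6)
    wall = half_cycle(wall, -1.6)
```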
Modeling of Non-isothermal Austenite Formation in Spring Steel
NASA Astrophysics Data System (ADS)
Huang, He; Wang, Baoyu; Tang, Xuefeng; Li, Junling
2017-12-01
The austenitization kinetics of spring steel 60Si2CrA play an important role in providing guidelines for industrial production. The dilatometric curves of 60Si2CrA steel were measured using a DIL805A dilatometer at heating rates of 0.3 K/s to 50 K/s (0.3 °C/s to 50 °C/s). Based on the dilatometric curves, a unified kinetics model using the internal state variable (ISV) method was derived to describe the non-isothermal austenitization kinetics of 60Si2CrA; the model covers both the incubation and transition periods. The material constants in the model were determined using a genetic algorithm-based optimization technique. Good agreement between predicted and experimental volume fractions of transformed austenite was obtained, indicating that the model is effective for describing the austenitization kinetics of 60Si2CrA steel. Compared with other modeling methods for austenitization kinetics, this ISV-based model has some advantages, such as a simple formulation and explicit physical meaning, and can probably be used in engineering practice.
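The paper's ISV formulation is not reproduced here; as a generic stand-in, the sketch below integrates a JMAK-type rate law along a linear heating path to show how the heating rate shifts the transformation. All material constants are hypothetical.

```python
import numpy as np

def austenite_fraction(heating_rate, T0=973.0, T_end=1273.0,
                       n=2.0, k0=5e7, Q=2.0e5, dt=0.01):
    """Integrate a JMAK-type rate law
    dx/dt = n k(T) (1-x) [-ln(1-x)]^((n-1)/n)
    along T(t) = T0 + heating_rate * t (all constants hypothetical)."""
    R = 8.314
    T, x = T0, 1e-8
    out = []
    while T < T_end and x < 0.999:
        k = k0 * np.exp(-Q / (R * T))
        x += n * k * (1 - x) * (-np.log(1 - x)) ** ((n - 1) / n) * dt
        x = min(x, 1 - 1e-12)
        T += heating_rate * dt
        out.append((T, x))
    return np.array(out)

curve_slow = austenite_fraction(0.3)    # 0.3 K/s
curve_fast = austenite_fraction(50.0)   # 50 K/s: transformation shifts to higher T
```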
Solute effect on basal and prismatic slip systems of Mg.
Moitra, Amitava; Kim, Seong-Gon; Horstemeyer, M F
2014-11-05
In an effort to design novel magnesium (Mg) alloys with high ductility, we present first-principles data based on density functional theory (DFT). DFT was employed to calculate generalized stacking fault energy curves, which can be used in the generalized Peierls-Nabarro (PN) model to study the energetics of basal and prismatic slip in Mg with and without solutes, and to calculate continuum-scale dislocation core widths, stacking fault widths and Peierls stresses. The generalized stacking fault energy curves for pure Mg agreed well with other DFT calculations. Solute effects on these curves were calculated for nine alloying elements, namely Al, Ca, Ce, Gd, Li, Si, Sn, Zn and Zr, which allowed the strength and ductility to be qualitatively estimated based on the basal dislocation properties. Based on our multiscale methodology, a suggestion has been made to improve Mg formability.
Persky, Adam M; Henry, Teague; Campbell, Ashley
2015-03-25
To examine factors that determine the interindividual variability of learning within a team-based learning environment. Students in a pharmacokinetics course were given 4 interim, low-stakes cumulative assessments throughout the semester and a cumulative final examination. Students' Myers-Briggs personality type was assessed, as well as their study skills, motivations, and attitudes towards team-learning. A latent curve model (LCM) was applied and various covariates were assessed to improve the regression model. A quadratic LCM was applied for the first 4 assessments to predict final examination performance. None of the covariates examined significantly impacted the regression model fit except metacognitive self-regulation, which explained some of the variability in the rate of learning. There were some correlations between personality type and attitudes towards team learning, with introverts having a lower opinion of team-learning than extroverts. The LCM could readily describe the learning curve. Extroverted and introverted personality types had the same learning performance even though preference for team-learning was lower in introverts. Other personality traits, study skills, or practice did not significantly contribute to the learning variability in this course.
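Latent curve models are usually fitted in SEM software; for the quadratic case described above, a closely related mixed-effects growth model can be sketched with statsmodels, as below. The synthetic data and effect sizes are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: 4 interim assessments per student
rng = np.random.default_rng(4)
rows = []
for s in range(120):
    intercept = rng.normal(60, 8)
    slope = rng.normal(8, 2)
    for t in range(4):
        score = intercept + slope * t - 0.8 * t**2 + rng.normal(0, 3)
        rows.append({"student": s, "time": t, "score": score})
df = pd.DataFrame(rows)

# Quadratic growth curve: random intercept and linear slope per student,
# fixed quadratic term (a mixed-model analogue of the quadratic LCM)
model = smf.mixedlm("score ~ time + I(time**2)", df,
                    groups=df["student"], re_formula="~time")
result = model.fit()
```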
Meertens, Linda J E; van Montfort, Pim; Scheepers, Hubertina C J; van Kuijk, Sander M J; Aardenburg, Robert; Langenveld, Josje; van Dooren, Ivo M A; Zwaan, Iris M; Spaanderman, Marc E A; Smits, Luc J M
2018-04-17
Prediction models may contribute to personalized risk-based management of women at high risk of spontaneous preterm delivery. Although prediction models are published frequently, often with promising results, external validation is generally lacking. We performed a systematic review of prediction models for the risk of spontaneous preterm birth based on routine clinical parameters. Additionally, we externally validated and evaluated the clinical potential of the models. Prediction models based on routinely collected maternal parameters obtainable during the first 16 weeks of gestation were eligible for selection. Risk of bias was assessed according to the CHARMS guidelines. We validated the selected models in a Dutch multicenter prospective cohort study comprising 2614 unselected pregnant women. Information on predictors was obtained by a web-based questionnaire. Predictive performance of the models was quantified by the area under the receiver operating characteristic curve (AUC) and calibration plots for the outcomes spontaneous preterm birth <37 weeks and <34 weeks of gestation. Clinical value was evaluated by means of decision curve analysis and by calculating classification accuracy for different risk thresholds. Four studies describing five prediction models fulfilled the eligibility criteria. Risk of bias assessment revealed a moderate to high risk of bias in three studies. The AUC of the models ranged from 0.54 to 0.67 and from 0.56 to 0.70 for the outcomes spontaneous preterm birth <37 weeks and <34 weeks of gestation, respectively. A subanalysis showed that the models discriminated poorly (AUC 0.51-0.56) for nulliparous women. Although we recalibrated the models, two models retained evidence of overfitting. The decision curve analysis showed low clinical benefit for the best performing models. This review revealed several reporting and methodological shortcomings of published prediction models for spontaneous preterm birth. Our external validation study indicated that none of the models had the ability to predict spontaneous preterm birth adequately in our population. Further improvement of prediction models, using recent knowledge about both model development and potential risk factors, is necessary to provide added value in personalized risk assessment of spontaneous preterm birth. © 2018 The Authors Acta Obstetricia et Gynecologica Scandinavica published by John Wiley & Sons Ltd on behalf of Nordic Federation of Societies of Obstetrics and Gynecology (NFOG).
Quantifying the Uncertainty in Discharge Data Using Hydraulic Knowledge and Uncertain Gaugings
NASA Astrophysics Data System (ADS)
Renard, B.; Le Coz, J.; Bonnifait, L.; Branger, F.; Le Boursicaud, R.; Horner, I.; Mansanarez, V.; Lang, M.
2014-12-01
River discharge is a crucial variable for Hydrology: as the output variable of most hydrologic models, it is used for sensitivity analyses, model structure identification, parameter estimation, data assimilation, prediction, etc. A major difficulty stems from the fact that river discharge is not measured continuously. Instead, discharge time series used by hydrologists are usually based on simple stage-discharge relations (rating curves) calibrated using a set of direct stage-discharge measurements (gaugings). In this presentation, we present a Bayesian approach to build such hydrometric rating curves, to estimate the associated uncertainty and to propagate this uncertainty to discharge time series. The three main steps of this approach are described: (1) Hydraulic analysis: identification of the hydraulic controls that govern the stage-discharge relation, identification of the rating curve equation and specification of prior distributions for the rating curve parameters; (2) Rating curve estimation: Bayesian inference of the rating curve parameters, accounting for the individual uncertainties of available gaugings, which often differ according to the discharge measurement procedure and the flow conditions; (3) Uncertainty propagation: quantification of the uncertainty in discharge time series, accounting for both the rating curve uncertainties and the uncertainty of recorded stage values. In addition, we also discuss current research activities, including the treatment of non-univocal stage-discharge relationships (e.g. due to hydraulic hysteresis, vegetation growth, sudden change of the geometry of the section, etc.).
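Step (2) performs full Bayesian inference; as a much simpler stand-in for fitting a single-control rating curve to uncertain gaugings, the sketch below does weighted least squares on the conventional power law Q = a (h - h0)^b, with hypothetical gaugings and per-gauging uncertainties.

```python
import numpy as np
from scipy.optimize import curve_fit

def rating_curve(h, a, h0, b):
    """Single hydraulic control: Q = a * (h - h0)^b for h > h0."""
    return a * np.clip(h - h0, 1e-6, None) ** b

# Hypothetical gaugings: stage h (m), discharge Q (m3/s), and per-gauging
# uncertainty (larger for high-flow measurements, as the abstract notes)
h = np.array([0.42, 0.61, 0.88, 1.30, 1.84, 2.47])
q = np.array([1.1, 3.2, 7.4, 16.5, 34.0, 68.0])
sigma = q * np.array([0.05, 0.05, 0.07, 0.07, 0.10, 0.15])

popt, pcov = curve_fit(rating_curve, h, q, p0=(20.0, 0.2, 1.7),
                       sigma=sigma, absolute_sigma=True, maxfev=20000)
a, h0, b = popt   # point estimates; the Bayesian approach adds full posteriors
```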
New well testing applications of the pressure derivative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Onur, M.
1989-01-01
This work presents new derivative type curves based on a new derivative group, which is equal to the dimensionless pressure group divided by its logarithmic derivative with respect to the dimensionless time group. One major advantage of these type curves is that the type-curve match of field pressure/pressure-derivative data with the new derivative type curves is accomplished by moving the field data plot in only the horizontal direction. This type-curve match fixes time match-point values. The pressure change versus time data is then matched with the dimensionless pressure solution to determine match-point values. Well/reservoir parameters can then be estimated in the standard way. This two-step type-curve matching procedure increases the likelihood of obtaining a unique match. Moreover, the unique correspondence between the ordinate of the field data plot and the new derivative type curves should prove useful in determining whether given field data actually represent the well/reservoir model assumed by a selected type-curve solution. It is also shown that the basic idea used in constructing the type curves can be used to ensure that proper semilog straight lines are chosen when analyzing pressure data by semilog methods. Analysis of both drawdown and buildup data is considered, and actual field cases are analyzed using the new derivative type curves and the semilog identification method. This work also presents new methods based on the pressure derivative to analyze buildup data obtained at a well (fractured or unfractured) produced to pseudosteady state prior to shut-in. By using a method of analysis based on the pressure derivative, it is shown that a well's drainage area at the instant of shut-in and the flow capacity can be computed directly from buildup data even in cases where conventional semilog straight lines are not well defined.
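The derivative group described above is the dimensionless pressure divided by its logarithmic time derivative; it is straightforward to compute numerically, as in the hedged sketch below, which uses a generic Bourdet-style derivative and the late-time line-source (Theis) solution as a stand-in, not the paper's implementation.

```python
import numpy as np

def log_derivative(t, p):
    """Logarithmic (Bourdet-style) derivative dp / d(ln t)."""
    return np.gradient(p, np.log(t))

# Late-time line-source (Theis) drawdown in dimensionless form
tD = np.logspace(1, 6, 300)                  # dimensionless time
pD = 0.5 * (np.log(tD) + 0.80907)            # dimensionless pressure
group = pD / log_derivative(tD, pD)          # ratio used by the new type curves:
                                             # matching needs horizontal shifts only
```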
An Elliptic Curve Based Schnorr Cloud Security Model in Distributed Environment
Muthurajan, Vinothkumar; Narayanasamy, Balaji
2016-01-01
Cloud computing requires security upgrades in data transmission approaches. In general, key-based encryption/decryption (symmetric and asymmetric) mechanisms ensure secure data transfer between devices. Symmetric key mechanisms (pseudorandom functions) provide a lower protection level compared to asymmetric key (RSA, AES, and ECC) schemes. The presence of expired content and irrelevant resources can lead to unauthorized data access. This paper investigates how integrity and secure data transfer are improved based on the elliptic curve-based Schnorr scheme. This paper proposes a virtual machine based cloud model with a Hybrid Cloud Security Algorithm (HCSA) to remove expired content. The HCSA-based auditing improves malicious activity prediction during data transfer. Duplication in the cloud server degrades the performance of EC-Schnorr based encryption schemes. This paper utilizes the Bloom filter concept to avoid cloud server duplication. The combination of EC-Schnorr and the Bloom filter efficiently improves the security performance. The comparative analysis between the proposed HCSA and the existing Distributed Hash Table (DHT) regarding execution time, computational overhead, and auditing time with auditing requests and servers confirms the effectiveness of HCSA in the cloud security model creation. PMID:26981584
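The Bloom filter used here for duplicate avoidance is a standard probabilistic set-membership structure; a generic sketch (not the paper's implementation) follows. False positives are possible, false negatives are not.

```python
import hashlib

class BloomFilter:
    """Generic Bloom filter for duplicate detection (m bits, k hash functions)."""
    def __init__(self, m=8192, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _indices(self, item):
        # Derive k indices by salting a cryptographic hash
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for j in self._indices(item):
            self.bits[j // 8] |= 1 << (j % 8)

    def __contains__(self, item):
        return all(self.bits[j // 8] & (1 << (j % 8)) for j in self._indices(item))

store = BloomFilter()
store.add("block-42-digest")
assert "block-42-digest" in store   # rare false positives, never false negatives
```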
Nonlinear elasticity in resonance experiments
NASA Astrophysics Data System (ADS)
Li, Xun; Sens-Schönfelder, Christoph; Snieder, Roel
2018-04-01
Resonant bar experiments have revealed that dynamic deformation induces nonlinearity in rocks. These experiments produce resonance curves that represent the response amplitude as a function of the driving frequency. We propose a model to reproduce the resonance curves with observed features that include (a) the log-time recovery of the resonant frequency after the deformation ends (slow dynamics), (b) the asymmetry in the direction of the driving frequency, (c) the difference between resonance curves with the driving frequency that is swept upward and downward, and (d) the presence of a "cliff" segment to the left of the resonant peak under the condition of strong nonlinearity. The model is based on a feedback cycle where the effect of softening (nonlinearity) feeds back to the deformation. This model provides a unified interpretation of both the nonlinearity and slow dynamics in resonance experiments. We further show that the asymmetry of the resonance curve is caused by the softening, which is documented by the decrease of the resonant frequency during the deformation; the cliff segment of the resonance curve is linked to a bifurcation that involves a steep change of the response amplitude when the driving frequency is changed. With weak nonlinearity, the difference between the upward- and downward-sweeping curves depends on slow dynamics; a sufficiently slow frequency sweep eliminates this up-down difference. With strong nonlinearity, the up-down difference results from both the slow dynamics and bifurcation; however, the presence of the bifurcation maintains the respective part of the up-down difference, regardless of the sweep rate.
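A minimal sketch of the feedback cycle is given below: the amplitude softens the resonant frequency, which feeds back on the amplitude through a Lorentzian response, and carrying the amplitude along a frequency sweep lets upward and downward sweeps differ near the peak. All constants are toy values, not fitted to any experiment, and slow dynamics (log-time recovery) is not included.

```python
import numpy as np

def steady_amplitude(f_drive, A0=0.0, drive=0.05, f0=100.0, Q=80.0,
                     alpha=0.003, n_iter=400):
    """Fixed-point iteration of the feedback cycle: the response amplitude
    softens the resonant frequency, which in turn sets the amplitude."""
    A = A0
    for _ in range(n_iter):
        f_res = f0 * (1.0 - alpha * A)          # softening with amplitude
        lorentz = 1.0 / np.sqrt((1.0 - (f_drive / f_res) ** 2) ** 2
                                + (f_drive / (Q * f_res)) ** 2)
        A = 0.8 * A + 0.2 * drive * lorentz     # under-relaxation for stability
    return A

def sweep(freqs):
    """Carry the amplitude along the sweep so hysteresis can show up."""
    A, out = 0.0, []
    for f in freqs:
        A = steady_amplitude(f, A0=A)
        out.append(A)
    return np.array(out)

freqs = np.linspace(95.0, 102.0, 400)
curve_up = sweep(freqs)             # upward sweep
curve_down = sweep(freqs[::-1])     # downward sweep; differs near the peak if bistable
```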
Seasonal variations of Manning's coefficient depending on vegetation conditions in Tärnsjö, Sweden
NASA Astrophysics Data System (ADS)
Plakane, Rūta; Di Baldassarre, Giuliano; Okoli, Kenechukwu
2017-04-01
Hydrological modelling and water resources management require observations of high and low river flows. To estimate them, rating curves based on the characteristics of the river channel and floodplain are often used. Yet, multiple factors can cause uncertainties in rating curves, one of them being the variability of the Manning's roughness coefficient due to seasonal changes in vegetation. Determining this uncertainty has been a challenge: depending on the vegetation conditions in a stream, values can temporarily deviate substantially from the calibrated rating curve, underscoring the importance of understanding changes in the Manning's roughness coefficient. Examining the aquatic vegetation on the site throughout different seasonal conditions allows one to observe changes within the channel. Because of cyclical changes in the Manning's roughness coefficient, different discharges may correspond to the same stage. In this context, we present a combination of field work and modelling to investigate the variation of the rating curve due to vegetation changes in a Swedish stream.
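The sensitivity at issue follows directly from Manning's equation, Q = (1/n) A R^(2/3) S^(1/2). The sketch below holds the geometry (i.e., the stage) fixed and swaps a clean-channel n for a vegetated-season n; the geometry and roughness values are hypothetical but within commonly tabulated ranges.

```python
def manning_discharge(n, area, hydraulic_radius, slope):
    """Manning's equation (SI units): Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    return area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5 / n

# Same stage, different seasons: hypothetical channel geometry, roughness
# values taken from typical tabulated ranges (clean vs. heavily vegetated)
q_winter = manning_discharge(0.035, area=12.0, hydraulic_radius=1.1, slope=0.001)
q_summer = manning_discharge(0.065, area=12.0, hydraulic_radius=1.1, slope=0.001)
ratio = q_summer / q_winter   # ~0.54: the same stage carries roughly half the flow
```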
The spectrum of a vertex model and related spin one chain sitting in a genus five curve
NASA Astrophysics Data System (ADS)
Martins, M. J.
2017-11-01
We derive the transfer matrix eigenvalues of a three-state vertex model whose weights are based on an R-matrix not of difference form, with spectral parameters lying on a genus-five curve. We show that the basic building blocks for both the transfer matrix eigenvalues and the Bethe equations can be expressed in terms of meromorphic functions on an elliptic curve. We discuss the properties of an underlying spin-one chain originating from a particular choice of the R-matrix second spectral parameter. We present numerical and analytical evidence that the respective low-energy excitations can be gapped or massless depending on the strength of the interaction coupling. In the massive phase we provide analytical and numerical evidence in favor of an exact expression for the lowest energy gap. We point out that the critical point separating these two distinct physical regimes coincides with the one at which the weight geometry degenerates into a union of genus-one curves.
Study on creep behavior of Grade 91 heat-resistant steel using theta projection method
NASA Astrophysics Data System (ADS)
Ren, Facai; Tang, Xiaoying
2017-10-01
The creep behavior of Grade 91 heat-resistant steel used for steam coolers was characterized using the theta projection method. Creep tests were conducted at a temperature of 923 K under stresses ranging from 100 to 150 MPa. Based on the creep curve results, four theta parameters were established using a nonlinear least-squares fitting method. The four theta parameters showed good linearity as a function of stress. The predicted curves coincided well with the experimental data, and creep curves were also modeled down to the low stress level of 60 MPa.
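The theta projection method represents the creep curve as strain = th1*(1 - exp(-th2*t)) + th3*(exp(th4*t) - 1), with the first term capturing decaying primary creep and the second accelerating tertiary creep. The sketch below fits the four theta parameters by nonlinear least squares on synthetic data; the test conditions and parameter values are hypothetical, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

def theta_projection(t, th1, th2, th3, th4):
    """Theta projection creep curve: primary (decaying) + tertiary (accelerating)."""
    return th1 * (1 - np.exp(-th2 * t)) + th3 * (np.exp(th4 * t) - 1)

# Hypothetical creep test at fixed temperature and stress: time (h), strain (-)
t = np.linspace(0, 1500, 80)
rng = np.random.default_rng(5)
strain = theta_projection(t, 0.008, 0.01, 0.001, 0.003) + rng.normal(0, 1e-4, t.size)

theta, _ = curve_fit(theta_projection, t, strain,
                     p0=(0.01, 0.01, 0.001, 0.001), maxfev=50000)
```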
Bayesian analysis of stage-fall-discharge rating curves and their uncertainties
NASA Astrophysics Data System (ADS)
Mansanarez, Valentin; Le Coz, Jérôme; Renard, Benjamin; Lang, Michel; Pierrefeu, Gilles; Le Boursicaud, Raphaël; Pobanz, Karine
2016-04-01
Stage-fall-discharge (SFD) rating curves are traditionally used to compute streamflow records at sites where the energy slope of the flow is variable due to variable backwater effects. Building on existing Bayesian approaches, we introduce an original hydraulics-based method for developing SFD rating curves used at twin gauge stations and estimating their uncertainties. Conventional power functions for channel and section controls are used, and the transition to a backwater-affected channel control is computed based on a continuity condition, solved either analytically or numerically. The difference between the reference levels at the two stations is estimated as another uncertain parameter of the SFD model. The method proposed in this presentation incorporates both hydraulic knowledge (equations of channel or section controls) and the information available in the stage-fall-discharge observations (gauging data). The obtained total uncertainty combines the parametric uncertainty and the remnant uncertainty related to the rating curve model. This method provides a direct estimation of the physical inputs of the rating curve (roughness, width, bed slope, distance between twin gauges, etc.). The performance of the new method is tested on an application case affected by the variable backwater of a run-of-the-river dam: the Rhône river at Valence, France. In particular, a sensitivity analysis to the prior information and to the gauging dataset is performed. At that site, the stage-fall-discharge domain is well documented, with gaugings conducted over a range of backwater-affected and unaffected conditions. The performance of the new model was deemed to be satisfactory. Notably, the transition to uniform flow when the overall range of the auxiliary stage is gauged is correctly simulated. The resulting curves are in good agreement with the observations (gaugings) and their uncertainty envelopes are acceptable for computing streamflow records. Similar conclusions were drawn from applications to other similar sites.
The Variance Reaction Time Model
ERIC Educational Resources Information Center
Sikstrom, Sverker
2004-01-01
The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two layer neural network where one layer represents items and one layer…
Talmud, Philippa J; Hingorani, Aroon D; Cooper, Jackie A; Marmot, Michael G; Brunner, Eric J; Kumari, Meena; Kivimäki, Mika; Humphries, Steve E
2010-01-14
To assess the performance of a panel of common single nucleotide polymorphisms (genotypes) associated with type 2 diabetes in distinguishing incident cases of future type 2 diabetes (discrimination), and to examine the effect of adding genetic information to previously validated non-genetic (phenotype-based) models developed to estimate the absolute risk of type 2 diabetes. Workplace-based prospective cohort study with three 5-yearly medical screenings. 5535 initially healthy people (mean age 49 years; 33% women), of whom 302 developed new-onset type 2 diabetes over 10 years. Non-genetic variables were included in two established risk models, the Cambridge type 2 diabetes risk score (age, sex, drug treatment, family history of type 2 diabetes, body mass index, smoking status) and the Framingham offspring study type 2 diabetes risk score (age, sex, parental history of type 2 diabetes, body mass index, high density lipoprotein cholesterol, triglycerides, fasting glucose), alongside 20 single nucleotide polymorphisms associated with susceptibility to type 2 diabetes. Cases of incident type 2 diabetes were defined on the basis of a standard oral glucose tolerance test, self-report of a doctor's diagnosis, or the use of anti-diabetic drugs. A genetic score based on the number of risk alleles carried (range 0-40; area under receiver operating characteristic curve 0.54, 95% confidence interval 0.50 to 0.58) and a genetic risk function in which carriage of risk alleles was weighted according to the summary odds ratios of their effect from meta-analyses of genetic studies (area under receiver operating characteristic curve 0.55, 0.51 to 0.59) did not effectively discriminate cases of diabetes. The Cambridge risk score (area under curve 0.72, 0.69 to 0.76) and the Framingham offspring risk score (area under curve 0.78, 0.75 to 0.82) led to better discrimination of cases than did genotype-based tests. Adding genetic information to phenotype-based risk models did not improve discrimination and provided only a small improvement in model calibration and a modest net reclassification improvement of about 5% when added to the Cambridge risk score, but not when added to the Framingham offspring risk score. The phenotype-based risk models provided greater discrimination for type 2 diabetes than did models based on 20 common independently inherited diabetes risk alleles. The addition of genotypes to phenotype-based risk models produced only minimal improvement in accuracy of risk estimation assessed by recalibration and, at best, a minor net reclassification improvement. The major translational application of the currently known common, small-effect genetic variants influencing susceptibility to type 2 diabetes is likely to come from the insight they provide on causes of disease and potential therapeutic targets.
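The two genetic scores described above, an unweighted allele count and a log-odds-weighted risk function, are simple to construct. The sketch below builds both on synthetic genotypes and compares their AUCs; all effect sizes and prevalences are invented to mimic the reported weak discrimination, not taken from the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical cohort: 20 SNPs coded 0/1/2 risk alleles, per-SNP log odds
# ratios standing in for meta-analysis weights, and an incident outcome
rng = np.random.default_rng(6)
G = rng.integers(0, 3, size=(5000, 20))
log_or = np.log(rng.uniform(1.05, 1.25, 20))     # small per-allele effects

count_score = G.sum(axis=1)                      # unweighted allele count (0-40)
weighted_score = G @ log_or                      # weighted genetic risk function

# Outcome only weakly linked to the score, mimicking AUCs near 0.54-0.55
lin = -3.0 + 3.0 * (weighted_score - weighted_score.mean())
y = rng.binomial(1, 1 / (1 + np.exp(-lin)))
auc_count = roc_auc_score(y, count_score)
auc_weighted = roc_auc_score(y, weighted_score)
```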
Model-based POD study of manual ultrasound inspection and sensitivity analysis using metamodel
NASA Astrophysics Data System (ADS)
Ribay, Guillemette; Artusi, Xavier; Jenson, Frédéric; Reece, Christopher; Lhuillier, Pierre-Emile
2016-02-01
The reliability of NDE can be quantified using the Probability of Detection (POD) approach. Former studies have shown the potential of the model-assisted POD (MAPOD) approach to replace expensive experimental determination of POD curves. In this paper, we make use of the CIVA software to determine POD curves for a manual ultrasonic inspection of a heavy component, for which a whole experimental POD campaign was not available. The influential parameters were determined by expert analysis. The semi-analytical models used in CIVA for wave propagation and beam-defect interaction have been validated in the range of variation of the influential parameters by comparison with finite element modelling (Athena). The POD curves are computed for "hit/miss" and "â versus a" analyses. The validity of the Berens hypotheses is evaluated with statistical tools. A sensitivity study is performed to measure the relative influence of parameters on the defect response amplitude variance, using the Sobol sensitivity index. A metamodel is also built to reduce computing cost and enhance the precision of the estimated index.
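The "â versus a" analysis follows the standard Berens model: ln(â) is regressed on ln(a), and POD(a) is a normal CDF of the margin over a detection threshold. A minimal sketch, with hypothetical amplitudes and threshold (not CIVA outputs), follows.

```python
import numpy as np
from scipy import stats

# Hypothetical signal responses (ahat) versus defect size a
a = np.array([0.5, 0.8, 1.0, 1.5, 2.0, 3.0, 4.0])            # mm
ahat = np.array([0.06, 0.11, 0.13, 0.22, 0.30, 0.46, 0.61])  # amplitude, a.u.

# Berens model: ln(ahat) = b0 + b1 ln(a) + eps, eps ~ N(0, tau^2)
fit = stats.linregress(np.log(a), np.log(ahat))
resid = np.log(ahat) - (fit.intercept + fit.slope * np.log(a))
tau = resid.std(ddof=2)

a_th = 0.15                      # hypothetical detection threshold on ahat
a_grid = np.linspace(0.3, 5.0, 200)
pod = stats.norm.cdf((fit.intercept + fit.slope * np.log(a_grid)
                      - np.log(a_th)) / tau)   # POD(a) curve
```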
NASA Astrophysics Data System (ADS)
Le Coz, Jérôme; Renard, Benjamin; Bonnifait, Laurent; Branger, Flora; Le Boursicaud, Raphaël; Horner, Ivan; Mansanarez, Valentin; Lang, Michel; Vigneau, Sylvain
2015-04-01
River discharge is a crucial variable for Hydrology: as the output variable of most hydrologic models, it is used for sensitivity analyses, model structure identification, parameter estimation, data assimilation, prediction, etc. A major difficulty stems from the fact that river discharge is not measured continuously. Instead, discharge time series used by hydrologists are usually based on simple stage-discharge relations (rating curves) calibrated using a set of direct stage-discharge measurements (gaugings). In this presentation, we present a Bayesian approach (cf. Le Coz et al., 2014) to build such hydrometric rating curves, to estimate the associated uncertainty and to propagate this uncertainty to discharge time series. The three main steps of this approach are described: (1) Hydraulic analysis: identification of the hydraulic controls that govern the stage-discharge relation, identification of the rating curve equation and specification of prior distributions for the rating curve parameters; (2) Rating curve estimation: Bayesian inference of the rating curve parameters, accounting for the individual uncertainties of available gaugings, which often differ according to the discharge measurement procedure and the flow conditions; (3) Uncertainty propagation: quantification of the uncertainty in discharge time series, accounting for both the rating curve uncertainties and the uncertainty of recorded stage values. The rating curve uncertainties combine the parametric uncertainties and the remnant uncertainties that reflect the limited accuracy of the mathematical model used to simulate the physical stage-discharge relation. In addition, we also discuss current research activities, including the treatment of non-univocal stage-discharge relationships (e.g. due to hydraulic hysteresis, vegetation growth, sudden change of the geometry of the section, etc.). An operational version of the BaRatin software and its graphical interface are made available free of charge on request to the authors. J. Le Coz, B. Renard, L. Bonnifait, F. Branger, R. Le Boursicaud (2014). Combining hydraulic knowledge and uncertain gaugings in the estimation of hydrometric rating curves: a Bayesian approach, Journal of Hydrology, 509, 573-587.
Modeling of short fiber reinforced injection moulded composite
NASA Astrophysics Data System (ADS)
Kulkarni, A.; Aswini, N.; Dandekar, C. R.; Makhe, S.
2012-09-01
A micromechanics-based finite element model (FEM) is developed to facilitate the design of a new production-quality fiber-reinforced plastic injection molded part. The composite part under study is composed of a polyetheretherketone (PEEK) matrix reinforced with 30% by volume fraction of short carbon fibers. The constitutive material models are obtained using micromechanics-based homogenization theories. The analysis is carried out by coupling two commercial codes, Moldflow and ANSYS. Moldflow is used to predict the fiber orientation by considering the flow kinetics and molding parameters. Material models are inputted into ANSYS according to the predicted fiber orientation, and the structural analysis is carried out. This coupling enables the analysis of short-fiber-reinforced injection-moulded composite parts. The load-deflection curve is obtained based on three constitutive material models, namely isotropy, transverse isotropy and orthotropy. Average values of the predicted quantities are compared to experimental results, obtaining a good correlation. In this manner, the coupled Moldflow-ANSYS model successfully predicts the load-deflection curve of a composite injection molded part.
Measurement of the Rate of Stellar Tidal Disruption Flares
NASA Astrophysics Data System (ADS)
van Velzen, Sjoert; Farrar, Glennys R.
2014-09-01
We report an observational estimate of the rate of stellar tidal disruption flares (TDFs) in inactive galaxies based on a successful search for these events among transients in galaxies using archival Sloan Digital Sky Survey (SDSS) multi-epoch imaging data (Stripe 82). This search yielded 186 nuclear flares in galaxies, 2 of which are excellent TDF candidates. Because of the systematic nature of the search, the very large number of galaxies, the long time of observation, and the fact that non-TDFs were excluded without resorting to assumptions about TDF characteristics, this study provides an unparalleled opportunity to measure the TDF rate. To compute the rate of optical stellar tidal disruption events, we simulate our entire pipeline to obtain the efficiency of detection. The rate depends on the light curves of TDFs, which are presently still poorly constrained. Using only the observed part of the SDSS light curves gives a model-independent upper limit to the optical TDF rate, $\dot{N} < 2 \times 10^{-4}\,\mathrm{yr}^{-1}\,\mathrm{galaxy}^{-1}$ (90% CL), under the assumption that the SDSS TDFs are representative examples. We develop three empirical models of the light curves based on the two SDSS light curves and two more recent and better-sampled Pan-STARRS TDF light curves, leading to our best estimate of the rate: $\dot{N}_{\mathrm{TDF}} = (1.5\text{--}2.0)^{+2.7}_{-1.3} \times 10^{-5}\,\mathrm{yr}^{-1}\,\mathrm{galaxy}^{-1}$. We explore the modeling uncertainties by considering two theoretically motivated light curve models, as well as two different relationships between black hole mass and galaxy luminosity, and two different treatments of the cutoff in the visibility of TDFs at large $M_{\mathrm{BH}}$. From this we conclude that these sources of uncertainty are not significantly larger than the statistical ones. Our results are applicable for galaxies hosting black holes with mass in the range of a few $10^6$--$10^8\,M_\odot$, and translate to a volumetric TDF rate of $(4\text{--}8) \times 10^{-8 \pm 0.4}\,\mathrm{yr}^{-1}\,\mathrm{Mpc}^{-3}$, with the statistical uncertainty in the exponent.
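Schematically, the rate estimate divides the number of detections by the efficiency-weighted exposure, and a Poisson interval captures the small-count statistics. The sketch below uses invented stand-in numbers (not the paper's pipeline outputs) and the chi-square form of the Poisson upper limit.

```python
from scipy import stats

# Hedged back-of-envelope rate estimate; all inputs are hypothetical stand-ins
n_det = 2              # TDF candidates found
n_gal = 2.0e6          # galaxies searched
t_eff = 7.6            # effective survey duration per galaxy (yr)
efficiency = 0.1       # light-curve-model-dependent detection efficiency

rate_hat = n_det / (efficiency * n_gal * t_eff)

# 90% CL Poisson upper limit on the expected count: mu_up = chi2.ppf(0.9, 2(n+1)) / 2
mu_up = 0.5 * stats.chi2.ppf(0.90, 2 * (n_det + 1))
rate_upper = mu_up / (efficiency * n_gal * t_eff)
```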
On the effects of adaptive reservoir operating rules in hydrological physically-based models
NASA Astrophysics Data System (ADS)
Giudici, Federico; Anghileri, Daniela; Castelletti, Andrea; Burlando, Paolo
2017-04-01
Recent years have seen a significant increase of the human influence on natural systems at both the global and local scale. Accurately modeling the human component and its interaction with the natural environment is key to characterizing the real system dynamics and anticipating future potential changes to hydrological regimes. Modern distributed, physically-based hydrological models are able to describe hydrological processes with a high level of detail and high spatiotemporal resolution. Yet, they lack sophistication in the behavioral component, and human decisions are usually described by very simplistic rules, which might underperform in reproducing the catchment dynamics. In the case of water reservoir operators, these simplistic rules usually consist of target-level rule curves, which represent the average historical level trajectory. Whilst these rules can reasonably reproduce the average seasonal water volume shifts due to the reservoirs' operation, they cannot properly represent peculiar conditions, which influence the actual reservoirs' operation, e.g., variations in energy price or water demand, or dry or wet meteorological conditions. Moreover, target-level rule curves are not suitable to explore the water system response to changing climatic and socio-economic contexts, because they assume business-as-usual operation. In this work, we quantitatively assess how the inclusion of adaptive reservoir operating rules in physically-based hydrological models contributes to a proper representation of the hydrological regime at the catchment scale. In particular, we contrast target-level rule curves and detailed optimization-based behavioral models. We first perform the comparison on past observational records, showing that target-level rule curves underperform in representing the hydrological regime over multiple time scales (e.g., weekly, seasonal, inter-annual). Then, we compare how future hydrological changes are affected by the two modeling approaches by considering different future scenarios comprising climate change projections of precipitation and temperature and projections of electricity prices. We perform this comparative assessment on the real-world water system of the Lake Como catchment in the Italian Alps, which is characterized by the massive presence of artificial hydropower reservoirs heavily altering the natural hydrological regime. The results show how the different behavioral model approaches affect the system representation in terms of hydropower performance, reservoir dynamics and hydrological regime under different future scenarios.
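A minimal version of the target-level rule discussed above can be written in a few lines: each step releases a fraction of the excess of storage plus inflow over a seasonal target. The sketch below uses arbitrary units and hypothetical seasonal shapes; the optimization-based behavioral models the paper contrasts it with are not reproduced.

```python
import numpy as np

def simulate_target_rule(inflow, target_storage, s0=100.0, k=0.3, s_max=200.0):
    """Minimal target-level rule: each step, release a fraction k of the
    excess of (storage + inflow) over the seasonal target (arbitrary units)."""
    s, release = s0, []
    for q, tgt in zip(inflow, target_storage):
        r = max(0.0, k * (s + q - tgt))
        s = min(s + q - r, s_max)        # spill implicitly truncated at capacity
        release.append(r)
    return np.array(release)

weeks = np.arange(52)
inflow = 10 + 6 * np.sin(2 * np.pi * weeks / 52)           # hypothetical seasonal inflow
target = 120 + 40 * np.sin(2 * np.pi * (weeks - 8) / 52)   # historical-average level proxy
releases = simulate_target_rule(inflow, target)
```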
Enhancement of the Daytime MODIS Based Aircraft Icing Potential Algorithm Using Mesoscale Model Data
2006-03-01
[Front-matter residue: list of figures and tables. Recoverable captions: Figure 25 and Figure 26, "ROC curves using 3 hour PIREPs and Alexander Tmap with symbols plotted at the 0.5 threshold values"; Table 4, "Results using T icing potential values from the Alexander Tmap, and 3 Hour PIREPs".]
NASA Astrophysics Data System (ADS)
Kumar, Gautam; Maji, Kuntal
2018-04-01
This article deals with the prediction of strain- and stress-based forming limit curves for advanced high-strength steel DP590 sheet using the Marciniak-Kuczynski (M-K) method. Three yield criteria, namely von Mises, Hill's 48 and Yld2000-2d, and two hardening laws, i.e., the Hollomon power law and the Swift hardening law, were considered to predict the forming limit curves (FLCs) for DP590 steel sheet. The effects of the imperfection factor and the initial groove angle on the prediction of the FLC were also investigated. It was observed that the FLCs shifted upward with increasing imperfection factor. The initial groove angle was found to have a significant effect on limit strains on the left side of the FLC, and an insignificant effect on the right side of the FLC for a certain range of strain paths. The limit strains were calculated at zero groove angle for the right side of the FLC, and a critical groove angle was used for the left side of the FLC. The numerically predicted FLCs considering the different combinations of yield criteria and hardening laws were compared with published experimental FLCs for DP590 steel sheet. The FLC predicted using the combination of the Yld2000-2d yield criterion and the Swift hardening law showed the best correlation with the experimental data. Stress-based forming limit curves (SFLCs) were also calculated from the limiting strain values obtained by the M-K model. Theoretically predicted SFLCs were compared with those obtained from the experimental forming limit strains. Stress-based forming limit curves were seen to represent the forming limits of DP590 steel sheet better than strain-based forming limit curves.
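The two hardening laws compared above have simple closed forms; the sketch below evaluates both with illustrative DP590-like constants (not the paper's calibration). In the M-K setting these flow curves feed a two-zone model whose groove is characterized by the imperfection factor f0 = t_b/t_a.

```python
import numpy as np

def swift_stress(eps_plastic, K=980.0, eps0=0.002, n=0.17):
    """Swift hardening law: sigma = K * (eps0 + eps_p)^n.
    Constants are illustrative DP590-like values, not the paper's fit."""
    return K * (eps0 + eps_plastic) ** n

def hollomon_stress(eps_plastic, K=950.0, n=0.15):
    """Hollomon power law: sigma = K * eps_p^n (same caveat on constants)."""
    return K * eps_plastic ** n

eps = np.linspace(0.002, 0.2, 100)
flow_swift = swift_stress(eps)        # flow stress in MPa over the strain range
flow_hollomon = hollomon_stress(eps)
```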
Kawano, Shingo; Komai, Yoshinobu; Ishioka, Junichiro; Sakai, Yasuyuki; Fuse, Nozomu; Ito, Masaaki; Kihara, Kazunori; Saito, Norio
2016-10-01
The aim of this study was to determine risk factors for survival after retrograde placement of ureteral stents and to develop a prognostic model for advanced gastrointestinal tract (GIT: esophagus, stomach, colon and rectum) cancer patients. We examined the clinical records of 122 patients who underwent retrograde placement of a ureteral stent for malignant extrinsic ureteral obstruction. A prediction model for survival after stenting was developed. We compared its clinical usefulness with our previous model, based on the results from nephrostomy cases, by decision curve analysis. The median follow-up period was 201 days (range 8-1490) and 97 deaths occurred. The 1-year survival rate in this cohort was 29%. Based on multivariate analysis, a primary site of colon origin, absence of retroperitoneal lymph node metastasis and serum albumin >3 g/dL were significantly associated with a prolonged survival time. To develop a prognostic model, we divided the patients into 3 risk groups: favorable, 0-1 risk factors (n=53); intermediate, 2 risk factors (n=54); and poor, 3 risk factors (n=15). There were significant differences in the survival profiles of these 3 risk groups (P<0.0001). Decision curve analyses revealed that the current model has a superior net benefit to our previous model for most of the examined probabilities. We have developed a novel prognostic model for GIT cancer patients treated with retrograde placement of a ureteral stent. The current model should help urologists and medical oncologists to predict survival in cases of malignant extrinsic ureteral obstruction.
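The published grouping reduces to counting the three adverse factors; a direct transcription is sketched below (the albumin cutoff of 3 g/dL and the factor definitions are taken from the abstract; the function name is our own).

```python
def risk_group(primary_colon: bool, retroperitoneal_ln: bool,
               albumin_g_dl: float) -> str:
    """Count the three adverse factors reported in the abstract and map the
    total to the favorable (0-1), intermediate (2), or poor (3) group."""
    factors = (0 if primary_colon else 1) \
            + (1 if retroperitoneal_ln else 0) \
            + (1 if albumin_g_dl <= 3.0 else 0)
    return {0: "favorable", 1: "favorable", 2: "intermediate", 3: "poor"}[factors]

print(risk_group(primary_colon=False, retroperitoneal_ln=True,
                 albumin_g_dl=2.8))   # -> "poor"
```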
New Risk Curves for NHTSA's Brain Injury Criterion (BrIC): Derivations and Assessments.
Laituri, Tony R; Henry, Scott; Pline, Kevin; Li, Guosong; Frankstein, Michael; Weerappuli, Para
2016-11-01
The National Highway Traffic Safety Administration (NHTSA) recently published a Request for Comments regarding a potential upgrade to the US New Car Assessment Program (US NCAP), a star-rating program pertaining to vehicle crashworthiness. Therein, NHTSA (a) cited two metrics for assessing head risk: the Head Injury Criterion (HIC15) and the Brain Injury Criterion (BrIC), and (b) proposed to conduct risk assessment via its risk curves for those metrics, but did not prescribe a specific method for applying them. Recent studies, however, have indicated that the NHTSA risk curves for BrIC significantly overstate field-based head injury rates. Therefore, in the present three-part study, a new set of BrIC-based risk curves was derived, an overarching head risk equation involving risk curves for both BrIC and HIC15 was assessed, and some additional candidate predictor-variable assessments were conducted. Part 1 pertained to the derivation. Specifically, data were pooled from various sources: Navy volunteers, amateur boxers, professional football players, simple-fall subjects, and racecar drivers. In total, there were 4,501 cases, with brain injury reported in 63. Injury outcomes were approximated on the Abbreviated Injury Scale (AIS). The statistical analysis was conducted using ordinal logistic regression (OLR), such that the various levels of brain injury were cast as a function of BrIC. The resulting risk curves, with Goodman-Kruskal gamma = 0.83, were significantly different from those from NHTSA. Part 2 pertained to the assessment relative to field data. Two perspectives were considered: "aggregate" (ΔV=0-56 km/h) and "point" (high-speed, regulatory focus). For the aggregate perspective, the new risk curves for BrIC were applied in field models pertaining to belted, mid-size, adult drivers in 11-1 o'clock, full-engagement frontal crashes in the National Automotive Sampling System (NASS, 1993-2014 calendar years). For the point perspective, BrIC data from tests were used. The assessments were conducted for minor, moderate, and serious injury levels for both Newer Vehicles (airbag-fitted) and Older Vehicles (not airbag-fitted). Curve-based injury rates and NASS-based injury rates were compared via average percent difference (AvgPctDiff). The new risk curves demonstrated significantly better fidelity than those from NHTSA. For example, for the aggregate perspective (n=12 assessments), the results were as follows: AvgPctDiff (present risk curves) = +67 versus AvgPctDiff (NHTSA risk curves) = +9378. Part 2 also contained a more comprehensive assessment. Specifically, BrIC-based risk curves were used to estimate brain-related injury probabilities, HIC15-based risk curves from NHTSA were used to estimate bone/other injury probabilities, and the maximum of the two resulting probabilities was used to represent the attendant head-injury probabilities. (Those HIC15-based risk curves yielded AvgPctDiff=+85 for that application.) Subject to the resulting 21 assessments, similar results were observed: AvgPctDiff (present risk curves) = +42 versus AvgPctDiff (NHTSA risk curves) = +5783. Therefore, based on the results from Part 2, if the existing BrIC metric is to be applied by NHTSA in vehicle assessment, we recommend that the corresponding risk curves derived in the present study be considered. Part 3 pertained to the assessment of various other candidate brain-injury metrics.
Specifically, Parts 1 and 2 were revisited for HIC15, translation acceleration (TA), rotational acceleration (RA), rotational velocity (RV), and a different rotational brain injury criterion from NHTSA (BRIC). The rank-ordered results for the 21 assessments for each metric were as follows: RA, HIC15, BRIC, TA, BrIC, and RV. Therefore, of the six studied sets of OLR-based risk curves, the set for rotational acceleration demonstrated the best performance relative to NASS.
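Part 1's derivation, ordinal logistic regression of AIS level on the metric, can be sketched with statsmodels' OrderedModel as below; the data are synthetic stand-ins, not the pooled Navy/boxer/football/fall/racecar dataset.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Synthetic stand-in data: BrIC values and ordinal brain-injury severity (0-3)
rng = np.random.default_rng(7)
bric = rng.uniform(0.1, 1.5, 800)
latent = 3.0 * bric + rng.logistic(size=800)
severity = np.digitize(latent, bins=[2.0, 3.5, 4.5])     # codes 0, 1, 2, 3

y = pd.Series(pd.Categorical(severity, categories=[0, 1, 2, 3], ordered=True))
X = pd.DataFrame({"bric": bric})
res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)

# Risk curves: probability of each severity level as a function of BrIC
grid = pd.DataFrame({"bric": np.linspace(0.1, 1.5, 50)})
risk_curves = res.model.predict(res.params, exog=grid)
```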
A Study of Short-Term Variations in Jupiter's Synchrotron Emission
NASA Technical Reports Server (NTRS)
Klein, M.J.; Gulkis, S.; Bolton, S. J.; Levin, S. M.
1999-01-01
Earth-based observations of the flux density and polarization of Jupiter's microwave emission provide useful data to test and constrain computational models of synchrotron radio emission from the inner regions of the Jovian magnetosphere. Stimulated by the sudden brightening of the synchrotron emission caused by the impacts of comet Shoemaker-Levy 9 in 1994, the observational techniques of the NASA-JPL Jupiter Patrol were modified to search for other short-term variations unrelated to the SL-9 event. The characteristics of the improved database are described and the results of the search for variability on timescales of 5 to 100 days are reported. The first results of Jupiter observations from the Goldstone-Apple Valley Radio Telescope (GAVRT) project are reported and included in the database. GAVRT is a new project in science education that engages middle- and high-school students in science research. The paper also includes new observations of Jupiter's rotational beamed emission, commonly known as the "beaming curve", which describes the observed flux density as a function of System III longitude. The shape of the beaming curve is known to change with the parameter D(sub E), the declination of the Earth relative to Jupiter's rotational equator. While the history of Jupiter's beaming curve exhibits remarkable stability and repeatability as a function of D(sub E), there may be evidence for short-term departures from the nominal curves. Data supporting this tentative conclusion are presented. Preliminary results of a study comparing the observations and computer simulations of the synchrotron beaming curve will also be presented and discussed (see companion paper, "Modeling Jupiter's Synchrotron Emission", by Bolton et al.). The research reported in this paper was performed by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
Hewson, Kylie; Noormohammadi, Amir H; Devlin, Joanne M; Mardani, Karim; Ignjatovic, Jagoda
2009-01-01
Infectious bronchitis virus (IBV) is a coronavirus that causes upper respiratory, renal and/or reproductive diseases with high morbidity in poultry. Classification of IBV is important for implementation of vaccination strategies to control the disease in commercial poultry. Currently, the lengthy process of sequence analysis of the IBV S1 gene is considered the gold standard for IBV strain identification, with a high nucleotide identity (e.g. ≥95%) indicating related strains. However, this gene has a high propensity to mutate and/or undergo recombination, and alone it may not be reliable for strain identification. A real-time polymerase chain reaction (RT-PCR) combined with high-resolution melt (HRM) curve analysis was developed based on the 3'UTR of IBV for rapid detection and classification of IBV from commercial poultry. HRM curves generated from 230- to 435-bp PCR products of several IBV strains were subjected to further analysis using a mathematical model also developed during this study. It was shown that a combination of HRM curve analysis and the mathematical model could reliably group 189 out of 190 comparisons of pairs of IBV strains in accordance with their 3'UTR and S1 gene identities. The newly developed RT-PCR/HRM curve analysis model could detect and rapidly identify novel and vaccine-related IBV strains, as confirmed by S1 gene and 3'UTR nucleotide sequences. This model is a rapid, reliable, accurate and non-subjective system for detection of IBVs in poultry flocks.
Schmidt, James R; De Houwer, Jan; Rothermund, Klaus
2016-12-01
The current paper presents an extension of the Parallel Episodic Processing model. The model is developed for simulating behaviour in performance (i.e., speeded response time) tasks and learns to anticipate both how and when to respond based on retrieval of memories of previous trials. With one fixed parameter set, the model is shown to successfully simulate a wide range of different findings. These include: practice curves in the Stroop paradigm, contingency learning effects, learning acquisition curves, stimulus-response binding effects, mixing costs, and various findings from the attentional control domain. The results demonstrate several important points. First, the same retrieval mechanism parsimoniously explains stimulus-response binding, contingency learning, and practice effects. Second, as performance improves with practice, any effects will shrink with it. Third, a model of simple learning processes is sufficient to explain phenomena that are typically (but perhaps incorrectly) interpreted in terms of higher-order control processes. More generally, we argue that computational models with a fixed parameter set and wider breadth should be preferred over those that are restricted to a narrow set of phenomena. Copyright © 2016 Elsevier Inc. All rights reserved.
Freni, G; La Loggia, G; Notaro, V
2010-01-01
Due to the increased occurrence of flooding events in urban areas, many procedures for flood damage quantification have been defined in recent decades. The lack of large databases is in most cases overcome by combining the output of urban drainage models and damage curves linking flooding to expected damage. The application of advanced hydraulic models as diagnostic, design and decision-making support tools has become a standard practice in hydraulic research and application. Flooding damage functions are usually evaluated by a priori estimation of potential damage (based on the value of exposed goods) or by interpolating real damage data (recorded during historical flooding events). Hydraulic models have undergone continuous advancement, pushed forward by increasing computer capacity. The details of the flooding propagation process on the surface and the details of the interconnections between underground and surface drainage systems have been studied extensively in recent years, resulting in progressively more reliable models. The same level of advancement has not been reached with regard to damage curves, for which improvements are strongly tied to data availability; this remains the main bottleneck in expected flooding damage estimation. Such functions are usually affected by significant uncertainty, intrinsically related to the collected data and to the simplified structure of the adopted functional relationships. The present paper aimed to evaluate this uncertainty by comparing the intrinsic uncertainty connected to the construction of the depth-damage function to the hydraulic model uncertainty. In this way, the paper sought to evaluate the role of the hydraulic model detail level in the wider context of flood damage estimation. This paper demonstrated that the use of detailed hydraulic models might not be justified because of the higher computational cost and the significant uncertainty in damage estimation curves. This uncertainty occurs mainly because a large part of the total uncertainty is dependent on the depth-damage curves. Improving the estimation of these curves may provide better results in terms of uncertainty reduction than the adoption of detailed hydraulic models.
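A depth-damage function of the kind discussed above is often just a saturating curve from flood depth to monetary loss. The hedged sketch below uses a piecewise-linear form with hypothetical calibration to show how depth uncertainty propagates directly into damage uncertainty.

```python
import numpy as np

def depth_damage(depth_m, max_damage=100_000.0, saturation_depth=2.0):
    """Piecewise-linear depth-damage curve (hypothetical calibration):
    the damage fraction rises linearly with flood depth and saturates."""
    frac = np.clip(depth_m / saturation_depth, 0.0, 1.0)
    return frac * max_damage

# A +/-0.2 m error on a 0.5 m flood depth moves the estimate by +/-40%,
# illustrating why curve uncertainty can dominate hydraulic-model uncertainty
for d in (0.3, 0.5, 0.7):
    print(f"depth {d} m -> damage {depth_damage(d):,.0f}")
```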
NASA Astrophysics Data System (ADS)
Pradhan, Biswajeet
2013-02-01
The purpose of the present study is to compare the prediction performance of three different approaches, namely decision tree (DT), support vector machine (SVM) and adaptive neuro-fuzzy inference system (ANFIS), for landslide susceptibility mapping in the Penang Hill area, Malaysia. The necessary input parameters for the landslide susceptibility assessments were obtained from various sources. First, landslide locations were identified from aerial photographs and field surveys, and a total of 113 landslide locations were compiled. The study area contains 340,608 pixels, of which 8403 pixels contain landslides. The landslide inventory was randomly partitioned into two subsets: (1) part 1, containing 50% (4000 landslide grid cells), was used in the training phase of the models; (2) part 2, a validation dataset of the remaining 50% (4000 landslide grid cells), was used for validation of the three models and to confirm their accuracy. The digitally processed images of the input parameters were combined in GIS. Finally, landslide susceptibility maps were produced, and their performance was assessed and discussed. In total, fifteen landslide susceptibility maps were produced using the DT, SVM and ANFIS based models, and the resultant maps were validated using the landslide locations. The prediction performance of these maps was checked by receiver operating characteristic (ROC) analysis, using both success-rate and prediction-rate curves. The validation results showed that the area under the ROC curve for the fifteen models produced using DT, SVM and ANFIS varied from 0.8204 to 0.9421 for the success-rate curves and from 0.7580 to 0.8307 for the prediction-rate curves, respectively. Moreover, the prediction-rate curves revealed that model 5 of DT has a slightly higher prediction performance (83.07), whereas the success rate showed that model 5 of ANFIS has the best prediction capability (94.21) among all models. The results of this study showed that landslide susceptibility mapping in the Penang Hill area using the three approaches (e.g., DT, SVM and ANFIS) is viable. As far as the performance of the models is concerned, the results appeared to be quite satisfactory, i.e., the zones determined on the map being zones of relative susceptibility.
Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A
2016-03-15
The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed for each biomarker individually rather than in combination. In this paper, we consider the problem of combining multiple biomarkers into a single diagnostic criterion, with the goal of improving diagnostic accuracy beyond that of any individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on the multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is classified as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely the combined ROC (cROC). The AUC metric for the cROC, namely the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and the corresponding (marginal) AUCs are developed for the case where a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle. Copyright © 2015 John Wiley & Sons, Ltd.
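A minimal frequentist sketch of the combination idea follows: two correlated biomarkers are merged into a single predictive probability of disease (here via logistic regression, a simple surrogate for the paper's Bayesian multivariate random-effects model), and the AUC of the combination, an analogue of the cAUC, is compared with the single-marker AUCs. All data and parameters are simulated.

```python
# Sketch: combine two dependent biomarkers into one predictive probability
# and compare its AUC (a cAUC analogue) with single-marker AUCs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 400
disease = rng.integers(0, 2, n)
# correlated biomarker scores, shifted upward for diseased subjects
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
X = z + disease[:, None] * np.array([1.0, 0.7])

for j in (0, 1):
    print(f"AUC marker {j}: {roc_auc_score(disease, X[:, j]):.3f}")

p = LogisticRegression().fit(X, disease).predict_proba(X)[:, 1]
print(f"combined AUC (cAUC analogue): {roc_auc_score(disease, p):.3f}")
```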
Consistency assessment of rating curve data in various locations using Bidirectional Reach (BReach)
NASA Astrophysics Data System (ADS)
Van Eerdenbrugh, Katrien; Van Hoey, Stijn; Coxon, Gemma; Freer, Jim; Verhoest, Niko E. C.
2017-10-01
When estimating discharges through rating curves, temporal data consistency is a critical issue. In this research, consistency in stage-discharge data is investigated using a methodology called Bidirectional Reach (BReach), which departs from a definition of consistency commonly used in operational hydrology. A period is considered consistent if no consecutive and systematic deviations from the current situation occur that exceed observational uncertainty. Therefore, the capability of a rating curve model to describe a subset of the (chronologically sorted) data is assessed at each observation by indicating the outermost data points for which the rating curve model behaves satisfactorily. These points are called the maximum left or right reach, depending on the direction of the investigation. This temporal reach should not be confused with a spatial reach (a part of a river). Changes in these reaches throughout the data series indicate possible changes in data consistency that, if not resolved, could introduce additional errors and biases. In this research, various measurement stations in the UK, New Zealand and Belgium are selected based on their extensive historical ratings information and their specific characteristics related to data consistency. For each country, regional information is used as fully as possible to estimate observational uncertainty. Based on this uncertainty, a BReach analysis is performed and the results are subsequently validated against available knowledge about the history and behavior of each site. For all investigated cases, the methodology provides results that are consistent with this knowledge of historical changes and thus facilitates a reliable assessment of (in)consistent periods in stage-discharge measurements. This assessment is useful not only for the analysis and determination of discharge time series, but also for enhancing applications based on these data (e.g., by informing hydrological and hydraulic model evaluation design about consistent time periods to analyze).
A forecast for extinction debt in the presence of speciation.
Sgardeli, Vasiliki; Iwasa, Yoh; Varvoglis, Harry; Halley, John M
2017-02-21
Predicting biodiversity relaxation following a disturbance is of great importance to conservation biology. Recently developed models of stochastic community assembly allow us to predict the evolution of communities on the basis of mechanistic processes at the level of individuals. The neutral model of biodiversity, in particular, has provided closed-form solutions for the relaxation of biodiversity in isolated communities (no immigration or speciation). Here, we extend these results by deriving a relaxation curve for a neutral community in which new species are introduced through the mechanism of random fission speciation (RFS). The solution provides simple closed-form expressions for the equilibrium species richness, the relaxation time and the species-individual curve, which are good approximations to the more complicated formulas existing for the same model. The derivation of the relaxation curve is based on the assumption of a broken-stick species-abundance distribution (SAD) as the initial community configuration; yet for commonly observed SADs, the maximum deviation from the curve does not exceed 10%. Importantly, the solution confirms theoretical results and observations showing that the relaxation time increases with community size and thus habitat area. Such simple and analytically tractable models can help crystallize our ideas on the leading factors affecting biodiversity loss. Copyright © 2016 Elsevier Ltd. All rights reserved.
New approach to the calculation of pistachio powder hysteresis
NASA Astrophysics Data System (ADS)
Tavakolipour, Hamid; Mokhtarian, Mohsen
2016-04-01
Moisture sorption isotherms for pistachio powder were determined by the gravimetric method at temperatures of 15, 25, 35 and 40°C. Selected mathematical models were tested to determine the most suitable model for predicting the isotherm curve. The results show that the Caurie model had the most satisfactory goodness of fit. Another purpose of this research was to introduce a new methodology for determining the amount of hysteresis at different temperatures, using the best predictive isotherm model and a definite-integration method. The results demonstrated that the maximum hysteresis is related to the multi-layer water (in the water activity range 0.2-0.6), which corresponds to the capillary condensation region, and that this phenomenon decreases with increasing temperature.
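The definite-integration idea can be sketched as follows: fit (or assume) adsorption and desorption isotherm functions and integrate their difference over the water-activity range of interest. The exponential branch functions below are illustrative placeholders, not the study's fitted Caurie parameters.

```python
# Sketch of hysteresis as the definite integral of the gap between the
# desorption and adsorption branches over a water-activity interval.
# Both branch functions are assumed, generic forms.
import numpy as np
from scipy.integrate import quad

ads = lambda aw: 0.04 * np.exp(2.2 * aw)      # adsorption branch (assumed)
des = lambda aw: 0.05 * np.exp(2.2 * aw)      # desorption branch (assumed)

hysteresis, _ = quad(lambda aw: des(aw) - ads(aw), 0.2, 0.6)
print(f"hysteresis area over aw 0.2-0.6: {hysteresis:.4f}")
```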
Modeling of Continuum Manipulators Using Pythagorean Hodograph Curves.
Singh, Inderjeet; Amara, Yacine; Melingui, Achille; Mani Pathak, Pushparaj; Merzouki, Rochdi
2018-05-10
Research on continuum manipulators is increasingly developing in the context of bionic robotics because of their many advantages over conventional rigid manipulators. Due to their soft structure, they have inherent flexibility, which makes controlling them with high performance a major challenge. Before elaborating a control strategy for such robots, it is essential first to reconstruct the behavior of the robot through the development of an approximate behavioral model, which can be kinematic or dynamic depending on the operating conditions of the robot. Kinematically, two types of modeling methods exist to describe robot behavior: quantitative methods, which are model-based, and qualitative methods, which are learning-based. In the kinematic modeling of continuum manipulators, the assumption of constant curvature is often made to simplify the model formulation. In this work, a quantitative modeling method is proposed, based on Pythagorean hodograph (PH) curves. The aim is to obtain a three-dimensional reconstruction of the shape of a continuum manipulator with variable curvature, allowing the calculation of its inverse kinematic model (IKM). The PH-based kinematic modeling compares favorably with other kinematic modeling methods in position accuracy, shape reconstruction, and time/cost of model calculation, for two cases: free-load manipulation and variable-load manipulation. The modeling method is applied to the compact bionic handling assistant (CBHA) manipulator for validation. The results are compared with other IKMs developed for the CBHA manipulator.
Kendrick, Sarah K; Zheng, Qi; Garbett, Nichola C; Brock, Guy N
2017-01-01
Differential scanning calorimetry (DSC) is used to determine thermally induced conformational changes of biomolecules within a blood plasma sample. Recent research has indicated that DSC curves (or thermograms) may have different characteristics based on disease status and, thus, may be useful as a monitoring and diagnostic tool for some diseases. Since thermograms are curves measured over a range of temperature values, they are considered functional data. In this paper we apply functional data analysis techniques to DSC data from individuals in the Lupus Family Registry and Repository (LFRR). The aim was to assess the effect of lupus disease status as well as additional covariates on the thermogram profiles, and to use functional data analysis methods to create models for classifying lupus vs. control patients on the basis of the thermogram curves. Thermograms were collected for 300 lupus patients and 300 controls without lupus who were matched with diseased individuals based on sex, race, and age. First, functional regression with a functional response (the thermogram) and a categorical predictor (disease status) was used to determine how thermogram curve structure varied according to disease status and other covariates including sex, race, and year of birth. Next, functional logistic regression with disease status as the response and functional principal component analysis (FPCA) scores as the predictors was used to model the effect of thermogram structure on disease status prediction. The prediction accuracy for patients with osteoarthritis (OA) or rheumatoid arthritis (RA) but without lupus was also calculated to determine the ability of the classifier to differentiate between lupus and other diseases. Data were divided 1000 times into separate 2/3 training and 1/3 test sets for evaluation of predictions. Finally, derivatives of the thermogram curves were included in the models to determine whether they aided in prediction of disease status. Functional regression with the thermogram as a functional response and disease status as predictor showed a clear separation in thermogram curve structure between cases and controls. The logistic regression model with FPCA scores as the predictors gave the most accurate results, with a mean 79.22% correct classification rate, mean sensitivity of 79.70%, and specificity of 81.48%. The model correctly classified OA and RA patients without lupus as controls at a rate of 75.92% on average, with a mean sensitivity of 79.70% and specificity of 77.6%. Regression models including FPCA scores for derivative curves did not perform as well, nor did regression models including covariates. Changes in thermograms observed in the disease state likely reflect covalent modifications of plasma proteins or changes in large protein-protein interaction networks resulting in the stabilization of plasma proteins against thermal denaturation. By relating functional principal components from thermograms to disease status, our FPCA model provides results that are more easily interpretable than those of prior studies. Further, the model could also potentially be coupled with other biomarkers to improve diagnostic classification for lupus.
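A compact sketch of the FPCA-plus-logistic-regression pipeline is given below, with ordinary PCA on discretized curves standing in for functional PCA; the thermogram shapes, sample sizes, and train/test split are simulated, not the LFRR data.

```python
# Sketch: PCA scores of discretized curves (an FPCA analogue) feed a
# logistic classifier of disease status. All curves are simulated.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
temps = np.linspace(45, 90, 100)
status = rng.integers(0, 2, 300)                 # 1 = case
# cases have a shifted denaturation peak plus noise
peaks = 62 + 3 * status[:, None]
curves = np.exp(-((temps - peaks) ** 2) / 20) + rng.normal(0, 0.05, (300, 100))

scores = PCA(n_components=5).fit_transform(curves)   # FPCA-score analogue
Xtr, Xte, ytr, yte = train_test_split(scores, status,
                                      test_size=1/3, random_state=0)
clf = LogisticRegression().fit(Xtr, ytr)
print(f"test accuracy: {accuracy_score(yte, clf.predict(Xte)):.2%}")
```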
Protofit: A program for determining surface protonation constants from titration data
NASA Astrophysics Data System (ADS)
Turner, Benjamin F.; Fein, Jeremy B.
2006-11-01
Determining the surface protonation behavior of natural adsorbents is essential to understanding how they interact with their environments. ProtoFit is a tool for the analysis of acid-base titration data and the optimization of surface protonation models. The program offers a number of useful features: (1) it enables visualization of adsorbent buffering behavior; (2) it uses an optimization approach independent of starting titration conditions or initial surface charge; (3) it does not require an initial surface charge to be defined or treated as an optimizable parameter; (4) it includes error analysis intrinsically as part of the computational methods; and (5) it generates simulated titration curves for comparison with observation. ProtoFit will typically be run through ProtoFit-GUI, a graphical user interface providing user-friendly control of model optimization, simulation, and data visualization. ProtoFit calculates an adsorbent proton buffering value as a function of pH from raw titration data (pH and volume of acid or base added). The data are reduced to a form where the protons required to change the pH of the solution are subtracted out, leaving protons exchanged between solution and surface per unit mass of adsorbent as a function of pH. The buffering intensity function Qads* is calculated as the instantaneous slope of this reduced titration curve. Parameters for a surface complexation model are obtained by minimizing the sum of squares between the modeled (i.e., simulated) buffering intensity curve and the experimental data. The variance in the slope estimate, produced intrinsically as part of the Qads* calculation, can be used to weight the sum-of-squares comparison between the measured buffering intensity and a simulated curve. Effects of analytical error on data visualization and model optimization are discussed. Examples are provided of using ProtoFit for data visualization, model optimization, and model evaluation.
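The optimization step can be sketched as follows, assuming a single protonation site so that the buffering intensity has a closed analytical form; the site expression, parameter values, and data are illustrative simplifications, not ProtoFit's actual multi-site machinery.

```python
# Sketch: least-squares fit of a one-site buffering-intensity model
# (assumed form) to a simulated Qads* curve.
import numpy as np
from scipy.optimize import curve_fit

def buffering(pH, logK, site_conc):
    # d(protons adsorbed)/d(pH) for one site >SOH <-> >SO- + H+
    h = 10.0 ** (-pH)
    K = 10.0 ** logK
    return np.log(10) * site_conc * K * h / (K + h) ** 2

pH = np.linspace(3, 10, 60)
q_star = buffering(pH, -6.2, 1e-4) \
         + np.random.default_rng(3).normal(0, 2e-6, 60)

popt, pcov = curve_fit(buffering, pH, q_star, p0=(-7, 1e-4))
print(f"logK = {popt[0]:.2f}, site concentration = {popt[1]:.2e} mol/g")
```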
Visible Wavelength Exoplanet Phase Curves from Global Albedo Maps
NASA Astrophysics Data System (ADS)
Webber, Matthew; Cahoy, Kerri Lynn
2015-01-01
To investigate the effect of three-dimensional global albedo maps, we use an albedo model that calculates albedo spectra for each point on a grid in longitude and latitude across the planetary disk, uses the appropriate angles of the source-observer geometry for each location, and then weights and sums these spectra using the Tschebychev-Gauss integration method. This structure permits detailed 3D modeling of an illuminated planetary disk and computes disk-integrated phase curves. Different pressure-temperature profiles are used for each location based on geometry and dynamics. We directly couple high-density pressure maps from global dynamic radiative-transfer models to compute global cloud maps. Cloud formation is determined from the correlation of the species condensation curves with the temperature-pressure profiles. We use the detailed cloud patterns, of spatially varying composition and temperature, to determine the observable albedo spectra and phase curves for the exoplanets Kepler-7b and HD189733b. These albedo spectra are used to compute planet-star flux ratios using PHOENIX stellar models, exoplanet orbital parameters, and telescope transmission functions. Insight from the Earthshine spectrum and solid-surface albedo functions (e.g., water, ice, snow, rocks) is used with our planetary grid to determine the phase curves and flux ratios of non-uniform Earth and super-Earth-like exoplanets with various rotation rates and stellar types. Predictions can be tailored to the visible and near-infrared (NIR) spectral windows of the Kepler space telescope, the Hubble space telescope, and future observatories (e.g., WFIRST, JWST, Exo-C, Exo-S). Additionally, we constrain the effect of exoplanet urban light on the shape of the night-side phase curve for Earths and super-Earths.
Past and Future of Astronomy and SETI Cast in Maths
NASA Astrophysics Data System (ADS)
Maccone, C.
Assume that the history of Astronomy and SETI is the leading proof of the evolution of human knowledge on Earth over the last 3000 years. Then human knowledge has increased greatly, although not at a uniform pace. A mathematical description of how much human knowledge has increased, however, is difficult to achieve. In this paper, we cast a mathematical model of the evolution of human knowledge over the last three thousand years that seems to reflect reasonably well both what is known of the past and what might be extrapolated into the future. Our model, based on two seminal books by Sagan and by Finney and Jones, uses two cubic curves representing the evolution of Astronomy and of SETI, respectively. We conclude by extrapolating these curves into the future, reaching the conclusion that the "Star Trek" age of humankind might possibly begin by the end of this century.
Pattern recognition tool based on complex network-based approach
NASA Astrophysics Data System (ADS)
Casanova, Dalcimar; Backes, André Ricardo; Martinez Bruno, Odemir
2013-02-01
This work proposes a generalization of the method introduced by the authors in 'A complex network-based approach for boundary shape analysis'. Instead of modelling a contour as a graph and using complex network measures to characterize it, we generalize the technique into a mathematical tool for characterizing signals, curves and sets of points. To evaluate the descriptive power of the proposal, an experiment on plant identification based on leaf vein images is conducted. Leaf venation is a taxonomic characteristic used for plant identification, and these structures are complex and difficult to represent as signals or curves, and thus to analyze with a classical pattern recognition approach. Here, we model the veins as a set of points and represent them as graphs. As features, we use degree and joint degree measurements computed over a dynamic evolution of the network. The results demonstrate that the technique has good discriminating power and can be used for plant identification, as well as for other complex pattern recognition tasks.
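A minimal sketch of the point-set-to-graph idea follows: connect points whose distance is below a growing threshold and record degree statistics along the evolution. The feature set and thresholds are simplified stand-ins for the descriptors used in the paper.

```python
# Sketch: build graphs from a point set under a growing distance
# threshold and collect degree statistics as features.
import numpy as np
import networkx as nx
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(4)
points = rng.random((50, 2))                 # e.g., sampled leaf-vein pixels
D = squareform(pdist(points))

features = []
for t in np.linspace(0.05, 0.5, 10):         # dynamic threshold evolution
    A = ((D < t) & (D > 0)).astype(int)
    G = nx.from_numpy_array(A)
    degs = [d for _, d in G.degree()]
    features.extend([np.mean(degs), np.max(degs)])

print(np.round(features, 2))
```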
NASA Astrophysics Data System (ADS)
Fazzolari, Fiorenzo A.; Carrera, Erasmo
2014-02-01
In this paper, the Ritz minimum energy method, based on the Principle of Virtual Displacements (PVD), is combined with refined Equivalent Single Layer (ESL) and Zig Zag (ZZ) shell models hierarchically generated by exploiting Carrera's Unified Formulation (CUF), in order to generate the Hierarchical Trigonometric Ritz Formulation (HTRF). The HTRF is then employed to carry out the free vibration analysis of doubly curved shallow and deep functionally graded material (FGM) shells. The PVD is further used in conjunction with the Gauss theorem to derive the governing differential equations and the related natural boundary conditions. Donnell-Mushtari shallow-shell-type equations are given as a particular case. Doubly curved FGM shells and doubly curved sandwich shells made up of isotropic face sheets and an FGM core are investigated. The proposed shell models are extensively assessed by comparison with literature results. Two benchmarks are provided, and the effects of significant parameters such as stacking sequence, boundary conditions, length-to-thickness ratio, radius-to-length ratio and volume fraction index on the circular frequency parameters and modal displacements are discussed.
The Bass diffusion model on networks with correlations and inhomogeneous advertising
NASA Astrophysics Data System (ADS)
Bertotti, M. L.; Brunner, J.; Modanese, G.
2016-09-01
The Bass model, which is an effective forecasting tool for innovation diffusion based on large collections of empirical data, assumes a homogeneous diffusion process. We introduce a network structure into this model and investigate numerically the dynamics in the case of networks with link density $P(k)=c/k^\gamma$, where $k=1, \ldots , N$. The resulting curve of total adoptions in time is qualitatively similar to the homogeneous Bass curve corresponding to a case with the same average number of connections. The peak of the adoptions, however, tends to occur earlier, particularly when $\gamma$ and $N$ are large (i.e., when there are few hubs with a large maximum number of connections). Most interestingly, the adoption curve of the hubs anticipates the total adoption curve in a predictable way, with peak times which can be, for instance when $N=100$, between 10% and 60% of the total adoptions peak. This may allow the hubs to be monitored for forecasting purposes. We also consider the case of networks with assortative and disassortative correlations, and a case of inhomogeneous advertising in which the publicity terms are "targeted" on the hubs while their total cost is kept constant.
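A simulation sketch of the network Bass dynamics described here: at each step a non-adopter adopts with probability p + q times the adopted fraction of its neighbors. The Barabási-Albert topology and the parameter values are illustrative choices, not the paper's exact $P(k)$ ensemble.

```python
# Sketch of a network Bass model: innovation (p) plus imitation (q)
# driven by the adopted fraction of each node's neighbors.
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
G = nx.barabasi_albert_graph(1000, 2, seed=5)   # hub-dominated topology
p, q = 0.01, 0.4
adopted = np.zeros(G.number_of_nodes(), dtype=bool)

curve = []
for step in range(60):
    prev = adopted.copy()                       # synchronous update
    for i in G.nodes:
        if not prev[i]:
            nbrs = list(G.neighbors(i))
            frac = prev[nbrs].mean() if nbrs else 0.0
            if rng.random() < p + q * frac:
                adopted[i] = True
    curve.append(int(adopted.sum()))
print(curve[:10])                               # cumulative adoptions in time
```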
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanders, N. E.; Soderberg, A. M.; Chornock, R.
2015-02-01
In recent years, wide-field sky surveys providing deep multiband imaging have presented a new path for indirectly characterizing the progenitor populations of core-collapse supernovae (SNe): systematic light-curve studies. We assemble a set of 76 grizy-band Type IIP SN light curves from Pan-STARRS1, obtained over a constant survey program of 4 yr and classified using both spectroscopy and machine-learning-based photometric techniques. We develop and apply a new Bayesian model for the full multiband evolution of each light curve in the sample. We find no evidence of a subpopulation of fast-declining explosions (historically referred to as "Type IIL" SNe). However, we identify a highly significant relation between the plateau phase decay rate and peak luminosity among our SNe IIP. These results argue in favor of a single parameter, likely determined by initial stellar mass, predominantly controlling the explosions of red supergiants. This relation could also be applied for SN cosmology, offering a standardizable candle good to an intrinsic scatter of ≲0.2 mag. We compare each light curve to physical models from hydrodynamic simulations to estimate progenitor initial masses and other properties of the Pan-STARRS1 Type IIP SN sample. We show that correction of systematic discrepancies between modeled and observed SN IIP light-curve properties and an expanded grid of progenitor properties are needed to enable robust progenitor inferences from multiband light-curve samples of this kind. This work will serve as a pathfinder for photometric studies of core-collapse SNe to be conducted through future wide-field transient searches.
A MODEL FOR (QUASI-)PERIODIC MULTIWAVELENGTH PHOTOMETRIC VARIABILITY IN YOUNG STELLAR OBJECTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kesseli, Aurora Y.; Petkova, Maya A.; Wood, Kenneth
We present radiation transfer models of rotating young stellar objects (YSOs) with hot spots in their atmospheres, inner disk warps, and other three-dimensional effects in the nearby circumstellar environment. Our models are based on the geometry expected from magneto-accretion theory, where material moving inward in the disk flows along magnetic field lines to the star and creates stellar hot spots upon impact. Due to rotation of the star and magnetosphere, the disk is variably illuminated. We compare our model light curves to data from the Spitzer YSOVAR project to determine if these processes can explain the variability observed at optical and mid-infrared wavelengths in young stars. We focus on those variables exhibiting "dipper" behavior that may be periodic, quasi-periodic, or aperiodic. We find that the stellar hot-spot size and temperature affect the optical and near-infrared light curves, while the shape and vertical extent of the inner disk warp affect the mid-IR light curve variations. Clumpy disk distributions with non-uniform fractal density structure produce more stochastic light curves. We conclude that magneto-accretion theory is consistent with certain aspects of the multiwavelength photometric variability exhibited by low-mass YSOs. More detailed modeling of individual sources can be used to better determine the stellar hot-spot and inner disk geometries of particular sources.
Validating a biometric authentication system: sample size requirements.
Dass, Sarat C; Zhu, Yongfang; Jain, Anil K
2006-12-01
Authentication systems based on biometric features (e.g., fingerprint impressions, iris scans, human face images) are gaining widespread use and popularity. Often, vendors and owners of these commercial biometric systems claim impressive performance that is estimated based on some proprietary data. In such situations, there is a need to independently validate the claimed performance levels. System performance is typically evaluated by collecting biometric templates from n different subjects and, for convenience, acquiring multiple instances of the biometric for each of the n subjects. Very little work has been done on 1) constructing confidence regions based on the ROC curve for validating the claimed performance levels and 2) determining the number of biometric samples needed to establish confidence regions of prespecified width for the ROC curve. To simplify the analysis of these two problems, several previous studies have assumed that multiple acquisitions of the biometric entity are statistically independent. This assumption is too restrictive and is generally not valid. We have developed a validation technique based on multivariate copula models for correlated biometric acquisitions. Based on the same model, we also determine the minimum number of samples required to achieve confidence bands of desired width for the ROC curve. We illustrate the estimation of the confidence bands as well as the required number of biometric samples using a fingerprint matching system applied to samples collected from a small population.
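As a simplified alternative to the copula machinery, the sketch below obtains an AUC confidence interval by resampling whole subjects, so that the repeated acquisitions of each subject stay together and their within-subject correlation is respected; all scores are simulated.

```python
# Sketch: subject-level bootstrap CI for the AUC of a matcher, respecting
# correlation among repeated acquisitions of the same subject.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n_subj, n_rep = 100, 4
subj_effect = rng.normal(0, 0.5, n_subj)            # shared per-subject shift
genuine = 1.5 + subj_effect[:, None] + rng.normal(0, 1, (n_subj, n_rep))
impostor = 0.0 + subj_effect[:, None] + rng.normal(0, 1, (n_subj, n_rep))

aucs = []
for _ in range(1000):
    idx = rng.integers(0, n_subj, n_subj)           # resample subjects
    y = np.r_[np.ones(n_subj * n_rep), np.zeros(n_subj * n_rep)]
    s = np.r_[genuine[idx].ravel(), impostor[idx].ravel()]
    aucs.append(roc_auc_score(y, s))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUC 95% CI: [{lo:.3f}, {hi:.3f}]")
```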
NASA Technical Reports Server (NTRS)
Starkey, D.; Gehrels, Cornelis; Horne, Keith; Fausnaugh, M. M.; Peterson, B. M.; Bentz, M. C.; Kochanek, C. S.; Denney, K. D.; Edelson, R.; Goad, M. R.;
2017-01-01
We conduct a multi-wavelength continuum variability study of the Seyfert 1 galaxy NGC 5548 to investigate the temperature structure of its accretion disk. The 19 overlapping continuum light curves (1158 Å to 9157 Å) combine simultaneous Hubble Space Telescope, Swift, and ground-based observations over a 180 day period from 2014 January to July. Light-curve variability is interpreted as the reverberation response of the accretion disk to irradiation by a central time-varying point source. Our model yields the disk inclination i = 36° ± 10°, temperature T₁ = (44 ± 6) × 10³ K at 1 light day from the black hole, and a temperature-radius slope (T ∝ r^(-α)) of α = 0.99 ± 0.03. We also infer the driving light curve and find that it correlates poorly with both the hard and soft X-ray light curves, suggesting that the X-rays alone may not drive the ultraviolet and optical variability over the observing period. We also decompose the light curves into bright, faint, and mean accretion-disk spectra. These spectra lie below that expected for a standard blackbody accretion disk accreting at L/L_Edd = 0.1.
Statistical tools for transgene copy number estimation based on real-time PCR.
Yuan, Joshua S; Burris, Jason; Stewart, Nathan R; Mentewab, Ayalew; Stewart, C Neal
2007-11-01
Compared with traditional transgene copy number detection technologies such as Southern blot analysis, real-time PCR provides a fast, inexpensive and high-throughput alternative. However, real-time PCR-based transgene copy number estimation tends to be ambiguous and subjective, owing to the lack of proper statistical analysis and data quality control needed to render a reliable copy number estimate with a prediction value. Despite recent progress in the statistical analysis of real-time PCR, few publications have integrated these advancements into real-time PCR-based transgene copy number determination. Three experimental designs and four statistical models with integrated data quality control are presented. In the first design, external calibration curves are established for the transgene based on serially diluted templates. The Ct numbers from a control transgenic event and a putative transgenic event are compared to derive the transgene copy number or zygosity estimate. Simple linear regression and two-group t-test procedures were combined to model the data from this design. In the second design, standard curves are generated for both an internal reference gene and the transgene, and the copy number of the transgene is compared with that of the internal reference gene. Multiple regression models and ANOVA models can be employed to analyze the data and perform quality control for this approach. In the third design, transgene copy number is compared with the reference gene without a standard curve, based directly on the fluorescence data. Two different multiple regression models are proposed to analyze the data, corresponding to two different approaches to integrating amplification efficiency. Our results highlight the importance of proper statistical treatment and integrated quality control in real-time PCR-based transgene copy number determination. These statistical methods allow real-time PCR-based transgene copy number estimation to be more reliable and precise, and proper confidence intervals are necessary for unambiguous prediction of transgene copy number. The four statistical methods are compared with respect to their advantages and disadvantages. Moreover, they can also be applied to other real-time PCR-based quantification assays, including transfection efficiency analysis and pathogen quantification.
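The first design can be sketched as a simple regression exercise: regress Ct on log10 template amount for the serial dilutions, then invert the fitted line to compare a putative event against a single-copy control. The numbers below are illustrative, and the full quality-control and confidence-interval treatment of the paper is omitted.

```python
# Sketch: external standard curve (Ct vs. log10 template) and a copy-number
# ratio read off the fitted line. Data are simulated.
import numpy as np
from scipy import stats

log10_template = np.log10([1e2, 1e3, 1e4, 1e5, 1e6])
ct = np.array([30.1, 26.8, 23.4, 20.1, 16.7])       # serial-dilution Ct values

slope, intercept, r, _, _ = stats.linregress(log10_template, ct)
print(f"slope = {slope:.2f}, "
      f"efficiency = {10**(-1/slope) - 1:.2%}, R^2 = {r**2:.3f}")

def quantity(ct_obs):
    # invert the calibration line to a template amount
    return 10 ** ((ct_obs - intercept) / slope)

copy_ratio = quantity(22.3) / quantity(23.3)        # event vs. 1-copy control
print(f"estimated copy number ~ {copy_ratio:.1f}")
```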
Subcritical crack growth in fibrous materials
NASA Astrophysics Data System (ADS)
Santucci, S.; Cortet, P.-P.; Deschanel, S.; Vanel, L.; Ciliberto, S.
2006-05-01
We present experiments on the slow growth of a single crack in a fax paper sheet subjected to a constant force F. We find that statistically averaged crack growth curves can be described by only two parameters: the mean rupture time τ and a characteristic growth length ζ. We propose a model based on a thermally activated rupture process that takes into account the microstructure of cellulose fibers. The model is able to reproduce the shape of the growth curve and the dependence of ζ on F, as well as the effect of temperature on the rupture time τ. We find that the length scale at which rupture occurs in this model is consistently close to the diameter of cellulose microfibrils.
Scholl, Joep H G; van Hunsel, Florence P A M; Hak, Eelko; van Puijenbroek, Eugène P
2018-02-01
The statistical screening of pharmacovigilance databases containing spontaneously reported adverse drug reactions (ADRs) is mainly based on disproportionality analysis. The aim of this study was to improve the efficiency of full database screening using a prediction model-based approach. A logistic regression-based prediction model containing 5 candidate predictors was developed and internally validated using the Summary of Product Characteristics as the gold standard for the outcome. All drug-ADR associations, with the exception of those related to vaccines, with a minimum of 3 reports formed the training data for the model. Performance was based on the area under the receiver operating characteristic curve (AUC). Results were compared with the current method of database screening based on the number of previously analyzed associations. A total of 25 026 unique drug-ADR associations formed the training data for the model. The final model contained all 5 candidate predictors (number of reports, disproportionality, reports from healthcare professionals, reports from marketing authorization holders, Naranjo score). The AUC for the full model was 0.740 (95% CI; 0.734-0.747). The internal validity was good based on the calibration curve and bootstrapping analysis (AUC after bootstrapping = 0.739). Compared with the old method, the AUC increased from 0.649 to 0.740, and the proportion of potential signals increased by approximately 50% (from 12.3% to 19.4%). A prediction model-based approach can be a useful tool to create priority-based listings for signal detection in databases consisting of spontaneous ADRs. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.
Joint inversion of apparent resistivity and seismic surface and body wave data
NASA Astrophysics Data System (ADS)
Garofalo, Flora; Sauvin, Guillaume; Valentina Socco, Laura; Lecomte, Isabelle
2013-04-01
A novel inversion algorithm has been implemented to jointly invert apparent resistivity curves from vertical electric soundings, surface wave dispersion curves, and P-wave travel times. The algorithm works for laterally varying layered sites. Surface wave dispersion curves and P-wave travel times can be extracted from the same seismic dataset, and apparent resistivity curves can be obtained from continuous vertical electric sounding acquisition. The inversion scheme is based on a series of local 1D layered models whose unknown parameters are the thickness h, S-wave velocity Vs, P-wave velocity Vp, and resistivity R of each layer. The 1D models are linked to surface-wave dispersion curves and apparent resistivity curves through classical 1D forward modelling, while a 2D model is created by interpolating the 1D models and is linked to refracted P-wave hodograms. A priori information can be included in the inversion, and spatial regularization is introduced as a set of constraints between the model parameters of adjacent models and layers. Both the a priori information and the regularization are weighted by covariance matrices. We compare individual inversions and joint inversion for a synthetic dataset with smooth lateral variations. In the individual inversions, poor sensitivity to some model parameters leads to estimation errors of up to 62.5%, whereas in the joint inversion the cooperation of the different techniques reduces most of the model estimation errors to below 5%, with few exceptions of up to 39%, an overall improvement. Even though the final model retrieved by joint inversion is internally consistent and more reliable, analysis of the results reveals unacceptable values of the Vp/Vs ratio for some layers, implying negative Poisson's ratio values. To further improve the inversion performance, an additional constraint is added, imposing a Poisson's ratio in the range 0-0.5. The final results are globally improved by the introduction of this constraint, which further reduces the maximum error to 30%. The same test was performed on field data acquired in a landslide-prone area near the town of Hvittingfoss, Norway. Seismic data were recorded on two 160-m-long profiles in roll-along mode using a 5-kg sledgehammer as the source and 24 4.5-Hz vertical geophones with 4-m separation. First-arrival travel times were picked at every shot location, and surface wave dispersion curves were extracted at 8 locations for each profile. 2D resistivity measurements were carried out on the same profiles using Gradient and Dipole-Dipole arrays with 2-m electrode spacing. The apparent resistivity curves were extracted at the same locations as the dispersion curves. The data were subsequently jointly inverted and the resulting model compared to the individual inversions. Although the models from both the individual and the joint inversions are consistent, the estimation error is smaller for the joint inversion, especially for the first-arrival travel times. The joint inversion exploits the different sensitivities of the methods to the model parameters and therefore mitigates solution non-uniqueness and the effects of the intrinsic limitations of the individual techniques. Moreover, it produces an internally consistent multi-parametric final model that can be profitably interpreted to provide a better understanding of subsurface properties.
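The joint-inversion principle can be illustrated with a deliberately tiny example: two forward models share the same model parameters, and a single objective sums their covariance-weighted misfits, so the better-resolved data type compensates for the noisier one. The forward models below are generic stand-ins, not dispersion-curve or travel-time physics.

```python
# Toy sketch of joint inversion: one parameter vector, two data types,
# residuals weighted by each data set's standard deviation.
import numpy as np
from scipy.optimize import least_squares

true = np.array([2.0, 0.5])                     # shared model parameters

def fwd_a(m, x): return m[0] * np.sqrt(x) + m[1]   # "dispersion-like" proxy
def fwd_b(m, x): return m[0] + m[1] * x            # "travel-time-like" proxy

x = np.linspace(1, 10, 25)
rng = np.random.default_rng(10)
da = fwd_a(true, x) + rng.normal(0, 0.30, x.size)  # noisier data set
db = fwd_b(true, x) + rng.normal(0, 0.05, x.size)

def residuals(m):
    return np.r_[(fwd_a(m, x) - da) / 0.30,        # weight by data sigma
                 (fwd_b(m, x) - db) / 0.05]

sol = least_squares(residuals, x0=[1.0, 1.0])
print(np.round(sol.x, 3))
```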
Comparison of Numerical Analyses with a Static Load Test of a Continuous Flight Auger Pile
NASA Astrophysics Data System (ADS)
Hoľko, Michal; Stacho, Jakub
2014-12-01
The article deals with numerical analyses of a Continuous Flight Auger (CFA) pile. The analyses include a comparison of calculated and measured load-settlement curves, as well as a comparison of the load distribution over the pile's length. The numerical analyses were executed using two types of software, Ansys and Plaxis, both based on FEM calculations. The two differ in the way they create numerical models, model the interface between the pile and the soil, and use constitutive material models. The analyses were prepared in the form of a parametric study in which the method of modelling the interface and the material models of the soil are compared and analysed. Our analyses show that both types of software permit the modelling of pile foundations. The Plaxis software offers advanced material models as well as the modelling of the influence of groundwater or overconsolidation. The load-settlement curve calculated using Plaxis agrees with the results of a static load test with more than 95% accuracy. In comparison, the load-settlement curve calculated using Ansys yields only an approximate estimate, but the software allows for the modelling of large structural systems together with a foundation system.
Venta, Kimberly; Baker, Erin; Fidopiastis, Cali; Stanney, Kay
2017-12-01
The purpose of this study was to investigate the potential of developing an EHR-based model of physician competency, named the Skill Deficiency Evaluation Toolkit for Eliminating Competency-loss Trends (Skill-DETECT), which presents the opportunity to use EHR-based models to inform the selection of Continued Medical Education (CME) opportunities specifically targeted at maintaining proficiency. The IBM Explorys platform provided outpatient Electronic Health Records (EHRs) representing 76 physicians with over 5000 patients combined. These data were used to develop the Skill-DETECT model, a predictive hybrid model composed of a rule-based model, a logistic regression model, and a thresholding model, which predicts cognitive clinical skill deficiencies in internal medicine physicians. A three-phase approach was then used to statistically validate model performance. Subject matter expert (SME) panel reviews resulted in a 100% overall approval rate for the rule-based model. Areas under the receiver operating characteristic curve calculated for each logistic regression model were between 0.76 and 0.92, indicating exceptional performance. Normality, skewness, and kurtosis were determined and confirmed that the distribution of values output by the thresholding model was unimodal and peaked, which confirmed effectiveness and generalizability. The validation confirmed that the Skill-DETECT model has a strong ability to evaluate EHR data and to support the identification of internal medicine cognitive clinical skills that are deficient or have a higher likelihood of becoming deficient and thus require remediation, which will allow both physicians and medical organizations to fine-tune training efforts. Copyright © 2017 Elsevier B.V. All rights reserved.
Modeling the Residual Strength of a Fibrous Composite Using the Residual Daniels Function
NASA Astrophysics Data System (ADS)
Paramonov, Yu.; Cimanis, V.; Varickis, S.; Kleinhofs, M.
2016-09-01
The concept of a residual Daniels function (RDF) is introduced. Together with the concept of a Daniels sequence, the RDF is used for estimating the residual static strength (after some preliminary fatigue loading) of a unidirectional fibrous composite (UFC) and its S-N curve on the basis of test data. Usually, the residual strength is analyzed on the basis of a known S-N curve. In our work, an inverse approach is used: the S-N curve is derived from an analysis of the residual strength. This approach gives a good qualitative description of the process of decreasing residual strength and explains the existence of the fatigue limit. The estimates of the parameters of the corresponding regression model can be interpreted as estimates of the parameters of the local strength of the components of the UFC. In order to approach the quantitative experimental estimates of the fatigue life, some ideas based on the mathematics of semi-Markovian processes are employed. Satisfactory results are obtained in processing experimental data on the fatigue life and residual strength of glass/epoxy laminates.
Microfocal angiography of the pulmonary vasculature
NASA Astrophysics Data System (ADS)
Clough, Anne V.; Haworth, Steven T.; Roerig, David T.; Linehan, John H.; Dawson, Christopher A.
1998-07-01
X-ray microfocal angiography provides a means of assessing regional microvascular perfusion parameters using residue detection of vascular indicators. As an application of this methodology, we studied the effects of alveolar hypoxia, a pulmonary vasoconstrictor, on the pulmonary microcirculation to determine changes in regional mean blood transit time, volume and flow between control and hypoxic conditions. Video x-ray images of a dog lung were acquired as a bolus of radiopaque contrast medium passed through the lobar vasculature. X-ray time-absorbance curves were acquired from arterial and microvascular regions of interest during both control and hypoxic alveolar gas conditions. A mathematical model based on indicator-dilution theory for image residue curves was fitted to the data to determine changes in microvascular perfusion parameters. The sensitivity of the model parameters to the model assumptions was analyzed. Generally, the model parameter describing regional microvascular volume, corresponding to the area under the microvascular absorbance curve, was the most robust. The results of the model analysis applied to the experimental data suggest a significant decrease in microvascular volume with hypoxia. However, additional model assumptions concerning the flow kinematics within the capillary bed may be required for assessing changes in regional microvascular flow and mean transit time from image residue data.
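The underlying indicator-dilution bookkeeping can be illustrated with the central volume principle, V = F · MTT. The sketch below uses the first-moment form of the mean transit time for an outflow-type concentration curve; the residue-curve variant used in the paper differs in detail, and the curve shape and flow value are illustrative.

```python
# Sketch: mean transit time as the normalized first moment of an
# indicator-dilution curve, then volume via the central volume principle.
import numpy as np

t = np.linspace(0.0, 20.0, 200)              # s
c = t**2 * np.exp(-t / 2.0)                  # gamma-variate-like bolus curve

mtt = (t * c).sum() / c.sum()                # first moment / area (uniform grid)
flow = 5.0                                   # mL/s, assumed known
print(f"MTT = {mtt:.2f} s, regional volume = {flow * mtt:.1f} mL")
```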
Enrollment Planning Using Computer Decision Model: A Case Study at Grambling State University.
ERIC Educational Resources Information Center
Ghosh, Kalyan; Lundy, Harold W.
Achieving enrollment goals continues to be a major administrative concern in higher education. Enrollment management can be assisted through the use of computerized planning and forecast models. Although commercially available Markov transition type curve fitting models have been developed and used, a microcomputer-based decision planning model…
A mathematical model of microalgae growth in cylindrical photobioreactor
NASA Astrophysics Data System (ADS)
Bakeri, Noorhadila Mohd; Jamaian, Siti Suhana
2017-08-01
Microalgae are unicellular organisms that exist individually or in chains or groups and can be utilized in many applications. Researchers have made various efforts to increase the growth rate of microalgae. Microalgae have potential as an effective tool for wastewater treatment, besides serving as a replacement for conventional fuels such as coal and as a source of biodiesel. The growth of microalgae can be estimated using the Geider model, which is based on the photosynthesis-irradiance curve (PI curve) and focused on flat-panel photobioreactors. In this study, therefore, a mathematical model for microalgae growth in a cylindrical photobioreactor is proposed based on the Geider model. Light irradiance is the crucial factor affecting the growth rate of microalgae. The absorbed photon flux is determined by calculating the average light irradiance in a cylindrical system illuminated by a unidirectional parallel flux, treating the cylinder as a collection of differential parallelepipeds. Results from this study show that the specific growth rate of microalgae increases until a constant level is reached. The proposed mathematical model can therefore be used to estimate the rate of microalgae growth in a cylindrical photobioreactor.
NASA Astrophysics Data System (ADS)
Qamar, Muhammad Uzair; Azmat, Muhammad; Cheema, Muhammad Jehanzeb Masud; Shahid, Muhammad Adnan; Khushnood, Rao Arsalan; Ahmad, Sajjad
2016-10-01
The lack of donor basins for the prediction of flow duration curves (FDCs) in ungauged basins (PUB) is an important research problem that remains unresolved in the literature. We present a distance-based approach to predicting FDCs at ungauged basins by quantifying the dissimilarity between FDCs and basin characteristics data. This enables us to bracket hydrologically similar basins and thus to estimate FDCs at ungauged basins. Generally, a single regression model is selected to make hydrological estimates at an ungauged basin. Based on established laws and theories of hydrology, we devise a method to improve the output of the selected model for an ungauged basin by swapping it with another model when the latter gives better coverage and better statistical estimates of the nearest neighbors of the ungauged basin. We report two examples to demonstrate the effectiveness of model swapping. Of the 124 basins used in the analysis, 34 basins in example 1 and 41 basins in example 2 fulfill the criteria for model swapping, and their estimates are subsequently improved significantly.
Derieppe, Marc; de Senneville, Baudouin Denis; Kuijf, Hugo; Moonen, Chrit; Bos, Clemens
2014-10-01
Previously, we demonstrated the feasibility of monitoring ultrasound-mediated uptake of a cell-impermeable model drug in real time with fibered confocal fluorescence microscopy. Here, we present a complete post-processing methodology, which corrects for cell displacements, to improve the accuracy of pharmacokinetic parameter estimation. Nucleus detection was performed with the radial symmetry transform algorithm. Cell tracking used an iterative closest point approach. Pharmacokinetic parameters were calculated by fitting a two-compartment model to the time-intensity curves of individual cells. Cells were tracked successfully, improving the accuracy of the time-intensity curves and of the pharmacokinetic parameter estimates. With tracking, 93% of the 370 nuclei showed a fluorescence signal variation that was well described by a two-compartment model. In addition, the parameter distributions were narrower, thus increasing precision. The dedicated image analysis enabled studying the kinetics of ultrasound-mediated model drug uptake in hundreds of cells per experiment using fiber-based confocal fluorescence microscopy.
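A sketch of the per-cell fitting step follows, using a reduced saturating form of two-compartment exchange (plateau and combined rate as the identifiable parameters); the parameterization and values are illustrative assumptions rather than the authors' exact model.

```python
# Sketch: fit a reduced two-compartment uptake model to one cell's
# time-intensity curve. In a two-compartment exchange, the plateau equals
# amp * k_in / (k_in + k_out) and the rate equals k_in + k_out.
import numpy as np
from scipy.optimize import curve_fit

def uptake(t, plateau, k):
    # rise to a plateau at the combined exchange rate k
    return plateau * (1.0 - np.exp(-k * t))

t = np.linspace(0, 120, 60)                          # s
rng = np.random.default_rng(7)
signal = uptake(t, 0.8, 0.05) + rng.normal(0, 0.02, t.size)

popt, _ = curve_fit(uptake, t, signal, p0=(1.0, 0.02))
print(f"plateau = {popt[0]:.3f}, rate k = {popt[1]:.4f} 1/s")
```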
NASA Astrophysics Data System (ADS)
Latorre, Borja; Peña-Sancho, Carolina; Angulo-Jaramillo, Rafaël; Moret-Fernández, David
2015-04-01
Measurement of soil hydraulic properties is of paramount importance in fields such as agronomy, hydrology and soil science. Building on an analysis of the Haverkamp et al. (1994) model, the aim of this paper is to present a technique to estimate the soil hydraulic properties (sorptivity, S, and hydraulic conductivity, K) from full-time cumulative infiltration curves. The method (NSH) was validated by means of 12 synthetic infiltration curves generated with HYDRUS-3D from known soil hydraulic properties. The K values used to simulate the synthetic curves were compared to those estimated with the proposed method. A procedure to identify and remove the effect of the contact sand layer on the cumulative infiltration curve was also developed. A sensitivity analysis was performed using the water level measurement as the uncertainty source. Finally, the procedure was evaluated using different infiltration times and data noise levels. Since a good correlation (R² = 0.98) was obtained between the K values used in HYDRUS-3D to model the infiltration curves and those estimated by the NSH method, it can be concluded that this technique is robust enough to estimate the soil hydraulic conductivity from complete infiltration curves. The numerical procedure to detect and remove the influence of the contact sand layer on the K and S estimates proved robust and efficient. An effect of infiltration-curve noise on the K estimate was observed, with the uncertainty increasing with increasing noise. Finally, the results showed that infiltration time is an important factor in estimating K: lower values of K, or smaller uncertainty, require longer infiltration times.
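For illustration, the sketch below fits sorptivity and conductivity to a synthetic cumulative infiltration curve using Philip's two-term approximation I(t) = S√t + cKt (with c fixed at 2/3) as a simple stand-in for the full-time Haverkamp model analyzed in the paper.

```python
# Sketch: estimate S and K from a noisy cumulative infiltration curve
# using Philip's two-term approximation (an assumed simplification).
import numpy as np
from scipy.optimize import curve_fit

def infiltration(t, S, K, c=2/3):
    return S * np.sqrt(t) + c * K * t

t = np.linspace(0.01, 3600, 200)                     # s
rng = np.random.default_rng(8)
I_obs = infiltration(t, 5e-4, 1e-6) + rng.normal(0, 2e-5, t.size)

popt, _ = curve_fit(infiltration, t, I_obs,
                    p0=(1e-4, 1e-7), bounds=(0, np.inf))
print(f"S = {popt[0]:.2e} m/s^0.5, K = {popt[1]:.2e} m/s")
```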
Surface triads with optical properties
NASA Astrophysics Data System (ADS)
Panchuk, K. L.; Lyubchinov, E. V.; Krysova, I. V.
2018-01-01
A geometric model of the formation of surfaces comprising an interconnected triple of emitter, reflector and receiver is presented in the paper. The model is based on the cyclographic mapping of a spatial curve to the plane. In such a map, any given point (x, y, z) of the curve corresponds to a cycle with center (x, y) and radius equal to the applicate z. The entire curve corresponds to a directed envelope of cycles consisting, in the general case, of two branches. It is shown that the triad of curves consisting of the two branches of the envelope and the orthogonal projection of the original curve onto the plane (xy) corresponds to a triad of developable surfaces. The triad of curves in the plane (xy) and the original curve together form a triad of ruled surfaces. Both triads have an optical property: any ray of light emerging from a point of the emitter surface along its normal and subsequently falling on the surface of the reflector is directed along the normal vector to the surface of the receiver. The direct and inverse problems of the formation of the triad of surfaces are solved. In the first case, a one-parameter set of triads of surfaces is defined from a given spatial curve. In the second case, a single triad of surfaces is defined from a pair of curves, emitter and receiver, defined in the plane (xy). Numerical examples of solutions of the direct and inverse problems are considered and the corresponding visualizations are given. The results of the work can be used in the design of reflector antennas in radar systems and in systems for converting solar energy into electric and thermal energy.
NASA Astrophysics Data System (ADS)
Wu, You-Lin; Lin, Jing-Jenn; Lin, Shih-Hung; Sung, Yi-Hsing
2017-11-01
Hysteretic current-voltage (I-V) characteristics are quite common in metal-insulator-metal (MIM) devices used for resistive switching random access memory (RRAM). Two types of hysteretic I-V curves are usually observed: figure eight and counter figure eight (counter-clockwise and clockwise in the positive voltage sweep direction, respectively). In this work, a clockwise hysteretic I-V curve was found for an MIM device with a polystyrene (PS)/ZnO nanorods stack as the insulator layer. Three distinct regions, I ∼ V, I ∼ V^2, and I ∼ V^0.6, are observed in the double logarithmic plot of the I-V curves, which cannot be explained completely with the conventional trap-controlled space-charge-limited-current (SCLC) model. A model based on the energy band with two separate traps, plus local energy variation and trap-controlled SCLC, has been developed, which successfully describes the behavior of the clockwise hysteretic I-V characteristics obtained in this work.
Peltola, Mikko; Malmivaara, Antti; Paavola, Mika
2013-12-04
The risk of early revision is increased for the first patients treated with a newly introduced knee prosthesis. In this study, we explored the learning curves associated with ten knee implant models to determine their effect on early revision risk. We studied register data from all seventy-five surgical units that performed knee arthroplasty in Finland from 1998 to 2007. Of 54,925 patients (66,098 knees), 39,528 patients (46,363 knees) underwent arthroplasty for osteoarthritis of the knee with the ten most common total knee implants and were followed with complete data until December 31, 2010, or the time of death. We used a Cox proportional-hazards regression model to calculate the hazard ratios for early revision for the first fifteen arthroplasties and for subsequent increments in the number of arthroplasties. We found large differences among knee implants in the risk of early revision at introduction, as well as in the overall risk of early revision. A learning curve was found for four implant models, while six models did not show a learning effect on the risk of early revision. The survivorship of the studied prostheses showed substantial differences. Knee implants have model-specific learning curves and early revision risks, and some models are more difficult to implement than others. Manufacturers should consider the learning effect when designing implants and instrumentation, and surgeons should thoroughly familiarize themselves with new knee implants before use.
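The survival-analysis step can be sketched with the lifelines package: a Cox proportional-hazards model for time to early revision, with an indicator for the unit's first fifteen cases of a model as the covariate. The data, effect size, and follow-up horizon are simulated assumptions, not the Finnish register.

```python
# Sketch: Cox PH model for early revision with a "first 15 cases" covariate.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(9)
n = 2000
sequence_no = rng.integers(1, 200, n)            # unit's cumulative case count
early = sequence_no <= 15                        # first 15 cases at the unit
hazard = 0.02 * np.where(early, 1.8, 1.0)        # assumed learning effect
time_to_revision = rng.exponential(1 / hazard)
observed = time_to_revision < 13                 # follow-up horizon, years

df = pd.DataFrame({
    "time": np.minimum(time_to_revision, 13.0),
    "revised": observed.astype(int),
    "first15": early.astype(int),
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="revised")
print(cph.summary[["coef", "exp(coef)", "p"]])
```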
Predicting Madura cattle growth curve using non-linear model
NASA Astrophysics Data System (ADS)
Widyas, N.; Prastowo, S.; Widi, T. S. M.; Baliarti, E.
2018-03-01
Madura cattle are native to Indonesia. They are a composite breed that has undergone hundreds of years of selection and domestication to reach its present remarkable uniformity. Crossbreeding has reached the isle of Madura, and the Madrasin, a cross between Madura cows and Limousine semen, has emerged. This paper aimed to compare the growth curves of the Madrasin and one type of pure Madura cow, the common Madura cattle (Madura), using non-linear models. Madura cattle are kept traditionally, so reliable records are hardly available. Data were collected from smallholder farmers in Madura. Cows from different age classes (<6 months, 6-12 months, 1-2 years, 2-3 years, 3-5 years and >5 years) were observed, and body measurements (chest girth, body length and wither height) were taken. In total, 63 Madura and 120 Madrasin records were obtained. A linear model was built with cattle sub-population and age as explanatory variables. Body weights were estimated based on the chest girth. Growth curves were built using logistic regression. Results showed that, within the same age class, Madrasin cattle have significantly larger bodies than Madura (p<0.05). The logistic models fitted the Madura and Madrasin data well, with estimated MSEs of 39.09 and 759.28 and prediction accuracies of 99% and 92% for Madura and Madrasin, respectively. Prediction of the growth curve using the logistic regression model performed well for both types of Madura cattle. However, efforts to collect accurate data on Madura cattle are necessary to better characterize and study them.
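A sketch of the logistic growth-curve fit follows, using the common parameterization W(t) = A / (1 + b e^{-kt}) with A the mature weight; the age-weight data and starting values are illustrative, not the Madura records.

```python
# Sketch: fit a logistic growth curve to body-weight data.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, A, b, k):
    # A: mature weight, b: shape, k: growth rate
    return A / (1.0 + b * np.exp(-k * t))

age_months = np.array([3, 9, 18, 30, 48, 72], dtype=float)
weight_kg = np.array([45, 90, 150, 210, 255, 270], dtype=float)

popt, _ = curve_fit(logistic, age_months, weight_kg, p0=(300, 10, 0.1))
A, b, k = popt
print(f"mature weight ~ {A:.0f} kg, rate k = {k:.3f} per month")
```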
Shared and Distinct Rupture Discriminants of Small and Large Intracranial Aneurysms.
Varble, Nicole; Tutino, Vincent M; Yu, Jihnhee; Sonig, Ashish; Siddiqui, Adnan H; Davies, Jason M; Meng, Hui
2018-04-01
Many ruptured intracranial aneurysms (IAs) are small. Clinical presentations suggest that small and large IAs could have different phenotypes, but it is unknown whether small and large IAs have different characteristics that discriminate rupture. We analyzed morphological, hemodynamic, and clinical parameters of 413 retrospectively collected IAs (training cohort; 102 ruptured IAs). Hierarchical cluster analysis was performed to determine a size cutoff for dichotomizing the IA population into small and large IAs. We applied multivariate logistic regression to build rupture discrimination models for small IAs, large IAs, and the aggregate of all IAs. We validated the ability of these 3 models to predict rupture status in a second, independently collected cohort of 129 IAs (testing cohort; 14 ruptured IAs). Hierarchical cluster analysis in the training cohort confirmed that small and large IAs are best separated at 5 mm based on morphological and hemodynamic features (area under the curve=0.81). For small IAs (<5 mm), the resulting rupture discrimination model included undulation index, oscillatory shear index, previous subarachnoid hemorrhage, and absence of multiple IAs (area under the curve=0.84; 95% confidence interval, 0.78-0.88), whereas for large IAs (≥5 mm), the model included undulation index, low wall shear stress, previous subarachnoid hemorrhage, and IA location (area under the curve=0.87; 95% confidence interval, 0.82-0.93). The model for the aggregated training cohort retained all the parameters in the size-dichotomized models. Results in the testing cohort showed that the size-dichotomized rupture discrimination model had higher sensitivity (64% versus 29%) and accuracy (77% versus 74%), marginally higher area under the curve (0.75; 95% confidence interval, 0.61-0.88 versus 0.67; 95% confidence interval, 0.52-0.82), and similar specificity (78% versus 80%) compared with the aggregate-based model. Small (<5 mm) and large (≥5 mm) IAs have different hemodynamic and clinical, but not morphological, rupture discriminants. Size-dichotomized rupture discrimination models performed better than the aggregate model. © 2018 American Heart Association, Inc.
Catchment area-based evaluation of the AMC-dependent SCS-CN-based rainfall-runoff models
NASA Astrophysics Data System (ADS)
Mishra, S. K.; Jain, M. K.; Pandey, R. P.; Singh, V. P.
2005-09-01
Using a large set of rainfall-runoff data from 234 watersheds in the USA, a catchment area-based evaluation of the modified version of the Mishra and Singh (2002a) model was performed. The model is based on the Soil Conservation Service Curve Number (SCS-CN) methodology and incorporates the antecedent moisture in computation of direct surface runoff. Comparison with the existing SCS-CN method showed that the modified version performed better than did the existing one on the data of all seven area-based groups of watersheds ranging from 0.01 to 310.3 km2.
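For context, the classical SCS-CN relation on which the modified model builds can be written in a few lines; this Python sketch shows only the standard textbook form (the Mishra and Singh antecedent-moisture modification is not reproduced), and the numeric inputs are illustrative.

    def scs_cn_runoff(P_mm: float, CN: float, lam: float = 0.2) -> float:
        # Direct surface runoff Q (mm) from storm rainfall P via the SCS-CN method.
        # CN is assumed to already reflect the antecedent moisture condition (AMC);
        # the Mishra and Singh (2002a) modification folds antecedent moisture into
        # the formulation itself and is not shown here.
        S = 25400.0 / CN - 254.0   # potential maximum retention (mm)
        Ia = lam * S               # initial abstraction
        if P_mm <= Ia:
            return 0.0
        return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

    print(scs_cn_runoff(P_mm=80.0, CN=75.0))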
Stephan, Carsten; Xu, Chuanliang; Finne, Patrik; Cammann, Henning; Meyer, Hellmuth-Alexander; Lein, Michael; Jung, Klaus; Stenman, Ulf-Hakan
2007-09-01
Different artificial neural networks (ANNs) using total prostate-specific antigen (PSA) and percentage of free PSA (%fPSA) have been introduced to enhance the specificity of prostate cancer detection. The applicability of independently trained ANN and logistic regression (LR) models to populations that differ in composition (screening versus referred) and in the PSA assay used had not yet been tested. Two ANN and LR models using PSA (range 4 to 10 ng/mL), %fPSA, prostate volume, digital rectal examination findings, and patient age were tested. A multilayer perceptron network (MLP) was trained on 656 screening participants (Prostatus PSA assay) and another ANN (Immulite-based ANN [iANN]) was constructed on 606 multicentric urologically referred men. These and other assay-adapted ANN models, including one new iANN-based ANN, were used. The areas under the curve for the iANN (0.736) and MLP (0.745) were comparable but showed no difference from %fPSA (0.725) in the Finnish group. Only the new iANN-based ANN reached a significantly larger area under the curve (0.77). At 95% sensitivity, the specificities of the MLP (33%) and the new iANN-based ANN (34%) were significantly better than those of the iANN (23%) and %fPSA (19%). Applying the reverse methodology, using the MLP model on the referred patients, revealed, in contrast, a significant improvement in the areas under the curve for the iANN and MLP (each 0.83) compared with %fPSA (0.70). At 90% and 95% sensitivity, the specificities of all LR and ANN models were significantly greater than those for %fPSA. The ANNs based on different PSA assays and populations were mostly comparable, but even with assay adaptation the clearly different patient composition did not allow unbiased application of an ANN to the other cohort. Thus, the use of ANNs in populations other than those in which they were built is possible, but has limitations.
Characterizing the UV-to-NIR shape of the dust attenuation curve of IR luminous galaxies up to z ˜ 2
NASA Astrophysics Data System (ADS)
Lo Faro, B.; Buat, V.; Roehlly, Y.; Alvarez-Marquez, J.; Burgarella, D.; Silva, L.; Efstathiou, A.
2017-12-01
In this work, we investigate the far-ultraviolet (UV) to near-infrared (NIR) shape of the dust attenuation curve of a sample of IR-selected dust obscured (ultra)luminous IR galaxies at z ∼ 2. The spectral energy distributions (SEDs) are fitted with Code Investigating GALaxy Emission, a physically motivated spectral-synthesis model based on energy balance. Its flexibility allows us to test a wide range of different analytical prescriptions for the dust attenuation curve, including the well-known Calzetti and Charlot & Fall curves, and modified versions of them. The attenuation curves computed under the assumption of our reference double power-law model are in very good agreement with those derived, in previous works, with radiative transfer (RT) SED fitting. We investigate the position of our galaxies in the IRX-β diagram and find this to be consistent with greyer slopes, on average, in the UV. We also find evidence for a flattening of the attenuation curve in the NIR with respect to more classical Calzetti-like recipes. This larger NIR attenuation yields larger derived stellar masses from SED fitting, by a median factor of ∼1.4 and up to a factor ∼10 for the most extreme cases. The star formation rate appears instead to be more dependent on the total amount of attenuation in the galaxy. Our analysis highlights the need for a flexible attenuation curve when reproducing the physical properties of a large variety of objects.
Volatile Transport on Pluto: First Results from the 2013 Observing Season
NASA Astrophysics Data System (ADS)
Buratti, B. J.; Dalba, P. A.; Hicks, M.; Chu, D.; O'Neill, A.; Chesley, J. P.
2013-12-01
With the New Horizons spacecraft due to encounter Pluto in slightly less than two years, close scrutiny of this dwarf ice planet has begun in earnest. Ground-based observations are especially critical for context and for a larger temporal excursion. Seasonal transport of volatiles should occur on Pluto, and this transport should be detectable through changes in its rotational light curve, once all variations due to viewing geometry have been modeled. Given the steady increase observed in Pluto's atmospheric pressure over the past two decades, associated sublimation of frost from the surface has likely occurred, as predicted by volatile transport models. Rotational light curves of Pluto through time have been created for static frost models based on images from the Hubble Space Telescope. These models, which account for changes in viewing geometry, have been compared with observed light curves obtained between 1950 and 2013. No evidence for transport was found prior to 2000. Observations from 2002 (Buie et al., 2010, Astron. J. 139, 1128) and 2007-2008 (Hicks et al. 2008, B.A.A.S. 40, 460) suggest changes in the frost pattern on Pluto's surface. New observations of Pluto's light curve from the 2013 season from Table Mountain Observatory show no evidence for the large transport of volatiles on Pluto's surface. Our data are the first measurement of a large opposition surge on Pluto similar to that seen on other icy bodies. Both Buie et al. (2010) and our observations from the 2012-2013 seasons show that Pluto is becoming more red in color. This observation makes sense if nitrogen is being removed from the surface to uncover a red, photolyzed substrate of methane. Funded by NASA.
Marginalizing Instrument Systematics in HST WFC3 Transit Light Curves
NASA Astrophysics Data System (ADS)
Wakeford, H. R.; Sing, D. K.; Evans, T.; Deming, D.; Mandell, A.
2016-03-01
Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) infrared observations at 1.1-1.7 μm probe primarily the H2O absorption band at 1.4 μm, and have provided low-resolution transmission spectra for a wide range of exoplanets. We present the application of marginalization based on the approach of Gibson to analyze exoplanet transit light curves obtained from HST WFC3 to better determine important transit parameters such as Rp/R*, which are important for accurate detections of H2O. We approximate the evidence, often referred to as the marginal likelihood, for a grid of systematic models using the Akaike Information Criterion. We then calculate the evidence-based weight assigned to each systematic model and use the information from all tested models to calculate the final marginalized transit parameters for both the band-integrated and spectroscopic light curves to construct the transmission spectrum. We find that a majority of the highest weight models contain a correction for a linear trend in time as well as corrections related to HST orbital phase. We additionally test the dependence on the shift in spectral wavelength position over the course of the observations and find that spectroscopic wavelength shifts δλ(λ) best describe the associated systematic in the spectroscopic light curves for most targets, while fast scan rate observations of bright targets require an additional level of processing to produce a robust transmission spectrum. The use of marginalization allows for transparent interpretation and understanding of the instrument and the impact of each systematic evaluated statistically for each data set, expanding the ability to make true and comprehensive comparisons between exoplanet atmospheres.
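The AIC-based marginalization described above reduces to a short computation: convert each systematic model's AIC into an evidence-like weight, then average the per-model parameter estimates. This Python sketch uses made-up numbers purely to show the mechanics.

    import numpy as np

    def aic_weights(aic_values):
        # Evidence-based weights from AIC values, one per systematic model.
        aic = np.asarray(aic_values, dtype=float)
        delta = aic - aic.min()
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    def marginalize(estimates, variances, aic_values):
        # Marginalized transit parameter (e.g. Rp/R*) over a grid of models.
        w = aic_weights(aic_values)
        est = np.sum(w * np.asarray(estimates))
        # Total variance: per-model variance plus between-model scatter.
        var = np.sum(w * (np.asarray(variances) + (np.asarray(estimates) - est) ** 2))
        return est, np.sqrt(var)

    print(marginalize([0.120, 0.122, 0.119], [1e-6, 1e-6, 2e-6], [10.0, 11.5, 14.2]))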
NASA Astrophysics Data System (ADS)
Chan, H. M.; van der Velden, B. H. M.; E Loo, C.; Gilhuijs, K. G. A.
2017-08-01
We present a radiomics model to discriminate between patients at low risk and those at high risk of treatment failure at long-term follow-up based on eigentumors: principal components computed from volumes encompassing tumors in washin and washout images of pre-treatment dynamic contrast-enhanced (DCE-)MR images. Eigentumors were computed from the images of 563 patients from the MARGINS study. Subsequently, a least absolute shrinkage and selection operator (LASSO) selected candidates from the components that contained 90% of the variance of the data. The model for prediction of survival after treatment (median follow-up time 86 months) was based on logistic regression. Receiver operating characteristic (ROC) analysis was applied and area-under-the-curve (AUC) values were computed as measures of training and cross-validated performances. The discriminating potential of the model was confirmed using Kaplan-Meier survival curves and log-rank tests. From the 322 principal components that explained 90% of the variance of the data, the LASSO selected 28 components. The ROC curves of the model yielded AUC values of 0.88, 0.77 and 0.73 for the training, leave-one-out cross-validated and bootstrapped performances, respectively. The bootstrapped Kaplan-Meier survival curves confirmed significant separation for all tumors (P < 0.0001). Survival analysis on immunohistochemical subgroups shows significant separation for the estrogen-receptor subtype tumors (P < 0.0001) and the triple-negative subtype tumors (P = 0.0039), but not for tumors of the HER2 subtype (P = 0.41). The results of this retrospective study show the potential of early-stage pre-treatment eigentumors for use in prediction of treatment failure of breast cancer.
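The pipeline described (principal components as 'eigentumors', LASSO selection, then logistic regression) maps naturally onto standard tooling. Below is a minimal Python sketch with synthetic arrays standing in for the imaging data; the sizes and regularization strength are assumptions.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)
    X = rng.normal(size=(563, 4096))   # flattened washin/washout volumes (synthetic)
    y = rng.integers(0, 2, size=563)   # treatment-failure label (synthetic)

    # PCA keeping 90% of the variance plays the role of the eigentumors;
    # an L1-penalized logistic regression stands in for the LASSO selection.
    model = make_pipeline(
        PCA(n_components=0.90),
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
    )
    model.fit(X, y)
    print(np.count_nonzero(model[-1].coef_), "components retained")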
Marginalizing Instrument Systematics in HST WFC3 Transit Light Curves
NASA Technical Reports Server (NTRS)
Wakeford, H. R.; Sing, D.K.; Deming, D.; Mandell, A.
2016-01-01
Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) infrared observations at 1.1-1.7 microns probe primarily the H2O absorption band at 1.4 microns, and have provided low-resolution transmission spectra for a wide range of exoplanets. We present the application of marginalization based on the approach of Gibson to analyze exoplanet transit light curves obtained from HST WFC3 to better determine important transit parameters such as the planet-to-star radius ratio Rp/R*, which are important for accurate detections of H2O. We approximate the evidence, often referred to as the marginal likelihood, for a grid of systematic models using the Akaike Information Criterion. We then calculate the evidence-based weight assigned to each systematic model and use the information from all tested models to calculate the final marginalized transit parameters for both the band-integrated and spectroscopic light curves to construct the transmission spectrum. We find that a majority of the highest weight models contain a correction for a linear trend in time as well as corrections related to HST orbital phase. We additionally test the dependence on the shift in spectral wavelength position over the course of the observations and find that spectroscopic wavelength shifts δλ(λ) best describe the associated systematic in the spectroscopic light curves for most targets, while fast scan rate observations of bright targets require an additional level of processing to produce a robust transmission spectrum. The use of marginalization allows for transparent interpretation and understanding of the instrument and the impact of each systematic evaluated statistically for each data set, expanding the ability to make true and comprehensive comparisons between exoplanet atmospheres.
Suzuki, Ryo; Ito, Kohta; Lee, Taeyong; Ogihara, Naomichi
2017-12-01
Identifying the viscous properties of the plantar soft tissue is crucial not only for understanding the dynamic interaction of the foot with the ground during locomotion, but also for development of improved footwear products and therapeutic footwear interventions. In the present study, the viscous and hyperelastic material properties of the plantar soft tissue were experimentally identified using a spherical indentation test and an analytical contact model of the spherical indentation test. Force-relaxation curves of the heel pads were obtained from the indentation experiment. The curves were fit to the contact model incorporating a five-element Maxwell model to identify the viscous material parameters. The finite element method with the experimentally identified viscoelastic parameters could successfully reproduce the measured force-relaxation curves, indicating the material parameters were correctly estimated using the proposed method. Although there are some methodological limitations, the proposed framework to identify the viscous material properties may facilitate the development of subject-specific finite element modeling of the foot and other biological materials. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
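The relaxation fit described above can be sketched compactly: a five-element Maxwell model (two Maxwell arms plus an equilibrium spring) fitted to a force-relaxation curve. The data below are synthetic and the parameter values are placeholders, not the paper's identified properties.

    import numpy as np
    from scipy.optimize import curve_fit

    def maxwell5(t, F_inf, F1, tau1, F2, tau2):
        # Equilibrium term plus two exponentially relaxing Maxwell arms.
        return F_inf + F1 * np.exp(-t / tau1) + F2 * np.exp(-t / tau2)

    t = np.linspace(0.0, 60.0, 200)                     # time (s)
    F = maxwell5(t, 2.0, 1.5, 1.2, 0.8, 15.0)           # synthetic "measured" force
    F += np.random.default_rng(2).normal(0, 0.02, t.size)

    params, _ = curve_fit(maxwell5, t, F, p0=[1.0, 1.0, 1.0, 1.0, 10.0])
    print(params)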
Watermarked cardiac CT image segmentation using deformable models and the Hermite transform
NASA Astrophysics Data System (ADS)
Gomez-Coronel, Sandra L.; Moya-Albor, Ernesto; Escalante-Ramírez, Boris; Brieva, Jorge
2015-01-01
Medical image watermarking is an open area for research and is a solution for the protection of copyright and intellectual property. One of the main challenges of this problem is that the marked images should not differ perceptually from the original images, allowing a correct diagnosis and authentication. Furthermore, we also aim at obtaining watermarked images with very little numerical distortion so that computer vision tasks, such as segmentation of important anatomical structures, are not impaired or affected. We propose a preliminary watermarking application in cardiac CT images based on a perceptive approach that includes a brightness model to generate a perceptive mask and identify the image regions where watermark detection becomes a difficult task for the human eye. We propose a normalization scheme of the image in order to improve robustness against geometric attacks. We follow a spread spectrum technique to insert an alphanumeric code, such as patient's information, within the watermark. The watermark scheme is based on the Hermite transform as a bio-inspired image representation model. In order to evaluate the numerical integrity of the image data after watermarking, we perform a segmentation task based on deformable models. The segmentation technique is based on a vector-valued level sets method such that, given a curve in a specific image and subject to some constraints, the curve can evolve in order to detect objects. To drive the curve evolution we simultaneously introduce image features like the gray level and the steered Hermite coefficients as texture descriptors. Segmentation performance was assessed by means of the Dice index and the Hausdorff distance. We tested different mark sizes and different insertion schemes on images that were later segmented either automatically or manually by physicians.
Structural model of the 50S subunit of E. coli ribosomes from solution scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Svergun, D.I.; Koch, M.H.J.; Pedersen, J.S.
1994-12-31
The application of new methods of small-angle scattering data interpretation to a contrast variation study of the 50S ribosomal subunit of Escherichia coli in solution is described. The X-ray data from contrast variation with sucrose are analyzed in terms of the basic scattering curves from the volume inaccessible to sucrose and from the regions inside this volume occupied mainly by RNA and by proteins. From these curves, models of the shape of the 50S subunit and its RNA-rich core are evaluated and positioned so that their difference produces a scattering curve in good agreement with the scattering from the protein moiety. Based on this preliminary model, the X-ray and neutron contrast variation data of the 50S subunit in aqueous solutions are interpreted within the framework of an advanced two-phase model described by the shapes of the 50S subunit and its RNA-rich core, taking into account density fluctuations inside the RNA and the protein moiety. The shapes of the envelope of the 50S subunit and of the RNA-rich core are evaluated with a resolution of about 40 Å. The shape of the envelope is in good agreement with the models of the 50S subunit obtained from electron microscopy on isolated particles. The shape of the RNA-rich core correlates well with the model of the entire particle determined by image reconstruction from ordered sheets, indicating that the latter model, which is based on the subjective contouring of density maps, is heavily biased towards the RNA.
Evidence for Endothermy in Pterosaurs Based on Flight Capability Analyses
NASA Astrophysics Data System (ADS)
Jenkins, H. S.; Pratson, L. F.
2005-12-01
Previous attempts to constrain flight capability in pterosaurs have relied heavily on the fossil record, using bone articulation and apparent muscle allocation to evaluate flight potential (Frey et al., 1997; Padian, 1983; Bramwell, 1974). However, the physical parameters necessary for flight in pterosaurs remain loosely defined and few systematic approaches to constraining flight capability have been synthesized (Templin, 2000; Padian, 1983). Here we present a new method to assess flight capability in pterosaurs as a function of humerus length and flight velocity. By creating an energy-balance model to evaluate the power required for flight against the power available to the animal, we derive a 'U'-shaped power curve and infer optimal flight speeds and maximal wingspan lengths for the pterosaurs Quetzalcoatlus northropi and Pteranodon ingens. Our model corroborates empirically derived power curves for the modern black-billed magpie (Pica pica) and accurately reproduces the mechanical power curve for modern cockatiels (Nymphicus hollandicus) (Tobalske et al., 2003). When we adjust our model to include an endothermic metabolic rate for pterosaurs, we find a maximal wingspan length of 18 meters for Q. northropi. Model runs using an ectothermic metabolism derive maximal wingspans of 6-8 meters. As estimates based on fossil evidence show total wingspan lengths reaching up to 15 meters for Q. northropi, we conclude that large pterosaurs may have been endothermic and therefore more metabolically similar to birds than to reptiles.
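The energy-balance idea behind the 'U'-shaped power curve is standard flight aerodynamics: induced power falls with speed while parasite power grows as v^3. The Python sketch below uses generic placeholder values (air density, drag coefficient, induced-power factor), not the paper's pterosaur parameters.

    import numpy as np

    def power_required(v, mass_kg, wingspan_m, body_area_m2,
                       rho=1.225, g=9.81, Cd=0.1, k=1.2):
        # Induced power (falls with v) + parasite power (rises as v**3).
        W = mass_kg * g
        P_ind = k * W**2 / (2.0 * rho * v * np.pi * (wingspan_m / 2.0) ** 2)
        P_par = 0.5 * rho * Cd * body_area_m2 * v**3
        return P_ind + P_par

    v = np.linspace(3.0, 30.0, 200)                     # airspeed (m/s)
    P = power_required(v, mass_kg=70.0, wingspan_m=10.0, body_area_m2=0.5)
    print(f"minimum-power speed ~ {v[np.argmin(P)]:.1f} m/s")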
Ma, Qiang; Cheng, Huanyu; Jang, Kyung-In; Luan, Haiwen; Hwang, Keh-Chih; Rogers, John A.; Huang, Yonggang; Zhang, Yihui
2016-01-01
Development of advanced synthetic materials that can mimic the mechanical properties of non-mineralized soft biological materials has important implications in a wide range of technologies. Hierarchical lattice materials constructed with horseshoe microstructures belong to this class of bio-inspired synthetic materials, where the mechanical responses can be tailored to match the nonlinear J-shaped stress-strain curves of human skins. The underlying relations between the J-shaped stress-strain curves and their microstructure geometry are essential in designing such systems for targeted applications. Here, a theoretical model of this type of hierarchical lattice material is developed by combining a finite deformation constitutive relation of the building block (i.e., horseshoe microstructure), with the analyses of equilibrium and deformation compatibility in the periodical lattices. The nonlinear J-shaped stress-strain curves and Poisson ratios predicted by this model agree very well with results of finite element analyses (FEA) and experiment. Based on this model, analytic solutions were obtained for some key mechanical quantities, e.g., elastic modulus, Poisson ratio, peak modulus, and critical strain around which the tangent modulus increases rapidly. A negative Poisson effect is revealed in the hierarchical lattice with triangular topology, as opposed to a positive Poisson effect in hierarchical lattices with Kagome and honeycomb topologies. The lattice topology is also found to have a strong influence on the stress-strain curve. For the three isotropic lattice topologies (triangular, Kagome and honeycomb), the hierarchical triangular lattice material renders the sharpest transition in the stress-strain curve and relatively high stretchability, given the same porosity and arc angle of horseshoe microstructure. Furthermore, a demonstrative example illustrates the utility of the developed model in the rapid optimization of hierarchical lattice materials for reproducing the desired stress-strain curves of human skins. This study provides theoretical guidelines for future designs of soft bio-mimetic materials with hierarchical lattice constructions. PMID:27087704
A Global Optimization Method to Calculate Water Retention Curves
NASA Astrophysics Data System (ADS)
Maggi, S.; Caputo, M. C.; Turturro, A. C.
2013-12-01
Water retention curves (WRC) have a key role in the hydraulic characterization of soils and rocks. The behaviour of the medium is defined by relating the unsaturated water content to the matric potential. The experimental determination of WRCs requires an accurate and detailed measurement of the dependence of matric potential on water content, a time-consuming and error-prone process, in particular for rocky media. A complete experimental WRC needs at least a few tens of data points, distributed more or less uniformly from full saturation to oven dryness. Since each measurement requires waiting for steady-state conditions (from a few tens of minutes for soils up to several hours or days for rocks or clays), the whole process can take a few months. The experimental data are fitted to the most appropriate parametric model, such as the widely used models of van Genuchten, Brooks and Corey, and Rossi-Nimmo, to obtain the analytic WRC. We present here a new method for the determination of the parameters that best fit the models to the available experimental data. The method is based on differential evolution, an evolutionary computation algorithm particularly useful for multidimensional real-valued global optimization problems. With this method it is possible to strongly reduce the number of measurements necessary to optimize the model parameters that accurately describe the WRC of the samples, decreasing the time needed to adequately characterize the medium. In the present work, we have applied our method to calculate the WRCs of sedimentary carbonatic rocks of marine origin, belonging to the 'Calcarenite di Gravina' Formation (Middle Pliocene - Early Pleistocene) and coming from two different quarry districts in Southern Italy. [Figure: WRC curves calculated using the van Genuchten model by simulated annealing (dashed curve) and differential evolution (solid curve), from 10 experimental data points randomly extracted from the full experimental dataset; simulated annealing is not able to find the optimal solution with this reduced data set.]
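Differential evolution applied to a van Genuchten WRC takes only a few lines with scipy; the ten data points below are illustrative stand-ins for a reduced experimental dataset, and the parameter bounds are assumptions.

    import numpy as np
    from scipy.optimize import differential_evolution

    def van_genuchten(h, theta_r, theta_s, alpha, n):
        m = 1.0 - 1.0 / n
        return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

    h_obs = np.array([1, 5, 10, 30, 100, 300, 1000, 3000, 10000, 15000], float)  # |head| (cm)
    theta_obs = np.array([0.40, 0.39, 0.37, 0.33, 0.26, 0.19, 0.12, 0.08, 0.05, 0.04])

    def sse(p):
        # Sum of squared errors between model and observations.
        return np.sum((van_genuchten(h_obs, *p) - theta_obs) ** 2)

    bounds = [(0.0, 0.2), (0.3, 0.5), (1e-4, 1.0), (1.05, 5.0)]  # theta_r, theta_s, alpha, n
    result = differential_evolution(sse, bounds, seed=3)
    print(result.x, result.fun)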
NASA Astrophysics Data System (ADS)
Ma, Qiang; Cheng, Huanyu; Jang, Kyung-In; Luan, Haiwen; Hwang, Keh-Chih; Rogers, John A.; Huang, Yonggang; Zhang, Yihui
2016-05-01
Development of advanced synthetic materials that can mimic the mechanical properties of non-mineralized soft biological materials has important implications in a wide range of technologies. Hierarchical lattice materials constructed with horseshoe microstructures belong to this class of bio-inspired synthetic materials, where the mechanical responses can be tailored to match the nonlinear J-shaped stress-strain curves of human skins. The underlying relations between the J-shaped stress-strain curves and their microstructure geometry are essential in designing such systems for targeted applications. Here, a theoretical model of this type of hierarchical lattice material is developed by combining a finite deformation constitutive relation of the building block (i.e., horseshoe microstructure), with the analyses of equilibrium and deformation compatibility in the periodical lattices. The nonlinear J-shaped stress-strain curves and Poisson ratios predicted by this model agree very well with results of finite element analyses (FEA) and experiment. Based on this model, analytic solutions were obtained for some key mechanical quantities, e.g., elastic modulus, Poisson ratio, peak modulus, and critical strain around which the tangent modulus increases rapidly. A negative Poisson effect is revealed in the hierarchical lattice with triangular topology, as opposed to a positive Poisson effect in hierarchical lattices with Kagome and honeycomb topologies. The lattice topology is also found to have a strong influence on the stress-strain curve. For the three isotropic lattice topologies (triangular, Kagome and honeycomb), the hierarchical triangular lattice material renders the sharpest transition in the stress-strain curve and relatively high stretchability, given the same porosity and arc angle of horseshoe microstructure. Furthermore, a demonstrative example illustrates the utility of the developed model in the rapid optimization of hierarchical lattice materials for reproducing the desired stress-strain curves of human skins. This study provides theoretical guidelines for future designs of soft bio-mimetic materials with hierarchical lattice constructions.
Vickers, Mathew J; Aubret, Fabien; Coulon, Aurélie
2017-01-01
The thermal performance curve (TPC) illustrates the dependence on body (and therefore environmental) temperature of many fitness-related aspects of ectotherm ecology and biology, including foraging, growth, predator avoidance, and reproduction. The typical thermal performance curve model is linear in its parameters despite the well-known, strong non-linearity of the response of performance to temperature. In addition, it is usual to consider a single model based on few individuals as descriptive of a species-level response to temperature. To overcome these issues, we used generalized additive mixed modeling (GAMM) to estimate thermal performance curves for 73 individual hatchling Natrix natrix grass snakes from seven clutches, taking advantage of the structure of GAMM to demonstrate that almost 16% of the deviance in thermal performance curves is attributed to inter-individual variation, while only 1.3% is attributable to variation amongst clutches. GAMM allows precise estimation of curve characteristics, which we used to test hypotheses on trade-offs thought to constrain the thermal performance curve: hotter is better, the specialist-generalist trade-off, and resource allocation/acquisition. We observed a negative relationship between maximum performance and performance breadth, indicating a specialist-generalist trade-off, and a positive relationship between thermal optimum and maximum performance, suggesting "hotter is better". There was a significant difference among matrilines in the relationship between area under the curve and maximum performance, a relationship that is an indicator of evenness in acquisition or allocation of resources. As we used unfed hatchlings, the observed matriline effect indicates divergent breeding strategies among mothers, with some mothers provisioning eggs unequally, resulting in some offspring being better than others, while other mothers provisioned the eggs more evenly, resulting in even performance throughout the clutch. This observation is reminiscent of bet-hedging strategies, and implies the possibility for intra-clutch variability in the TPCs to buffer N. natrix against unpredictable environmental variability. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A
2018-05-01
High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m2 by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R2 = 0.98; p < 0.01) with a mean bias of -2.2% and precision of 9.4%. A similar relationship was observed in children (R2 = 0.99; p < 0.01). The developed pharmacokinetic model-based sparse sampling strategy promises to achieve the target area under the curve as part of precision dosing.
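A two-compartment infusion model like the one described can be simulated directly, with the area under the concentration-time curve obtained by quadrature; the parameter values and dose below are placeholders, not the published population estimates.

    import numpy as np
    from scipy.integrate import solve_ivp

    def two_cmt(t, y, CL, V1, Q, V2, R0, t_inf):
        A1, A2 = y                                   # amounts in central/peripheral
        rate_in = R0 if t <= t_inf else 0.0
        dA1 = rate_in - (CL / V1) * A1 - (Q / V1) * A1 + (Q / V2) * A2
        dA2 = (Q / V1) * A1 - (Q / V2) * A2
        return [dA1, dA2]

    dose_mg, t_inf = 280.0, 0.5                      # illustrative dose (mg), infusion time (h)
    CL, V1, Q, V2 = 25.0, 15.0, 30.0, 20.0           # placeholder PK parameters
    sol = solve_ivp(two_cmt, (0.0, 6.0), [0.0, 0.0],
                    args=(CL, V1, Q, V2, dose_mg / t_inf, t_inf),
                    dense_output=True, max_step=0.01)

    t = np.linspace(0.0, 6.0, 601)
    conc = sol.sol(t)[0] / V1                        # central concentration (mg/L)
    print(f"AUC ~ {np.trapz(conc, t):.1f} mg*h/L")   # trapezoid-rule area under the curve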
NASA Astrophysics Data System (ADS)
Engeland, Kolbjørn; Steinsland, Ingelin; Johansen, Stian Solvang; Petersen-Øverleir, Asgeir; Kolberg, Sjur
2016-05-01
In this study, we explore the effect of uncertainty and poor observation quality on hydrological model calibration and predictions. The Osali catchment in Western Norway was selected as case study and an elevation distributed HBV-model was used. We systematically evaluated the effect of accounting for uncertainty in parameters, precipitation input, temperature input and streamflow observations. For precipitation and temperature we accounted for the interpolation uncertainty, and for streamflow we accounted for rating curve uncertainty. Further, the effects of poorer quality of precipitation input and streamflow observations were explored. Less information about precipitation was obtained by excluding the nearest precipitation station from the analysis, while reduced information about the streamflow was obtained by omitting the highest and lowest streamflow observations when estimating the rating curve. The results showed that including uncertainty in the precipitation and temperature inputs has a negligible effect on the posterior distribution of parameters and for the Nash-Sutcliffe (NS) efficiency for the predicted flows, while the reliability and the continuous rank probability score (CRPS) improves. Less information in precipitation input resulted in a shift in the water balance parameter Pcorr, a model producing smoother streamflow predictions, giving poorer NS and CRPS, but higher reliability. The effect of calibrating the hydrological model using streamflow observations based on different rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions, the best evaluation scores were not achieved for the rating curve used for calibration, but for rating curves giving smoother streamflow observations. Less information in streamflow influenced the water balance parameter Pcorr, and increased the spread in evaluation scores by giving both better and worse scores.
Song, Yang; Zhang, Yu-Dong; Yan, Xu; Liu, Hui; Zhou, Minxiong; Hu, Bingwen; Yang, Guang
2018-04-16
Deep learning is the most promising methodology for automatic computer-aided diagnosis of prostate cancer (PCa) with multiparametric MRI (mp-MRI). To develop an automatic approach based on a deep convolutional neural network (DCNN) to classify PCa and noncancerous tissues (NC) with mp-MRI. Retrospective. In all, 195 patients with localized PCa were collected from a PROSTATEx database. In total, 159/17/19 patients with 444/48/55 observations (215/23/23 PCas and 229/25/32 NCs) were randomly selected for training/validation/testing, respectively. T2-weighted, diffusion-weighted, and apparent diffusion coefficient images. A radiologist manually labeled the regions of interest of PCas and NCs and estimated the Prostate Imaging Reporting and Data System (PI-RADS) scores for each region. Inspired by VGG-Net, we designed a patch-based DCNN model to distinguish between PCa and NCs based on a combination of mp-MRI data. Additionally, an enhanced prediction method was used to improve the prediction accuracy. The performance of DCNN prediction was tested using a receiver operating characteristic (ROC) curve, and the area under the ROC curve (AUC), sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. Moreover, the predicted result was compared with the PI-RADS score to evaluate its clinical value using decision curve analysis. Two-sided Wilcoxon signed-rank test with statistical significance set at 0.05. The DCNN produced excellent diagnostic performance in distinguishing between PCa and NC for testing datasets with an AUC of 0.944 (95% confidence interval: 0.876-0.994), sensitivity of 87.0%, specificity of 90.6%, PPV of 87.0%, and NPV of 90.6%. The decision curve analysis revealed that the joint model of PI-RADS and DCNN provided additional net benefits compared with the DCNN model and the PI-RADS scheme. The proposed DCNN-based model with enhanced prediction yielded high performance in statistical analysis, suggesting that DCNN could be used in computer-aided diagnosis (CAD) for PCa classification. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
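As an architectural illustration of a VGG-inspired patch classifier for mp-MRI, here is a minimal PyTorch sketch; the three input channels stand in for T2-weighted, diffusion-weighted and ADC patches, and the layer sizes and 64x64 patch size are assumptions, not the published network.

    import torch
    import torch.nn as nn

    class PatchDCNN(nn.Module):
        def __init__(self, n_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(            # three conv/pool stages
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(          # PCa vs NC logits
                nn.Flatten(),
                nn.Linear(128 * 8 * 8, 256), nn.ReLU(), nn.Dropout(0.5),
                nn.Linear(256, n_classes),
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    logits = PatchDCNN()(torch.randn(4, 3, 64, 64))   # batch of 64x64 patches
    print(logits.shape)                               # torch.Size([4, 2])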
Štiglic, G; Kocbek, P; Cilar, L; Fijačko, N; Stožer, A; Zaletel, J; Sheikh, A; Povalej Bržan, P
2018-05-01
To develop and validate a simplified screening test for undiagnosed Type 2 diabetes mellitus and impaired fasting glucose for the Slovenian population (SloRisk) to be used in the general population. Data on 11 391 people were collected from the electronic health records of comprehensive medical examinations in five Slovenian healthcare centres. Fasting plasma glucose as well as information related to the Finnish Diabetes Risk Score questionnaire, FINDRISC, were collected for 2073 people to build predictive models. Bootstrapping-based evaluation was used to estimate the area under the receiver-operating characteristic curve performance metric of two proposed logistic regression models as well as the Finnish Diabetes Risk Score model both at recommended and at alternative cut-off values. The final model contained five questions for undiagnosed Type 2 diabetes prediction and achieved an area under the receiver-operating characteristic curve of 0.851 (95% CI 0.850-0.853). The impaired fasting glucose prediction model included six questions and achieved an area under the receiver-operating characteristic curve of 0.840 (95% CI 0.839-0.840). There were four questions that were included in both models (age, sex, waist circumference and blood sugar history), with physical activity selected only for undiagnosed Type 2 diabetes and questions on family history and hypertension drug use selected only for the impaired fasting glucose prediction model. This study proposes two simplified models based on FINDRISC questions for screening of undiagnosed Type 2 diabetes and impaired fasting glucose in the Slovenian population. A significant improvement in performance was achieved compared with the original FINDRISC questionnaire. Both models include waist circumference instead of BMI. © 2018 Diabetes UK.
An agent-based computational model for tuberculosis spreading on age-structured populations
NASA Astrophysics Data System (ADS)
Graciani Rodrigues, C. C.; Espíndola, Aquino L.; Penna, T. J. P.
2015-06-01
In this work we present an agent-based computational model to study the spreading of the tuberculosis (TB) disease in age-structured populations. The proposed model is a merge of two previous models: an agent-based computational model for the spreading of tuberculosis and a bit-string model for biological aging. The combination of TB with population aging reproduces the coexistence of health states, as seen in real populations. In addition, the universal exponential behavior of mortality curves is still preserved. Finally, the population distribution as a function of age shows the prevalence of TB mostly in elders, for high-efficacy treatments.
NASA Astrophysics Data System (ADS)
Aigrain, S.; Llama, J.; Ceillier, T.; Chagas, M. L. das; Davenport, J. R. A.; García, R. A.; Hay, K. L.; Lanza, A. F.; McQuillan, A.; Mazeh, T.; de Medeiros, J. R.; Nielsen, M. B.; Reinhold, T.
2015-07-01
We present the results of a blind exercise to test the recoverability of stellar rotation and differential rotation in Kepler light curves. The simulated light curves lasted 1000 d and included activity cycles, Sun-like butterfly patterns, differential rotation and spot evolution. The range of rotation periods, activity levels and spot lifetime were chosen to be representative of the Kepler data of solar-like stars. Of the 1000 simulated light curves, 770 were injected into actual quiescent Kepler light curves to simulate Kepler noise. The test also included five 1000-d segments of the Sun's total irradiance variations at different points in the Sun's activity cycle. Five teams took part in the blind exercise, plus two teams who participated after the content of the light curves had been released. The methods used included Lomb-Scargle periodograms and variants thereof, autocorrelation function and wavelet-based analyses, plus spot modelling to search for differential rotation. The results show that the `overall' period is well recovered for stars exhibiting low and moderate activity levels. Most teams reported values within 10 per cent of the true value in 70 per cent of the cases. There was, however, little correlation between the reported and simulated values of the differential rotation shear, suggesting that differential rotation studies based on full-disc light curves alone need to be treated with caution, at least for solar-type stars. The simulated light curves and associated parameters are available online for the community to test their own methods.
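The most common of the period-search methods mentioned, the Lomb-Scargle periodogram, can be demonstrated in a few lines; the simulated signal below (a noisy sinusoid) is far simpler than the exercise's spot-modulated light curves.

    import numpy as np
    from astropy.timeseries import LombScargle

    rng = np.random.default_rng(4)
    t = np.arange(0.0, 1000.0, 0.0204)                # ~Kepler long cadence (days)
    period_true = 12.3
    flux = 1.0 + 0.01 * np.sin(2 * np.pi * t / period_true)
    flux += rng.normal(0, 0.002, t.size)              # white-noise stand-in

    freq, power = LombScargle(t, flux).autopower(minimum_frequency=1 / 100.0,
                                                 maximum_frequency=1 / 0.5)
    print(f"recovered period = {1 / freq[np.argmax(power)]:.2f} d")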
Park, Daeryong; Roesner, Larry A
2012-12-15
This study examined pollutant loads released to receiving water from a typical urban watershed in the Los Angeles (LA) Basin of California by applying a best management practice (BMP) performance model that includes uncertainty. This BMP performance model uses the k-C model and incorporates uncertainty analysis and the first-order second-moment (FOSM) method to assess the effectiveness of BMPs for removing stormwater pollutants. Uncertainties were considered for the influent event mean concentration (EMC) and the areal removal rate constant of the k-C model. The storage treatment overflow and runoff model (STORM) was used to simulate the flow volume from the watershed, the bypass flow volume and the flow volume that passes through the BMP. Detention basins and total suspended solids (TSS) were chosen as representative of stormwater BMPs and pollutants, respectively. This paper applies load frequency curves (LFCs), which replace the exceedance percentage with an exceedance frequency, as an alternative to load duration curves (LDCs) to evaluate the effectiveness of BMPs. An evaluation method based on uncertainty analysis is suggested because it applies a water quality standard exceedance based on frequency and magnitude. As a result, the incorporation of uncertainty in the estimates of pollutant loads can assist stormwater managers in determining the degree of total maximum daily load (TMDL) compliance that could be expected from a given BMP in a watershed. Copyright © 2012 Elsevier Ltd. All rights reserved.
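A load frequency curve itself is straightforward to construct from per-event loads: sort the loads and assign each an annual exceedance frequency. The Python sketch below uses synthetic lognormal TSS loads purely to show the mechanics.

    import numpy as np

    def load_frequency_curve(event_loads, years):
        # Event loads sorted descending; the i-th largest load is exceeded
        # roughly i/years times per year.
        loads = np.sort(np.asarray(event_loads))[::-1]
        freq = np.arange(1, loads.size + 1) / years
        return loads, freq

    rng = np.random.default_rng(8)
    tss_loads = rng.lognormal(mean=3.0, sigma=0.8, size=250)   # synthetic TSS loads (kg)
    loads, freq = load_frequency_curve(tss_loads, years=10)
    print(loads[:3], freq[:3])   # the largest events are exceeded least often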
Interpretation of BM Orionis. [eclipsing binary model
NASA Technical Reports Server (NTRS)
Huang, S.-S.
1975-01-01
The entire light curve of the BM Ori system both inside and outside primary and secondary eclipses has been examined on the basis of two models for the disk around the secondary component: one with the luminous energy of the disk coming entirely from the secondary, and another with the luminous energy coming at least partly from the primary. It has been found that if the disk is highly opaque, as is suggested by the fitting of the light curve, there exist in the first model discrepancies between what has been derived from the luminosity consideration for the secondary component and what has been derived from the radius consideration. Hence the second model is accepted. Based on this model the nature of both component stars has been examined from a consideration of the luminosity and the dimensions of the disk.
Satellite altimetry based rating curves throughout the entire Amazon basin
NASA Astrophysics Data System (ADS)
Paris, A.; Calmant, S.; Paiva, R. C.; Collischonn, W.; Silva, J. S.; Bonnet, M.; Seyler, F.
2013-05-01
The Amazon basin is the largest hydrological basin in the world. In recent years, the basin has experienced an unusual succession of extreme droughts and floods, whose origin is still a matter of debate. Yet the available data are sparse in both time and space, due to factors like the basin's size and difficulty of access. One of the major obstacles is obtaining discharge series distributed over the entire basin. Satellite altimetry can be used to improve our knowledge of the hydrological stream flow conditions in the basin through rating curves. Rating curves are mathematical relationships between stage and discharge at a given place. The common way to determine the parameters of the relationship is to compute a non-linear regression between the discharge and stage series. In this study, the discharge data were obtained by simulation over the entire basin using the MGB-IPH model with TRMM Merge input rainfall data and assimilation of gage data, run from 1998 to 2010. The stage dataset is made of ~800 altimetry series at ENVISAT and JASON-2 virtual stations, spanning 2002 to 2010. In the present work we present the benefits of using stochastic methods instead of deterministic ones to determine a dataset of rating curve parameters which is consistent throughout the entire Amazon basin. The rating curve parameters were computed using a parameter optimization technique based on a Markov chain Monte Carlo sampler and a Bayesian inference scheme. This technique provides an estimate of the best parameters for the rating curve, but also their posterior probability distribution, allowing the determination of a credibility interval for the rating curve. The error in the discharge estimates from the MGB-IPH model is also included in the rating curve determination; these errors come from either errors in the discharge derived from the gage readings or errors in the satellite rainfall estimates. The present experiment shows that the stochastic approach is more efficient than the deterministic one. Using prior credible intervals for the parameters defined by the user, this method provides a best estimate of the rating curve without any unlikely parameter values, and all sites achieved convergence before reaching the maximum number of model evaluations. Results were assessed through the Nash-Sutcliffe efficiency coefficient, applied both to discharges and to the logarithm of discharges. Most of the virtual stations had good or very good results, with Ens values ranging from 0.7 to 0.98. However, worse results were found at a few virtual stations, revealing the need to investigate segmenting the rating curve, depending on the stage or on the rising or recession limb, as well as possible errors in the altimetry series.
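The Bayesian rating-curve step can be sketched with a basic Metropolis sampler for the usual power law Q = a(h - h0)^b; the synthetic stage-discharge pairs, error model and proposal scales below are all assumptions for illustration.

    import numpy as np

    def log_post(params, h, q, sigma):
        a, b, h0 = params
        if a <= 0 or b <= 0 or h0 >= h.min():        # flat prior on a credible box
            return -np.inf
        return -0.5 * np.sum(((q - a * (h - h0) ** b) / sigma) ** 2)

    rng = np.random.default_rng(5)
    h = np.linspace(2.0, 8.0, 40)                                   # stage (m)
    q = 15.0 * (h - 1.2) ** 1.7 * rng.lognormal(0.0, 0.05, h.size)  # discharge (m3/s)
    sigma = 0.1 * q                                                 # discharge error model

    x = np.array([10.0, 1.5, 1.0])
    lp = log_post(x, h, q, sigma)
    chain = []
    for _ in range(20000):                           # Metropolis random walk
        prop = x + rng.normal(0.0, [0.5, 0.02, 0.02])
        lp_prop = log_post(prop, h, q, sigma)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x.copy())
    a, b, h0 = np.median(chain[10000:], axis=0)      # discard burn-in
    print(f"Q = {a:.1f} (h - {h0:.2f})^{b:.2f}")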
NASA Astrophysics Data System (ADS)
Morlot, Thomas; Perret, Christian; Favre, Anne-Catherine
2013-04-01
Whether for safety, energy production or regulation, water resources management is one of the main concerns of EDF (the French hydropower company). To meet these needs, EDF-DTG has operated since the 1950s a hydrometric network that includes more than 350 hydrometric stations. The data collected allow real-time monitoring of rivers (hydro-meteorological forecasts at points of interest), as well as hydrological studies and the sizing of structures. Ensuring the quality of stream flow data is a priority. A rating curve is an indirect method of estimating the discharge in rivers based on water level measurements. The value of discharge obtained from the rating curve is not entirely accurate due to the constant changes of the river bed morphology, the precision of the gaugings (direct and punctual discharge measurements) and the quality of the tracing. As time goes on, the uncertainty of the discharge estimated from a rating curve "ages" and increases; the final level of uncertainty therefore remains particularly difficult to assess. Moreover, the current EDF capacity to produce a rating curve is not suited to the frequency of change of the stage-discharge relationship. The current method does not take into consideration the variation of the flow conditions and the modifications of the river bed which occur due to natural processes such as erosion, sedimentation and seasonal vegetation growth. In order to get the most accurate stream flow data and to improve their reliability, this study develops an original "dynamic" method to compute rating curves based on historical gaugings from a hydrometric station. A curve is computed for each new gauging and a model of uncertainty is adjusted for each of them. The model of uncertainty takes into account the inaccuracies in the measurement of the water height, the quality of the tracing, the uncertainty of the gaugings and the aging of the confidence intervals, calculated with a variographic analysis. These rating curves provide values of stream flow that take into account the variability of flow conditions, together with a model of the uncertainties resulting from the aging of the rating curves. By taking into account the variability of the flow conditions and the life of the hydrometric station, this original dynamic method can answer important questions in the field of hydrometry such as "How many gaugings a year have to be made so as to produce stream flow data with an average uncertainty of X%?" and "When, and in which range of water flow, do we have to carry out those gaugings?". KEY WORDS: Uncertainty, Rating curve, Hydrometric station, Gauging, Variogram, Stream Flow
NASA Technical Reports Server (NTRS)
Yatheendradas, Soni; Peters-Lidard, Christa D.; Koren, Victor; Cosgrove, Brian A.; DeGoncalves, Luis G. D.; Smith, Michael; Geiger, James; Cui, Zhengtao; Borak, Jordan; Kumar, Sujay V.;
2012-01-01
Snow cover area affects snowmelt, soil moisture, evapotranspiration, and ultimately streamflow. For the Distributed Model Intercomparison Project - Phase 2 Western basins, we assimilate satellite-based fractional snow cover area (fSCA) from the Moderate Resolution Imaging Spectroradiometer, or MODIS, into the National Weather Service (NWS) SNOW-17 model. This model is coupled with the NWS Sacramento Heat Transfer (SAC-HT) model inside the National Aeronautics and Space Administration's (NASA) Land Information System. SNOW-17 computes fSCA from snow water equivalent (SWE) values using an areal depletion curve. Using a direct insertion, we assimilate fSCAs in two fully distributed ways: 1) we update the curve by attempting SWE preservation, and 2) we reconstruct SWEs using the curve. The preceding are refinements of an existing simple, conceptually-guided NWS algorithm. Satellite fSCA over dense forests inadequately accounts for below-canopy snow, degrading simulated streamflow upon assimilation during snowmelt. Accordingly, we implement a below-canopy allowance during assimilation. This simplistic allowance and direct insertion are found to be inadequate for improving calibrated results, still degrading them as mentioned above. However, for streamflow volume for the uncalibrated runs, we obtain: (1) substantial to major improvements (64-81 %) as a percentage of the control run residuals (or distance from observations), and (2) minor improvements (16-22 %) as a percentage of observed values. We highlight the need for detailed representations of canopy-snow optical radiative transfer processes in mountainous, dense forest regions if assimilation-based improvements are to be seen in calibrated runs over these areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, B; Georgia Institute of Technology, Atlanta, GA; Wang, C
Purpose: To correlate the damage produced by particles of different types and qualities to cell survival on the basis of nanodosimetric analysis and advanced DNA structures in the cell nucleus. Methods: A Monte Carlo code was developed to simulate subnuclear DNA chromatin fibers (CFs) of 30 nm utilizing a mean-free-path approach common to radiation transport. The cell nucleus was modeled as a spherical region containing 6000 chromatin-dense domains (CDs) of 400 nm diameter, with additional CFs modeled in a sparser interchromatin region. The Geant4-DNA code was utilized to produce a particle track database representing various particles at different energies and dose quantities. These tracks were used to stochastically position the DNA structures based on their mean free path to interaction with CFs. Excitation and ionization events intersecting CFs were analyzed using the DBSCAN clustering algorithm for assessment of the likelihood of producing DSBs. Simulated DSBs were then assessed based on their proximity to one another for a probability of inducing cell death. Results: Variations in energy deposition to chromatin fibers match expectations based on differences in particle track structure. The quality of damage to CFs based on different particle types indicates more severe damage by high-LET radiation than by low-LET radiation of identical particles. In addition, the model indicates more severe damage by protons than by alpha particles of the same LET, which is consistent with differences in their track structure. Cell survival curves have been produced showing the linear-quadratic (L-Q) behavior of sparsely ionizing radiation. Conclusion: Initial results indicate the feasibility of producing cell survival curves based on the Monte Carlo cell nucleus method. Accurate correlation of simulated DNA damage with cell survival on the basis of nanodosimetric analysis can provide insight into the biological responses to various radiation types. Current efforts are directed at producing cell survival curves for high-LET radiation.
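DBSCAN clustering of energy-deposition events, as used above to flag candidate double-strand breaks, can be illustrated with scikit-learn; the synthetic track coordinates and the eps/min_samples values below are assumptions, not the study's settings.

    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(6)
    # Synthetic 3-D coordinates (nm) of ionization/excitation events along a track.
    events = np.cumsum(rng.normal(0.0, 2.0, size=(300, 3)), axis=0)

    # Events within eps of a neighbour join a cluster; isolated events get label -1.
    labels = DBSCAN(eps=3.2, min_samples=2).fit_predict(events)
    sizes = np.bincount(labels[labels >= 0]) if (labels >= 0).any() else np.array([])
    print(labels.max() + 1, "clusters;", int(np.sum(sizes >= 3)), "with >= 3 events")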
Villadiego, Faider Alberto Castaño; Camilo, Breno Soares; León, Victor Gomez; Peixoto, Thiago; Díaz, Edgar; Okano, Denise; Maitan, Paula; Lima, Daniel; Guimarães, Simone Facioni; Siqueira, Jeanne Broch; Pinho, Rogério
2018-01-01
Nonlinear mixed models were used to describe longitudinal scrotal circumference (SC) measurements of Nellore bulls. Model comparisons were based on Akaike's information criterion, the Bayesian information criterion, the error sum of squares, adjusted R2 and the percentage of convergence. Subsequently, the best model was used to compare the SC growth curves of bulls divergently classified according to SC at 18-21 months of age. For this, bulls were classified into five groups: SC < 28 cm; 28 cm ≤ SC < 30 cm; 30 cm ≤ SC < 32 cm; 32 cm ≤ SC < 34 cm; and SC ≥ 34 cm. The Michaelis-Menten model showed the best fit according to the mentioned criteria. In this model, β1 is the asymptotic SC value and β2 represents the time to half-final growth, which may be related to sexual precocity. Parameters of the individually estimated growth curves were used to create a new dataset to evaluate the effect of the classification, farm, and year of birth on the β1 and β2 parameters. Bulls of the largest SC group presented a larger predicted SC over the whole analyzed period; nevertheless, the smallest SC group showed predicted SC similar to the intermediate SC groups (28 cm ≤ SC < 32 cm) around 1200 days of age. In this context, bulls classified as unfit for reproduction at 18-21 months old can reach a condition similar to that of bulls considered in good condition. In terms of the classification at 18-21 months, the asymptotic SC was similar among groups, farms and years; however, β2 differed among groups, indicating that differences in growth curves are related to sexual precocity. In summary, it seems that selection based on SC at too early an age may lead to discarding bulls with suitable reproductive potential. PMID:29494597
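The Michaelis-Menten growth curve used above, SC(t) = β1 t / (β2 + t), fits in a few lines of Python; the age/SC data below are illustrative placeholders rather than the study's measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(t, b1, b2):
        # b1: asymptotic SC (cm); b2: age at half of final growth (days)
        return b1 * t / (b2 + t)

    age_days = np.array([240, 360, 480, 600, 720, 900, 1200], float)
    sc_cm = np.array([21.0, 26.5, 29.8, 31.7, 33.0, 34.3, 35.5])   # illustrative

    (b1, b2), _ = curve_fit(michaelis_menten, age_days, sc_cm, p0=[38.0, 300.0])
    print(f"asymptotic SC = {b1:.1f} cm, half-growth age = {b2:.0f} d")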
Ply cracking in composite laminates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Youngmyong.
1989-01-01
Ply cracking behavior and accompanying stiffness changes in thermoset as well as thermoplastic matrix composites under various loading conditions are investigated. Specific topics addressed are: analytical model development for property degradations due to ply cracking under general in-plane loading; crack initiation and multiplication under static loading; and crack multiplication under cyclic loading. A model was developed to calculate the energy released due to ply cracking in a composite laminate subjected to general in-plane loading. The method is based on the use of a second order polynomial to represent the crack opening displacement and the concept of a through-the-thickness inherent flaw. The model is then used in conjunction with linear elastic fracture mechanics to predict the progressive ply cracking as well as first ply cracking. A resistance curve for crack multiplication is proposed as a means of characterizing the resistance to ply cracking in composite laminates. A methodology of utilizing the resistance curve to assess the crack density or overloading is also discussed. The method was applied to the graphite/thermoplastic polyimide composite to predict progressive ply cracking. However, unlike the thermoset matrix composites, a strength model is found to fit the experimental results better than the fracture mechanics based model. A set of closed form equations is also developed to calculate the accompanying stiffness changes due to the ply cracking. The effect of thermal residual stress is included in the analysis. A new method is proposed to characterize transverse ply cracking of symmetric balanced laminates under cyclic loading. The method is based on the concept of a through-the-thickness inherent flaw, the Paris law, and the resistance curve. Only two constants are needed to predict the crack density as a function of fatigue cycles.
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Hoffarth, Canio; Rajan, Subramaniam; Blankenhorn, Gunther
2016-01-01
The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased usage in the aerospace and automotive communities. In order to address a series of issues identified by the aerospace community as being desirable to include in a next generation composite impact model, an orthotropic, macroscopic constitutive model incorporating both plasticity and damage suitable for implementation within the commercial LS-DYNA computer code is being developed. The plasticity model is based on extending the Tsai-Wu composite failure model into a strain hardening-based orthotropic plasticity model with a non-associative flow rule. The evolution of the yield surface is determined based on tabulated stress-strain curves in the various normal and shear directions and is tracked using the effective plastic strain. To compute the evolution of damage, a strain equivalent semi-coupled formulation is used in which a load in one direction results in a stiffness reduction in multiple material coordinate directions. A detailed analysis is carried out to ensure that the strain equivalence assumption is appropriate for the derived plasticity and damage formulations that are employed in the current model. Procedures to develop the appropriate input curves for the damage model are presented and the process required to develop an appropriate characterization test matrix is discussed.
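As context for the plasticity formulation, the classical Tsai-Wu criterion that the model extends can be written in plane-stress form as

$$F_1\sigma_1 + F_2\sigma_2 + F_{11}\sigma_1^2 + F_{22}\sigma_2^2 + F_{66}\sigma_6^2 + 2F_{12}\sigma_1\sigma_2 = 1$$

with the F coefficients determined from tensile, compressive, and shear strengths in the material directions. The paper's generalization (a hardening yield function driven by tabulated stress-strain curves and a non-associative flow rule) is not reproduced here.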
Robust, open-source removal of systematics in Kepler data
NASA Astrophysics Data System (ADS)
Aigrain, S.; Parviainen, H.; Roberts, S.; Reece, S.; Evans, T.
2017-10-01
We present ARC2 (Astrophysically Robust Correction 2), an open-source python-based systematics-correction pipeline, to correct Kepler prime mission long-cadence light curves. The ARC2 pipeline identifies and corrects any isolated discontinuities in the light curves and then removes trends common to many light curves. These trends are modelled using the publicly available co-trending basis vectors, within an (approximate) Bayesian framework with 'shrinkage' priors to minimize the risk of overfitting and of injecting additional noise into the corrected light curves, while keeping astrophysical signals intact. We show that the ARC2 pipeline's performance matches that of the standard Kepler PDC-MAP data products using standard noise metrics, and demonstrate its ability to preserve astrophysical signals using injection tests with simulated stellar rotation and planetary transit signals. Although it is not identical, the ARC2 pipeline can thus be used as an open-source alternative to PDC-MAP, whenever the ability to model the impact of the systematics removal process on other kinds of signal is important.
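A minimal sketch of the core idea, regressing a light curve on cotrending basis vectors with an L2 ("shrinkage") penalty; this stands in for ARC2's full Bayesian treatment, and the data and function name are illustrative:

```python
# Hedged sketch: remove trends common to many light curves by ridge
# regression on cotrending basis vectors (CBVs). The real ARC2 pipeline
# also repairs discontinuities and places priors within a Bayesian framework.
import numpy as np

def correct_light_curve(flux, cbvs, lam=1.0):
    """flux: (n,) array; cbvs: (n, k) basis vectors; lam: shrinkage strength."""
    w = np.linalg.solve(cbvs.T @ cbvs + lam * np.eye(cbvs.shape[1]),
                        cbvs.T @ flux)   # ridge (shrinkage) solution
    return flux - cbvs @ w               # systematics-subtracted flux

rng = np.random.default_rng(0)
cbvs = rng.normal(size=(1000, 4))
flux = 1.0 + cbvs @ np.array([0.1, -0.05, 0.02, 0.0]) \
       + 0.01 * rng.normal(size=1000)
print(correct_light_curve(flux, cbvs).std())
```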
CEC-normalized clay-water sorption isotherm
NASA Astrophysics Data System (ADS)
Woodruff, W. F.; Revil, A.
2011-11-01
A normalized clay-water isotherm model, based on BET theory and describing the sorption and desorption of bound water in clays, sand-clay mixtures, and shales, is presented. Clay-water sorption isotherms (sorption and desorption) of clayey materials are normalized by their cation exchange capacity (CEC), accounting for a correction factor that depends on the type of counterion sorbed on the mineral surface in the so-called Stern layer. With such normalizations, all the data collapse onto two master curves, one for sorption and one for desorption, independent of the clay mineralogy, crystallographic considerations, and bound cation type; the normalization therefore neglects the true heterogeneity of water sorption/desorption in smectite. The two master curves show the general hysteretic behavior of the capillary pressure curve at low relative humidity (below 70%). The model is validated against several data sets obtained from the literature comprising a broad range of clay types and clay mineralogies. The CEC values, derived by inverting the sorption/desorption curves using a Markov chain Monte Carlo approach, are consistent with the CEC associated with the clay mineralogy.
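For reference, the underlying BET isotherm (standard form; the paper's contribution, the CEC normalization and counterion correction, is not reproduced here) gives the sorbed water mass w at relative humidity x as

$$\frac{w}{w_m} = \frac{C\,x}{(1-x)\,\left[1+(C-1)\,x\right]}$$

where w_m is the monolayer capacity and C an energy constant; normalizing w_m by the CEC is what collapses the data from different clays onto the master curves.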
A local-circulation model for Darrieus vertical-axis wind turbines
NASA Astrophysics Data System (ADS)
Masse, B.
1986-04-01
A new computational model for the aerodynamics of the vertical-axis wind turbine is presented. Based on the local-circulation method generalized for curved blades, combined with a wake model for the vertical-axis wind turbine, it differs markedly from current models based on variations in the streamtube momentum and vortex models using the lifting-line theory. A computer code has been developed to calculate the loads and performance of the Darrieus vertical-axis wind turbine. The results show good agreement with experimental data and compare well with other methods.
LeDell, Erin; Petersen, Maya; van der Laan, Mark
In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.
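A minimal sketch of an influence-curve variance estimate for AUC on a single validation fold (a DeLong-type estimator; the cited work aggregates such contributions across cross-validation folds, which this sketch omits):

```python
# Hedged sketch: AUC and its influence-function (DeLong-type) variance
# for one fold. The scores below are illustrative.
import numpy as np

def auc_and_variance(scores_pos, scores_neg):
    # Pairwise wins: 1 if a positive outranks a negative, 0.5 on ties
    wins = (scores_pos[:, None] > scores_neg[None, :]).astype(float) \
         + 0.5 * (scores_pos[:, None] == scores_neg[None, :])
    auc = wins.mean()
    v10 = wins.mean(axis=1) - auc   # influence contributions of positives
    v01 = wins.mean(axis=0) - auc   # influence contributions of negatives
    var = v10.var(ddof=1) / len(scores_pos) + v01.var(ddof=1) / len(scores_neg)
    return auc, var

print(auc_and_variance(np.array([0.9, 0.8, 0.7, 0.65]),
                       np.array([0.6, 0.4, 0.3, 0.2, 0.1])))
```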
Numerical simulation of a horizontal sedimentation tank considering sludge recirculation.
Zhang, Wei; Zou, Zhihong; Sui, Jun
2010-01-01
Most research conducted on the concentration distribution of sediment in sedimentation tanks does not consider the role of the suction dredge. To analyze the concentration distribution more accurately, a suspended sediment transport model was constructed and the velocity field in the sedimentation tank was determined, accounting for the influence of the suction dredge. The model was then applied to analyze the concentration distribution in the sedimentation tank with the suction dredge fixed, and the results showed that the distribution was in accordance with theoretical analysis. The simulated value of the outlet concentration was similar to the experimental value, and the trends of the isoconcentration distribution curves, as well as the vertical distribution curves of the five monitoring sections acquired through simulation, were almost the same as the curves acquired through experimentation. The differences between the simulated values and the experimental values were not significant.
Flow in curved ducts of varying cross-section
NASA Astrophysics Data System (ADS)
Sotiropoulos, F.; Patel, V. C.
1992-07-01
Two numerical methods for solving the incompressible Navier-Stokes equations are compared with each other by applying them to calculate laminar and turbulent flows through curved ducts of regular cross-section. Detailed comparisons, between the computed solutions and experimental data, are carried out in order to validate the two methods and to identify their relative merits and disadvantages. Based on the conclusions of this comparative study a numerical method is developed for simulating viscous flows through curved ducts of varying cross-sections. The proposed method is capable of simulating the near-wall turbulence using fine computational meshes across the sublayer in conjunction with a two-layer k-epsilon model. Numerical solutions are obtained for: (1) a straight transition duct geometry, and (2) a hydroturbine draft-tube configuration at model scale Reynolds number for various inlet swirl intensities. The report also provides a detailed literature survey that summarizes all the experimental and computational work in the area of duct flows.
Goodford, P J; St-Louis, J; Wootton, R
1978-01-01
1. Oxygen dissociation curves have been measured for human haemoglobin solutions with different concentrations of the allosteric effectors 2,3-diphosphoglycerate, adenosine triphosphate and inositol hexaphosphate. 2. Each effector produces a concentration dependent right shift of the oxygen dissociation curve, but a point is reached where the shift is maximal and increasing the effector concentration has no further effect. 3. Mathematical models based on the Monod, Wyman & Changeux (1965) treatment of allosteric proteins have been fitted to the data. For each compound the simple two-state model and its extension to take account of subunit inequivalence were shown to be inadequate, and a better fit was obtained by allowing the effector to lower the oxygen affinity of the deoxy conformational state as well as binding preferentially to this conformation. PMID:722582
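For reference, the simple two-state MWC saturation function that these fits start from is (standard notation: α = [O₂]/K_R, c = K_R/K_T, n = 4 for haemoglobin, L the allosteric constant):

$$\bar{Y} = \frac{\alpha(1+\alpha)^{n-1} + Lc\,\alpha(1+c\alpha)^{n-1}}{(1+\alpha)^{n} + L(1+c\alpha)^{n}}$$

On this reading, the better-fitting extension described above lets the effector both raise L (preferential binding to the deoxy T state) and lower the oxygen affinity of that state itself.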
Modeling Propagation of Shock Waves in Metals
NASA Astrophysics Data System (ADS)
Howard, W. M.; Molitoris, J. D.
2006-07-01
We present modeling results for the propagation of strong shock waves in metals. In particular, we use an arbitrary Lagrange Eulerian (ALE3D) code to model the propagation of strong pressure waves (P ˜ 300 to 400 kbars) generated with high explosives in contact with aluminum cylinders. The aluminum cylinders are assumed to be both flat-topped and have large-amplitude curved surfaces. We use 3D Lagrange mechanics. For the aluminum we use a rate-independent Steinberg-Guinan model, where the yield strength and shear modulus depend on pressure, density and temperature. The calculation of the melt temperature is based on the Lindemann law. At melt the yield strength and shear modulus are set to zero. The pressure is represented as a seven-term polynomial as a function of density. For the HMX-based high explosive, we use a JWL equation of state with a program burn model that gives the correct detonation velocity and C-J pressure (P ˜ 390 kbars). For the case of the large-amplitude curved surface, we discuss the evolving shock structure in terms of the early shock propagation experiments by Sakharov.
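For reference, the rate-independent Steinberg-Guinan scaling referred to above has the standard form (constants are material-specific; this is the textbook expression, not necessarily the exact implementation used here):

$$G = G_0\left[1 + A\,\frac{p}{\eta^{1/3}} - B\,(T-300)\right],\qquad Y = Y_0\left[1+\beta\,(\varepsilon_p+\varepsilon_i)\right]^{n}\frac{G}{G_0},\quad Y \le Y_{\max}$$

where η = ρ/ρ₀ is the compression and εp the equivalent plastic strain; Y and G are set to zero once the Lindemann melt temperature is reached, as stated.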
The eclipsing binary CW Eridani. [three-color photoelectric observation
NASA Technical Reports Server (NTRS)
Chen, K.-Y.
1975-01-01
Results of three-color photoelectric observations of CW Eridani are presented which were made with a 30-inch telescope over the three-year period from 1970 to 1973. The times of minima are computed, solutions of the light curves are obtained, and theoretical light curves are computed from the solutions. The period is determined to be 2.72837 days, and the orbital and photoelectric elements are derived from solutions based on the idealized Russell model.
Electric Transport Traction Power Supply System With Distributed Energy Sources
NASA Astrophysics Data System (ADS)
Abramov, E. Y.; Schurov, N. I.; Rozhkova, M. V.
2016-04-01
The paper addresses the problem of leveling the daily load curve of traction substations (TSS) for urban electric transport. A circuit of a traction power supply system (TPSS) with distributed autonomous energy sources (AES) based on photovoltaic (PV) and energy storage (ES) units is presented. A distribution algorithm of power flow for leveling the daily traction load curve is also introduced. In addition, an implemented experimental model of the power supply system is described.
NASA Astrophysics Data System (ADS)
Lara, Nadia C.; Haider, Asad A.; Wilson, Lon J.; Curley, Steven A.; Corr, Stuart J.
2017-01-01
Aqueous and nanoparticle-based solutions have been reported to heat when exposed to an alternating radiofrequency (RF) electric field. Although theoretical models have been developed to accurately describe this behavior given the solution composition and the geometrical constraints of the sample holder, these models have not been investigated across a wide range of solutions whose dielectric properties differ, especially with regard to the real permittivity. In this work, we investigate the RF heating properties of non-aqueous solutions composed of ethanol, propylene glycol, and glycine betaine with and without varying amounts of NaCl and LiCl. This allowed us to modulate the real permittivity across the range 25-132, as well as the imaginary permittivity across the range 37-177. Our results are in excellent agreement with the previously developed theoretical models. We have shown that different materials generate unique RF heating curves that differ from the standard aqueous heating curves. The theoretical model previously described is robust and accounts for the RF heating behavior of materials with a variety of dielectric properties, which may provide applications in non-invasive RF cancer hyperthermia.
Fast evolving pair-instability supernovae
Kozyreva, Alexandra; Gilmer, Matthew; Hirschi, Raphael; ...
2016-10-06
With an increasing number of superluminous supernovae (SLSNe) discovered, the question of their origin remains open and causes heated debate in the supernova community. Currently, there are three proposed mechanisms for SLSNe: (1) pair-instability supernovae (PISN), (2) magnetar-driven supernovae, and (3) models in which the supernova ejecta interacts with circumstellar material ejected before the explosion. Based on current observations of SLSNe, the PISN origin has been disfavoured for a number of reasons. Many PISN models provide overly broad light curves and too reddened spectra, because of massive ejecta and a high amount of nickel. In the current study we re-examine PISN properties using progenitor models computed with the GENEC code. We calculate supernova explosions with FLASH and light curve evolution with the radiation hydrodynamics code STELLA. We find that high-mass models (200 M⊙ and 250 M⊙) at relatively high metallicity (Z=0.001) do not retain hydrogen in the outer layers and produce relatively fast evolving PISNe Type I and might be suitable to explain some SLSNe. We also investigate uncertainties in light curve modelling due to codes, opacities, the nickel-bubble effect and progenitor structure and composition.
Time-Dependent Behavior of Diabase and a Nonlinear Creep Model
NASA Astrophysics Data System (ADS)
Yang, Wendong; Zhang, Qiangyong; Li, Shucai; Wang, Shugang
2014-07-01
Triaxial creep tests were performed on diabase specimens from the dam foundation of the Dagangshan hydropower station, and the typical characteristics of creep curves were analyzed. Based on the test results under different stress levels, a new nonlinear visco-elasto-plastic creep model with creep threshold and long-term strength was proposed by connecting an instantaneous elastic Hooke body, a visco-elasto-plastic Schiffman body, and a nonlinear visco-plastic body in series mode. By introducing the nonlinear visco-plastic component, this creep model can describe the typical creep behavior, which includes the primary creep stage, the secondary creep stage, and the tertiary creep stage. Three-dimensional creep equations under constant stress conditions were deduced. The yield approach index (YAI) was used as the criterion for the piecewise creep function to resolve the difficulty in determining the creep threshold value and the long-term strength. The expression of the visco-plastic component was derived in detail and the three-dimensional central difference form was given. An example was used to verify the credibility of the model. The creep parameters were identified, and the calculated curves were in good agreement with the experimental curves, indicating that the model is capable of replicating the physical processes.
Snowmelt runoff modeling in simulation and forecasting modes with the Martinec-Rango model
NASA Technical Reports Server (NTRS)
Shafer, B.; Jones, E. B.; Frick, D. M. (Principal Investigator)
1982-01-01
The Martinec-Rango snowmelt runoff model was applied to two watersheds in the Rio Grande basin, Colorado: the South Fork Rio Grande, a drainage encompassing 216 sq mi without reservoirs or diversions, and the Rio Grande above Del Norte, a drainage encompassing 1,320 sq mi without major reservoirs. The model was successfully applied to both watersheds when run in a simulation mode for the period 1973-79. This period included both high and low runoff seasons. Central to the adaptation of the model to run in a forecast mode was the need to develop a technique to forecast the shape of the snow cover depletion curves between satellite data points. Four separate approaches were investigated: simple linear estimation, multiple regression, parabolic exponential, and type curve. Only the parabolic exponential and type curve methods were run on the South Fork and Rio Grande watersheds for the 1980 runoff season using satellite snow cover updates when available. Although reasonable forecasts were obtained in certain situations, neither method seemed ready for truly operational forecasts, possibly due to a large amount of estimated climatic data for one or two primary base stations during the 1980 season.
NASA Astrophysics Data System (ADS)
Zeng, Wenhui; Yi, Jin; Rao, Xiao; Zheng, Yun
2017-11-01
In this article, collision-avoidance path planning for multiple car-like robots with variable motion is formulated as a two-stage objective optimization problem minimizing both the total length of all paths and the task's completion time. Accordingly, a new approach based on Pythagorean Hodograph (PH) curves and a Modified Harmony Search algorithm is proposed to solve the two-stage path-planning problem subject to kinematic constraints such as velocity, acceleration, and minimum turning radius. First, a method of path planning based on PH curves for a single robot is proposed. Second, a mathematical model of the two-stage path-planning problem for multiple car-like robots with variable motion subject to kinematic constraints is constructed, in which the first stage minimizes the total length of all paths and the second stage minimizes the task's completion time. Finally, a modified harmony search algorithm is applied to solve the two-stage optimization problem. A set of experiments demonstrates the effectiveness of the proposed approach.
Real-time dual-loop electric current measurement for label-free nanofluidic preconcentration chip.
Chung, Pei-Shan; Fan, Yu-Jui; Sheen, Horn-Jiunn; Tian, Wei-Cheng
2015-01-07
An electrokinetic trapping (EKT)-based nanofluidic preconcentration device with the capability of label-free monitoring of trapped biomolecules through real-time dual-loop electric current measurement was demonstrated. Universal current-voltage (I-V) curves of EKT-based preconcentration devices, consisting of two microchannels connected by ion-selective channels, are presented for functional validation and optimal operation; universal onset current curves indicating the appearance of the EKT mechanism serve as a confirmation of the concentrating action. The EKT mechanism and the dissimilarity in the current curves related to the volume flow rate (Q), diffusion coefficient (D), and diffusion layer (DL) thickness were explained by a control volume model with a five-stage preconcentration process. Different behaviors of the trapped molecular plug were categorized based on four modes associated with different degrees of electroosmotic instability (EOI). A label-free approach to preconcentrating (bio)molecules and monitoring the multibehavior molecular plug was demonstrated through real-time electric current monitoring, rather than through the use of microscope images.
Calibration and accuracy analysis of a focused plenoptic camera
NASA Astrophysics Data System (ADS)
Zeller, N.; Quint, F.; Stilla, U.
2014-08-01
In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how a depth map can be estimated from the recorded raw image. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The optical imaging process is calibrated using a method already known from the calibration of traditional cameras. For the calibration of the depth map, two new model-based methods, which make use of the projection concept of the camera, are developed. These new methods are compared to a common curve fitting approach based on Taylor-series approximation. Both model-based methods show significant advantages compared to the curve fitting method: they need fewer reference points for calibration and, moreover, supply a function that is valid beyond the range of calibration. In addition, the depth map accuracy of the plenoptic camera was experimentally investigated for different focal lengths of the main lens and compared to the analytical evaluation.
PHOTOMETRIC ANALYSIS OF HS Aqr, EG Cep, VW LMi, AND DU Boo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Djurasevic, G.; Latkovic, O.; Bastuerk, Oe.
2013-03-15
We analyze new multicolor light curves for four close late-type binaries: HS Aqr, EG Cep, VW LMi, and DU Boo, in order to determine the orbital and physical parameters of the systems and estimate the distances. The analysis is done using the modeling code of G. Djurasevic, and is based on up-to-date measurements of spectroscopic elements. All four systems have complex, asymmetric light curves that we model by including bright or dark spots on one or both components. Our findings indicate that HS Aqr and EG Cep are in semi-detached, while VW LMi and DU Boo are in overcontact configurations.
An experimental comparison of two adaptation strategies in an adaptive-walls wind-tunnel
NASA Astrophysics Data System (ADS)
Russo, G. P.; Zuppardi, G.; Basciani, M.
1995-08-01
In the present work an experimental comparison is made between two adaptation strategies: Judd's method and Everhart's method. A NACA 0012 airfoil has been tested at Mach numbers up to 0.4; models with chords up to 200 mm have been tested in a 200 mm × 200 mm test section. The two strategies, though based on different theoretical approaches, show fairly good agreement as far as the c_p distribution on the model, the lift and drag curves, and the residual interference are concerned, and agree, in terms of lift curve slope and drag coefficient at zero lift, with the McCroskey correlation.
NASA Technical Reports Server (NTRS)
Weil, Joseph; Sleeman, William C., Jr.
1949-01-01
The effects of propeller operation on the static longitudinal stability of single-engine tractor monoplanes are analyzed, and a simple method is presented for computing power-on pitching-moment curves for flap-retracted flight conditions. The methods evolved are based on the results of powered-model wind-tunnel investigations of 28 model configurations. Correlation curves are presented from which the effects of power on the downwash over the tail and the stabilizer effectiveness can be rapidly predicted. The procedures developed enable prediction of power-on longitudinal stability characteristics that are generally in very good agreement with experiment.
Lithium-ion Open Circuit Voltage (OCV) curve modelling and its ageing adjustment
NASA Astrophysics Data System (ADS)
Lavigne, L.; Sabatier, J.; Francisco, J. Mbala; Guillemard, F.; Noury, A.
2016-08-01
This paper is a contribution to lithium-ion battery modelling that takes aging effects into account. It first analyses the impact of aging on electrode stoichiometry and then on the lithium-ion cell Open Circuit Voltage (OCV) curve. Through some hypotheses and an appropriate definition of the cell state of charge, it shows that not only each electrode equilibrium potential but also the whole-cell equilibrium potential can be modelled by a polynomial that requires only one adjustment parameter during aging. An adjustment algorithm, based on the idea that for two fixed OCVs the state of charge swept between these two equilibrium states is unique for a given aging level, is then proposed. Its efficiency is evaluated on a battery pack constituted of four cells.
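A minimal sketch of the modelling idea; the polynomial coefficients and the way the single aging parameter rescales the state of charge are illustrative assumptions, not the paper's identified values:

```python
# Hedged sketch: polynomial OCV(z) model with one aging adjustment parameter
# k_age that rescales the usable SOC window as the cell ages.
import numpy as np

coeffs = [3.0, 1.2, -0.8, 0.4]   # hypothetical polynomial coefficients

def ocv(z, k_age=1.0):
    z_eff = np.clip(k_age * z, 0.0, 1.0)
    return sum(c * z_eff**i for i, c in enumerate(coeffs))

# Adjustment idea from the abstract: between two fixed OCV values the swept
# SOC is unique for a given aging level, so k_age can be identified by
# matching the measured charge throughput between those two OCVs.
print(ocv(0.5), ocv(0.5, k_age=0.9))
```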
Spiral blood flows in an idealized 180-degree curved artery model
NASA Astrophysics Data System (ADS)
Bulusu, Kartik V.; Kulkarni, Varun; Plesniak, Michael W.
2017-11-01
Understanding of cardiovascular flows has been greatly advanced by the Magnetic Resonance Velocimetry (MRV) technique and its potential for three-dimensional velocity encoding in regions of anatomic interest. The MRV experiments were performed on a 180-degree curved artery model using a Newtonian blood analog fluid at the Richard M. Lucas Center at Stanford University, employing a 3 Tesla General Electric (Discovery 750 MRI system) whole body scanner with an eight-channel cardiac coil. Analysis in two regions of the model artery was performed for flow with Womersley number = 4.2. In the entrance region (or straight inlet pipe) the unsteady pressure drop per unit length, in-plane vorticity and wall shear stress for the pulsatile, carotid-artery-based flow rate waveform were calculated. Along the 180-degree curved pipe (curvature ratio = 1/7) the near-wall vorticity and the stretching of the particle paths in the vorticity field are visualized. The resultant flow behavior in the idealized curved artery model is associated with parameters such as the Dean number and Womersley number. Additionally, using length scales corresponding to the axial and secondary flow, we attempt to understand the mechanisms leading to the formation of various structures observed during the pulsatile flow cycle. Supported by GW Center for Biomimetics and Bioinspired Engineering (COBRE); MRV measurements in collaboration with Prof. John K. Eaton and Dr. Chris Elkins at Stanford University.
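For reference, the two dimensionless parameters cited have the standard definitions

$$\alpha = r\sqrt{\omega/\nu},\qquad De = Re\,\sqrt{r/R}$$

with r the pipe radius, R the radius of curvature (curvature ratio r/R = 1/7 here), ω the angular frequency of the flow-rate waveform, and ν the kinematic viscosity; the reported Womersley number is α = 4.2.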
Broadband Photometric Reverberation Mapping Analysis on SDSS-RM and Stripe 82 Quasars
NASA Astrophysics Data System (ADS)
Zhang, Haowen; Yang, Qian; Wu, Xuebing; Shen, Yue
2018-01-01
We extended the broadband photometric reverberation mapping (PRM) code JAVELIN and tested its ability to recover broad-line region (BLR) time delays consistent with spectroscopic reverberation mapping (SRM) projects. Broadband light curves of SDSS-RM quasars, produced by convolution with the system transmission curves, were used in the test. We find that under similar sampling conditions (evenly and frequently sampled), the key factor determining whether the broadband PRM code can yield lags consistent with spectroscopic projects is the flux ratio of the line to the reference continuum, which is in line with the findings of Zu et al. (2016). We further find a critical line-to-continuum flux ratio, above which the mean of the ratios between the lags from PRM and SRM becomes closer to unity and the scatter is markedly reduced. Based on this flux ratio criterion, we selected some of the quasars from Hernitschek et al. (2015) and carried out broadband PRM on this subset. The performance of the damped random walk (DRW) model and the power-law (PL) structure function model in broadband PRM are compared using mock light curves with high, even cadences and with low, uneven ones, respectively. We find that the DRW model performs better in broadband PRM than the PL model for both high- and low-cadence light curves with other data qualities similar to SDSS-RM quasars.
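For reference, the two continuum variability models compared have the standard forms: the DRW is a Gaussian process with covariance k(Δt) = σ² exp(−|Δt|/τ), while the power-law alternative specifies the structure function SF(Δt) = A Δt^γ; under either prior, the lag is inferred by shifting, smoothing, and scaling the modelled continuum to match the line-dominated band light curve.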
Conformational Modeling of Continuum Structures in Robotics and Structural Biology: A Review
Chirikjian, G. S.
2016-01-01
Hyper-redundant (or snakelike) manipulators have many more degrees of freedom than are required to position and orient an object in space. They have been employed in a variety of applications ranging from search-and-rescue to minimally invasive surgical procedures, and recently they even have been proposed as solutions to problems in maintaining civil infrastructure and the repair of satellites. The kinematic and dynamic properties of snakelike robots are captured naturally using a continuum backbone curve equipped with a naturally evolving set of reference frames, stiffness properties, and mass density. When the snakelike robot has a continuum architecture, the backbone curve corresponds with the physical device itself. Interestingly, these same modeling ideas can be used to describe conformational shapes of DNA molecules and filamentous protein structures in solution and in cells. This paper reviews several classes of snakelike robots: (1) hyper-redundant manipulators guided by backbone curves; (2) flexible steerable needles; and (3) concentric tube continuum robots. It is then shown how the same mathematical modeling methods used in these robotics contexts can be used to model molecules such as DNA. All of these problems are treated in the context of a common mathematical framework based on the differential geometry of curves, continuum mechanics, and variational calculus. Both coordinate-dependent Euler-Lagrange formulations and coordinate-free Euler-Poincaré approaches are reviewed. PMID:27030786
Statistical damage constitutive model for rocks subjected to cyclic stress and cyclic temperature
NASA Astrophysics Data System (ADS)
Zhou, Shu-Wei; Xia, Cai-Chu; Zhao, Hai-Bin; Mei, Song-Hua; Zhou, Yu
2017-10-01
A constitutive model for rocks subjected to cyclic stress and cyclic temperature was proposed. Based on statistical damage theory, the damage constitutive model with a Weibull distribution was extended. The influence of model parameters on the stress-strain curve for rock reloading after stress-temperature cycling was then discussed. The proposed model was initially validated against rock tests with cyclic stress and temperature and with cyclic stress only. Finally, the total damage evolution induced by stress-temperature cycling and by reloading after cycling was explored and discussed. The proposed constitutive model is reasonable and applicable, describing well the stress-strain relationship during stress-temperature cycles and providing a good fit to the test results. The elastic modulus in the reference state and the damage induced by cycling affect the shape of the reloading stress-strain curve. Total damage induced by cycling and reloading after cycling exhibits three stages: initial slow increase, mid-term accelerated increase, and final slow increase.
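For reference, the Weibull-based statistical damage formulation referred to above is conventionally written as (standard form; the paper extends it with the damage accumulated over stress-temperature cycles):

$$D = 1-\exp\left[-\left(\frac{\varepsilon}{F_0}\right)^{m}\right],\qquad \sigma = E\,\varepsilon\,(1-D)$$

where m and F₀ are the Weibull shape and scale parameters that control the shape of the stress-strain curve.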
Individual Differences in a Positional Learning Task across the Adult Lifespan
ERIC Educational Resources Information Center
Rast, Philippe; Zimprich, Daniel
2010-01-01
This study aimed at modeling individual and average non-linear trajectories of positional learning using a structured latent growth curve approach. The model is based on an exponential function which encompasses three parameters: Initial performance, learning rate, and asymptotic performance. These learning parameters were compared in a positional…
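A minimal form of such an exponential learning curve, consistent with the three parameters named (notation assumed here, not taken from the article), is

$$y_t = a - (a-b)\,e^{-ct}$$

where b is initial performance (y at t = 0), a the asymptotic performance, and c the learning rate.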
Gamma-Ray Light Curves from Pulsar Magnetospheres with Finite Conductivity
NASA Technical Reports Server (NTRS)
Harding, A. K.; Kalapotharakos, C.; Kazanas, D.; Contopoulos, I.
2012-01-01
The Fermi Large Area Telescope has provided an unprecedented database for pulsar emission studies that includes gamma-ray light curves for over 100 pulsars. Modeling these light curves can reveal and constrain the geometry of the particle accelerator, as well as the pulsar magnetic field structure. We have constructed 3D magnetosphere models with finite conductivity that bridge the extreme vacuum and force-free solutions used in previous light curve modeling. We are investigating the shapes of pulsar gamma-ray light curves using these dissipative solutions with two different approaches: (1) assuming geometric emission patterns of the slot gap and outer gap, and (2) using the parallel electric field provided by the resistive models to compute the trajectories and emission of the radiating particles. The light curves using geometric emission patterns show a systematic increase in gamma-ray peak phase with increasing conductivity, introducing a new diagnostic of these solutions. The light curves using the model electric fields are very sensitive to the conductivity but do not resemble the observed Fermi light curves, suggesting that some screening of the parallel electric field, by pair cascades not included in the models, is necessary.
Morel, Jean-Pierre; Marmier, Nicolas; Hurel, Charlotte; Morel-Desrosiers, Nicole
2006-06-15
Sorption reactions on natural or synthetic materials that can attenuate the migration of pollutants in the geosphere could be affected by temperature variations. Nevertheless, most of the theoretical models describing sorption reactions are at 25 °C. To check these models at different temperatures, experimental data such as the enthalpies of sorption are required. Highly sensitive microcalorimeters can now be used to determine the heat effects accompanying the sorption of radionuclides on oxide-water interfaces, but enthalpies of sorption cannot be extracted from microcalorimetric data without a clear knowledge of the thermodynamics of protonation and deprotonation of the oxide surface. However, the values reported in the literature show large discrepancies and one must conclude that, amazingly, this fundamental problem of proton binding is not yet resolved. We have thus undertaken to measure by titration microcalorimetry the heat effects accompanying proton exchange at the alumina-water interface at 25 °C. Based on (i) the surface site speciation provided by a surface complexation model (built from acid-base titrations at 25 °C) and (ii) results of the microcalorimetric experiments, calculations have been made to extract the enthalpic variations associated respectively with the first and second deprotonation of the alumina surface. Values obtained are ΔH1 = 80 ± 10 kJ mol⁻¹ and ΔH2 = 5 ± 3 kJ mol⁻¹. In a second step, these enthalpy values were used to calculate the alumina surface acidity constants at 50 °C via the van't Hoff equation. Then a theoretical titration curve at 50 °C was calculated and compared to the experimental alumina surface titration curve. Good agreement between the predicted acid-base titration curve and the experimental one was observed.
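The extrapolation step uses the integrated van't Hoff relation,

$$\ln\frac{K(T_2)}{K(T_1)} = -\frac{\Delta H}{R}\left(\frac{1}{T_2}-\frac{1}{T_1}\right)$$

applied with the calorimetric ΔH values above to carry the 25 °C surface acidity constants to 50 °C.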
NASA Astrophysics Data System (ADS)
Hayek, W.; Sing, D.; Pont, F.; Asplund, M.
2012-03-01
We compare limb darkening laws derived from 3D hydrodynamical model atmospheres and 1D hydrostatic MARCS models for the host stars of two well-studied transiting exoplanet systems, the late-type dwarfs HD 209458 and HD 189733. The surface brightness distribution of the stellar disks is calculated for a wide spectral range using 3D LTE spectrum formation and opacity sampling. We test our theoretical predictions using least-squares fits of model light curves to wavelength-integrated primary eclipses that were observed with the Hubble Space Telescope (HST). The limb darkening law derived from the 3D model of HD 209458 in the spectral region between 2900 Å and 5700 Å produces significantly better fits to the HST data, removing systematic residuals that were previously observed for model light curves based on 1D limb darkening predictions. This difference arises mainly from the shallower mean temperature structure of the 3D model, which is a consequence of the explicit simulation of stellar surface granulation where 1D models need to rely on simplified recipes. In the case of HD 189733, the model atmospheres produce practically equivalent limb darkening curves between 2900 Å and 5700 Å, partly due to obstruction by spectral lines, and the data are not sufficient to distinguish between the light curves. We also analyze HST observations between 5350 Å and 10 500 Å for this star; the 3D model leads to a better fit compared to 1D limb darkening predictions. The significant improvement of fit quality for the HD 209458 system demonstrates the higher degree of realism of 3D hydrodynamical models and the importance of surface granulation for the formation of the atmospheric radiation field of late-type stars. This result agrees well with recent investigations of limb darkening in the solar continuum and other observational tests of the 3D models. The case of HD 189733 is no contradiction as the model light curves are less sensitive to the temperature stratification of the stellar atmosphere and the observed data in the 2900-5700 Å region are not sufficient to distinguish more clearly between the 3D and 1D limb darkening predictions. Full theoretical spectra for both stars are available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/539/A102, as well as at www.astro.ex.ac.uk/people/sing.
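As context, disk brightness profiles like these are commonly condensed into parametric limb darkening laws, e.g. the quadratic law

$$\frac{I(\mu)}{I(1)} = 1 - u_1(1-\mu) - u_2(1-\mu)^2,\qquad \mu=\cos\theta$$

with θ the angle between the line of sight and the local surface normal; the 3D and 1D models then differ in the coefficients and residual profile shape they imply. The specific laws fitted in the paper are not reproduced here.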
A physically based analytical model of flood frequency curves
NASA Astrophysics Data System (ADS)
Basso, S.; Schirmer, M.; Botter, G.
2016-09-01
Predicting magnitude and frequency of floods is a key issue in hydrology, with implications in many fields ranging from river science and geomorphology to the insurance industry. In this paper, a novel physically based approach is proposed to estimate the recurrence intervals of seasonal flow maxima. The method links the extremal distribution of streamflows to the stochastic dynamics of daily discharge, providing an analytical expression of the seasonal flood frequency curve. The parameters involved in the formulation embody climate and landscape attributes of the contributing catchment and can be estimated from daily rainfall and streamflow data. Only one parameter, which is linked to the antecedent wetness condition in the watershed, needs to be calibrated on the observed maxima. The performance of the method is discussed through a set of applications in four rivers featuring heterogeneous daily flow regimes. The model provides reliable estimates of seasonal maximum flows in different climatic settings and is able to capture diverse shapes of flood frequency curves emerging in erratic and persistent flow regimes. The proposed method exploits experimental information on the full range of discharges experienced by rivers. As a consequence, model performances do not deteriorate when the magnitude of events with return times longer than the available sample size is estimated. The approach provides a framework for the prediction of floods based on short data series of rainfall and daily streamflows that may be especially valuable in data scarce regions of the world.
Seismic fragility assessment of low-rise stone masonry buildings
NASA Astrophysics Data System (ADS)
Abo-El-Ezz, Ahmad; Nollet, Marie-José; Nastev, Miroslav
2013-03-01
Many historic buildings in old urban centers in Eastern Canada are made of stone masonry reputed to be highly vulnerable to seismic loads. Seismic risk assessment of stone masonry buildings is therefore the first step in the risk mitigation process to provide adequate planning for retrofit and preservation of historical urban centers. This paper focuses on development of analytical displacement-based fragility curves reflecting the characteristics of existing stone masonry buildings in Eastern Canada. The old historic center of Quebec City has been selected as a typical study area. The standard fragility analysis combines the inelastic spectral displacement, a structure-dependent earthquake intensity measure, and the building damage state correlated to the induced building displacement. The proposed procedure consists of a three-step development process: (1) mechanics-based capacity model, (2) displacement-based damage model and (3) seismic demand model. The damage estimation for a uniform hazard scenario of 2% in 50 years probability of exceedance indicates that slight to moderate damage is the most probable damage experienced by these stone masonry buildings. Comparison is also made with fragility curves implicit in the seismic risk assessment tools Hazus and ELER. Hazus shows the highest probability of the occurrence of no to slight damage, whereas the highest probability of extensive and complete damage is predicted with ELER. This comparison shows the importance of the development of fragility curves specific to the generic construction characteristics in the study area and emphasizes the need for critical use of regional risk assessment tools and generated results.
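Displacement-based fragility curves of this kind are conventionally lognormal (an assumed standard form; the parameter values are specific to the building typology):

$$P\left[\,DS \ge ds \mid S_d\,\right] = \Phi\!\left(\frac{1}{\beta_{ds}}\,\ln\frac{S_d}{\bar{S}_{d,ds}}\right)$$

where S_d is the inelastic spectral displacement demand, \bar{S}_{d,ds} the median displacement capacity for damage state ds, and β_ds the lognormal dispersion.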
NASA Astrophysics Data System (ADS)
Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.
2003-04-01
Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed within an integrated GIS modeling environment a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and spatial extent of saturated areas and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provided a physically-based method that gives realistic results for watersheds with VSA hydrology.
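A minimal sketch of the traditional SCS-CN runoff equation that the distributed CN-VSA method starts from (standard form in inches; the topographic-index redistribution across the watershed is the paper's contribution and is not shown):

```python
# Hedged sketch: classic SCS-CN storm runoff depth. Inputs are illustrative.
def scs_runoff(p_in, cn, ia_ratio=0.2):
    """p_in: storm rainfall (in); cn: curve number (0-100 scale)."""
    s = 1000.0 / cn - 10.0           # potential maximum retention (in)
    ia = ia_ratio * s                # initial abstraction
    return 0.0 if p_in <= ia else (p_in - ia) ** 2 / (p_in - ia + s)

print(scs_runoff(3.0, 75))           # runoff depth for a 3 in storm, CN = 75
```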
Ferrarese, Alessia; Gentile, Valentina; Bindi, Marco; Rivelli, Matteo; Cumbo, Jacopo; Solej, Mario; Enrico, Stefano; Martino, Valter
2016-01-01
A well-designed learning curve is essential for the acquisition of laparoscopic skills; but are there risk factors that can derail the surgical method? From a review of the current literature on the learning curve in laparoscopic surgery, we identified learning curve components in video laparoscopic cholecystectomy and suggest a learning curve model that can be applied to assess the progress of general surgical residents as they learn and master the stages of video laparoscopic cholecystectomy, regardless of the type of patient. Electronic databases were interrogated to better define the terms "surgeon", "specialized surgeon", and "specialist surgeon"; we surveyed the literature on surgical residency programs outside Italy to identify learning curve components, influential factors, the importance of tutoring, and the role of reference centers in residency education in surgery. From the definition of acceptable error, self-efficacy, and error classification, we devised a learning curve model that may be applied to training surgical residents in video laparoscopic cholecystectomy. Based on the criteria culled from the literature, the three surgeon categories (general, specialized, and specialist) are distinguished by years of experience, case volume, and error rate; patients were distinguished by age and clinical characteristics. The training model was constructed as a series of key learning steps in video laparoscopic cholecystectomy. Potential errors were identified and the difficulty of each step was graded using operation-specific characteristics. On completion of each procedure, error checklist scores on procedure-specific performance are tallied to track the learning curve and obtain performance indices that chart the trainee's progress. The concept of the learning curve in general surgery is disputed. The use of learning steps may enable the resident surgical trainee to acquire video laparoscopic cholecystectomy skills proportional to the instructor's ability, the trainee's own skills, and the safety of the surgical environment. No patient characteristics were found that could derail the method. With this training scheme, resident trainees may be given the opportunity to develop their intrinsic capabilities without the loss of basic technical skills.
NASA Astrophysics Data System (ADS)
Lei, Yuchuan; Chen, Zhenqian; Shi, Juan
2017-12-01
Numerical simulations of condensation heat transfer of R134a in curved triangular microchannels with various curvatures are presented. The model is established on the volume of fluid (VOF) approach with user-defined routines that include mass transfer at the vapor-liquid interface and latent heat. A microgravity operating condition is assumed in order to highlight the effect of surface tension. The predictive accuracy of the model is assessed by comparing the simulated results with available correlations in the literature. Both an increased mass flux and a decreased hydraulic diameter bring better heat transfer performance. No obvious effect of the wall heat flux on the condensation heat transfer coefficient is observed. Changes in geometry and surface tension lead to a reduction of the condensate film thickness at the sides of the channel and accumulation of the condensate film at the corners of the channel. Better heat transfer performance is obtained in the curved triangular microchannels than in the straight ones, and the performance can be further improved in curved triangular microchannels with larger curvatures. The minimum film thickness, where most of the heat transfer takes place, exists near the corners and moves toward the corners in curved triangular microchannels with larger curvatures.
A sediment graph model based on SCS-CN method
NASA Astrophysics Data System (ADS)
Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.
2008-01-01
This paper proposes new conceptual sediment graph models based on the coupling of popular and extensively used methods, viz., the Nash-model-based instantaneous unit sediment graph (IUSG), the soil conservation service curve number (SCS-CN) method, and the Power law. These models vary in their complexity, and this paper tests their performance using data from the Nagwan watershed (area = 92.46 km²) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the Power law, β, is more sensitive than the other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distribution) as well as total sediment yield.
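For reference, the Nash-model instantaneous unit graph underlying the IUSG is the gamma-function response (standard form)

$$u(t) = \frac{1}{k\,\Gamma(n)}\left(\frac{t}{k}\right)^{n-1} e^{-t/k}$$

for a cascade of n linear reservoirs with storage coefficient k; in the proposed coupling, the SCS-CN method supplies the effective rainfall and the power law (exponent β) maps flow to sediment flow rate.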
REFLECTED LIGHT CURVES, SPHERICAL AND BOND ALBEDOS OF JUPITER- AND SATURN-LIKE EXOPLANETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyudina, Ulyana; Kopparla, Pushkar; Ingersoll, Andrew P.
Reflected light curves observed for exoplanets indicate that a few of them host bright clouds. We estimate how the light curve and total stellar heating of a planet depend on forward and backward scattering in the clouds, based on Pioneer and Cassini spacecraft images of Jupiter and Saturn. We fit analytical functions to the local reflected brightnesses of Jupiter and Saturn depending on the planet's phase. These observations cover broadbands at 0.59–0.72 and 0.39–0.5 μm, and narrowbands at 0.938 (atmospheric window), 0.889 (CH4 absorption band), and 0.24–0.28 μm. We simulate the images of the planets with a ray-tracing model, and disk-integrate them to produce the full-orbit light curves. For Jupiter, we also fit the modeled light curves to the observed full-disk brightness. We derive spherical albedos for Jupiter and Saturn, and for planets with Lambertian and Rayleigh-scattering atmospheres. Jupiter-like atmospheres can produce light curves that are a factor of two fainter at half-phase than the Lambertian planet, given the same geometric albedo at transit. The spherical albedo is typically lower than for a Lambertian planet by up to a factor of ∼1.5. The Lambertian assumption will underestimate the absorption of the stellar light and the equilibrium temperature of the planetary atmosphere. We also compare our light curves with the light curves of solid bodies: the moons Enceladus and Callisto. Their strong backscattering peak within a few degrees of opposition (secondary eclipse) can lead to an even stronger underestimate of the stellar heating.
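For reference, the Lambertian benchmark against which these curves are compared follows the classical phase function

$$\Phi_L(\alpha) = \frac{\sin\alpha + (\pi-\alpha)\cos\alpha}{\pi}$$

for phase angle α; at half-phase (α = 90°) this gives Φ_L = 1/π, so the reported factor-of-two faintness of Jupiter-like atmospheres quantifies how strongly real scattering phase functions depart from an isotropically reflecting sphere.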
Cochlear microphonic broad tuning curves
NASA Astrophysics Data System (ADS)
Ayat, Mohammad; Teal, Paul D.; Searchfield, Grant D.; Razali, Najwani
2015-12-01
It is known that the cochlear microphonic voltage exhibits much broader tuning than does the basilar membrane motion. The most commonly used explanation for this is that when an electrode is inserted at a particular point inside the scala media, the microphonic potentials of neighbouring hair cells have different phases, leading to cancellation at the electrode's location. In situ recording of functioning outer hair cells (OHCs) for investigating this hypothesis is exceptionally difficult. Therefore, to investigate the discrepancy between the tuning curves of the basilar membrane and those of the cochlear microphonic, and the effect of phase cancellation of adjacent hair cells on the broadness of the cochlear microphonic tuning curves, we use an electromechanical model of the cochlea to devise an experiment. We explore the effect of adjacent hair cells (i.e., longitudinal phase cancellation) on the broadness of the cochlear microphonic tuning curves in different locations. The results of the experiment indicate that active longitudinal coupling (i.e., coupling with active adjacent outer hair cells) only slightly changes the broadness of the CM tuning curves. The results also demonstrate that there is a π phase difference between the potentials produced by the hair bundle and the soma near the place associated with the characteristic frequency based on place-frequency maps (i.e., the best place). We suggest that transversal phase cancellation (caused by the phase difference between the hair bundle and the soma) plays a far more important role than longitudinal phase cancellation in the broadness of the cochlear microphonic tuning curves. Moreover, increasing the modelled longitudinal resistance results in cochlear microphonic curves exhibiting sharper tuning. The results of the simulations suggest that the passive network of the organ of Corti determines the phase difference between the hair bundle and soma, and hence determines the sharpness of the cochlear microphonic tuning curves.
A cardioid oscillator with asymmetric time ratio for establishing CPG models.
Fu, Q; Wang, D H; Xu, L; Yuan, G
2018-01-13
Nonlinear oscillators are usually utilized by bionic scientists for establishing central pattern generator models to imitate rhythmic motions. In the natural world, many rhythmic motions possess asymmetric time ratios, which means that the forward and the backward motions of an oscillating process take different times within one period. In order to model rhythmic motions with asymmetric time ratios, nonlinear oscillators with asymmetric forward and backward trajectories within one period should be studied. In this paper, based on the property of the invariant set, a method to design a closed curve in the phase plane of a dynamic system as its limit cycle is proposed. Utilizing the proposed method, and considering that a cardioid curve is a kind of asymmetric closed curve, a cardioid oscillator with asymmetric time ratios is proposed and realized. By making the derivative of the closed curve along the trajectories of the dynamic system equal to zero, the closed curve is designed as its limit cycle. Utilizing the proposed limit cycle design method and according to the global invariant set theory, a cardioid oscillator employing a cardioid curve as its limit cycle is achieved. On this basis, numerical simulations are conducted to analyze the behaviors of the cardioid oscillator. An example utilizing the established cardioid oscillator to simulate rhythmic motions of the hip joint of a human body in the sagittal plane is presented. The results of the numerical simulations indicate that, whatever the initial condition is and without any outside input, the proposed cardioid oscillator possesses the following properties: (1) it is able to generate a series of periodic, disturbance-resistant, self-excited trajectories; (2) the generated trajectories possess an asymmetric time ratio; and (3) the time ratio can be regulated by adjusting the oscillator's parameters. Furthermore, the comparison between the trajectories simulated by the established cardioid oscillator and the measured angle trajectories of the hip of a human body shows that the proposed cardioid oscillator is fit for imitating the rhythmic motions of the hip of a human body with asymmetric time ratios.
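One standard way to realize the construction described (a sketch consistent with the abstract's idea, not the paper's exact equations): write the cardioid as the zero set of φ(r, θ) = r − a(1 + cos θ) in polar coordinates and choose

$$\dot{\theta} = \omega(\theta),\qquad \dot{r} = -\lambda\,\varphi - a\,\omega(\theta)\sin\theta$$

so that φ̇ = −λφ. The cardioid φ = 0 is then a globally attracting invariant set (hence a limit cycle reached from any initial condition), and a non-constant angular speed ω(θ), faster on one half of the curve than the other, produces the asymmetric time ratio that is tuned through the oscillator's parameters.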
Robine, J M; Mormiche, P; Cambois, E
1996-01-01
In 1984, the World Health Organisation (WHO) proposed a demo-epidemiological model that allows assessment of the possible consequences of the lengthening of life on the level of health. The model is represented graphically by three curves: the observed survival curve, the hypothetical survival curve without chronic diseases and the hypothetical survival curve without disability; since life expectancy at any age is calculated from the survival curve, the model allows the computation of life expectancy without chronic diseases and life expectancy without disability. The relationships between the three curves can be used to illustrate the numerous theories about the evolution of populations' health that have enlivened public health debates for several decades. Application of the model to French data on mortality, morbidity and disability also sheds light on the evolution of the health status of the French population over the last decade.
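Since each life expectancy in this model is the area under the corresponding survival curve, the computation reduces to numerical integration. A minimal sketch, with illustrative Gompertz-type curves rather than the French data:

```python
# Sketch: health expectancies as areas under survival curves (illustrative parameters).
import numpy as np

age = np.linspace(0.0, 110.0, 1101)

def gompertz_survival(x, b, c):
    return np.exp(-b / c * (np.exp(c * x) - 1.0))

s_observed = gompertz_survival(age, 5e-5, 0.11)      # observed survival curve
s_disab_free = gompertz_survival(age, 1.5e-4, 0.11)  # surviving free of disability

le_total = np.trapz(s_observed, age)                 # life expectancy at birth
le_disab_free = np.trapz(s_disab_free, age)          # disability-free life expectancy
years_with_disability = le_total - le_disab_free     # gap between the two curves
```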
Swelling-induced and controlled curving in layered gel beams
Lucantonio, A.; Nardinocchi, P.; Pezzulla, M.
2014-01-01
We describe swelling-driven curving in originally straight and non-homogeneous beams. We present and verify a structural model of swollen beams, based on a new point of view on swelling-induced deformation processes in bilayered gel beams: the swelling-induced deformation of the beam at equilibrium is split into two components, both depending on the elastic properties of the gel. The method allows us to: (i) determine beam stretching and curving once the characteristics of the solvent bath and of the non-homogeneous beam are assigned, and (ii) estimate the characteristics of non-homogeneous flat gel beams so as to obtain three-dimensional shapes under free-swelling conditions. The study was pursued by means of analytical, semi-analytical and numerical tools; excellent agreement of the outcomes of the different techniques was found, thus confirming the strength of the method. PMID:25383031
NASA Astrophysics Data System (ADS)
Meng, Xiao; Wang, Lai; Hao, Zhibiao; Luo, Yi; Sun, Changzheng; Han, Yanjun; Xiong, Bing; Wang, Jian; Li, Hongtao
2016-01-01
Efficiency droop is currently one of the most actively studied problems for GaN-based light-emitting diodes (LEDs). In this work, a differential carrier lifetime measurement system is optimized to accurately determine the carrier lifetime (τ) of blue and green LEDs under different injection currents (I). By fitting the τ-I curves and the efficiency droop curves of the LEDs according to the ABC carrier rate equation model, the impact of Auger recombination and carrier leakage on efficiency droop can be characterized simultaneously. For the samples used in this work, it is found that the experimental τ-I curves cannot be described by Auger recombination alone. Instead, satisfactory fitting results are obtained by taking both carrier leakage and carrier delocalization into account, which implies that carrier leakage plays a more significant role in efficiency droop at high injection levels.
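The ABC fit described here couples two observables through the carrier density n: the injected current I = qV(An + Bn² + Cn³) and the differential lifetime τ = 1/(A + 2Bn + 3Cn²). A minimal sketch of that fitting logic, with order-of-magnitude coefficients and an assumed active volume (and no leakage term, which the paper found necessary for its samples):

```python
# Sketch: fit a tau-I curve with the ABC rate-equation model (illustrative values).
import numpy as np
from scipy.optimize import brentq, curve_fit

q, V_act = 1.602e-19, 1e-11        # elementary charge (C); active volume, cm^3 (assumed)

def carrier_density(I, A, B, C):
    # invert I(n) = q*V*(A*n + B*n^2 + C*n^3) numerically for each current
    f = lambda n, I0: q * V_act * (A * n + B * n**2 + C * n**3) - I0
    return np.array([brentq(f, 1e8, 1e24, args=(i,)) for i in np.atleast_1d(I)])

def tau_model(I, A, B, C):
    n = carrier_density(I, A, B, C)
    return 1.0 / (A + 2.0 * B * n + 3.0 * C * n**2)  # differential carrier lifetime

A0, B0, C0 = 1e7, 1e-11, 1e-30            # typical-order III-nitride coefficients (assumed)
I_meas = np.logspace(-4, -1, 15)          # 0.1 mA to 100 mA
tau_meas = tau_model(I_meas, A0, B0, C0)  # stand-in for measured lifetimes

popt, _ = curve_fit(tau_model, I_meas, tau_meas, p0=(5e6, 5e-12, 5e-31),
                    bounds=([1e5, 1e-13, 1e-32], [1e9, 1e-9, 1e-28]))
```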
Using Evolved Fuzzy Neural Networks for Injury Detection from Isokinetic Curves
NASA Astrophysics Data System (ADS)
Couchet, Jorge; Font, José María; Manrique, Daniel
In this paper we propose an evolutionary fuzzy neural network system for extracting knowledge from a set of time series containing medical information. The series represent isokinetic curves obtained from a group of patients exercising the knee joint on an isokinetic dynamometer. The system has two parts: i) it analyses the time series input in order to generate a simplified model of an isokinetic curve; ii) it applies a grammar-guided genetic program to obtain a knowledge base represented by a fuzzy neural network. Once the knowledge base has been generated, the system is able to perform knee injury detection. The results suggest that evolved fuzzy neural networks perform better than non-evolutionary approaches and have a high accuracy rate during both the training and testing phases. Additionally, they are robust, as the system is able to self-adapt to changes in the problem without human intervention.
ERIC Educational Resources Information Center
Lu, Yi
2016-01-01
To model students' math growth trajectories, three conventional growth curve models and three growth mixture models are applied to the Early Childhood Longitudinal Study Kindergarten-Fifth grade (ECLS K-5) dataset in this study. The results of the conventional growth curve models show gender differences on math IRT scores. When holding socio-economic…
H. Li; X. Deng; Andy Dolloff; E. P. Smith
2015-01-01
A novel clustering method for bivariate functional data is proposed to group streams based on their water–air temperature relationship. A distance measure is developed for bivariate curves by using a time-varying coefficient model and a weighting scheme. This distance is also adjusted by spatial correlation of streams via the variogram. Therefore, the proposed...
PID Controller Settings Based on a Transient Response Experiment
ERIC Educational Resources Information Center
Silva, Carlos M.; Lito, Patricia F.; Neves, Patricia S.; Da Silva, Francisco A.
2008-01-01
An experimental work on controller tuning for chemical engineering undergraduate students is proposed using a small heat exchange unit. Based upon process reaction curves in open-loop configuration, the system gain and time constant are determined for a first-order model with time delay with excellent accuracy. Afterwards students calculate PID…
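The abstract does not state which tuning rule the students apply; as one hedged possibility, the classic Ziegler-Nichols open-loop rules convert the reaction-curve parameters (gain K, time constant tau, dead time theta of a first-order-plus-dead-time model) directly into PID settings:

```python
# Sketch: FOPDT identification from a step response, then Ziegler-Nichols open-loop
# PID settings (synthetic data; the tuning-rule choice is an assumption).
import numpy as np
from scipy.optimize import curve_fit

def fopdt_step(t, K, tau, theta):
    y = K * (1.0 - np.exp(-(t - theta) / tau))
    return np.where(t >= theta, y, 0.0)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 50.0, 200)
y_meas = fopdt_step(t, 2.0, 8.0, 3.0) + rng.normal(0.0, 0.01, t.size)

(K, tau, theta), _ = curve_fit(fopdt_step, t, y_meas, p0=(1.0, 5.0, 1.0))

Kc = 1.2 * tau / (K * theta)   # proportional gain
Ti = 2.0 * theta               # integral time
Td = 0.5 * theta               # derivative time
```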
NASA Astrophysics Data System (ADS)
Ghnimi, Thouraya; Hassini, Lamine; Bagane, Mohamed
2016-12-01
The aim of this work is to determine the desorption isotherms and the drying kinetics of bay laurel leaves (Laurus nobilis L.). The desorption isotherms were measured at three temperature levels (50, 60 and 70 °C) and at water activities ranging from 0.057 to 0.88, using the static gravimetric method. Five sorption models were used to fit the experimental desorption isotherm data. It was found that the Kuhn model offers the best fit of the experimental moisture isotherms over the investigated ranges of temperature and water activity. The net isosteric heat of water desorption was evaluated using the Clausius-Clapeyron equation and was best correlated to equilibrium moisture content by the empirical Tsami equation. Thin layer convective drying curves of bay laurel leaves were obtained for temperatures of 45, 50, 60 and 70 °C, relative humidities of 5, 15, 30 and 45 % and air velocities of 1, 1.5 and 2 m/s. A nonlinear Levenberg-Marquardt regression procedure was used to fit the drying curves with five semi-empirical mathematical models available in the literature; R2 and χ2 were used to evaluate the goodness of fit of the models to the data. Based on the experimental drying curves, the drying characteristic curve (DCC) was established and fitted with a third-degree polynomial function. It was found that the Midilli-Kucuk model best describes the thin layer drying kinetics of bay laurel leaves. The effective moisture diffusivity and activation energy of bay laurel leaves were also identified.
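For concreteness, the Midilli-Kucuk form is MR = a·exp(−k·tⁿ) + b·t, and curve_fit's default Levenberg-Marquardt method matches the regression procedure named above. A sketch with synthetic data in place of the measured curves:

```python
# Sketch: Midilli-Kucuk thin-layer fit scored with R^2, chi^2 and RMSE (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def midilli_kucuk(t, a, k, n, b):
    return a * np.exp(-k * t**n) + b * t

rng = np.random.default_rng(1)
t = np.linspace(0.01, 300.0, 40)   # drying time, min
mr = midilli_kucuk(t, 1.0, 0.02, 1.1, -2e-4) + rng.normal(0.0, 0.005, t.size)

popt, _ = curve_fit(midilli_kucuk, t, mr, p0=(1.0, 0.01, 1.0, 0.0), method="lm")

resid = mr - midilli_kucuk(t, *popt)
rmse = np.sqrt(np.mean(resid**2))
chi2 = np.sum(resid**2) / (t.size - len(popt))              # reduced chi-square
r2 = 1.0 - np.sum(resid**2) / np.sum((mr - np.mean(mr))**2)
```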
Robotic partial nephrectomy - Evaluation of the impact of case mix on the procedural learning curve.
Roman, A; Ahmed, K; Challacombe, B
2016-05-01
Although robotic partial nephrectomy (RPN) is an emerging technique for the management of small renal masses, the approach is technically demanding. To date, there are limited data on the nature and progression of the learning curve in RPN. The aims were to analyse the impact of case mix on the RPN learning curve and to model the learning curve. The records of the first 100 RPN performed at our institution by a single surgeon (B.C.) were analysed (June 2010-December 2013). Cases were split based on their Preoperative Aspects and Dimensions Used for an Anatomical (PADUA) score into the following groups: 6-7, 8-9 and >10. Using a split-group analysis (20 patients in each group) and incremental analysis, the mean, the curve of best fit and R(2) values were calculated for each group. Of 100 patients (F:28, M:72), the mean age was 56.4 ± 11.9 years. The numbers of patients in the PADUA score groups 6-7, 8-9 and >10 were 61, 32 and 7, respectively. An increase in the incidence of more complex cases throughout the cohort was evident within the 8-9 group (2010: 1 case; 2013: 16 cases). The learning process did not significantly affect the proxies used to assess surgical proficiency in this study (operative time and warm ischaemia time). Case difficulty is an important parameter that should be considered when evaluating procedural learning curves. There is no single well-fitting model that can be used to model the learning curve. With increasing experience, clinicians tend to operate on more difficult cases. Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.
Estimating the Area Under ROC Curve When the Fitted Binormal Curves Demonstrate Improper Shape.
Bandos, Andriy I; Guo, Ben; Gur, David
2017-02-01
The "binormal" model is the most frequently used tool for parametric receiver operating characteristic (ROC) analysis. The binormal ROC curves can have "improper" (non-concave) shapes that are unrealistic in many practical applications, and several tools (eg, PROPROC) have been developed to address this problem. However, due to the general robustness of binormal ROCs, the improperness of the fitted curves might carry little consequence for inferences about global summary indices, such as the area under the ROC curve (AUC). In this work, we investigate the effect of severe improperness of fitted binormal ROC curves on the reliability of AUC estimates when the data arise from an actually proper curve. We designed theoretically proper ROC scenarios that induce severely improper shape of fitted binormal curves in the presence of well-distributed empirical ROC points. The binormal curves were fitted using maximum likelihood approach. Using simulations, we estimated the frequency of severely improper fitted curves, bias of the estimated AUC, and coverage of 95% confidence intervals (CIs). In Appendix S1, we provide additional information on percentiles of the distribution of AUC estimates and bias when estimating partial AUCs. We also compared the results to a reference standard provided by empirical estimates obtained from continuous data. We observed up to 96% of severely improper curves depending on the scenario in question. The bias in the binormal AUC estimates was very small and the coverage of the CIs was close to nominal, whereas the estimates of partial AUC were biased upward in the high specificity range and downward in the low specificity range. Compared to a non-parametric approach, the binormal model led to slightly more variable AUC estimates, but at the same time to CIs with more appropriate coverage. The improper shape of the fitted binormal curve, by itself, ie, in the presence of a sufficient number of well-distributed points, does not imply unreliable AUC-based inferences. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Neck curve polynomials in neck rupture model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurniadi, Rizal; Perkasa, Yudha S.; Waris, Abdul
2012-06-06
The Neck Rupture Model explains the scission process, which occurs where the liquid drop has its smallest radius at a certain position. In the older formulation the rupture position is determined randomly, hence the name Random Neck Rupture Model (RNRM). Here, neck curve polynomials are employed in the Neck Rupture Model to calculate the fission yield of the neutron-induced fission reaction of 280X90, varying the order of the polynomials as well as the temperature. The neck curve polynomial approximation shows important effects on the shape of the fission yield curve.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Hongxiang; Sun, Ning; Wigmosta, Mark
Precipitation-based intensity-duration-frequency (PREC-IDF) curves are a standard tool used to derive design floods for hydraulic infrastructure worldwide. In snow-dominated regions where a large percentage of flood events are caused by snowmelt and rain-on-snow events, the PREC-IDF design approach can lead to substantial underestimation/overestimation of design floods and associated infrastructure. In this study, next-generation IDF (NG-IDF) curves, which characterize the actual water reaching the land surface, are introduced into the design process to improve hydrologic design. The authors compared peak design flood estimates from the National Resource Conservation Service TR-55 hydrologic model driven by NG-IDF and PREC-IDF curves at 399 Snowpack Telemetry (SNOTEL) stations across the western United States, all of which had at least 30 years of high-quality records. They found that about 72% of the stations in the western United States showed the potential for underdesign, for which the PREC-IDF curves underestimated peak design floods by as much as 324%. These results demonstrated the need to update the use of PREC-IDF curves to the use of NG-IDF curves for hydrologic design in snow-dominated regions.
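The paper's frequency analysis is not reproduced here, but the core of any IDF point is a frequency fit to annual maxima; a generic sketch using a GEV distribution (a common, assumed choice) on synthetic annual maxima:

```python
# Sketch: one duration's IDF estimates from annual maxima via a GEV fit (synthetic data).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
annual_max_24h = genextreme.rvs(c=-0.1, loc=40.0, scale=12.0, size=35,
                                random_state=rng)   # mm; stands in for station records

params = genextreme.fit(annual_max_24h)             # (shape, loc, scale)
for T in (2, 10, 25, 50, 100):
    level = genextreme.ppf(1.0 - 1.0 / T, *params)  # T-year return level
    print(f"{T:>3}-yr 24-h design depth: {level:6.1f} mm")
```

For NG-IDF curves the same fit would be applied to annual maxima of the modeled water reaching the land surface (rain plus snowmelt) rather than to precipitation alone.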
Estimation of the uncertainty of analyte concentration from the measurement uncertainty.
Brown, Simon; Cooke, Delwyn G; Blackwell, Leonard F
2015-09-01
Ligand-binding assays, such as immunoassays, are usually analysed using standard curves based on the four-parameter and five-parameter logistic models. An estimate of the uncertainty of an analyte concentration obtained from such curves is needed for confidence intervals or precision profiles. Using a numerical simulation approach, it is shown that the uncertainty of the analyte concentration estimate becomes significant at the extremes of the concentration range and that this is affected significantly by the steepness of the standard curve. We also provide expressions for the coefficient of variation of the analyte concentration estimate from which confidence intervals and the precision profile can be obtained. Using three examples, we show that the expressions perform well.
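As a concrete hedged example of the simulation idea, a four-parameter logistic standard curve can be inverted and the concentration CV estimated by propagating response noise through the inverse; all parameter values below are illustrative:

```python
# Sketch: 4PL standard curve, its inverse, and a simulated CV of the analyte estimate.
import numpy as np

a, d, c, b = 2.0, 0.05, 1.0, 1.3   # asymptotes, mid-point, slope (assumed)

def four_pl(x):
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_4pl(y):
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

rng = np.random.default_rng(0)
x_true = 0.2
y_obs = four_pl(x_true) + rng.normal(0.0, 0.02, size=10000)  # response noise
x_est = inverse_4pl(y_obs)
cv = x_est.std() / x_est.mean()   # grows sharply toward the extremes of the range
```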
Phonon Dispersion in Amorphous Ni-Alloys
NASA Astrophysics Data System (ADS)
Vora, A. M.
2007-06-01
The well-known model potential is used to investigate the longitudinal and transverse phonon dispersion curves for six Ni-based binary amorphous alloys, viz. Ni31Dy69, Ni33Y67, Ni36Zr64, Ni50Zr50, Ni60Nb40, and Ni81B19. The thermodynamic and elastic properties are also computed from the elastic limits of the phonon dispersion curves. The theoretical approach of Hubbard-Beeby is used in the present study to compute the phonon dispersion curves. Five local field correction functions proposed by Hartree, Taylor, Ichimaru-Utsumi, Farid et al. and Sarkar et al. are employed to examine the effect of exchange and correlation on the aforesaid properties.
NASA Astrophysics Data System (ADS)
Müller, M. F.; Thompson, S. E.
2016-02-01
The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by frequent wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are favored over statistical models.
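Both models above are ultimately judged against the empirical flow duration curve, which is just the sorted record plotted against exceedance probability; a minimal sketch with synthetic flows:

```python
# Sketch: empirical flow duration curve from a daily flow record (synthetic data).
import numpy as np

rng = np.random.default_rng(42)
q = rng.lognormal(mean=1.0, sigma=1.2, size=3650)       # ten years of daily flow, m^3/s

q_sorted = np.sort(q)[::-1]                             # descending flows
p_exceed = np.arange(1, q.size + 1) / (q.size + 1.0)    # Weibull plotting position

q95 = np.interp(0.95, p_exceed, q_sorted)               # e.g. the 95% exceedance flow
```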
New Insight into Combined Model and Revised Model for RTD Curves in a Multi-strand Tundish
NASA Astrophysics Data System (ADS)
Lei, Hong
2015-12-01
The analysis of the residence time distribution (RTD) curve is one of the important experimental technologies used to optimize tundish design. However, there are some issues with RTD analysis models. Firstly, the combined (or mixed) model and the revised model give different analysis results for the same RTD curve. Secondly, different upper limits of the integral in the numerator for the mean residence time give different results for the same RTD curve. Thirdly, a negative dead volume fraction sometimes appears at the outer strand of a multi-strand tundish. In order to solve the above problems, it is necessary to gain a deeper insight into the RTD curve and to propose a reasonable method to analyze it. The results show that (1) the revised model is not appropriate for treating the RTD curve; (2) the concept of the virtual single-strand tundish and the combined model with the dimensionless time at the cut-off point can be applied to estimate the flow characteristics in the multi-strand tundish; and (3) the mean residence time at each exit is the key parameter for estimating the similarity of fluid flow among strands.
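The quantities in dispute are simple integrals of the measured C-curve; a sketch of the standard bookkeeping (nominal residence time and tracer response are synthetic) shows directly how the choice of upper integration limit moves the mean residence time and hence the dead-volume fraction:

```python
# Sketch: mean residence time and dead-volume fraction from an RTD (C-) curve.
import numpy as np

t = np.linspace(0.0, 3000.0, 3001)            # s
tau_nominal = 600.0                           # V/Q, theoretical residence time (assumed)
c = (t / 150.0**2) * np.exp(-t / 150.0)       # synthetic tracer concentration curve

t_mean = np.trapz(t * c, t) / np.trapz(c, t)  # mean residence time
dead_fraction = 1.0 - t_mean / tau_nominal    # negative if t_mean > tau_nominal

# truncating both integrals at a cut-off time t_c changes t_mean, which is the
# paper's point about the upper limit of the numerator integral
```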
Automated data processing and radioassays.
Samols, E; Barrows, G H
1978-04-01
Radioassays include (1) radioimmunoassays, (2) competitive protein-binding assays based on competition for limited antibody or specific binding protein, (3) immunoradiometric assays, based on competition for excess labeled antibody, and (4) radioreceptor assays. Most mathematical models describing the relationship between labeled ligand binding and unlabeled ligand concentration have been based on the law of mass action or the isotope dilution principle. These models provide useful data reduction programs, but are theoretically unsatisfactory because competitive radioassay usually is not based on classical dilution principles, labeled and unlabeled ligand do not have to be identical, antibodies (or receptors) are frequently heterogeneous, equilibrium usually is not reached, and there is probably steric and cooperative influence on binding. An alternative, more flexible mathematical model, based on the probability of binding collisions being restricted by the surface area of reactive divalent sites on antibody and on univalent antigen, has been derived. Application of these models to automated data reduction allows standard curves to be fitted by a mathematical expression, and unknown values are calculated from binding data. The virtues and pitfalls of point-to-point data reduction, linear transformations, and curvilinear fitting approaches are presented. A third-order polynomial using the square root of concentration closely approximates the mathematical model based on probability, and in our experience this method provides the most acceptable results with all varieties of radioassays. With this curvilinear system, linear point connection should be used between the zero standard and the beginning of significant dose response, and also towards saturation. The importance of limiting the range of reported automated assay results to the portion of the standard curve that delivers optimal sensitivity is stressed. Published methods for automated data reduction of Scatchard plots for radioreceptor assays are limited by the calculation of a single mean K value. The quality of the input data is generally the limiting factor in achieving good precision with automated, as with manual, data reduction. The major advantages of computerized curve fitting include: (1) handling large amounts of data rapidly and without computational error; (2) providing useful quality-control data; (3) indicating within-batch variance of the test results; (4) providing ongoing quality-control charts and between-assay variance.
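The favoured curvilinear fit is easy to state concretely: regress the response on a cubic in the square root of concentration, then read unknowns off the inverted curve. A sketch with made-up standards:

```python
# Sketch: third-order polynomial in sqrt(concentration) as a radioassay standard curve.
import numpy as np

conc_std = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0, 200.0])   # standards
b_b0 = np.array([1.00, 0.88, 0.78, 0.60, 0.45, 0.30, 0.18])       # bound fraction (synthetic)

curve = np.poly1d(np.polyfit(np.sqrt(conc_std), b_b0, deg=3))

def concentration(response, grid=np.linspace(0.0, 200.0, 20001)):
    # invert the monotone standard curve by nearest match on a fine grid
    return grid[np.argmin(np.abs(curve(np.sqrt(grid)) - response))]

unknown = concentration(0.52)   # dose corresponding to an observed bound fraction
```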
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolly, S; Mutic, S; Anastasio, M
Purpose: Traditionally, image quality in radiation therapy is assessed subjectively or by utilizing physically-based metrics. Some model observers exist for task-based medical image quality assessment, but almost exclusively for diagnostic imaging tasks. As opposed to disease diagnosis, the task for image observers in radiation therapy is to utilize the available images to design and deliver a radiation dose which maximizes patient disease control while minimizing normal tissue damage. The purpose of this study was to design and implement a new computer simulation model observer to enable task-based image quality assessment in radiation therapy. Methods: A modular computer simulation framework was developed to resemble the radiotherapy observer by simulating an end-to-end radiation therapy treatment. Given images and the ground-truth organ boundaries from a numerical phantom as inputs, the framework simulates an external beam radiation therapy treatment and quantifies patient treatment outcomes using the previously defined therapeutic operating characteristic (TOC) curve. As a preliminary demonstration, TOC curves were calculated for various CT acquisition and reconstruction parameters, with the goal of assessing and optimizing simulation CT image quality for radiation therapy. Sources of randomness and bias within the system were analyzed. Results: The relationship between CT imaging dose and patient treatment outcome was objectively quantified in terms of a singular value, the area under the TOC (AUTOC) curve. The AUTOC decreases more rapidly for low-dose imaging protocols. AUTOC variation introduced by the dose optimization algorithm was approximately 0.02%, at the 95% confidence interval. Conclusion: A model observer has been developed and implemented to assess image quality based on radiation therapy treatment efficacy. It enables objective determination of appropriate imaging parameter values (e.g. imaging dose). Framework flexibility allows for incorporation of additional modules to include any aspect of the treatment process, and therefore has great potential for both assessment and optimization within radiation therapy.
An, Ke; Yuan, Lang; Dial, Laura; ...
2017-09-11
Severe residual stresses in metal parts made by laser powder bed fusion additive manufacturing processes (LPBFAM) can cause both distortion and cracking during the fabrication processes. Limited data is currently available for both iterating through process conditions and design, and in particular, for validating numerical models to accelerate process certification. In this work, residual stresses of a curved thin-walled structure, made of Ni-based superalloy Inconel 625™ and fabricated by LPBFAM, were resolved by neutron diffraction without measuring the stress-free lattices along both the build and the transverse directions. The stresses of the entire part during fabrication and after cooling down were predicted by a simplified layer-by-layer finite element based numerical model. The simulated and measured stresses were found in good quantitative agreement. The validated simplified simulation methodology will allow assessment of residual stresses in more complex structures and significantly reduce manufacturing cycle time.
Pseudogap and conduction dimensionalities in high-Tc superconductors
NASA Astrophysics Data System (ADS)
Das Arulsamy, Andrew; Ong, P. C.; Ong, M. T.
2003-01-01
The nature of the normal state charge-carrier dynamics and the transition in conduction and gap dimensionalities between 2D and 3D for YBa2Cu3O7-δ and Bi2Sr2Ca1-xYxCu2O8 high-Tc superconductors were described by computing and fitting the resistivity curves, ρ(T,δ,x). This was carried out by utilizing the 2D and 3D Fermi liquid and ionization energy (EI) based resistivity models coupled with the charge-spin separation based t-J model (Phys. Rev. B 64 (2001) 104516). The ρ(T,δ,x) curves of Y123 and Bi2212 samples indicate the beginning of the transition of conduction and gap from 2D to 3D with reduction in oxygen content (7-δ) and Ca2+ (1-x); as such, the c-axis pseudogap could be a different phenomenon from the superconducting and spin gaps. These models also indicate that the recent MgB2 superconductor is at least not of the Y123 or Bi2212 type.
LOCAL ORTHOGONAL CUTTING METHOD FOR COMPUTING MEDIAL CURVES AND ITS BIOMEDICAL APPLICATIONS
Einstein, Daniel R.; Dyedov, Vladimir
2010-01-01
Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546
Thin layer drying of cassava starch using continuous vibrated fluidized bed dryer
NASA Astrophysics Data System (ADS)
Suherman, Trisnaningtyas, Rona
2015-12-01
This paper presents the experimental work and thin layer modelling of cassava starch drying in a continuous vibrated fluidized bed dryer. The experimental data were used to validate nine thin layer models of the drying curve. Cassava starch with 0.21 initial moisture content was dried at different air drying temperatures (50°C, 55°C, 60°C, 65°C, 70°C), different weir heights in the bed (0 and 1 cm), and different solid feed flows (10 and 30 g/min). The results showed that air drying temperature has a significant effect on the drying curve, while the effects of weir height and solid flow rate are slight. Based on the values of R2, χ2, and RMSE, the Page model describes the thin layer drying of cassava starch most accurately.
Modeling Pacing Behavior and Test Speededness Using Latent Growth Curve Models
ERIC Educational Resources Information Center
Kahraman, Nilufer; Cuddy, Monica M.; Clauser, Brian E.
2013-01-01
This research explores the usefulness of latent growth curve modeling in the study of pacing behavior and test speededness. Examinee response times from a high-stakes, computerized examination, collected before and after the examination was subjected to a timing change, were analyzed using a series of latent growth curve models to detect…
Modeling Error Distributions of Growth Curve Models through Bayesian Methods
ERIC Educational Resources Information Center
Zhang, Zhiyong
2016-01-01
Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems in blindly assuming normality, a general Bayesian framework is…
NASA Astrophysics Data System (ADS)
Ragno, Elisa; AghaKouchak, Amir; Love, Charlotte A.; Cheng, Linyin; Vahedifard, Farshid; Lima, Carlos H. R.
2018-03-01
During the last century, we have observed a warming climate with more intense precipitation extremes in some regions, likely due to increases in the atmosphere's water holding capacity. Traditionally, infrastructure design and rainfall-triggered landslide models rely on the notion of stationarity, which assumes that the statistics of extremes do not change significantly over time. However, in a warming climate, infrastructures and natural slopes will likely face more severe climatic conditions, with potential human and socioeconomic consequences. Here we outline a framework for quantifying climate change impacts based on the magnitude and frequency of extreme rainfall events using bias corrected historical and multimodel projected precipitation extremes. The approach evaluates changes in rainfall Intensity-Duration-Frequency (IDF) curves and their uncertainty bounds using a nonstationary model based on Bayesian inference. We show that highly populated areas across the United States may experience extreme precipitation events up to 20% more intense and twice as frequent, relative to historical records, despite the expectation of unchanged annual mean precipitation. Since IDF curves are widely used for infrastructure design and risk assessment, the proposed framework offers an avenue for assessing resilience of infrastructure and landslide hazard in a warming climate.
Joel W. Homan; Charles H. Luce; James P. McNamara; Nancy F. Glenn
2011-01-01
Describing the spatial variability of heterogeneous snowpacks at a watershed or mountain-front scale is important for improvements in large-scale snowmelt modelling. Snowmelt depletion curves, which relate fractional decreases in snow-covered area (SCA) to normalized decreases in snow water equivalent (SWE), are a common approach for scaling up snowmelt models....
Comparing The Effectiveness of a90/95 Calculations (Preprint)
2006-09-01
Nachtsheim, John Neter, William Li, Applied Linear Statistical Models, 5th ed., McGraw-Hill/Irwin, 2005. 5. Mood, Graybill and Boes, Introduction… curves is based on methods that are only valid for ordinary linear regression. Requirements for a valid Ordinary Least-Squares Regression Model: There… 1. The model must be linear; for example, … is a linear model; … is not. 2. Uniform variance (homoscedasticity
Ecological quality boundary-setting procedures: the Gulf of Riga case study.
Aigars, Juris; Müller-Karulis, Bärbel; Martin, Georg; Jermakovs, Vadims
2008-03-01
Two approaches for setting ecological class boundaries, response curves and a simplified mathematical boundary-setting protocol, were tested for coastal, transitional and open waters in the Gulf of Riga, Baltic Sea. The simplified mathematical boundary-setting protocol defines acceptable ecological status, based on expert judgment, by a uniform relative deviation from reference conditions. In contrast, response curves derive class boundary definitions from observed changes in biological quality elements along environmental pressure gradients. Identification of relevant environmental pressures for the construction of response curves was based on a conceptual model of eutrophication in the Gulf of Riga. Response curves were successfully established for summer chlorophyll a and transparency, as well as for macrozoobenthos abundance in the Central Gulf, the macrozoobenthos biotic coefficient in the Southern Gulf, and the maximum depth of phytobenthos in the Northern Gulf. In the Gulf of Riga, response curves almost always permitted a larger deviation from reference conditions than the 50% deviation applied in the simplified mathematical boundary-setting protocol. The case study clearly demonstrated that class boundary definitions should take into account the sensitivity of the target water body. Also, the class boundaries for different ecological quality elements were internally more consistent than those derived by the simplified mathematical boundary-setting protocol.
NASA Astrophysics Data System (ADS)
Perrier, C.; Breysacher, J.; Rauw, G.
2009-09-01
Aims: We present a technique to determine the orbital and physical parameters of eclipsing eccentric Wolf-Rayet + O-star binaries, where one eclipse is produced by the absorption of the O-star light by the stellar wind of the W-R star. Methods: Our method is based on the use of the empirical moments of the light curve that are integral transforms evaluated from the observed light curves. The optical depth along the line of sight and the limb darkening of the W-R star are modelled by simple mathematical functions, and we derive analytical expressions for the moments of the light curve as a function of the orbital parameters and the key parameters of the transparency and limb-darkening functions. These analytical expressions are then inverted in order to derive the values of the orbital inclination, the stellar radii, the fractional luminosities, and the parameters of the wind transparency and limb-darkening laws. Results: The method is applied to the SMC W-R eclipsing binary HD 5980, a remarkable object that underwent an LBV-like event in August 1994. The analysis refers to the pre-outburst observational data. A synthetic light curve based on the elements derived for the system allows a quality assessment of the results obtained.
Modeling the Propagation of Shock Waves in Metals
NASA Astrophysics Data System (ADS)
Howard, W. Michael
2005-07-01
We present modeling results for the propagation of strong shock waves in metals. In particular, we use an arbitrary Lagrange Eulerian (ALE3D) code to model the propagation of strong pressure waves (P ~ 300 to 400 kbar) generated with high explosives in contact with aluminum cylinders. The aluminum cylinders are assumed to be either flat-topped or to have large-amplitude curved surfaces. We use 3D Lagrange mechanics. For the aluminum we use a rate-independent Steinberg-Guinan model, in which the yield strength and bulk modulus depend on pressure, density and temperature. The calculation of the melt temperature is based on the Lindemann law. At melt, the yield strength and bulk modulus are set to zero. The pressure is represented as a seven-term polynomial as a function of density. For the HMX-based high explosive, we use a JWL equation of state with a program burn model that gives the correct detonation velocity and C-J pressure (P ~ 390 kbar). For the case of the large-amplitude curved surface, we discuss the evolving shock structure in terms of the early shock propagation experiments by Sakharov. We also discuss the dependence of our results upon our material model for aluminum.
Helix-coil transition of a four-way DNA junction observed by multiple fluorescence parameters.
Vámosi, György; Clegg, Robert M
2008-10-16
The thermal denaturation of immobile four-way DNA ("Holliday") junctions with 17 base pair arms was studied via fluorescence spectroscopic measurements. Two arms of the molecule were labeled at the 5'-end with fluorescein and tetramethylrhodamine, respectively. Melting was monitored by the fluorescence intensity of the dyes, the fluorescence anisotropy of tetramethylrhodamine, and Förster resonance energy transfer (FRET) between fluorescein and rhodamine. To fit the thermal denaturation curves of the four-way junctions, two basic thermodynamic models were tested: (1) all-or-none transitions assuming a molecularity of one, two, or four and (2) a statistical "zipper" model. The all-or-none models correspond to reaction mechanisms assuming that the cooperative melting unit (that is, the structure changing from complete helix to complete coil) consists of (1) one arm, (2) two neighboring arms (which have one continuous strand common to the two arms), or (3) all four arms. In each case, the melting of the cooperative unit takes place in a single step. The tetramolecular reaction model (four-arm melting) yielded unrealistically low van't Hoff enthalpy and entropy values, whereas the monomolecular model (one-arm melting) resulted in a poor fit to the experimental data. The all-or-none bimolecular (two neighboring arm) fit gave standard enthalpy change (ΔH) values intermediate between those expected for the melting of a duplex with a total length between the helix lengths of one and two arms (17 and 34 base pairs). Simulations according to the zipper model fit the experimental curves best when the length of the simulated duplex was assumed to be 34 base pairs, the length of a single strand. This suggests that the most important parameter determining the melting behavior of the molecule is the end-to-end distance of the strands (34 bases) rather than the length of the individual arms (17 base pairs) and that the equilibrium concentration of partially denatured intermediate states has to be taken into account. These findings are in good agreement with results obtained for three-way DNA junctions (Stuhmeier, F.; Lilley, D. M.; Clegg, R. M. Biochemistry 1997, 36, 13539). An interesting result is that the extent-of-melting curves derived from the fluorescence intensity and anisotropy nearly agree, whereas the curve derived from the FRET data shows a change prior to the melting. This may be an indication of a conformational change leaving the double-stranded structure intact but changing the end-to-end distance of the different arms in a way consistent with the transition to the extended square configuration (Clegg, R. M.; Murchie, A. I.; Lilley, D. M. Biophys. J. 1994, 66, 99) of this branched molecule.
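For the bimolecular all-or-none case, the melting curve has a closed form: with association constant K(T) = exp(−(ΔH − TΔS)/RT) and equal strand concentrations, the intact fraction solves a quadratic. A sketch with illustrative (not fitted) thermodynamic values:

```python
# Sketch: all-or-none bimolecular melting curve, A + B <-> AB (illustrative parameters).
import numpy as np

R = 8.314                    # J/(mol K)
dH, dS = -300e3, -850.0      # van't Hoff enthalpy (J/mol) and entropy (J/(mol K)), assumed
Ct = 1e-5                    # total strand concentration, M

T = np.linspace(290.0, 340.0, 501)
K = np.exp(-(dH - T * dS) / (R * T))     # association constant

beta = K * Ct / 2.0                      # from K = alpha / ((1 - alpha)^2 * Ct/2)
alpha = (2.0 * beta + 1.0 - np.sqrt(4.0 * beta + 1.0)) / (2.0 * beta)

Tm = T[np.argmin(np.abs(alpha - 0.5))]   # concentration-dependent melting temperature
```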
CyberShake: Running Seismic Hazard Workflows on Distributed HPC Resources
NASA Astrophysics Data System (ADS)
Callaghan, S.; Maechling, P. J.; Graves, R. W.; Gill, D.; Olsen, K. B.; Milner, K. R.; Yu, J.; Jordan, T. H.
2013-12-01
As part of its program of earthquake system science research, the Southern California Earthquake Center (SCEC) has developed a simulation platform, CyberShake, to perform physics-based probabilistic seismic hazard analysis (PSHA) using 3D deterministic wave propagation simulations. CyberShake performs PSHA by simulating a tensor-valued wavefield of Strain Green Tensors, and then using seismic reciprocity to calculate synthetic seismograms for about 415,000 events per site of interest. These seismograms are processed to compute ground motion intensity measures, which are then combined with probabilities from an earthquake rupture forecast to produce a site-specific hazard curve. Seismic hazard curves for hundreds of sites in a region can be used to calculate a seismic hazard map, representing the seismic hazard for a region. We present a recently completed PSHA study in which we calculated four CyberShake seismic hazard maps for the Southern California area to compare how CyberShake hazard results are affected by different SGT computational codes (AWP-ODC and AWP-RWG) and different community velocity models (Community Velocity Model - SCEC (CVM-S4) v11.11 and Community Velocity Model - Harvard (CVM-H) v11.9). We present our approach to running workflow applications on distributed HPC resources, including systems without support for remote job submission. We show how our approach extends the benefits of scientific workflows, such as job and data management, to large-scale applications on Track 1 and Leadership class open-science HPC resources. We used our distributed workflow approach to perform CyberShake Study 13.4 on two new NSF open-science HPC computing resources, Blue Waters and Stampede, executing over 470 million tasks to calculate physics-based hazard curves for 286 locations in the Southern California region. For each location, we calculated seismic hazard curves with two different community velocity models and two different SGT codes, resulting in over 1100 hazard curves. We will report on the performance of this CyberShake study, four times larger than previous studies. Additionally, we will examine the challenges we face applying these workflow techniques to additional open-science HPC systems and discuss whether our workflow solutions continue to provide value to our large-scale PSHA calculations.
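Once the synthetic seismograms are reduced to intensity measures, assembling a hazard curve is bookkeeping: sum, over all ruptures, the annual rate times the conditional exceedance probability. A toy sketch of that final step (rates, medians and the lognormal scatter are invented, not CyberShake values):

```python
# Sketch: hazard curve = sum over events of rate * P(IM > x | event) (toy numbers).
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(7)
n_events = 1000
annual_rate = rng.uniform(1e-6, 1e-4, n_events)          # from a rupture forecast
median_sa = rng.lognormal(np.log(0.08), 1.0, n_events)   # per-event median intensity, g

im = np.logspace(-2, 0.3, 50)                            # intensity-measure levels
exceed = np.array([lognorm.sf(im, s=0.6, scale=m) for m in median_sa])

hazard_curve = (annual_rate[:, None] * exceed).sum(axis=0)   # annual exceedance rate
```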
Mocho, Pierre; Desauziers, Valérie
2011-05-01
Solid-phase microextraction (SPME) is a powerful technique, easy to implement for on-site static sampling of indoor VOCs emitted by building materials. However, a major constraint lies in the establishment of calibration curves, which requires the complex generation of standard atmospheres. Thus, the purpose of this paper is to propose a model to predict the adsorption kinetics (i.e., calibration curves) of four model VOCs. The model is based on Fick's laws for the gas phase and on the equilibrium or the solid diffusion model for the adsorptive phase. Two samplers (the FLEC® and a home-made cylindrical emission cell), coupled to SPME for static sampling of material emissions, were studied. Good agreement between the model and experimental data is observed, and the results show the influence of the sampling rate on the mass transfer mode as a function of sample volume. The equilibrium model is suited to the larger-volume sampler (cylindrical cell), while the solid diffusion model is suited to the small-volume sampler (FLEC®). The limiting steps of mass transfer are gas-phase diffusion for the cylindrical cell and pore surface diffusion for the FLEC®. In the future, this modeling approach could be a useful tool for the time-saving development of SPME methods to study building material emissions in static sampling mode.
Clinical prognostic rules for severe acute respiratory syndrome in low- and high-resource settings.
Cowling, Benjamin J; Muller, Matthew P; Wong, Irene O L; Ho, Lai-Ming; Lo, Su-Vui; Tsang, Thomas; Lam, Tai Hing; Louie, Marie; Leung, Gabriel M
2006-07-24
An accurate prognostic model for patients with severe acute respiratory syndrome (SARS) could provide a practical clinical decision aid. We developed and validated prognostic rules for both high- and low-resource settings based on data available at the time of admission. We analyzed data on all 1755 and 291 patients with SARS in Hong Kong (derivation cohort) and Toronto (validation cohort), respectively, using a multivariable logistic scoring method with internal and external validation. Scores were assigned on the basis of patient history in a basic model, and a full model additionally incorporated radiological and laboratory results. The main outcome measure was death. Predictors for mortality in the basic model included older age, male sex, and the presence of comorbid conditions. Additional predictors in the full model included haziness or infiltrates on chest radiography, less than 95% oxygen saturation on room air, high lactate dehydrogenase level, and high neutrophil and low platelet counts. The basic model had an area under the receiver operating characteristic (ROC) curve of 0.860 in the derivation cohort, which was maintained on external validation with an area under the ROC curve of 0.882. The full model improved discrimination with areas under the ROC curve of 0.877 and 0.892 in the derivation and validation cohorts, respectively. The model performs well and could be useful in assessing prognosis for patients who are infected with re-emergent SARS.
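A hedged sketch of the general recipe behind such a score, on simulated data rather than the Hong Kong or Toronto cohorts: fit a logistic model on admission variables, scale coefficients to integer points, and report discrimination as the area under the ROC curve:

```python
# Sketch: logistic prognostic score and ROC AUC on simulated admission data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1755
X = np.column_stack([
    rng.normal(45.0, 18.0, n),   # age
    rng.integers(0, 2, n),       # male sex
    rng.integers(0, 2, n),       # comorbid condition
]).astype(float)
logit = -6.0 + 0.06 * X[:, 0] + 0.5 * X[:, 1] + 1.0 * X[:, 2]   # assumed true effects
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)  # death indicator

model = LogisticRegression(max_iter=1000).fit(X, y)
points = np.round(model.coef_[0] / np.abs(model.coef_[0]).min()).astype(int)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])            # discrimination
```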
Transient pressure analysis of a volume fracturing well in fractured tight oil reservoirs
NASA Astrophysics Data System (ADS)
Lu, Cheng; Wang, Jiahang; Zhang, Cong; Cheng, Minhua; Wang, Xiaodong; Dong, Wenxiu; Zhou, Yingfang
2017-12-01
This paper presents a semi-analytical model to simulate transient pressure curves for a vertical well with a reconstructed fracture network in fractured tight oil reservoirs. In the proposed model, the reservoir is a composite system containing two regions. The inner region is described as a formation with a finite-conductivity hydraulic fracture network where the flow in the fractures is assumed to be linear, while the outer region is modeled using the classical Warren-Root model where radial flow applies. The transient pressure curves of a vertical well in the proposed reservoir model are calculated semi-analytically using the Laplace transform and Stehfest numerical inversion. As shown in the type curves, the flow is divided into several regimes: (a) linear flow in artificial main fractures; (b) coupled boundary flow; (c) early linear flow in the fractured formation; (d) mid radial flow in the semi-fractures of the formation; (e) mid radial flow or pseudo-steady flow; (f) mid cross-flow; (g) closed boundary flow. Based on the newly proposed model, the effects of sensitive parameters, such as the elastic storativity ratio, cross-flow coefficient, fracture conductivity and skin factor, on the type curves were also analyzed extensively. The simulated type curves show that, for a vertical fractured well in a tight reservoir, the elastic storativity ratio and crossflow coefficient affect the time and the degree of crossflow, respectively. The pressure loss increases with an increase in fracture conductivity. To a certain extent, fracture conductivity has a more obvious effect than fracture half-length on improving production. With an increase in the wellbore storage coefficient, the fluid compressibility becomes so large that it may mask the early-stage fracturing characteristics; linear or bilinear flow may not be recognized, and the pressure and pressure derivative curves gradually shift to the right. With an increase in the skin effect, the pressure loss increases gradually.
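The Stehfest inversion used for such type curves is a short, standard algorithm: f(t) ≈ (ln 2/t) Σᵢ Vᵢ F(i·ln 2/t) with tabulated weights Vᵢ. A self-contained sketch, checked against a transform with a known inverse (the reservoir model would supply its own F(s)):

```python
# Sketch: Gaver-Stehfest numerical inversion of a Laplace-space solution F(s).
import numpy as np
from math import factorial

def stehfest_weights(N=12):                 # N must be even
    V = np.zeros(N)
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * factorial(2 * k)) / (
                factorial(N // 2 - k) * factorial(k) * factorial(k - 1)
                * factorial(i - k) * factorial(2 * k - i))
        V[i - 1] = (-1) ** (N // 2 + i) * s
    return V

def invert(F, t, N=12):
    V = stehfest_weights(N)
    a = np.log(2.0) / t
    return a * sum(V[i] * F((i + 1) * a) for i in range(N))

t = np.array([0.5, 1.0, 2.0])
approx = invert(lambda s: 1.0 / (s + 1.0), t)   # should be close to exp(-t)
```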
Falgreen, Steffen; Laursen, Maria Bach; Bødker, Julie Støve; Kjeldsen, Malene Krag; Schmitz, Alexander; Nyegaard, Mette; Johnsen, Hans Erik; Dybkær, Karen; Bøgsted, Martin
2014-06-05
In vitro generated dose-response curves of human cancer cell lines are widely used to develop new therapeutics. The curves are summarised by simplified statistics that ignore the conventionally used dose-response curves' dependency on drug exposure time and growth kinetics. This may lead to suboptimal exploitation of data and biased conclusions on the potential of the drug in question. Therefore we set out to improve the dose-response assessments by eliminating the impact of time dependency. First, a mathematical model for drug induced cell growth inhibition was formulated and used to derive novel dose-response curves and improved summary statistics that are independent of time under the proposed model. Next, a statistical analysis workflow for estimating the improved statistics was suggested consisting of 1) nonlinear regression models for estimation of cell counts and doubling times, 2) isotonic regression for modelling the suggested dose-response curves, and 3) resampling based method for assessing variation of the novel summary statistics. We document that conventionally used summary statistics for dose-response experiments depend on time so that fast growing cell lines compared to slowly growing ones are considered overly sensitive. The adequacy of the mathematical model is tested for doxorubicin and found to fit real data to an acceptable degree. Dose-response data from the NCI60 drug screen were used to illustrate the time dependency and demonstrate an adjustment correcting for it. The applicability of the workflow was illustrated by simulation and application on a doxorubicin growth inhibition screen. The simulations show that under the proposed mathematical model the suggested statistical workflow results in unbiased estimates of the time independent summary statistics. Variance estimates of the novel summary statistics are used to conclude that the doxorubicin screen covers a significant diverse range of responses ensuring it is useful for biological interpretations. Time independent summary statistics may aid the understanding of drugs' action mechanism on tumour cells and potentially renew previous drug sensitivity evaluation studies.
Sorption and reemission of formaldehyde by gypsum wallboard. Report for June 1990-August 1992
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, J.C.S.
1993-01-01
The paper gives results of an analysis of the sorption and desorption of formaldehyde by unpainted wallboard, using a mass transfer model based on the Langmuir sorption isotherm. The sorption and desorption rate constants are determined by short-term experimental data. Long-term sorption and desorption curves are developed by the mass transfer model without any adjustable parameters. Compared with other empirically developed models, the mass transfer model has more extensive applicability and provides an elucidation of the sorption and desorption mechanism that empirical models cannot. The mass transfer model is also more feasible and accurate than empirical models for applications such as scale-up and exposure assessment. For a typical indoor environment, the model predicts that gypsum wallboard is a much stronger sink for formaldehyde than for other indoor air pollutants such as tetrachloroethylene and ethylbenzene. The strong sink effects are reflected by the high equilibrium capacity and slow decay of the desorption curve.
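A sketch of a Langmuir-type sink model of this general kind, as a pair of ODEs for a well-mixed chamber; the rate constants, capacity and chamber parameters are illustrative, not the report's fitted values:

```python
# Sketch: chamber concentration C and wallboard surface mass M under Langmuir sorption.
import numpy as np
from scipy.integrate import solve_ivp

ka, kd, Mmax = 0.5, 0.01, 50.0   # sorption rate, desorption rate, capacity (assumed)
A_V = 1.0                        # wallboard area per chamber volume, m^-1 (assumed)
N = 1.0                          # air exchange rate, h^-1
C_in = 0.1                       # supply air concentration, mg/m^3

def rhs(t, y):
    C, M = y
    sorb = ka * C * (1.0 - M / Mmax) - kd * M   # net flux onto the surface
    return [N * (C_in - C) - A_V * sorb, sorb]

sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0], max_step=0.5)
# setting C_in = 0 after equilibrium reproduces the slowly decaying desorption curve
```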
NASA Astrophysics Data System (ADS)
Cantrell, Andrew Glenn
We consider two types of anomalous observations which have arisen from efforts to measure dynamical masses of X-ray binary stars: (1) Radial velocity curves which seemingly show the primary and the secondary out of antiphase in most systems, and (2) The observation of double-waved light curves which deviate significantly from the ellipsoidal modulations expected for a Roche lobe filling star. We consider both problems with the joint goals of understanding the physical origins of the anomalous observations, and using this understanding to allow robust dynamical determinations of mass in X-ray binary systems. In our analysis of phase-shifted radial velocity curves, we discuss a comprehensive sample of X-ray binaries with published phase-shifted radial velocity curves. We show that the most commonly adopted explanation for phase shifts is contradicted by many observations, and consider instead a generalized form of a model proposed by Smak in 1970. We show that this model is well supported by a range of observations, including some systems which had previously been considered anomalous. We lay the groundwork for the derivation of mass ratios based on our explanation for phase shifts, and we discuss the work necessary to produce more detailed physical models of the phase shift. In our analysis of non-ellipsoidal light curves, we focus on the very well-studied system A0620-00. We present new VIH SMARTS photometry spanning 1999-2007, and supplement this with a comprehensive collection of archival data obtained since 1981. We show that A0620-00 undergoes optical state changes within X-ray quiescence and argue that not all quiescent data should be used for determinations of the inclination. We identify twelve light curves which may reliably be used for determining the inclination. We show that the accretion disk contributes significantly to all twelve curves and is the dominant source of nonellipsoidal variations. We derive the disk fraction for each of the twelve curves and show that, after correcting for the disk component, these twelve curves point to a consistent inclination. Finally, we consider the very different binary system V4641 Sgr and show that it has some qualitative similarities to A0620-00, suggesting that the phenomena we find in A0620-00 are likely to be widespread.
Sitek, Aneta; Rosset, Iwona; Żądzińska, Elżbieta; Kasielska-Trojan, Anna; Neskoromna-Jędrzejczak, Aneta; Antoszewski, Bogusław
2016-04-01
Light skin pigmentation is a known risk factor for skin cancer. Skin color parameters and Fitzpatrick phototypes were evaluated in terms of their usefulness in predicting the risk of skin cancer. A case-control study involved 133 individuals with skin cancer (100 with basal cell carcinoma, 21 with squamous cell carcinoma, 12 with melanoma) and 156 healthy individuals. All had their skin phototype determined, and spectrophotometric skin color measurements were taken on the inner surfaces of the arms and on the buttock. Using those data, prediction models were built and subjected to 17-fold stratified cross-validation. A model based on skin phototypes was characterized by an area under the receiver operating characteristic curve of 0.576 and exhibited lower predictive power than the models, which were mostly based on spectrophotometric variables describing pigmentation levels. The best predictors of skin cancer were the R coordinate of RGB color space (area under the receiver operating characteristic curve 0.687) and the melanin index (area under the receiver operating characteristic curve 0.683) for skin on the buttock. A small number of patients were studied, and the models were not externally validated. Skin color parameters are more accurate predictors of skin cancer occurrence than skin phototypes. Spectrophotometry is a quick, easy, and affordable method offering relatively good predictive power. Copyright © 2015 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.
Light Curve Simulation Using Spacecraft CAD Models and Empirical Material Spectral BRDFS
NASA Astrophysics Data System (ADS)
Willison, A.; Bedard, D.
This paper presents a Matlab-based light curve simulation software package that uses computer-aided design (CAD) models of spacecraft and the spectral bidirectional reflectance distribution function (sBRDF) of their homogenous surface materials. It represents the overall optical reflectance of objects as a sBRDF, a spectrometric quantity, obtainable during an optical ground truth experiment. The broadband bidirectional reflectance distribution function (BRDF), the basis of a broadband light curve, is produced by integrating the sBRDF over the optical wavelength range. Colour-filtered BRDFs, the basis of colour-filtered light curves, are produced by first multiplying the sBRDF by colour filters, and integrating the products. The software package's validity is established through comparison of simulated reflectance spectra and broadband light curves with those measured of the CanX-1 Engineering Model (EM) nanosatellite, collected during an optical ground truth experiment. It is currently being extended to simulate light curves of spacecraft in Earth orbit, using spacecraft Two-Line-Element (TLE) sets, yaw/pitch/roll angles, and observer coordinates. Measured light curves of the NEOSSat spacecraft will be used to validate simulated quantities. The sBRDF was chosen to represent material reflectance as it is spectrometric and a function of illumination and observation geometry. Homogeneous material sBRDFs were obtained using a goniospectrometer for a range of illumination and observation geometries, collected in a controlled environment. The materials analyzed include aluminum alloy, two types of triple-junction photovoltaic (TJPV) cell, white paint, and multi-layer insulation (MLI). Interpolation and extrapolation methods were used to determine the sBRDF for all possible illumination and observation geometries not measured in the laboratory, resulting in empirical look-up tables. These look-up tables are referenced when calculating the overall sBRDF of objects, where the contribution of each facet is proportionally integrated.
Numerical simulations of flow fields through conventionally controlled wind turbines & wind farms
NASA Astrophysics Data System (ADS)
Emre Yilmaz, Ali; Meyers, Johan
2014-06-01
In the current study, an Actuator-Line Model (ALM) is implemented in our in-house pseudo-spectral LES solver SP-WIND, including a turbine controller. Below rated wind speed, turbines are controlled by a standard torque controller aiming at maximum power extraction from the wind. Above rated wind speed, the extracted power is limited by a blade pitch controller based on a proportional-integral control algorithm. This model is used to perform a series of single-turbine and wind-farm simulations using the NREL 5MW turbine. First, we focus on below-rated wind speed and investigate the effect of the farm layout on the controller calibration curves. These calibration curves are expressed in terms of nondimensional torque and rotational speed, using the mean turbine-disk velocity as reference. We show that this normalization leads to calibration curves that are independent of wind speed, but the calibration curves do depend on the farm layout, in particular for tightly spaced farms. Compared to turbines in a lone-standing set-up, turbines in a farm experience a different wind distribution over the rotor due to the farm boundary-layer interaction. We demonstrate this for fully developed wind-farm boundary layers with aligned turbine arrangements at different spacings (5D, 7D, 9D). Furthermore, we compare calibration curves obtained from full farm simulations with calibration curves that can be obtained at a much lower cost using a minimal flow unit.
The prediction of acoustical particle motion using an efficient polynomial curve fit procedure
NASA Technical Reports Server (NTRS)
Marshall, S. E.; Bernhard, R.
1984-01-01
A procedure is examined whereby the acoustic modal parameters, natural frequencies and mode shapes, in the cavities of transportation vehicles are determined experimentally. The acoustic mode shapes are described in terms of the particle motion. The acoustic modal analysis procedure is tailored to existing minicomputer-based spectral analysis systems.
Busetto, Gian Maria; De Berardinis, Ettore; Sciarra, Alessandro; Panebianco, Valeria; Giovannone, Riccardo; Rosato, Stefano; D'Errigo, Paola; Di Silverio, Franco; Gentile, Vincenzo; Salciccia, Stefano
2013-12-01
To overcome the well-known prostate-specific antigen limits, several new biomarkers have been proposed. Since its introduction in clinical practice, the urinary prostate cancer gene 3 (PCA3) assay has shown promising results for prostate cancer (PC) detection. Furthermore, multiparametric magnetic resonance imaging (mMRI) has the ability to better describe several aspects of PC. A prospective study of 171 patients with negative prostate biopsy findings and a persistently high prostate-specific antigen level was conducted to assess the role of mMRI and PCA3 in identifying PC. All patients underwent the PCA3 test and mMRI before a second transrectal ultrasound-guided prostate biopsy. The accuracy and reliability of PCA3 (3 different cutoff points) and mMRI were evaluated. Four multivariate logistic regression models were analyzed, in terms of discrimination and the cost/benefit ratio, to assess the clinical role of PCA3 and mMRI in predicting the biopsy outcome. A decision curve analysis was also plotted. Repeated transrectal ultrasound-guided biopsy identified 68 new cases (41.7%) of PC. The sensitivity and specificity were 68% and 49% for the PCA3 test and 74% and 90% for mMRI, respectively. Evaluating the regression models, the best discrimination (area under the curve 0.808) was obtained using the full model (base clinical model plus mMRI and PCA3). The decision curve analysis, used to evaluate the cost/benefit ratio, showed good performance in predicting PC with the model that included mMRI and PCA3. mMRI increased the accuracy and sensitivity of the PCA3 test, and the use of the full model significantly improved the cost/benefit ratio, avoiding unnecessary biopsies. Copyright © 2013 Elsevier Inc. All rights reserved.
Neural network modeling for surgical decisions on traumatic brain injury patients.
Li, Y C; Liu, L; Chiu, W T; Jian, W S
2000-01-01
Computerized medical decision support systems have been a major research topic in recent years. Intelligent computer programs have been implemented to aid physicians and other medical professionals in making difficult medical decisions. This report compares three different mathematical models for building a traumatic brain injury (TBI) medical decision support system (MDSS). These models were developed based on a large TBI patient database. This MDSS accepts a set of patient data, such as type of skull fracture, Glasgow Coma Scale (GCS) score, and episodes of convulsion, and returns the chance that a neurosurgeon would recommend an open-skull surgery for this patient. The three mathematical models described in this report include a logistic regression model, a multi-layer perceptron (MLP) neural network, and a radial-basis-function (RBF) neural network. From the 12,640 patients selected from the database, a randomly drawn 9,480 cases were used as the training group to develop/train our models. The other 3,160 cases formed the validation group used to evaluate the performance of these models. We used sensitivity, specificity, areas under receiver-operating characteristic (ROC) curves, and calibration curves as indicators of how accurate these models are in predicting a neurosurgeon's decision on open-skull surgery. The results showed that, assuming equal importance of sensitivity and specificity, the logistic regression model had a (sensitivity, specificity) of (73%, 68%), compared to (80%, 80%) from the RBF model and (88%, 80%) from the MLP model. The resultant areas under the ROC curve for the logistic regression, RBF, and MLP neural networks are 0.761, 0.880, and 0.897, respectively (P < 0.05). Among these models, the logistic regression has noticeably poorer calibration. This study demonstrated the feasibility of applying neural networks as the mechanism for TBI decision support systems based on clinical databases. The results also suggest that neural networks may be a better solution for complex, non-linear medical decision support systems than conventional statistical techniques such as logistic regression.
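The model comparison described above can be reproduced in outline with scikit-learn; the sketch below uses a synthetic stand-in for the TBI database and compares logistic regression with an MLP by validation-set ROC AUC (the feature count and network size are assumptions of this sketch, not the study's).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the 12,640-case TBI database.
X, y = make_classification(n_samples=12640, n_features=10, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, train_size=9480, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "MLP": MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
    print(name, round(auc, 3))
```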
NASA Astrophysics Data System (ADS)
Neretnieks, Ivars; Eriksen, Tryggve; TäHtinen, PäIvi
1982-08-01
Radionuclide migration was studied in a natural fissure in a granite core. The fissure was oriented parallel to the axis in a cylindrical core 30 cm long and 20 cm in diameter. The tracer solution was injected at one end of the core and collected at the other. Breakthrough curves were obtained for the nonsorbing tracers (tritiated water and a large-molecular-weight lignosulphonate molecule) and for the sorbing tracers (cesium and strontium). From the breakthrough curves for the nonsorbing tracers it could be concluded that channeling occurs in the single fissure. A `dispersion' model based on channeling is presented. The results from the sorbing tracers indicate that there is substantial diffusion into, and sorption in, the rock matrix. Sorption on the surface of the fissure also accounts for a part of the retardation effect of the sorbing species. A model which includes the mechanisms of channeling, surface sorption, matrix diffusion, and matrix sorption is presented. The experimental breakthrough curves can be fitted fairly well by this model using independently obtained data on diffusivities and matrix sorption.
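For intuition about the breakthrough curves discussed above, here is a minimal sketch of the classic 1D advection-dispersion solution with a retardation factor; the parameter values are illustrative only, and the sketch deliberately omits the paper's channeling and matrix-diffusion mechanisms.

```python
import numpy as np
from scipy.special import erfc

def breakthrough(t, length, velocity, dispersion, retardation=1.0):
    """Relative outlet concentration C/C0 for 1D advection-dispersion
    with retardation factor R (R = 1 for a nonsorbing tracer)."""
    t = np.asarray(t, dtype=float)
    r = retardation
    return 0.5 * erfc((r * length - velocity * t)
                      / (2.0 * np.sqrt(dispersion * r * t)))

t = np.linspace(1.0, 2.0e5, 400)   # s; illustrative values throughout
c_water = breakthrough(t, length=0.30, velocity=1e-5, dispersion=1e-7)
c_cesium = breakthrough(t, length=0.30, velocity=1e-5, dispersion=1e-7,
                        retardation=50.0)   # sorbing tracer arrives much later
```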
Zhou, Jingwei; Wu, Jinglan; Liu, Yanan; Zou, Fengxia; Wu, Jian; Li, Kechun; Chen, Yong; Xie, Jingjing; Ying, Hanjie
2013-09-01
The adsorption of quaternary mixtures of ethanol/glycerol/glucose/acetic acid onto a microporous hyper-cross-linked resin HD-01 was studied in fixed beds. A mass transport model based on the film-solid linear driving force and the competitive Langmuir isotherm equation for the equilibrium relationship was used to develop theoretical fixed-bed breakthrough curves. It was observed that the outlet concentration of glucose and glycerol exceeded the inlet concentration (c/c0 > 1), which is evidence of competitive adsorption. This phenomenon can be explained by the displacement of glucose and glycerol by ethanol molecules, owing to their more intensive interactions with the resin surface. The proposed model was validated using experimental data and is capable of reasonably predicting the breakthrough curve of a specific component under different operating conditions. The results show that HD-01 is a promising adsorbent for recovery of ethanol from fermentation broth due to its large capacity, high selectivity, and rapid adsorption rate. Copyright © 2013 Elsevier Ltd. All rights reserved.
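The competitive Langmuir isotherm used for the equilibrium relationship can be sketched as below; the parameter values are invented, not the fitted HD-01 constants. Displacement effects (c/c0 > 1) arise because a strongly bound component suppresses the loading of the weaker ones.

```python
import numpy as np

def competitive_langmuir(c, q_max, affinity):
    """Multicomponent loadings q_i = q_max_i * K_i * c_i / (1 + sum_j K_j c_j)."""
    c, q_max, affinity = (np.asarray(v, dtype=float) for v in (c, q_max, affinity))
    return q_max * affinity * c / (1.0 + np.sum(affinity * c))

# Illustrative parameters (order: ethanol, glycerol, glucose, acetic acid).
q = competitive_langmuir(c=[1.2, 0.4, 0.3, 0.2],
                         q_max=[2.0, 1.0, 0.8, 0.9],
                         affinity=[1.5, 0.3, 0.2, 0.4])
print(q)   # the strongly bound ethanol suppresses the other loadings
```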
Optimization of CO2 laser cutting parameters on Austenitic type Stainless steel sheet
NASA Astrophysics Data System (ADS)
Parthiban, A.; Sathish, S.; Chandrasekaran, M.; Ravikumar, R.
2017-03-01
Thin AISI 316L stainless steel sheet is widely used in sheet metal processing industries for specific applications. CO2 laser cutting is one of the most popular sheet metal cutting processes for cutting sheets in different profiles. In the present work, cutting parameters such as laser power (2000-4000 W), cutting speed (3500-5500 mm/min), and assist gas pressure (0.7-0.9 MPa) were investigated for cutting of 2 mm thick AISI 316L stainless steel sheet. The experimentation was conducted based on a Box-Behnken design. The aim of this work is to develop a mathematical model of kerf width for straight and curved profiles through response surface methodology. The developed mathematical models for straight and curved profiles have been compared. The quadratic models have the best agreement with the experimental data, and the shape of the profile also plays a substantial role in minimizing the kerf width. Finally, a numerical optimization technique has been used to find the optimum laser cutting parameters for both straight and curved profile cuts.
Xiao, Zhiyan; Zou, Wei J; Chen, Ting; Yue, Ning J; Jabbour, Salma K; Parikh, Rahul; Zhang, Miao
2018-03-01
The goal of this study was to examine, for proton therapy, the efficacy of current DVH-based clinical guidelines drawn from photon experience in lung cancer radiation therapy. Comparison proton plans and IMRT plans were generated for 10 lung patients treated in our proton facility. A gEUD-based plan evaluation method was developed for plan evaluation. This evaluation method used normal lung gEUD(a) curves in which the model parameter "a" was sampled from literature-reported values. For all patients, the proton plans delivered lower normal lung V5Gy with similar V20Gy and similar target coverage. Based on current clinical guidelines, proton plans were ranked superior to IMRT plans for all 10 patients. However, the proton and IMRT normal lung gEUD(a) curves crossed for 8 patients within the tested range of "a", which means there was a possibility that the proton plan would be worse than the IMRT plan for lung sparing. A concept of deficiency index (DI) was introduced to quantify the probability of a proton plan doing worse than the IMRT plan. By applying a threshold on DI, four patients' proton plans were ranked inferior to the IMRT plans. Meanwhile, if a threshold on the location of the curve crossing was applied, six patients' proton plans were ranked inferior to the IMRT plans. The contradictory ranking results between the current clinical guidelines and the gEUD(a) curve analysis demonstrate that there are potential pitfalls in applying photon experience directly to the proton world. A comprehensive plan evaluation based on radiobiological models should be carried out to decide whether a lung patient would really benefit from proton therapy. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
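The gEUD(a) evaluation described above reduces to a one-line formula over the differential DVH, gEUD = (Σ v_i d_i^a)^(1/a). A minimal sketch, with toy DVHs standing in for clinical plans:

```python
import numpy as np

def geud(dose_bins, volume_fractions, a):
    """Generalized equivalent uniform dose from a differential DVH.
    dose_bins: bin doses (Gy); volume_fractions: fractional volumes (sum to 1)."""
    d = np.asarray(dose_bins, dtype=float)
    v = np.asarray(volume_fractions, dtype=float)
    return np.sum(v * d ** a) ** (1.0 / a)

# Sweep "a" to build gEUD(a) curves for two toy plans and look for crossings.
a_values = np.linspace(0.5, 10.0, 40)
plan_a = np.array([geud([2, 10, 20, 40], [0.55, 0.25, 0.15, 0.05], a) for a in a_values])
plan_b = np.array([geud([5, 10, 20, 35], [0.40, 0.35, 0.20, 0.05], a) for a in a_values])
crossings = np.where(np.diff(np.sign(plan_a - plan_b)))[0]   # indices where curves cross
```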
NASA Astrophysics Data System (ADS)
Shao, Quanxi; Dutta, Dushmanta; Karim, Fazlul; Petheram, Cuan
2018-01-01
Streamflow discharge is a fundamental dataset required to effectively manage water and land resources. However, developing robust stage-discharge relationships, called rating curves, from which streamflow discharge is derived, is time consuming and costly, particularly in remote areas and especially at high stage levels. As a result, stage-discharge relationships are often heavily extrapolated. Hydrodynamic (HD) models are physically based models used to simulate the flow of water along river channels and over adjacent floodplains. In this paper we demonstrate a method by which a HD model can be used to generate a 'synthetic' stage-discharge relationship at high stages. The method uses a both-side Box-Cox transformation to calibrate the synthetic rating curve such that the regression residuals are as close to the normal distribution as possible. By doing this both-side transformation, the statistical uncertainty in the synthetically derived stage-discharge relationship can be calculated. This enables decision makers to determine whether the uncertainty in the synthetically generated rating curve at high stage levels is acceptable for their decision. The proposed method is demonstrated at two streamflow gauging stations in north Queensland, Australia.
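A minimal sketch of the both-side Box-Cox calibration idea: apply the same transform to stage and discharge, fit a linear regression, and pick the lambda whose residuals look most normal (here scored by the probability-plot correlation). This illustrates the general technique under stated assumptions, not the authors' exact estimation procedure.

```python
import numpy as np
from scipy import stats

def boxcox(x, lam):
    """Box-Cox transform; requires positive data."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if abs(lam) < 1e-8 else (x ** lam - 1.0) / lam

def fit_rating_curve(stage, discharge, lambdas=np.linspace(-1.0, 2.0, 61)):
    """Fit transformed-discharge vs transformed-stage lines, returning the
    lambda whose residuals are closest to normal (probability-plot r)."""
    best = None
    for lam in lambdas:
        y, x = boxcox(discharge, lam), boxcox(stage, lam)
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        (_, _), (_, _, r) = stats.probplot(resid)   # r: normality score
        if best is None or r > best[0]:
            best = (r, lam, slope, intercept)
    return best   # (normality score, lambda, slope, intercept)
```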
NASA Astrophysics Data System (ADS)
Yang, Duo; Zhang, Xu; Pan, Rui; Wang, Yujie; Chen, Zonghai
2018-04-01
The state-of-health (SOH) estimation is always a crucial issue for lithium-ion batteries. In order to provide an accurate and reliable SOH estimation, a novel Gaussian process regression (GPR) model based on the charging curve is proposed in this paper. Different from other research in which SOH is commonly estimated from cycle life, in this work four specific parameters extracted from charging curves are used as inputs of the GPR model instead of cycle numbers. These parameters can reflect the battery aging phenomenon from different angles. The grey relational analysis method is applied to analyze the relational grade between the selected features and SOH. In addition, some adjustments are made in the proposed GPR model: the covariance function design and the similarity measurement of input variables are modified so as to improve the SOH estimation accuracy and adapt to the case of multidimensional input. Several aging data sets from the NASA data repository are used to demonstrate the estimation performance of the proposed method. Results show that the proposed method has high SOH estimation accuracy. Besides, a battery with a dynamic discharging profile is used to verify the robustness and reliability of this method.
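A basic GPR setup of this kind can be assembled with scikit-learn; the sketch below uses synthetic stand-ins for the four charging-curve features and a generic RBF-plus-noise kernel rather than the paper's modified covariance design.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

rng = np.random.default_rng(0)
# Four features per cycle extracted from charging curves (synthetic here).
X = rng.uniform(0.0, 1.0, size=(60, 4))
soh = 1.0 - 0.3 * X[:, 0] + 0.05 * rng.normal(size=60)   # toy SOH labels

# Separate length scale per input dimension handles multidimensional input.
kernel = ConstantKernel() * RBF(length_scale=[1.0] * 4) + WhiteKernel()
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, soh)

soh_mean, soh_std = gpr.predict(X[:5], return_std=True)   # estimate + uncertainty
```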
NASA Astrophysics Data System (ADS)
Lowe, David; Machin, Graham
2012-06-01
The future mise en pratique for the realization of the kelvin will be founded on the melting temperatures of particular metal-carbon eutectic alloys as thermodynamic temperature references. However, at the moment there is no consensus on what should be taken as the melting temperature. An ideal melting or freezing curve should be a completely flat plateau at a specific temperature. Any departure from the ideal is due to shortcomings in the realization and should be accommodated within the uncertainty budget. However, for the proposed alloy-based fixed points, melting takes place over typically some hundreds of millikelvins. Including the entire melting range within the uncertainties would lead to an unnecessarily pessimistic view of the utility of these as reference standards. Therefore, detailed analysis of the shape of the melting curve is needed to give a value associated with some identifiable aspect of the phase transition. A range of approaches are or could be used; some purely practical, determining the point of inflection (POI) of the melting curve, some attempting to extrapolate to the liquidus temperature just at the end of melting, and a method that claims to give the liquidus temperature and an impurity correction based on the analytical Scheil model of solidification that has not previously been applied to eutectic melting. The different methods have been applied to cobalt-carbon melting curves that were obtained under conditions for which the Scheil model might be valid. In the light of the findings of this study it is recommended that the POI continue to be used as a pragmatic measure of temperature but where required a specified limits approach should be used to define and determine the melting temperature.
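The point of inflection (POI) of a measured melting curve can be located as the maximum of the first derivative of a lightly smoothed temperature-time record; a minimal sketch follows (the smoothing window and edge handling are arbitrary choices of this sketch).

```python
import numpy as np

def point_of_inflection(time, temperature, window=9):
    """Locate the POI of a melting curve as the maximum of dT/dt after
    light moving-average smoothing; returns (time, temperature) there."""
    kernel = np.ones(window) / window
    t_smooth = np.convolve(temperature, kernel, mode="same")
    dTdt = np.gradient(t_smooth, time)
    margin = window                      # ignore filter edge effects
    i = margin + int(np.argmax(dTdt[margin:-margin]))
    return time[i], t_smooth[i]
```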
Han, Hyung Joon; Choi, Sae Byeol; Park, Man Sik; Lee, Jin Suk; Kim, Wan Bae; Song, Tae Jin; Choi, Sang Yong
2011-07-01
Single port laparoscopic surgery has come to the forefront of minimally invasive surgery. For those familiar with conventional techniques, however, this type of operation demands a different type of eye/hand coordination and involves unfamiliar working instruments. Herein, the authors describe the learning curve and the clinical outcomes of single port laparoscopic cholecystectomy for 150 consecutive patients with benign gallbladder disease. All patients underwent single port laparoscopic cholecystectomy using a homemade glove port by one of five operators with different levels of experience in laparoscopic surgery. The learning curve for each operator was fitted using the non-linear ordinary least squares method based on a non-linear regression model. Mean operating time was 77.6 ± 28.5 min. Fourteen patients (6.0%) were converted to conventional laparoscopic cholecystectomy. Complications occurred in 15 patients (10.0%), as follows: bile duct injury (n = 2), surgical site infection (n = 8), seroma (n = 2), and wound pain (n = 3). One operator achieved a learning curve plateau at 61.4 min per procedure after 8.5 cases, an improvement of 95.3 min over his initial operation time. Younger surgeons showed significant decreases in mean operation time and achieved stable mean operation times; in particular, their operation times decreased significantly after 20 cases. Experienced laparoscopic surgeons can safely perform single port laparoscopic cholecystectomy using conventional or angled laparoscopic instruments. The present study shows that an operator can overcome the single port laparoscopic cholecystectomy learning curve in about eight cases.
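Learning curves of this type are commonly fitted as an exponential decay to a plateau by non-linear least squares; here is a sketch with scipy, using toy data loosely shaped like the reported plateau (61.4 min) and improvement (95.3 min) rather than the study's records.

```python
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(case, t_plateau, delta, rate):
    """Operating time decaying exponentially toward a plateau."""
    return t_plateau + delta * np.exp(-rate * case)

cases = np.arange(1, 31)
rng = np.random.default_rng(1)
times = learning_curve(cases, 61.4, 95.3, 0.35) + rng.normal(0.0, 8.0, cases.size)

popt, _ = curve_fit(learning_curve, cases, times, p0=[60.0, 90.0, 0.1])
t_plateau, delta, rate = popt   # plateau time, total improvement, decay rate
```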
Llopis-Castelló, David; Camacho-Torregrosa, Francisco Javier; García, Alfredo
2018-05-26
One of every four road fatalities occurs on horizontal curves of two-lane rural roads. In this regard, many studies have been undertaken to analyze the crash risk on this road element. Most of them were based on the concept of geometric design consistency, which can be defined as the degree to which drivers' expectancies and road behavior agree. However, none of these studies included a variable which represents and estimates drivers' expectancies. This research presents a new local consistency model based on the Inertial Consistency Index (ICI). This consistency parameter is defined as the difference between the inertial operating speed, which represents drivers' expectations, and the operating speed, which represents road behavior. The inertial operating speed was defined as the weighted average operating speed over the preceding road section. Different lengths, periods of time, and weighting distributions were studied to identify how the inertial operating speed should be calculated. As a result, drivers' expectancies should be estimated considering 15 s along the segment and a linear weighting distribution. This is consistent with the process by which drivers acquire expectancies, which is closely related to short-term memory. A Safety Performance Function was proposed to predict the number of crashes on a horizontal curve, and consistency thresholds were defined based on the ICI; the crash rate increased as the ICI increased. Finally, the proposed consistency model was compared with previous models. As a conclusion, the new Inertial Consistency Index allowed a more accurate estimation of the number of crashes and a better assessment of the consistency level on horizontal curves. Therefore, highway engineers have a new tool to identify where road crashes are more likely to occur during the design stage of both new two-lane rural roads and improvements of existing highways. Copyright © 2018 Elsevier Ltd. All rights reserved.
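A sketch of the inertial operating speed computation implied above: a linearly weighted average of the operating speed over the preceding 15 s, from which the ICI follows by subtraction; the sampling interval and speed profile are invented for the example.

```python
import numpy as np

def inertial_operating_speed(speeds, window_s=15, dt=1.0):
    """Linearly weighted mean of the operating speed over the preceding
    window_s seconds; the most recent sample gets the largest weight."""
    n = int(window_s / dt)
    weights = np.arange(n, 0, -1, dtype=float)   # linear decay into the past
    weights /= weights.sum()
    out = np.full(len(speeds), np.nan)
    for i in range(n, len(speeds)):
        out[i] = np.dot(weights, speeds[i - n:i][::-1])
    return out

speeds = np.array([95.0] * 20 + [70.0] * 10)        # km/h, toy 1 Hz profile
ici = inertial_operating_speed(speeds) - speeds     # ICI at each instant
```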
On the required complexity of vehicle dynamic models for use in simulation-based highway design.
Brown, Alexander; Brennan, Sean
2014-06-01
This paper presents the results of a comprehensive project whose goal is to identify roadway design practices that maximize the margin of safety between the friction supply and friction demand. This study is motivated by the concern for increased accident rates on curves with steep downgrades, geometries that contain features that interact in all three dimensions - planar curves, grade, and superelevation. This complexity makes the prediction of vehicle skidding quite difficult, particularly for simple simulation models that have historically been used for road geometry design guidance. To obtain estimates of friction margin, this study considers a range of vehicle models, including: a point-mass model used by the American Association of State Highway Transportation Officials (AASHTO) design policy, a steady-state "bicycle model" formulation that considers only per-axle forces, a transient formulation of the bicycle model commonly used in vehicle stability control systems, and finally, a full multi-body simulation (CarSim and TruckSim) regularly used in the automotive industry for high-fidelity vehicle behavior prediction. The presence of skidding--the friction demand exceeding supply--was calculated for each model considering a wide range of vehicles and road situations. The results indicate that the most complicated vehicle models are generally unnecessary for predicting skidding events. However, there are specific maneuvers, namely braking events within lane changes and curves, which consistently predict the worst-case friction margins across all models. This suggests that any vehicle model used for roadway safety analysis should include the effects of combined cornering and braking. The point-mass model typically used by highway design professionals may not be appropriate to predict vehicle behavior on high-speed curves during braking in low-friction situations. However, engineers can use the results of this study to help select the appropriate vehicle dynamic model complexity to use in the highway design process. Copyright © 2014 Elsevier Ltd. All rights reserved.
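The point-mass model mentioned above reduces to the familiar curve formula for lateral friction demand, f = V^2/(127 R) - e; a minimal sketch for checking demand against an assumed friction supply:

```python
def side_friction_demand(speed_kmh, radius_m, superelevation):
    """Point-mass lateral friction demand on a curve:
    f = V^2 / (127 R) - e, with V in km/h, R in m, e as a decimal."""
    return speed_kmh ** 2 / (127.0 * radius_m) - superelevation

f_demand = side_friction_demand(100.0, 400.0, 0.06)   # ~0.14
skids = f_demand > 0.30    # compare against an assumed friction supply
print(f_demand, skids)
```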
PERIODOGRAMS FOR MULTIBAND ASTRONOMICAL TIME SERIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
VanderPlas, Jacob T.; Ivezic, Željko
This paper introduces the multiband periodogram, a general extension of the well-known Lomb–Scargle approach for detecting periodic signals in time-domain data. In addition to advantages of the Lomb–Scargle method such as treatment of non-uniform sampling and heteroscedastic errors, the multiband periodogram significantly improves period finding for randomly sampled multiband light curves (e.g., Pan-STARRS, DES, and LSST). The light curves in each band are modeled as arbitrary truncated Fourier series, with the period and phase shared across all bands. The key aspect is the use of Tikhonov regularization which drives most of the variability into the so-called base model common to all bands, while fits for individual bands describe residuals relative to the base model and typically require lower-order Fourier series. This decrease in the effective model complexity is the main reason for improved performance. After a pedagogical development of the formalism of least-squares spectral analysis, which motivates the essential features of the multiband model, we use simulated light curves and randomly subsampled SDSS Stripe 82 data to demonstrate the superiority of this method compared to other methods from the literature and find that this method will be able to efficiently determine the correct period in the majority of LSST's bright RR Lyrae stars with as little as six months of LSST data, a vast improvement over the years of data reported to be required by previous studies. A Python implementation of this method, along with code to fully reproduce the results reported here, is available on GitHub.
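The authors' reference implementation is available on GitHub; independent of it, a multiband Lomb-Scargle fit can be run with the gatspy package. The sketch below assumes gatspy's LombScargleMultiband API (fit over times, magnitudes, errors, and band labels, with shared base terms) and uses fabricated RR-Lyrae-like data.

```python
import numpy as np
from gatspy.periodic import LombScargleMultiband   # assumed API

rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 180.0, 120))          # ~6 months of random epochs
filts = rng.choice(list("ugriz"), size=t.size)     # band of each observation
true_period = 0.61                                 # days, RR-Lyrae-like
mag = 15.0 + 0.4 * np.sin(2 * np.pi * t / true_period) \
      + 0.05 * rng.normal(size=t.size)
dmag = np.full(t.size, 0.05)

# Shared period/phase across bands; per-band terms absorb residuals.
model = LombScargleMultiband(Nterms_base=1, Nterms_band=0)
model.fit(t, mag, dmag, filts)
periods = np.linspace(0.2, 1.2, 5000)
power = model.periodogram(periods)
print(periods[np.argmax(power)])   # should recover ~0.61 d
```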
Learning to predict where human gaze is using quaternion DCT based regional saliency detection
NASA Astrophysics Data System (ADS)
Li, Ting; Xu, Yi; Zhang, Chongyang
2014-09-01
Many current visual attention approaches use semantic features to accurately capture human gaze. However, these approaches incur high computational cost and can hardly be applied in daily use. Recently, quaternion-based saliency detection models, such as PQFT (phase spectrum of Quaternion Fourier Transform) and QDCT (Quaternion Discrete Cosine Transform), have been proposed to meet the real-time requirements of human gaze tracking tasks. However, these saliency detection methods use global PQFT and QDCT to locate jump edges of the input, which can hardly detect object boundaries accurately. To address the problem, we improve the QDCT-based saliency detection model by introducing a superpixel-wise regional saliency detection mechanism. The local smoothness of the saliency value distribution is emphasized to distinguish background noise from salient regions. Our saliency confidence measure can distinguish patches belonging to the salient object from those of the background; it decides whether image patches belong to the same region. When an image patch belongs to a region consisting of other salient patches, this patch should be salient as well. Therefore, we use the saliency confidence map to obtain background and foreground weights for optimizing the saliency map obtained by QDCT. The optimization is accomplished by the least squares method and unifies local and global saliency by combining QDCT with a similarity measurement between image superpixels. We evaluate our model on four commonly used datasets (Toronto, MIT, OSIE, and ASD) using standard precision-recall (PR) curves, mean absolute error (MAE), and area under the curve (AUC) measures. In comparison with most state-of-the-art models, our approach achieves higher consistency with human perception without training. It can capture human gaze accurately even against cluttered backgrounds. Furthermore, it achieves a better compromise between speed and accuracy.
A FEM-based method to determine the complex material properties of piezoelectric disks.
Pérez, N; Carbonari, R C; Andrade, M A B; Buiochi, F; Adamowski, J C
2014-08-01
Numerical simulations allow the modeling of piezoelectric devices and ultrasonic transducers. However, the accuracy of the results is limited by precise knowledge of the elastic, dielectric, and piezoelectric properties of the piezoelectric material. To introduce energy losses, these properties can be represented by complex numbers, where the real part of the model essentially determines the resonance frequencies and the imaginary part determines the amplitude of each resonant mode. In this work, a method based on the Finite Element Method (FEM) is modified to obtain the imaginary material properties of piezoelectric disks. The material properties are determined from the electrical impedance curve of the disk, which is measured by an impedance analyzer. The method consists of obtaining the material properties that minimize the error between experimental and numerical impedance curves over a wide range of frequencies. The proposed methodology starts with a sensitivity analysis of each parameter, determining the influence of each parameter over a set of resonant modes. Sensitivity results are used to implement a preliminary algorithm that approaches the solution in order to avoid the search being trapped in a local minimum. The method is applied to determine the material properties of a Pz27 disk sample from Ferroperm. The obtained properties are used to calculate the electrical impedance curve of the disk with a Finite Element algorithm, which is compared with the experimental electrical impedance curve. Additionally, the results were validated by comparing the numerical displacement profile with the displacements measured by a laser Doppler vibrometer. The comparison between the numerical and experimental results shows excellent agreement for both the electrical impedance curve and the displacement profile over the disk surface. The agreement between numerical and experimental displacement profiles shows that, although only the electrical impedance curve is considered in the adjustment procedure, the obtained material properties allow simulating the displacement amplitude accurately. Copyright © 2014 Elsevier B.V. All rights reserved.
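The core of the adjustment procedure, minimizing the misfit between measured and simulated impedance curves, can be sketched as a least-squares problem. In the sketch below a Butterworth-Van Dyke equivalent circuit stands in for the FEM solver so the example runs; in the actual method each model evaluation would be an FEM solution.

```python
import numpy as np
from scipy.optimize import least_squares

def bvd_impedance(f, params):
    """Butterworth-Van Dyke equivalent circuit standing in for the FEM
    solver: C0 in parallel with a series R1-L1-C1 motional branch."""
    c0, r1, l1, c1 = params
    w = 2.0 * np.pi * f
    z_mot = r1 + 1j * (w * l1 - 1.0 / (w * c1))
    z_c0 = 1.0 / (1j * w * c0)
    return z_c0 * z_mot / (z_c0 + z_mot)

def residuals(log_params, f, z_meas):
    # optimize in log-space (parameters span many orders of magnitude);
    # log-magnitude residuals let resonance and antiresonance regions
    # contribute comparably to the fit
    z_model = bvd_impedance(f, np.exp(log_params))
    return np.log10(np.abs(z_model)) - np.log10(np.abs(z_meas))

f = np.linspace(50e3, 500e3, 800)             # Hz
true_params = (4e-9, 20.0, 8e-3, 60e-12)      # synthetic "measurement"
noise = 1.0 + 0.01 * np.random.default_rng(0).normal(size=f.size)
z_meas = bvd_impedance(f, true_params) * noise

fit = least_squares(residuals, x0=np.log([2e-9, 50.0, 5e-3, 100e-12]),
                    args=(f, z_meas))
print(np.exp(fit.x))
```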
Longitudinal Models of Reliability and Validity: A Latent Curve Approach.
ERIC Educational Resources Information Center
Tisak, John; Tisak, Marie S.
1996-01-01
Dynamic generalizations of reliability and validity that will incorporate longitudinal or developmental models, using latent curve analysis, are discussed. A latent curve model formulated to depict change is incorporated into the classical definitions of reliability and validity. The approach is illustrated with sociological and psychological…
The Effects of Autocorrelation on the Curve-of-Factors Growth Model
ERIC Educational Resources Information Center
Murphy, Daniel L.; Beretvas, S. Natasha; Pituch, Keenan A.
2011-01-01
This simulation study examined the performance of the curve-of-factors model (COFM) when autocorrelation and growth processes were present in the first-level factor structure. In addition to the standard curve-of factors growth model, 2 new models were examined: one COFM that included a first-order autoregressive autocorrelation parameter, and a…
The Biasing Effects of Unmodeled ARMA Time Series Processes on Latent Growth Curve Model Estimates
ERIC Educational Resources Information Center
Sivo, Stephen; Fan, Xitao; Witta, Lea
2005-01-01
The purpose of this study was to evaluate the robustness of estimated growth curve models when there is stationary autocorrelation among manifest variable errors. The results suggest that when, in practice, growth curve models are fitted to longitudinal data, alternative rival hypotheses to consider would include growth models that also specify…
Development and Assessment of a New Empirical Model for Predicting Full Creep Curves
Gray, Veronica; Whittaker, Mark
2015-01-01
This paper details the development and assessment of a new empirical creep model that belongs to the limited ranks of models reproducing full creep curves. The important features of the model are that it is fully standardised and universally applicable. By standardising, the user no longer chooses functions but rather fits one set of constants only. Testing it on 7 contrasting materials and reproducing 181 creep curves, we demonstrate its universality. Curves from the new model and the Theta Projection method are compared to one another using an assessment tool developed within this paper. PMID:28793458
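For context, the Theta Projection comparison curve has the well-known four-parameter form ε(t) = θ1(1 - e^(-θ2 t)) + θ3(e^(θ4 t) - 1), covering primary and tertiary creep; here is a fitting sketch on toy data, not the paper's 181 curves.

```python
import numpy as np
from scipy.optimize import curve_fit

def theta_projection(t, th1, th2, th3, th4):
    """Primary (decaying-rate) plus tertiary (accelerating) creep strain."""
    return th1 * (1.0 - np.exp(-th2 * t)) + th3 * (np.exp(th4 * t) - 1.0)

t_hours = np.linspace(0.0, 1000.0, 80)
rng = np.random.default_rng(3)
strain = theta_projection(t_hours, 0.010, 0.020, 0.001, 0.004) \
         + rng.normal(0.0, 2e-4, t_hours.size)

theta, _ = curve_fit(theta_projection, t_hours, strain,
                     p0=[0.01, 0.01, 0.001, 0.001], maxfev=20000)
```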
ERIC Educational Resources Information Center
Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill
Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…
Retiring the Short-Run Aggregate Supply Curve
ERIC Educational Resources Information Center
Elwood, S. Kirk
2010-01-01
The author argues that the aggregate demand/aggregate supply (AD/AS) model is significantly improved--although certainly not perfected--by trimming it of the short-run aggregate supply (SRAS) curve. Problems with the SRAS curve are shown first for the AD/AS model that casts the AD curve as identifying the equilibrium level of output associated…
Beyond the SCS curve number: A new stochastic spatial runoff approach
NASA Astrophysics Data System (ADS)
Bartlett, M. S., Jr.; Parolari, A.; McDonnell, J.; Porporato, A. M.
2015-12-01
The Soil Conservation Service curve number (SCS-CN) method is the standard approach in practice for predicting a storm event runoff response. It is popular because of its low parametric complexity and ease of use. However, the SCS-CN method does not describe the spatial variability of runoff and is restricted to certain geographic regions and land use types. Here we present a general theory for extending the SCS-CN method. Our new theory accommodates different event-based models derived from alternative rainfall-runoff mechanisms or distributions of watershed variables, which are the basis of different semi-distributed models such as VIC, PDM, and TOPMODEL. We introduce a parsimonious but flexible description where runoff is initiated by a pure threshold, i.e., saturation excess, that is complemented by fill-and-spill runoff behavior from areas of partial saturation. To facilitate event-based runoff prediction, we derive simple equations for the fraction of runoff source areas, the probability density function (PDF) describing runoff variability, and the corresponding average runoff value (a runoff curve analogous to the SCS-CN). The benefit of the theory is that it unites the SCS-CN method, VIC, PDM, and TOPMODEL as the same model type but with different assumptions for the spatial distribution of variables and the runoff mechanism. The new multiple-runoff-mechanism description for the SCS-CN enables runoff prediction in geographic regions and site runoff types previously misrepresented by the traditional SCS-CN method. In addition, we show that the VIC, PDM, and TOPMODEL runoff curves may be more suitable than the SCS-CN for different conditions. Lastly, we explore predictions of sediment and nutrient transport by applying the PDF describing runoff variability within our new framework.
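For reference, the traditional SCS-CN event runoff that this theory generalizes is Q = (P - Ia)^2 / (P - Ia + S), with S = 1000/CN - 10 (inches) and Ia = 0.2 S; a minimal sketch:

```python
def scs_cn_runoff(p_inches, curve_number, ia_ratio=0.2):
    """Classic SCS-CN event runoff depth (inches) for storm depth P."""
    s = 1000.0 / curve_number - 10.0      # potential maximum retention
    ia = ia_ratio * s                     # initial abstraction
    if p_inches <= ia:
        return 0.0
    return (p_inches - ia) ** 2 / (p_inches - ia + s)

print(scs_cn_runoff(p_inches=3.0, curve_number=75))   # ~0.96 in of runoff
```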
An hourglass model for the flare of HST-1 in M87
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wen-Po; Zhao, Guang-Yao; Chen, Yong Jun
To explain the multi-wavelength light curves (from radio to X-ray) of HST-1 in the M87 jet, we propose an hourglass model that is a modified two-zone system of Tavecchio and Ghisellini (hereafter TG08): a slow hourglass-shaped or Laval-nozzle-shaped layer connected by two revolving exponential surfaces surrounding a fast spine through which plasma blobs flow. Based on the conservation of magnetic flux, the magnetic field changes along the axis of the hourglass. We adopt the result of TG08: the high-energy emission from GeV to TeV can be produced through inverse Compton by the two-zone system, and the photons from radio to X-ray are mainly radiated by the fast inner zone system. Here, we only discuss the light curves of the fast inner blob from radio to X-ray. When a compressible blob travels down the axis of the first bulb in the hourglass, because of magnetic flux conservation, its cross section experiences an adiabatic compression process, which results in particle acceleration and the brightening of HST-1. When the blob moves into the second bulb of the hourglass, because of magnetic flux conservation, the dimming of the knot occurs along with an adiabatic expansion of its cross section. A similar broken exponential function could fit the TeV peaks in M87, which may imply a correlation between the TeV flares of M87 and the light curves from radio to X-ray in HST-1. The Very Large Array (VLA) 22 GHz radio light curve of HST-1 verifies our prediction based on the model fit to the main peak of the VLA 15 GHz radio one.
Intrinsic Bayesian Active Contours for Extraction of Object Boundaries in Images
Srivastava, Anuj
2010-01-01
We present a framework for incorporating prior information about high-probability shapes in the process of contour extraction and object recognition in images. Here one studies shapes as elements of an infinite-dimensional, non-linear quotient space, and statistics of shapes are defined and computed intrinsically using differential geometry of this shape space. Prior models on shapes are constructed using probability distributions on tangent bundles of shape spaces. Similar to the past work on active contours, where curves are driven by vector fields based on image gradients and roughness penalties, we incorporate the prior shape knowledge in the form of vector fields on curves. Through experimental results, we demonstrate the use of prior shape models in the estimation of object boundaries, and their success in handling partial obscuration and missing data. Furthermore, we describe the use of this framework in shape-based object recognition or classification. PMID:21076692
Koenig, Agnès; Bügler, Jürgen; Kirsch, Dieter; Köhler, Fritz; Weyermann, Céline
2015-01-01
An ink dating method based on solvent analysis was recently developed using thermal desorption followed by gas chromatography/mass spectrometry (GC/MS) and is currently implemented in several forensic laboratories. The main aims of this work were to implement this method in a new laboratory and to evaluate whether results were comparable at three levels: (i) validation criteria, (ii) aging curves, and (iii) results interpretation. While the results were indeed comparable in terms of validation, the method proved to be very sensitive to instrument maintenance. Moreover, the aging curves were influenced by ink composition, as well as storage conditions (particularly when the samples were not stored in "normal" room conditions). Finally, as current interpretation models showed limitations, an alternative model based on slope calculation was proposed. However, in the future, a probabilistic approach may represent a better solution for dealing with ink sample inhomogeneity. © 2014 American Academy of Forensic Science.
Time-independent Anisotropic Plastic Behavior by Mechanical Subelement Models
NASA Technical Reports Server (NTRS)
Pian, T. H. H.
1983-01-01
The paper describes a procedure for modelling the anisotropic elastic-plastic behavior of metals in the plane stress state by the mechanical sub-layer model. In this model the stress-strain curves along the longitudinal and transverse directions are represented by short smooth segments which are considered piecewise linear for simplicity. The model is incorporated in a finite element analysis program based on the assumed-stress hybrid element and viscoplasticity theory.
A new method to predict anatomical outcome after idiopathic macular hole surgery.
Liu, Peipei; Sun, Yaoyao; Dong, Chongya; Song, Dan; Jiang, Yanrong; Liang, Jianhong; Yin, Hong; Li, Xiaoxin; Zhao, Mingwei
2016-04-01
To investigate whether a new macular hole closure index (MHCI) could predict anatomic outcome of macular hole surgery. A vitrectomy with internal limiting membrane peeling, air-fluid exchange, and gas tamponade were performed on all patients. The postoperative anatomic status of the macular hole was defined by spectral-domain OCT. MHCI was calculated as (M+N)/BASE based on the preoperative OCT status. M and N were the curve lengths of the detached photoreceptor arms, and BASE was the length of the retinal pigment epithelial layer (RPE layer) detaching from the photoreceptors. Postoperative anatomical outcomes were divided into three grades: A (bridge-like closure), B (good closure), and C (poor closure or no closure). Correlation analysis was performed between anatomical outcomes and MHCI. Receiver operating characteristic (ROC) curves were derived for MHCI, indicating good model discrimination. ROC curves were also assessed by the area under the curve, and cut-offs were calculated. Other predictive parameters reported previously, which included the MH minimum, the MH height, the macular hole index (MHI), the diameter hole index (DHI), and the tractional hole index (THI) had been compared as well. MHCI correlated significantly with postoperative anatomical outcomes (r = 0.543, p = 0.000), but other predictive parameters did not. The areas under the curves indicated that MHCI could be used as an effective predictor of anatomical outcome. Cut-off values of 0.7 and 1.0 were obtained for MHCI from ROC curve analysis. MHCI demonstrated a better predictive effect than other parameters, both in the correlation analysis and ROC analysis. MHCI could be an easily measured and accurate predictive index for postoperative anatomical outcomes.
Eshraghi, Iman; Jalali, Seyed K.; Pugno, Nicola Maria
2016-01-01
Imperfection sensitivity of large amplitude vibration of curved single-walled carbon nanotubes (SWCNTs) is considered in this study. The SWCNT is modeled as a Timoshenko nano-beam and its curved shape is included as an initial geometric imperfection term in the displacement field. Geometric nonlinearities of von Kármán type and nonlocal elasticity theory of Eringen are employed to derive governing equations of motion. Spatial discretization of governing equations and associated boundary conditions is performed using differential quadrature (DQ) method and the corresponding nonlinear eigenvalue problem is iteratively solved. Effects of amplitude and location of the geometric imperfection, and the nonlocal small-scale parameter on the nonlinear frequency for various boundary conditions are investigated. The results show that the geometric imperfection and non-locality play a significant role in the nonlinear vibration characteristics of curved SWCNTs. PMID:28773911
A numerical investigation of the effect of surface wettability on the boiling curve.
Hsu, Hua-Yi; Lin, Ming-Chieh; Popovic, Bridget; Lin, Chii-Ruey; Patankar, Neelesh A
2017-01-01
Surface wettability is recognized as playing an important role in pool boiling and the corresponding heat transfer curve. In this work, a systematic study of pool boiling heat transfer on smooth surfaces of varying wettability (contact angle range of 5° - 180°) has been conducted and reported. Based on numerical simulations, boiling curves are calculated and boiling dynamics in each regime are studied using a volume-of-fluid method with contact angle model. The calculated trends in critical heat flux and Leidenfrost point as functions of surface wettability are obtained and compared with prior experimental and theoretical predictions, giving good agreement. For the first time, the effect of contact angle on the complete boiling curve is shown. It is demonstrated that the simulation methodology can be used for studying pool boiling and related dynamics and providing more physical insights.
Nonlinear Growth Models in M"plus" and SAS
ERIC Educational Resources Information Center
Grimm, Kevin J.; Ram, Nilam
2009-01-01
Nonlinear growth curves or growth curves that follow a specified nonlinear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this article we describe how a variety of sigmoid curves can be fit using the M"plus" structural modeling program and the nonlinear…
ASYMPTOTICS FOR CHANGE-POINT MODELS UNDER VARYING DEGREES OF MIS-SPECIFICATION
SONG, RUI; BANERJEE, MOULINATH; KOSOROK, MICHAEL R.
2015-01-01
Change-point models are widely used by statisticians to model drastic changes in the pattern of observed data. Least squares/maximum likelihood based estimation of change-points leads to curious asymptotic phenomena. When the change-point model is correctly specified, such estimates generally converge at a fast rate (n) and are asymptotically described by minimizers of a jump process. Under complete mis-specification by a smooth curve, i.e. when a change-point model is fitted to data described by a smooth curve, the rate of convergence slows down to n^(1/3) and the limit distribution changes to that of the minimizer of a continuous Gaussian process. In this paper we provide a bridge between these two extreme scenarios by studying the limit behavior of change-point estimates under varying degrees of model mis-specification by smooth curves, which can be viewed as local alternatives. We find that the limiting regime depends on how quickly the alternatives approach a change-point model. We unravel a family of 'intermediate' limits that can transition, at least qualitatively, to the limits in the two extreme scenarios. The theoretical results are illustrated via a set of carefully designed simulations. We also demonstrate how inference for the change-point parameter can be performed in the absence of knowledge of the underlying scenario by resorting to subsampling techniques that involve estimation of the convergence rate. PMID:26681814
Dark matter and MOND dynamical models of the massive spiral galaxy NGC 2841
NASA Astrophysics Data System (ADS)
Samurović, S.; Vudragović, A.; Jovanović, M.
2015-08-01
We study dynamical models of the massive spiral galaxy NGC 2841 using both the Newtonian models with Navarro-Frenk-White (NFW) and isothermal dark haloes, as well as various MOND (MOdified Newtonian Dynamics) models. We use the observations coming from several publicly available data bases: we use radio data, near-infrared photometry as well as spectroscopic observations. In our models, we find that both tested Newtonian dark matter approaches can successfully fit the observed rotational curve of NGC 2841. The three tested MOND models (standard, simple and, for the first time applied to another spiral galaxy than the Milky Way, Bekenstein's toy model) provide fits of the observed rotational curve with various degrees of success: the best result was obtained with the standard MOND model. For both approaches, Newtonian and MOND, the values of the mass-to-light ratios of the bulge are consistent with the predictions from the stellar population synthesis (SPS) based on the Salpeter initial mass function (IMF). Also, for Newtonian and simple and standard MOND models, the estimated stellar mass-to-light ratios of the disc agree with the predictions from the SPS models based on the Kroupa IMF, whereas the toy MOND model provides too low a value of the stellar mass-to-light ratio, incompatible with the predictions of the tested SPS models. In all our MOND models, we vary the distance to NGC 2841, and our best-fitting standard and toy models use the values higher than the Cepheid-based distance to the galaxy NGC 2841, and the best-fitting simple MOND model is based on the lower value of the distance. The best-fitting NFW model is inconsistent with the predictions of the Λ cold dark matter cosmology, because the inferred concentration index is too high for the established virial mass.
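As a pointer to how such fits work, under the 'simple' MOND interpolating function μ(x) = x/(1+x) the total acceleration has a closed form, g = (gN + sqrt(gN^2 + 4 gN a0))/2, so a rotation curve follows directly from the Newtonian baryonic acceleration; a toy point-mass sketch (not the paper's mass model):

```python
import numpy as np

A0 = 1.2e-10   # m/s^2, canonical MOND acceleration scale

def mond_simple_velocity(radius_m, g_newton):
    """Circular speed under the 'simple' interpolating function:
    g = (gN + sqrt(gN^2 + 4*gN*a0)) / 2, then v = sqrt(g * r)."""
    g = 0.5 * (g_newton + np.sqrt(g_newton ** 2 + 4.0 * g_newton * A0))
    return np.sqrt(g * radius_m)

# Toy baryonic model: a 1e11 solar-mass point mass seen at 20 kpc.
G, M_SUN, KPC = 6.674e-11, 1.989e30, 3.086e19
r = 20.0 * KPC
g_newton = G * 1e11 * M_SUN / r ** 2
print(mond_simple_velocity(r, g_newton) / 1e3, "km/s")   # ~230 km/s
```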
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jenkin, Thomas J; Larson, Andrew; Ruth, Mark F
In light of the changing electricity resource mixes across the United States, an important question in electricity modeling is how additions and retirements of generation, including additions in variable renewable energy (VRE) generation, could impact markets by changing hourly wholesale energy prices. Instead of using resource-intensive production cost models (PCMs) or building and using simple generator supply curves, this analysis uses a 'top-down' approach based on regression analysis of hourly historical energy and load data to estimate the impact of supply changes on wholesale electricity prices, provided the changes are not so substantial that they fundamentally alter the market and dispatch-order driven behavior of non-retiring units. The rolling supply curve (RSC) method used in this report estimates the shape of the supply curve that fits historical hourly price and load data for given time intervals, such as two weeks, and then repeats this on a rolling basis through the year. These supply curves can then be modified on an hourly basis to reflect the impact of generation retirements or additions, including VRE, and then reapplied to the same load data to estimate the change in hourly electricity price. The choice of duration over which these RSCs are estimated has a significant impact on goodness of fit. For example, in PJM in 2015, moving from fitting one curve per year to 26 rolling two-week supply curves improves the standard error of the regression from $16/MWh to $6/MWh and the R-squared of the estimate from 0.48 to 0.76. We illustrate the potential use and value of the RSC method by estimating wholesale price effects under various generator retirement and addition scenarios, and we discuss potential limits of the technique, some of which are inherent. The ability to do this type of analysis is important to a wide range of market participants and other stakeholders, and it may have a role in complementing use of or providing calibrating insights to PCMs.
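A minimal sketch of the RSC idea: fit one polynomial price-versus-load curve per rolling two-week block of hourly data, then re-evaluate each block's curve at a shifted (net) load to estimate price impacts. The window length, polynomial degree, and pandas-based grouping are assumptions of this sketch, not the report's exact specification.

```python
import numpy as np
import pandas as pd

def rolling_supply_curves(prices, loads, window="14D", degree=3):
    """Fit one polynomial price-vs-load curve per rolling block of hourly
    data (both inputs are pandas Series with a DatetimeIndex)."""
    df = pd.DataFrame({"price": prices, "load": loads})
    coefs = {}
    for start, block in df.groupby(pd.Grouper(freq=window)):
        if len(block) < 24:               # skip near-empty blocks
            continue
        coefs[start] = np.polyfit(block["load"], block["price"], degree)
    return coefs

# Price effect of added VRE: re-evaluate each block's curve at net load.
# new_price = np.polyval(coefs[block_start], hourly_load - hourly_vre)
```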
A multiple biomarker risk score for guiding clinical decisions using a decision curve approach.
Hughes, Maria F; Saarela, Olli; Blankenberg, Stefan; Zeller, Tanja; Havulinna, Aki S; Kuulasmaa, Kari; Yarnell, John; Schnabel, Renate B; Tiret, Laurence; Salomaa, Veikko; Evans, Alun; Kee, Frank
2012-08-01
We assessed whether a cardiovascular risk model based on classic risk factors (e.g. cholesterol, blood pressure) could refine disease prediction if it included novel biomarkers (C-reactive protein, N-terminal pro-B-type natriuretic peptide, troponin I) using a decision curve approach, which can incorporate clinical consequences. We evaluated whether a model including biomarkers and classic risk factors could improve prediction of 10-year risk of cardiovascular disease (CVD; chronic heart disease and ischaemic stroke) against a classic risk factor model using a decision curve approach in two prospective MORGAM cohorts: 7739 men and women with 457 CVD cases from the FINRISK97 cohort, and 2524 men with 259 CVD cases from PRIME Belfast. The biomarker model improved disease prediction in FINRISK across the high-risk group (20-40%) but not in the intermediate-risk group; at the 23% risk threshold, the net benefit was 0.0033 (95% CI 0.0013-0.0052). However, in PRIME Belfast the net benefit of decisions guided by the decision curve was improved across intermediate risk thresholds (10-20%). At p(t) = 10% in PRIME, the net benefit was 0.0059 (95% CI 0.0007-0.0112), with a net increase of 6 true positive cases per 1000 people screened and a net decrease of 53 false positive cases per 1000, potentially leading to 5% fewer treatments in patients not destined for an event. The biomarker model improves 10-year CVD prediction at intermediate and high-risk thresholds and, in particular, could be clinically useful in advising middle-aged European males of their CVD risk.
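The decision-curve quantity reported above is the net benefit at a risk threshold p_t, NB = TP/n - (FP/n) * p_t/(1 - p_t); a minimal sketch:

```python
import numpy as np

def net_benefit(y_true, risk, threshold):
    """Net benefit of treating everyone whose predicted risk >= threshold."""
    y_true = np.asarray(y_true)
    treat = np.asarray(risk) >= threshold
    n = y_true.size
    tp = np.sum(treat & (y_true == 1)) / n
    fp = np.sum(treat & (y_true == 0)) / n
    return tp - fp * threshold / (1.0 - threshold)

# The model with higher net benefit at a clinically relevant threshold
# (e.g. p_t = 0.10) is preferred at that threshold:
# delta = net_benefit(y, risk_biomarker, 0.10) - net_benefit(y, risk_classic, 0.10)
```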
Advancing reservoir operation description in physically based hydrological models
NASA Astrophysics Data System (ADS)
Anghileri, Daniela; Giudici, Federico; Castelletti, Andrea; Burlando, Paolo
2016-04-01
Recent decades have seen significant advances in our capacity to characterize and reproduce hydrological processes within physically based models. Yet, when the human component is considered (e.g. reservoirs, water distribution systems), the associated decisions are generally modeled with very simplistic rules, which might underperform in reproducing the actual operators' behaviour on a daily or sub-daily basis. For example, reservoir operations are usually described by a target-level rule curve, which represents the level that the reservoir should track during normal operating conditions. The associated release decision is determined by the current state of the reservoir relative to the rule curve. This modeling approach can reasonably reproduce the seasonal water volume shift due to reservoir operation. Still, it cannot capture more complex decision-making processes in response, e.g., to fluctuations in energy prices and demands, the temporal unavailability of power plants, or the varying amount of snow accumulated in the basin. In this work, we link a physically explicit hydrological model with detailed hydropower behavioural models describing the decision-making process of the dam operator. In particular, we consider two categories of behavioural models: explicit or rule-based behavioural models, where reservoir operating rules are empirically inferred from observational data, and implicit or optimization-based behavioural models, where, following a normative economic approach, the decision maker is represented as a rational agent maximising a utility function. We compare these two alternative modelling approaches on the real-world water system of the Lake Como catchment in the Italian Alps. The water system is characterized by the presence of 18 artificial hydropower reservoirs generating almost 13% of the Italian hydropower production. Results show to which extent the hydrological regime in the catchment is affected by the different behavioural models and reservoir operating strategies.
Testing gamma-ray burst models with the afterglow of GRB 090102
NASA Astrophysics Data System (ADS)
Gendre, B.; Klotz, A.; Palazzi, E.; Krühler, T.; Covino, S.; Afonso, P.; Antonelli, L. A.; Atteia, J. L.; D'Avanzo, P.; Boër, M.; Greiner, J.; Klose, S.
2010-07-01
We present the observations of the afterglow of gamma-ray burst GRB 090102. Optical data taken by the Télescope à Action Rapide pour les Objets Transitoires (TAROT), Rapid Eye Mount (REM), and Gamma-Ray burst Optical/Near-Infrared Detector (GROND), together with publicly available data from the Palomar, Instituto de Astrofísica de Canarias (IAC), and Nordic Optical Telescope (NOT) telescopes, and X-ray data taken by the XRT instrument on board the Swift spacecraft were used. This event features an unusual light curve. In X-rays, it presents a constant decrease with no hint of a temporal break from 0.005 to 6 d after the burst. In the optical, the light curve presents a flattening after 1 ks. Before this break, the optical light curve is steeper than that of the X-ray. In the optical, no further break is observed up to 10 d after the burst. We failed to explain these observations in light of the standard fireball model. Several other models, including the cannonball model, were investigated. The explanation of the broad-band data by any model requires some fine-tuning when taking into account both optical and X-ray bands. Based on observations obtained with TAROT, REM, and GROND.
NASA Astrophysics Data System (ADS)
Müller, M. F.; Thompson, S. E.
2015-09-01
The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and a statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by strong wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are strongly favored over statistical models.
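An empirical FDC and the Nash-Sutcliffe score used above can be computed in a few lines; this sketch uses a Weibull plotting position, which is one common convention rather than the paper's stated choice.

```python
import numpy as np

def flow_duration_curve(flows):
    """Empirical FDC: flows sorted in descending order against exceedance
    probability (Weibull plotting position)."""
    q = np.sort(np.asarray(flows, dtype=float))[::-1]
    exceedance = np.arange(1, q.size + 1) / (q.size + 1.0)
    return exceedance, q

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency between observed and simulated quantiles."""
    obs, sim = (np.asarray(v, dtype=float) for v in (observed, simulated))
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```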
Li, Yubo; Wang, Lei; Ju, Liang; Deng, Haoyue; Zhang, Zhenzhu; Hou, Zhiguo; Xie, Jiabin; Wang, Yuming; Zhang, Yanjun
2016-04-01
Current studies that evaluate toxicity based on metabolomics have primarily focused on the screening of biomarkers while largely neglecting their further verification and application. For this reason, we used drug-induced hepatotoxicity as an example to establish a systematic strategy for screening specific biomarkers and applied these biomarkers to evaluate whether drugs have potential hepatotoxicity. Carbon tetrachloride (5 ml/kg), acetaminophen (1500 mg/kg), and atorvastatin (5 mg/kg) were used to establish rat hepatotoxicity models. Fifteen common biomarkers were screened from the metabolomics data by multivariate statistical analysis and integration analysis. The receiver operating characteristic curve was used to evaluate the sensitivity and specificity of the biomarkers, yielding 10 specific biomarker candidates with an area under the curve greater than 0.7. A support vector machine model was then established from the specific biomarker candidate data for hepatotoxic and nonhepatotoxic drugs; the accuracy of the model was 94.90% (92.86% sensitivity and 92.59% specificity), demonstrating that those ten biomarkers are specific. Six further drugs were used to predict hepatotoxicity with the support vector machine model; the predictions were consistent with the biochemical and histopathological results, demonstrating that the model is reliable. Thus, this support vector machine model can be applied to discriminate between hepatotoxic and nonhepatotoxic drugs. This approach not only presents a new strategy for screening specific biomarkers with greater diagnostic significance but also provides a new evaluation pattern for hepatotoxicity, and it should be a highly useful tool in toxicity estimation and disease diagnosis. © The Author 2016. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved.
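A hedged sketch of the two-step pipeline described above: per-biomarker screening by ROC AUC (threshold 0.7, as in the abstract), then an SVM classifier on the retained markers. The data, the 1.5-unit effect size, and the RBF kernel choice are all assumptions for illustration, not the study's actual dataset or settings.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical data: rows are rat samples, columns are candidate metabolite
# biomarkers; labels are 1 = hepatotoxic model, 0 = control. Illustrative only.
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 15))
y = rng.integers(0, 2, size=60)
X[y == 1, :5] += 1.5          # make the first five markers informative

# Step 1: keep candidates whose individual ROC AUC exceeds 0.7.
aucs = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
selected = np.where(aucs > 0.7)[0]
print("selected biomarker columns:", selected)

# Step 2: classify hepatotoxic vs. nonhepatotoxic samples with an SVM
# trained on the selected markers only.
clf = SVC(kernel="rbf")
scores = cross_val_score(clf, X[:, selected], y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2%}")
```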
Deriving injury risk curves using survival analysis from biomechanical experiments.
Yoganandan, Narayan; Banerjee, Anjishnu; Hsu, Fang-Chi; Bass, Cameron R; Voo, Liming; Pintar, Frank A; Gayzik, F Scott
2016-10-03
Injury risk curves derived from biomechanical experimental data are used in automotive studies to improve crashworthiness and advance occupant safety. Metrics such as acceleration and deflection, coupled with outcomes such as fractures and anatomical disruptions from impact tests, have traditionally been used in simple binary regression models. As an improvement, the International Organization for Standardization suggested a different approach based on survival analysis. While probability curves for side-impact-induced thorax and abdominal injuries and frontal-impact-induced foot-ankle-leg injuries have been developed using this approach, deficiencies are apparent. The objective of this study is to present an improved, robust and generalizable methodology that attempts to resolve these issues. It includes: (a) statistical identification of the most appropriate independent variable (metric) from a pool of candidate metrics, measured and/or derived during the experimentation and analysis processes, based on the highest area under the receiver operating characteristic curve; (b) quantitative determination of the most appropriate probability distribution based on the lowest Akaike information criterion; (c) supplementing the qualitative/visual method of comparing the selected distribution with a non-parametric distribution with objective measures; (d) identification of overly influential observations using different methods; and (e) estimation of confidence intervals using techniques more appropriate to the underlying survival statistical model. These clear and quantified details can be easily implemented with commercial or open-source packages. They can be used in retrospective analysis and prospective design of experiments, and in applications to different loading scenarios such as underbody blast events. The feasibility of the methodology is demonstrated using post-mortem human subject experiments and 24 metrics associated with thoracic/abdominal injuries in side impacts. Published by Elsevier Ltd.
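The distribution-selection step (b) can be illustrated with a minimal sketch: fit several candidate parametric distributions to injury-producing metric values by maximum likelihood and rank them by AIC. The data are synthetic, censoring is ignored for brevity (a proper survival analysis, as described above, would model censored specimens explicitly), and the candidate set is an assumption.

```python
import numpy as np
from scipy import stats

# Hypothetical injury metric values at which failure was observed
# (e.g., chest deflection in mm). Illustrative only.
rng = np.random.default_rng(3)
metric = stats.weibull_min.rvs(c=2.5, scale=40.0, size=40, random_state=rng)

candidates = {
    "weibull": stats.weibull_min,
    "lognormal": stats.lognorm,
    "loglogistic": stats.fisk,
}
for name, dist in candidates.items():
    params = dist.fit(metric, floc=0)          # maximum-likelihood fit
    k = len(params) - 1                        # loc was fixed at 0
    loglik = np.sum(dist.logpdf(metric, *params))
    aic = 2 * k - 2 * loglik                   # lower AIC = preferred
    print(f"{name:12s} AIC = {aic:.1f}")

# Risk curve from the chosen fit: P(injury) at a metric level is the CDF,
# so the metric at 50% risk is the 0.5 quantile.
c, loc, scale = stats.weibull_min.fit(metric, floc=0)
print("50% injury risk at metric =", stats.weibull_min.ppf(0.5, c, loc, scale))
```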
Improved Evolutionary Programming with Various Crossover Techniques for Optimal Power Flow Problem
NASA Astrophysics Data System (ADS)
Tangpatiphan, Kritsana; Yokoyama, Akihiko
This paper presents an Improved Evolutionary Programming (IEP) method for solving the Optimal Power Flow (OPF) problem, a non-linear, non-smooth, and multimodal optimization problem in power system operation. The total generator fuel cost is taken as the objective function to be minimized. The proposed method is an Evolutionary Programming (EP)-based algorithm that makes use of various crossover techniques normally applied in Real-Coded Genetic Algorithms (RCGA). The effectiveness of the proposed approach is investigated on the IEEE 30-bus system with three different types of fuel cost function: the quadratic cost curve, the piecewise quadratic cost curve, and the quadratic cost curve superimposed with a sine component. These three cost curves represent, respectively, a simplified generator fuel cost model and more accurate models of a combined-cycle generating unit and of a thermal unit with the valve-point loading effect. The OPF solutions obtained by the proposed method and by Pure Evolutionary Programming (PEP) are compared. The simulation results indicate that IEP requires less computing time than PEP and finds better solutions in some cases. Moreover, the influence of the important IEP parameters on the OPF solution is described in detail.
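To make the cost-curve distinction concrete, the sketch below minimizes a single-generator fuel cost with the valve-point sine term, C(P) = a + bP + cP² + |e·sin(f·(Pmin − P))|, using a bare-bones EP loop (Gaussian mutation plus truncation selection). It is a toy with invented coefficients and no network constraints, not the paper's IEP or the IEEE 30-bus OPF; the authors' IEP would additionally recombine parents with RCGA-style crossover.

```python
import numpy as np

# Illustrative valve-point cost coefficients and generator limits.
a, b, c, e, f = 100.0, 2.0, 0.01, 50.0, 0.063
P_MIN, P_MAX = 50.0, 200.0

def cost(P):
    return a + b * P + c * P**2 + abs(e * np.sin(f * (P_MIN - P)))

rng = np.random.default_rng(4)
pop = rng.uniform(P_MIN, P_MAX, size=20)   # initial parent population

for gen in range(200):
    # Classic EP step: one Gaussian-mutated offspring per parent.
    offspring = np.clip(pop + rng.normal(0.0, 5.0, pop.size), P_MIN, P_MAX)
    combined = np.concatenate([pop, offspring])
    # Keep the cheapest half (truncation instead of EP's stochastic tournament).
    pop = combined[np.argsort([cost(P) for P in combined])][:20]

print(f"best dispatch P = {pop[0]:.2f} MW, cost = {cost(pop[0]):.2f}")
```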
Incorporating Experience Curves in Appliance Standards Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garbesi, Karina; Chan, Peter; Greenblatt, Jeffery
2011-10-31
The technical analyses in support of U.S. energy conservation standards for residential appliances and commercial equipment have typically assumed that manufacturing costs and retail prices remain constant during the projected 30-year analysis period. There is, however, considerable evidence that this assumption does not reflect real market prices. Costs and prices generally fall in relation to cumulative production, a phenomenon known as the experience effect, which is modeled by a fairly robust empirical experience curve. Using price data from the Bureau of Labor Statistics and shipment data obtained as part of the standards analysis process, we present U.S. experience curves for room air conditioners, clothes dryers, central air conditioners, furnaces, and refrigerators and freezers. These allow us to develop more representative appliance price projections than the constant-price assumption. These experience curves were incorporated into recent energy conservation standards for these products. The impact on the national modeling can be significant, often increasing the net present value of potential standard levels in the analysis. In some cases a previously cost-negative potential standard level shows a benefit once experience is incorporated. These results imply that past energy conservation standards analyses may have undervalued the economic benefits of potential standard levels.
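The empirical experience curve takes the form P(Q) = P₀·Q^(−b), where Q is cumulative production; the associated learning rate, 1 − 2^(−b), is the fractional price drop per doubling of cumulative production. A minimal fitting sketch follows, using invented price and shipment numbers rather than the BLS data discussed above.

```python
import numpy as np

# Hypothetical price vs. cumulative shipments data (illustrative only).
cum_shipments = np.array([1e6, 2e6, 5e6, 1e7, 3e7, 8e7])   # units
price = np.array([520., 470., 410., 372., 318., 272.])     # real $/unit

# Experience curve: price = P0 * Q**(-b), a straight line in log-log space.
slope, log_p0 = np.polyfit(np.log(cum_shipments), np.log(price), 1)
b = -slope
learning_rate = 1.0 - 2.0 ** (-b)   # fractional price drop per doubling
print(f"experience exponent b = {b:.3f}, learning rate = {learning_rate:.1%}")

# Projected price after cumulative production doubles twice more:
print(f"projected price: {price[-1] * (1 - learning_rate) ** 2:.0f} $/unit")
```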
Stability of the body-centred-cubic phase of iron in the Earth's inner core.
Belonoshko, Anatoly B; Ahuja, Rajeev; Johansson, Börje
2003-08-28
Iron is thought to be the main constituent of the Earth's core, and considerable efforts have therefore been made to understand its properties at high pressure and temperature. While these efforts have expanded our knowledge of the iron phase diagram, there remain some significant inconsistencies, the most notable being the difference between the 'low' and 'high' melting curves. Here we report the results of molecular dynamics simulations of iron based on embedded atom models fitted to the results of two implementations of density functional theory. We tested two model approximations and found that both point to the stability of the body-centred-cubic (b.c.c.) iron phase at high temperature and pressure. Our calculated melting curve is in agreement with the 'high' melting curve, but our calculated phase boundary between the hexagonal close packed (h.c.p.) and b.c.c. iron phases is in good agreement with the 'low' melting curve. We suggest that the h.c.p.-b.c.c. transition was previously misinterpreted as a melting transition, similar to the case of xenon, and that the b.c.c. phase of iron is the stable phase in the Earth's inner core.
Analysis of diffusion in curved surfaces and its application to tubular membranes
Klaus, Colin James Stockdale; Raghunathan, Krishnan; DiBenedetto, Emmanuele; Kenworthy, Anne K.
2016-01-01
Diffusion of particles in curved surfaces is inherently complex compared with diffusion in a flat membrane, owing to the nonplanarity of the surface. The consequence of such nonplanar geometry on diffusion is poorly understood but is highly relevant in the case of cell membranes, which often adopt complex geometries. To address this question, we developed a new finite element approach to model diffusion on curved membrane surfaces based on solutions to Fick’s law of diffusion and used this to study the effects of geometry on the entry of surface-bound particles into tubules by diffusion. We show that variations in tubule radius and length can distinctly alter diffusion gradients in tubules over biologically relevant timescales. In addition, we show that tubular structures tend to retain concentration gradients for a longer time compared with a comparable flat surface. These findings indicate that sorting of particles along the surfaces of tubules can arise simply as a geometric consequence of the curvature without any specific contribution from the membrane environment. Our studies provide a framework for modeling diffusion in curved surfaces and suggest that biological regulation can emerge purely from membrane geometry.
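For an axisymmetric concentration on a cylindrical tubule surface, Fick's second law reduces to one-dimensional diffusion along the tubule axis, which gives a cheap way to see the gradients described above. The explicit finite-difference sketch below (not the authors' finite-element scheme) tracks a surface-bound species entering a tubule from a reservoir at one end; all parameter values are illustrative.

```python
import numpy as np

D = 0.1    # diffusion coefficient, um^2/s (illustrative)
L = 5.0    # tubule length, um
nx = 101
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D   # within the explicit stability limit dt <= dx^2 / (2D)

c = np.zeros(nx)
c[0] = 1.0             # reservoir holds the opening at fixed concentration

for _ in range(int(10.0 / dt)):              # integrate to t = 10 s
    lap = (c[:-2] - 2 * c[1:-1] + c[2:]) / dx**2
    c[1:-1] += dt * D * lap                  # dc/dt = D * d2c/dx2
    c[0] = 1.0                               # Dirichlet at the opening
    c[-1] = c[-2]                            # no-flux at the closed tip

print("concentration at the tubule midpoint:", round(c[nx // 2], 3))
```

Rerunning with a longer L shows the gradient persisting longer, consistent with the retention effect the abstract reports for tubular geometries.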
Conservative management of idiopathic scoliosis--guidelines based on SOSORT 2006 Consensus.
Kotwicki, Tomasz; Durmała, Jacek; Czaprowski, Dariusz; Głowacki, Maciej; Kołban, Maciej; Snela, Sławomir; Sliwiński, Zbigniew; Kowalski, Ireneusz M
2009-01-01
Idiopathic scoliosis, defined as a lateral curvature of the spine of above 10 degrees (Cobb angle), is seen in 2-3% of the growing-age population, while curves above 20 degrees, which require conservative treatment, are found in 0.3-0.5%. In our observation, both under-treatment of progressive curves and over-treatment of stable cases are common during conservative management of scoliosis. A model of therapeutic management is presented based on the experience of Polish clinicians specialising in the treatment of scoliosis as well as the work of a panel of experts of SOSORT (Society on Scoliosis Orthopaedic and Rehabilitation Treatment). The model comprises the indications for conservative treatment according to age, curve type and size, and Risser grading. The aetiology, classifications, usefulness of the Lonstein and Carlson factor of progression and other methods of estimating the probability of scoliosis progression, as well as the psychological aspects of conservative management, are presented. Based on knowledge of the natural history of idiopathic scoliosis, the factors of progression and the SOSORT experts' opinion, guidelines are proposed for clinicians treating children and adolescents with idiopathic scoliosis, including the timing and course of brace treatment and the types of exercises. Uniform practical guidelines developed by experts may represent an essential step towards establishing standards of conservative scoliosis care in Poland.
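The Lonstein and Carlson factor of progression mentioned above is, in its commonly quoted form, (Cobb angle − 3 × Risser grade) / chronological age; the sketch below computes it for a hypothetical patient. Treat both the formula and the example numbers as illustrative, and take progression-risk thresholds from the original nomograms rather than from this snippet.

```python
def progression_factor(cobb_deg, risser_grade, age_years):
    """Lonstein-Carlson progression factor, commonly quoted form:
    (Cobb angle - 3 * Risser grade) / chronological age.
    Higher values indicate a greater probability of curve progression."""
    return (cobb_deg - 3 * risser_grade) / age_years

# Hypothetical patient: 25 degree curve, Risser 1, age 12 (illustrative only).
print(f"progression factor: {progression_factor(25, 1, 12):.2f}")
```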
Yoon, Yong-Jin; Steele, Charles R; Puria, Sunil
2011-01-05
The high sensitivity and wide bandwidth of mammalian hearing are thought to derive from an active process involving the somatic and hair-bundle motility of the thousands of outer hair cells uniquely found in mammalian cochleae. To better understand this, a biophysical three-dimensional cochlear fluid model was developed for gerbil, chinchilla, cat, and human, featuring an active "push-pull" cochlear amplifier mechanism based on the cytoarchitecture of the organ of Corti and using the time-averaged Lagrangian method. Cochlear responses are simulated and compared with in vivo physiological measurements for the basilar membrane (BM) velocity, V(BM), frequency tuning of the BM vibration, and Q₁₀ values representing the sharpness of the cochlear tuning curves. The V(BM) simulation results for gerbil and chinchilla are consistent with in vivo cochlear measurements. Simulated mechanical tuning curves based on maintaining a constant V(BM) value agree with neural-tuning threshold measurements better than those based on a constant displacement value, which implies that the inner hair cells are more sensitive to V(BM) than to BM displacement. The Q₁₀ values of the V(BM) tuning curve agree well with those of cochlear neurons across species, and appear to be related in part to the width of the basilar membrane. Copyright © 2011 Biophysical Society. Published by Elsevier Inc. All rights reserved.
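The Q₁₀ sharpness measure used above is the characteristic frequency divided by the tuning-curve bandwidth at 10 dB above the best threshold. A minimal sketch of that computation follows, on an invented V-shaped tuning curve rather than any of the model's outputs.

```python
import numpy as np

def q10(freq_khz, threshold_db):
    """Q10 = CF / bandwidth at 10 dB above the best threshold,
    with each flank located by linear interpolation."""
    i_cf = int(np.argmin(threshold_db))
    cf, level = freq_khz[i_cf], threshold_db[i_cf] + 10.0
    # np.interp needs increasing x, so the low-frequency flank is reversed.
    lo = np.interp(level, threshold_db[:i_cf + 1][::-1], freq_khz[:i_cf + 1][::-1])
    hi = np.interp(level, threshold_db[i_cf:], freq_khz[i_cf:])
    return cf / (hi - lo)

# Hypothetical V-shaped tuning curve around a 10 kHz characteristic frequency.
f = np.linspace(6, 14, 81)
thr = 20 + 8 * np.abs(f - 10.0) ** 1.2   # dB SPL, illustrative
print(f"Q10 = {q10(f, thr):.1f}")
```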
Development of brain injury criteria (BrIC).
Takhounts, Erik G; Craig, Matthew J; Moorhouse, Kevin; McFadden, Joe; Hasija, Vikas
2013-11-01
Rotational motion of the head as a mechanism for brain injury was proposed back in the 1940s. Since then, a multitude of research studies by various institutions have been conducted to confirm or reject this hypothesis. Most of the studies were conducted on animals and concluded that rotational kinematics experienced by the animal's head may cause axonal deformations large enough to induce a functional deficit. Other studies utilized physical and mathematical models of human and animal heads to derive brain injury criteria based on deformation/pressure histories computed from those models. This study differs from the previous research in the following ways: first, it uses two different detailed mathematical models of the human head (SIMon and GHBMC), each validated against various human brain response datasets; it then establishes physical (strain- and stress-based) injury criteria for various types of brain injury based on scaled animal injury data; and finally, it uses Anthropomorphic Test Device (ATD) test data (NCAP, pendulum, and frontal offset tests, with the Hybrid III 50th Male, Hybrid III 5th Female, THOR 50th Male, ES-2re, SID-IIs, WorldSID 50th Male, and WorldSID 5th Female) to establish a kinematically based brain injury criterion (BrIC) for all ATDs. Similar procedures were applied to college football data in which thousands of head impacts were recorded using a six-degrees-of-freedom (6 DOF) instrumented helmet system. Since the animal injury data used in deriving BrIC were predominantly of the diffuse axonal injury (DAI) type, currently an AIS 4+ injury, the cumulative strain damage measure (CSDM) and maximum principal strain (MPS) were used to derive risk curves for AIS 4+ anatomic brain injuries. The AIS 1+, 2+, 3+, and 5+ risk curves for CSDM and MPS were then computed using the ratios between the corresponding risk curves for the head injury criterion (HIC) at 50% risk. The risk curves for BrIC were then obtained from the CSDM and MPS risk curves using the linear relationships between CSDM and BrIC and between MPS and BrIC, respectively. The AIS 3+, 4+ and 5+ field risk of anatomic brain injuries was also estimated using the National Automotive Sampling System - Crashworthiness Data System (NASS-CDS) database for crash conditions similar to the frontal NCAP and side-impact conditions in which the ATDs were tested, in order to assess the risk curve ratios derived from the HIC risk curves. The results of the study indicated that: (1) the two available human head models, SIMon and GHBMC, were found to be highly correlated when CSDM and maximum principal strains were compared; (2) BrIC correlates best with both CSDM and MPS, and rotational velocity (not rotational acceleration) is the mechanism for brain injury; and (3) the critical values of angular velocity are directionally dependent and independent of the ATD used to measure them. The newly developed brain injury criterion complements the existing HIC, which is based on translational accelerations. Together, the two criteria may be able to capture most brain injuries and skull fractures occurring in automotive or other impact environments. One of the main limitations of any brain injury criterion, including BrIC, is the lack of human injury data against which to validate it, although some approximation for AIS 2+ injury is given based on the angular velocities calculated at 50% probability of concussion in college football players instrumented with a 5 DOF helmet system. Despite these limitations, the new kinematic rotational brain injury criterion, BrIC, may offer a way to capture brain injuries in situations where translational-acceleration-based HIC alone is not sufficient.
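BrIC is built from peak angular velocity magnitudes about each axis, normalized by directionally dependent critical values: BrIC = sqrt((ωx/ωxc)² + (ωy/ωyc)² + (ωz/ωzc)²). A minimal sketch follows; the critical values used are those commonly cited for this formulation, but they and the input kinematics below should be treated as illustrative and verified against the original publication before any use.

```python
import numpy as np

# Commonly cited BrIC critical angular velocities (rad/s) about x, y, z;
# illustrative here, verify against the source publication before use.
W_CRIT = np.array([66.25, 56.45, 42.87])

def bric(omega_peak):
    """BrIC from peak |angular velocity| about x, y, z (rad/s)."""
    return float(np.sqrt(np.sum((np.asarray(omega_peak) / W_CRIT) ** 2)))

# Hypothetical ATD measurement (illustrative numbers only).
print(f"BrIC = {bric([30.0, 25.0, 15.0]):.2f}")
```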