Gas Chromatography Data Classification Based on Complex Coefficients of an Autoregressive Model
Zhao, Weixiang; Morgan, Joshua T.; Davis, Cristina E.
2008-01-01
This paper introduces autoregressive (AR) modeling as a novel method to classify outputs from gas chromatography (GC). The inverse Fourier transform was applied to the original sensor data, and an AR model was then fitted to the transformed data to generate complex AR model coefficients. This series of coefficients effectively contains a compressed version of all of the information in the original GC signal output. We applied this method to chromatograms resulting from proliferating bacteria species grown in culture. Three types of neural networks were used to classify the AR coefficients: a backward propagating neural network (BPNN), a radial basis function-principal component analysis (RBF-PCA) approach, and a radial basis function-partial least squares regression (RBF-PLSR) approach. This exploratory study demonstrates the feasibility of using complex root coefficient patterns to distinguish various classes of experimental data, such as those from the different bacteria species. This recognition approach also proved robust and is potentially useful because it avoids the time alignment of GC signals.
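The core idea, compressing a long signal into a short vector of AR coefficients that a classifier can consume, can be sketched as follows. This is an illustrative least-squares AR fit in Python, not the authors' pipeline; the function name and the use of `numpy.linalg.lstsq` are assumptions.

```python
import numpy as np

def ar_coefficients(x, order):
    """Fit an AR(order) model by least squares and return its coefficients.

    The model is x[t] ~ sum_k a_k * x[t-k]; the vector (a_1, ..., a_p)
    serves as a compact feature vector for the whole signal.
    """
    x = np.asarray(x, dtype=float)
    # Lagged design matrix: column k holds x[t-k-1] for t = order..n-1
    X = np.column_stack(
        [x[order - k - 1: len(x) - k - 1] for k in range(order)]
    )
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs
```

On a signal simulated from a known AR(2) process, the fit recovers the generating coefficients, which is the property the classification step relies on.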
Recursive formulas for the partial fraction expansion of a rational function with multiple poles.
NASA Technical Reports Server (NTRS)
Chang, F.-C.
1973-01-01
The coefficients in the partial fraction expansion considered are given by Heaviside's formula. Evaluating the coefficients involves differentiating a quotient of two polynomials. A simplified approach to evaluating the coefficients is discussed: Leibniz's rule is applied and a recurrence formula is derived. A coefficient can also be determined from a system of simultaneous equations. Practical methods for performing the computational operations involved in both approaches are considered.
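The Leibniz-rule recurrence for the derivatives of a quotient lends itself to direct implementation. The sketch below computes the Heaviside coefficients at a pole of multiplicity m numerically: with phi(s) = N(s)/Q(s), where Q(s) is the denominator with the factor (s-p)^m removed, the recurrence phi^(n) = (N^(n) - sum_{k=1..n} C(n,k) Q^(k) phi^(n-k)) / Q avoids differentiating the quotient symbolically. The function name and the use of `numpy.poly1d` are illustrative choices, not the paper's notation.

```python
import numpy as np
from math import comb, factorial

def heaviside_coeffs(num, den_factors, pole, mult):
    """Partial-fraction coefficients at `pole` of multiplicity `mult`.

    num: numerator coefficients, highest degree first (np.poly1d style)
    den_factors: list of (pole, multiplicity) pairs for the FULL denominator
    Returns [c_1, ..., c_m] where the expansion contains c_j / (s - pole)**j.
    """
    # Q(s) = D(s) / (s - pole)**mult, built from the remaining factors
    Q = np.poly1d([1.0])
    for p, m in den_factors:
        if p == pole:
            m -= mult
        for _ in range(m):
            Q *= np.poly1d([1.0, -p])
    N = np.poly1d(num)
    # phi^(n)(pole) via the Leibniz-rule recurrence
    phi = []
    for n in range(mult):
        val = np.polyder(N, n)(pole)
        for k in range(1, n + 1):
            val -= comb(n, k) * np.polyder(Q, k)(pole) * phi[n - k]
        phi.append(val / Q(pole))
    # Heaviside: c_j = phi^(mult-j)(pole) / (mult-j)!
    return [phi[mult - j] / factorial(mult - j) for j in range(1, mult + 1)]
```

For example, 1/((s-1)^2 (s-2)) expands as -1/(s-1) - 1/(s-1)^2 + 1/(s-2), which the sketch reproduces.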
Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan
2016-04-01
Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. 
However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial. © The Author(s) 2015.
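A minimal sketch of two ingredients of this study, the one-way ANOVA estimator of the ICC and the random division of clusters into sub-clusters, might look like this in Python. The function names are hypothetical, and the confidence-interval construction itself (Smith's standard-error approximation) is omitted.

```python
import numpy as np

def anova_icc(clusters):
    """One-way ANOVA estimator of the ICC from a list of 0/1 outcome arrays."""
    k = len(clusters)
    n_i = np.array([len(c) for c in clusters])
    N = n_i.sum()
    means = np.array([np.mean(c) for c in clusters])
    grand = sum(np.sum(c) for c in clusters) / N
    msb = np.sum(n_i * (means - grand) ** 2) / (k - 1)   # between-cluster MS
    msw = sum(np.sum((np.asarray(c) - m) ** 2)
              for c, m in zip(clusters, means)) / (N - k)  # within-cluster MS
    n0 = (N - np.sum(n_i ** 2) / N) / (k - 1)            # "average" cluster size
    return (msb - msw) / (msb + (n0 - 1) * msw)

def split_clusters(clusters, size, rng):
    """Randomly divide each cluster into sub-clusters of at most `size` members."""
    out = []
    for c in clusters:
        c = rng.permutation(c)
        out.extend(c[i:i + size] for i in range(0, len(c), size))
    return out
```

After splitting, the same estimator (and its interval method) is simply applied to the sub-clustered data, which is the essence of the proposed ad hoc approach.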
Determination of aerodynamic sensitivity coefficients in the transonic and supersonic regimes
NASA Technical Reports Server (NTRS)
Elbanna, Hesham M.; Carlson, Leland A.
1989-01-01
The quasi-analytical approach is developed to compute airfoil aerodynamic sensitivity coefficients in the transonic and supersonic flight regimes. Initial investigation verifies the feasibility of this approach as applied to the transonic small perturbation residual expression. Results are compared to those obtained by the direct (finite difference) approach and both methods are evaluated to determine their computational accuracies and efficiencies. The quasi-analytical approach is shown to be superior and worth further investigation.
Covariate-free and Covariate-dependent Reliability.
Bentler, Peter M
2016-12-01
Classical test theory reliability coefficients are said to be population specific. Reliability generalization, a meta-analysis method, is the main procedure for evaluating the stability of reliability coefficients across populations. A new approach is developed to evaluate the degree of invariance of reliability coefficients to population characteristics. Factor or common variance of a reliability measure is partitioned into parts that are, and are not, influenced by control variables, resulting in a partition of reliability into a covariate-dependent and a covariate-free part. The approach can be implemented in a single sample and can be applied to a variety of reliability coefficients.
Liao, Fuyuan; Jan, Yih-Kuen
2012-06-01
This paper presents a recurrence network approach for the analysis of skin blood flow dynamics in response to loading pressure. Recurrence is a fundamental property of many dynamical systems, which can be explored in phase spaces constructed from observational time series. A visualization tool of recurrence analysis called the recurrence plot (RP) has proved highly effective for detecting transitions in the dynamics of a system. However, delay embedding can produce spurious structures in RPs. Recently, network-based concepts have been applied to the analysis of nonlinear time series. We demonstrate that time series with different types of dynamics exhibit distinct global clustering coefficients and distributions of local clustering coefficients, and that the global clustering coefficient is robust to the embedding parameters. We applied the approach to study the response of skin blood flow oscillations (BFO) to loading pressure. The results showed that global clustering coefficients of BFO significantly decreased in response to loading pressure (p<0.01). Moreover, surrogate tests indicated that this decrease was associated with a loss of nonlinearity of BFO. Our results suggest that the recurrence network approach can practically quantify the nonlinear dynamics of BFO.
Determination of aerodynamic sensitivity coefficients for wings in transonic flow
NASA Technical Reports Server (NTRS)
Carlson, Leland A.; El-Banna, Hesham M.
1992-01-01
The quasianalytical approach is applied to the 3-D full potential equation to compute wing aerodynamic sensitivity coefficients in the transonic regime. Symbolic manipulation is used to reduce the effort associated with obtaining the sensitivity equations, and the large sensitivity system is solved using 'state of the art' routines. The quasianalytical approach is believed to be reasonably accurate and computationally efficient for 3-D problems.
NASA Astrophysics Data System (ADS)
Gryanik, Vladimir M.; Lüpkes, Christof
2018-02-01
In climate and weather prediction models the near-surface turbulent fluxes of heat and momentum and the related transfer coefficients are usually parametrized on the basis of Monin-Obukhov similarity theory (MOST). To avoid the iteration required for the numerical solution of the MOST equations, many models apply parametrizations of the transfer coefficients based on an approach relating these coefficients to the bulk Richardson number Rib. However, the parametrizations presently used in most climate models are valid only for weaker stability and larger surface roughnesses than those documented during the Surface Heat Budget of the Arctic Ocean campaign (SHEBA). The latter delivered a well-accepted set of turbulence data in the stable surface layer over polar sea-ice. Using stability functions based on the SHEBA data, we solve the MOST equations applying a new semi-analytic approach that yields transfer coefficients as a function of Rib and the roughness lengths for momentum and heat. It is shown that the new coefficients reproduce the coefficients obtained by the numerical iterative method with good accuracy in the most relevant range of stability and roughness lengths. For small Rib, the new bulk transfer coefficients are similar to the traditional coefficients, but for large Rib they are much smaller than the currently used coefficients. Finally, a possible adjustment of the latter and the implementation of the newly proposed parametrizations in models are discussed.
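The iterative MOST solution that such parametrizations aim to replace can be sketched as a fixed-point iteration in the stability parameter zeta = z/L. The stability functions below are the simple log-linear forms psi_m = psi_h = -5*zeta, not the SHEBA-based functions used in the paper, and the function name is illustrative.

```python
import math

KAPPA = 0.4  # von Karman constant

def transfer_coeffs(rib, z, z0, zt, n_iter=50):
    """Iterative MOST solution for stable stratification (rib >= 0).

    Uses the classic log-linear stability functions psi_m = psi_h = -5*zeta,
    so ln(z/z0) - psi_m becomes ln(z/z0) + 5*zeta. Returns the bulk transfer
    coefficients for momentum (cd) and heat (ch) at height z, given the
    roughness lengths z0 (momentum) and zt (heat).
    """
    lm0 = math.log(z / z0)
    lh0 = math.log(z / zt)
    zeta = 0.0
    for _ in range(n_iter):
        lm = lm0 + 5.0 * zeta
        lh = lh0 + 5.0 * zeta
        # Bulk Richardson relation: rib = zeta * lh / lm**2, solved for zeta
        zeta = rib * lm ** 2 / lh
    lm = lm0 + 5.0 * zeta
    lh = lh0 + 5.0 * zeta
    cd = KAPPA ** 2 / lm ** 2        # momentum transfer coefficient
    ch = KAPPA ** 2 / (lm * lh)      # heat transfer coefficient
    return cd, ch
```

With rib = 0 this reduces to the neutral coefficients, and the coefficients shrink as rib grows; the iteration is only reliable for moderate rib below the critical value (0.2) implied by these simple functions, which is exactly the cost the semi-analytic approach avoids.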
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…
NASA Astrophysics Data System (ADS)
Pachhai, S.; Masters, G.; Laske, G.
2017-12-01
Earth's normal-mode spectra are crucial to studying the long wavelength structure of the Earth. Such observations have been used extensively to estimate "splitting coefficients" which, in turn, can be used to determine the three-dimensional velocity and density structure. Most past studies apply a non-linear iterative inversion to estimate the splitting coefficients, which requires that the earthquake source is known. However, it is challenging to know the source details, particularly for the large events used in normal-mode analyses. Additionally, the final solution of the non-linear inversion can depend on the choice of damping parameter and starting model. To circumvent the need to know the source, a two-step linear inversion has been developed and successfully applied to many mantle and core sensitive modes. The first step takes combinations of the data from a single event to produce spectra known as "receiver strips". The autoregressive nature of the receiver strips can then be used to estimate the structure coefficients without the need to know the source. Based on this approach, we recently employed a neighborhood algorithm to measure the splitting coefficients for an isolated inner-core sensitive mode (13S2). This approach explores the parameter space efficiently without any need for regularization and finds the structure coefficients that best fit the observed strips. Here, we implement a Bayesian approach to data collected for earthquakes from early 2000 onward. This approach combines the data (through the likelihood) and prior information to provide rigorous parameter values and their uncertainties for both isolated and coupled modes. The likelihood function is derived from the inferred errors of the receiver strips, which allows us to retrieve proper uncertainties.
Finally, we apply model selection criteria that balance the trade-offs between fit (likelihood) and model complexity to investigate the degree and type of structure (elastic and anelastic) required to explain the data.
A Structural Modeling Approach to a Multilevel Random Coefficients Model.
ERIC Educational Resources Information Center
Rovine, Michael J.; Molenaar, Peter C. M.
2000-01-01
Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)
Development of uncertainty-based work injury model using Bayesian structural equation modelling.
Chatterjee, Snehamoy
2014-01-01
This paper proposed a Bayesian method-based structural equation model (SEM) of miners' work injury for an underground coal mine in India. The environmental and behavioural variables for work injury were identified and causal relationships were developed. For Bayesian modelling, prior distributions of the SEM parameters are necessary to develop the model. In this paper, two approaches were adopted to obtain prior distributions for the factor loading parameters and structural parameters of the SEM. In the first approach, the prior distributions were taken as fixed distribution functions with specific parameter values, whereas in the second approach, prior distributions of the parameters were generated from experts' opinions. The posterior distributions of these parameters were obtained by applying Bayes' rule. Markov chain Monte Carlo sampling, in the form of Gibbs sampling, was applied for sampling from the posterior distribution. The results revealed that all coefficients of the structural and measurement model parameters are statistically significant under the experts' opinion-based priors, whereas two coefficients are not statistically significant when the fixed prior-based distributions are applied. The error statistics reveal that the Bayesian structural model provides a reasonably good fit of work injury, with a high coefficient of determination (0.91) and lower mean squared error compared to traditional SEM.
Comparison of different phase retrieval algorithms
NASA Astrophysics Data System (ADS)
Kaufmann, Rolf; Plamondon, Mathieu; Hofmann, Jürgen; Neels, Antonia
2017-09-01
X-ray phase contrast imaging is attracting more and more interest. Since the phase cannot be measured directly, an indirect method using e.g. a grating interferometer has to be applied. This contribution compares three different approaches to calculating the phase from Talbot-Lau interferometer measurements using a phase-stepping approach. In addition to the usually applied Fourier coefficient method, a linear fitting technique and a Taylor series expansion method are applied and compared.
Least-squares Minimization Approaches to Interpret Total Magnetic Anomalies Due to Spheres
NASA Astrophysics Data System (ADS)
Abdelrahman, E. M.; El-Araby, T. M.; Soliman, K. S.; Essa, K. S.; Abo-Ezz, E. R.
2007-05-01
We have developed three different least-squares approaches to determine successively: the depth, magnetic angle, and amplitude coefficient of a buried sphere from a total magnetic anomaly. By defining the anomaly value at the origin and the nearest zero-anomaly distance from the origin on the profile, the problem of depth determination is transformed into the problem of finding a solution of a nonlinear equation of the form f(z)=0. Knowing the depth and applying the least-squares method, the magnetic angle and amplitude coefficient are determined using two simple linear equations. In this way, the depth, magnetic angle, and amplitude coefficient are determined individually from all observed total magnetic data. The method is applied to synthetic examples with and without random errors and tested on a field example from Senegal, West Africa. In all cases, the depth solutions are in good agreement with the actual ones.
NASA Technical Reports Server (NTRS)
Elbanna, Hesham M.; Carlson, Leland A.
1992-01-01
The quasi-analytical approach is applied to the three-dimensional full potential equation to compute wing aerodynamic sensitivity coefficients in the transonic regime. Symbolic manipulation is used to reduce the effort associated with obtaining the sensitivity equations, and the large sensitivity system is solved using 'state of the art' routines. Results are compared to those obtained by the direct finite difference approach and both methods are evaluated to determine their computational accuracy and efficiency. The quasi-analytical approach is shown to be accurate and efficient for large aerodynamic systems.
NASA Technical Reports Server (NTRS)
Lee, Zhong-Ping; Carder, Kendall L.
2001-01-01
A multi-band analytical (MBA) algorithm is developed to retrieve absorption and backscattering coefficients for optically deep waters, which can be applied to data from past and current satellite sensors, as well as data from hyperspectral sensors. This MBA algorithm applies a remote-sensing reflectance model derived from the Radiative Transfer Equation, and values of absorption and backscattering coefficients are analytically calculated from values of remote-sensing reflectance. There are only limited empirical relationships involved in the algorithm, which implies that this MBA algorithm could be applied to a wide dynamic range of waters. Applying the algorithm to a simulated non-"Case 1" data set, which has no relation to the development of the algorithm, the percentage error for the total absorption coefficient at 440 nm, a(440), is approximately 12% for a range of 0.012-2.1 per meter (approximately 6% for a(440) less than approximately 0.3 per meter), while a traditional band-ratio approach returns a percentage error of approximately 30%. Applying it to a field data set ranging from 0.025 to 2.0 per meter, the result for a(440) is very close to that using a full-spectrum optimization technique (9.6% difference). Compared to the optimization approach, the MBA algorithm cuts the computation time dramatically with only a small sacrifice in accuracy, making it suitable for processing large data sets such as satellite images. Significant improvements over empirical algorithms have also been achieved in retrieving the optical properties of optically deep waters.
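A semi-analytical inversion of this kind typically starts from the quadratic reflectance model rrs = g0*u + g1*u**2 with u = bb/(a + bb), which can be inverted in closed form. The sketch below uses the widely cited Gordon et al. values for g0 and g1; these are assumptions and may differ from the exact constants of the MBA algorithm.

```python
import math

G0, G1 = 0.0949, 0.0794  # commonly used reflectance-model constants (assumed)

def u_from_rrs(rrs):
    """Invert rrs = G0*u + G1*u**2 for u = bb/(a + bb) (positive root)."""
    return (-G0 + math.sqrt(G0 ** 2 + 4.0 * G1 * rrs)) / (2.0 * G1)

def absorption(rrs, bb):
    """Total absorption coefficient given subsurface remote-sensing
    reflectance and the backscattering coefficient at the same band."""
    u = u_from_rrs(rrs)
    return bb * (1.0 - u) / u
```

A round trip (forward model then inversion) recovers the input absorption exactly, which is the analytical step that replaces iterative spectral optimization.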
Campbell, J Elliott; Moen, Jeremie C; Ney, Richard A; Schnoor, Jerald L
2008-03-01
Estimates of forest soil organic carbon (SOC) have applications in carbon science, soil quality studies, carbon sequestration technologies, and carbon trading. Forest SOC has been modeled using a regression coefficient methodology that applies mean SOC densities (mass/area) to broad forest regions. A higher resolution model is based on an approach that employs a geographic information system (GIS) with soil databases and satellite-derived landcover images. Despite this advancement, the regression approach remains the basis of current state and federal level greenhouse gas inventories. Both approaches are analyzed in detail for Wisconsin forest soils from 1983 to 2001, applying rigorous error-fixing algorithms to soil databases. Resulting SOC stock estimates are 20% larger when determined using the GIS method rather than the regression approach. Average annual rates of increase in SOC stocks are 3.6 and 1.0 million metric tons of carbon per year for the GIS and regression approaches respectively.
The conversion of exposures due to radon into the effective dose: the epidemiological approach.
Beck, T R
2017-11-01
The risks and dose conversion coefficients for residential and occupational exposures due to radon were determined by applying the epidemiological risk models to ICRP representative populations. The dose conversion coefficient for residential radon was estimated at 1.6 mSv year⁻¹ per 100 Bq m⁻³ (3.6 mSv per WLM), which is significantly lower than the corresponding value derived from the biokinetic and dosimetric models. The dose conversion coefficient for occupational exposures, applying the risk models for miners, was estimated at 14 mSv per WLM, which is in good accordance with the results of the dosimetric models. To resolve the discrepancy regarding residential radon, the ICRP approaches for the determination of risks and doses were reviewed. It could be shown that ICRP overestimates the risk of lung cancer caused by residential radon. This can be attributed to an incorrect population weighting of the radon-induced risks in its epidemiological approach. With the approach in this work, the average risks of lung cancer were determined, taking into account the age-specific risk contributions of all individuals in the population. As a result, a lower risk coefficient for residential radon was obtained. The results from the ICRP biokinetic and dosimetric models, for both the occupationally exposed working-age population and the whole population exposed to residential radon, can be brought into better accordance with the corresponding results of the epidemiological approach if the respective relative radiation detriments and a radiation weighting factor for alpha particles of about ten are used.
Piezoelectric shear wave resonator and method of making same
Wang, Jin S.; Lakin, Kenneth M.; Landin, Allen R.
1988-01-01
An acoustic shear wave resonator comprising a piezoelectric film having its C-axis substantially inclined from the film normal such that the shear wave coupling coefficient significantly exceeds the longitudinal wave coupling coefficient, whereby the film is capable of shear wave resonance, and means for exciting said film to resonate. The film is prepared by deposition in a dc planar magnetron sputtering system to which a supplemental electric field is applied. The resonator structure may also include a semiconductor material having a positive temperature coefficient of resonance such that the resonator has a temperature coefficient of resonance approaching 0 ppm/°C.
Method of making a piezoelectric shear wave resonator
Wang, Jin S.; Lakin, Kenneth M.; Landin, Allen R.
1987-02-03
An acoustic shear wave resonator comprising a piezoelectric film having its C-axis substantially inclined from the film normal such that the shear wave coupling coefficient significantly exceeds the longitudinal wave coupling coefficient, whereby the film is capable of shear wave resonance, and means for exciting said film to resonate. The film is prepared by deposition in a dc planar magnetron sputtering system to which a supplemental electric field is applied. The resonator structure may also include a semiconductor material having a positive temperature coefficient of resonance such that the resonator has a temperature coefficient of resonance approaching 0 ppm/°C.
NASA Astrophysics Data System (ADS)
Paço, Teresa A.; Pôças, Isabel; Cunha, Mário; Silvestre, José C.; Santos, Francisco L.; Paredes, Paula; Pereira, Luís S.
2014-11-01
The estimation of crop evapotranspiration (ETc) from the reference evapotranspiration (ETo) and a standard crop coefficient (Kc) in olive orchards requires that the latter be adjusted to planting density and height. The use of the dual Kc approach may be the best solution because the basal crop coefficient Kcb represents plant transpiration and the evaporation coefficient reproduces the soil coverage conditions and the frequency of wettings. To support related computations for a super intensive olive orchard, the model SIMDualKc was adopted because it uses the dual Kc approach. Alternatively, to consider the physical characteristics of the vegetation, the satellite-based surface energy balance model METRIC™ - Mapping EvapoTranspiration at high Resolution using Internalized Calibration - was used to estimate ETc and to derive crop coefficients. Both approaches were compared in this study. SIMDualKc model was calibrated and validated using sap-flow measurements of the transpiration for 2011 and 2012. In addition, eddy covariance estimation of ETc was also used. In the current study, METRIC™ was applied to Landsat images from 2011 to 2012. Adaptations for incomplete cover woody crops were required to parameterize METRIC. It was observed that ETc obtained from both approaches was similar and that crop coefficients derived from both models showed similar patterns throughout the year. Although the two models use distinct approaches, their results are comparable and they are complementary in spatial and temporal scales.
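Per time step, the dual-Kc computation at the heart of SIMDualKc reduces to the FAO-56 expression ETc = (Ks·Kcb + Ke)·ETo, separating transpiration from soil evaporation. A trivial sketch, with a hypothetical function name:

```python
def etc_dual_kc(eto, kcb, ke, ks=1.0):
    """FAO-56 dual-coefficient crop evapotranspiration (same units as eto).

    eto: reference evapotranspiration
    kcb: basal crop coefficient (transpiration component)
    ke:  soil evaporation coefficient
    ks:  water-stress reduction factor (1.0 = no stress)
    """
    return (ks * kcb + ke) * eto
```

For example, with ETo = 5.0 mm/day, Kcb = 0.6 and Ke = 0.2, ETc is 4.0 mm/day; the modelling effort in the paper lies in estimating Kcb and Ke, not in this final combination.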
Collaborative sparse priors for multi-view ATR
NASA Astrophysics Data System (ADS)
Li, Xuelu; Monga, Vishal
2018-04-01
Recent work has seen a surge of sparse representation based classification (SRC) methods applied to automatic target recognition problems. While traditional SRC approaches used the l0 or l1 norm to quantify sparsity, spike and slab priors have established themselves as the gold standard for providing general tunable sparse structures on vectors. In this work, we employ collaborative spike and slab priors that can be applied to matrices to encourage sparsity for the problem of multi-view ATR. That is, target images captured from multiple views are expanded in terms of a training dictionary multiplied with a coefficient matrix. Ideally, for a test image set comprising multiple views of a target, coefficients corresponding to its identifying class are expected to be active, while others should be zero, i.e. the coefficient matrix is naturally sparse. We develop a new approach to solve the optimization problem that estimates the sparse coefficient matrix jointly with the sparsity inducing parameters in the collaborative prior. ATR problems are investigated on the mid-wave infrared (MWIR) database made available by the US Army Night Vision and Electronic Sensors Directorate, which has a rich collection of views. Experimental results show that the proposed joint prior and coefficient estimation method (JPCEM) can: 1.) enable improved accuracy when multiple views vs. a single one are invoked, and 2.) outperform state of the art alternatives particularly when training imagery is limited.
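The residual-based decision rule underlying SRC can be illustrated with a class-wise least-squares stand-in for the sparse code (true SRC instead solves an l1-regularized or, as here, spike-and-slab-regularized problem, and the multi-view extension codes a matrix of views jointly). Names below are illustrative.

```python
import numpy as np

def src_classify(D, labels, y):
    """Assign y to the class whose dictionary columns reconstruct it best.

    D:      dictionary, one training sample per column
    labels: class label of each column
    y:      test sample
    """
    best, best_r = None, np.inf
    for c in set(labels):
        cols = [i for i, l in enumerate(labels) if l == c]
        Dc = D[:, cols]
        coef, *_ = np.linalg.lstsq(Dc, y, rcond=None)
        r = np.linalg.norm(y - Dc @ coef)  # class-specific residual
        if r < best_r:
            best, best_r = c, r
    return best
```

A sample lying in the span of one class's columns yields a zero residual for that class and is classified accordingly.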
Multiscale image contrast amplification (MUSICA)
NASA Astrophysics Data System (ADS)
Vuylsteke, Pieter; Schoeters, Emile P.
1994-05-01
This article presents a novel approach to the problem of detail contrast enhancement, based on a multiresolution representation of the original image. The image is decomposed into a weighted sum of smooth, localized, 2D basis functions at multiple scales. Each transform coefficient represents the amount of local detail at a specific scale and position in the image. Detail contrast is enhanced by non-linear amplification of the transform coefficients. An inverse transform is then applied to the modified coefficients. This yields a uniformly contrast-enhanced image without artefacts. The MUSICA algorithm is applied routinely to computed radiography images of chest, skull, spine, shoulder, pelvis, extremities, and abdomen examinations, with excellent acceptance. It is useful for a wide range of applications in the medical, graphical, and industrial areas.
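The decompose-amplify-reconstruct loop can be sketched with a cheap smoothing filter standing in for the paper's basis functions; the essential step is the non-linear (here power-law) amplification of the coefficients, where an exponent gamma < 1 boosts low-amplitude detail the most. All names and the 3-tap filter are illustrative, not the MUSICA implementation.

```python
import numpy as np

def blur(img):
    """Cheap separable 3-tap smoothing with edge replication."""
    p = np.pad(img, 1, mode='edge')
    v = (p[:-2, 1:-1] + 2.0 * p[1:-1, 1:-1] + p[2:, 1:-1]) / 4.0
    p = np.pad(v, 1, mode='edge')
    return (p[1:-1, :-2] + 2.0 * p[1:-1, 1:-1] + p[1:-1, 2:]) / 4.0

def musica_enhance(img, levels=3, gamma=0.7):
    """Decompose into detail layers, amplify coefficients, reconstruct."""
    details, current = [], img.astype(float)
    for _ in range(levels):
        smooth = blur(current)
        details.append(current - smooth)  # detail "coefficients" at this scale
        current = smooth
    m = max(float(np.abs(d).max()) for d in details) or 1.0
    out = current
    for d in reversed(details):
        # power-law mapping: gamma < 1 amplifies weak detail relative to m
        out = out + np.sign(d) * m * (np.abs(d) / m) ** gamma
    return out
```

With gamma = 1 the pipeline is a perfect-reconstruction identity (the detail layers telescope back to the original), which is a useful sanity check before any amplification is applied.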
Piezoelectric shear wave resonator and method of making same
Wang, J.S.; Lakin, K.M.; Landin, A.R.
1985-05-20
An acoustic shear wave resonator comprising a piezoelectric film having its C-axis substantially inclined from the film normal such that the shear wave coupling coefficient significantly exceeds the longitudinal wave coupling coefficient, whereby the film is capable of shear wave resonance, and means for exciting said film to resonate. The film is prepared by deposition in a dc planar magnetron sputtering system to which a supplemental electric field is applied. The resonator structure may also include a semiconductor material having a positive temperature coefficient of resonance such that the resonator has a temperature coefficient of resonance approaching 0 ppm/°C.
Piezoelectric shear wave resonator and method of making same
Wang, J.S.; Lakin, K.M.; Landin, A.R.
1983-10-25
An acoustic shear wave resonator comprising a piezoelectric film having its C-axis substantially inclined from the film normal such that the shear wave coupling coefficient significantly exceeds the longitudinal wave coupling coefficient, whereby the film is capable of shear wave resonance, and means for exciting said film to resonate. The film is prepared by deposition in a dc planar magnetron sputtering system to which a supplemental electric field is applied. The resonator structure may also include a semiconductor material having a positive temperature coefficient of resonance such that the resonator has a temperature coefficient of resonance approaching 0 ppm/°C.
Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan
2012-01-01
Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained using the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, so the estimation precision can be improved even when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
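The two-stage procedure, ordinary least squares, non-parametric smoothing of the squared residuals to estimate the variance function, then weighted (generalized) least squares, can be sketched as below. For brevity the sketch uses a local-constant (Nadaraya-Watson) smoother rather than a local polynomial one, and the function name and bandwidth are illustrative.

```python
import numpy as np

def gls_heteroscedastic(X, y, bandwidth=0.5):
    """Two-stage GLS without specifying the form of the variance function.

    Stage 1: OLS fit, squared residuals as noisy observations of sigma^2(x).
    Stage 2: kernel-smooth the squared residuals, weight by 1/sigma^2, refit.
    """
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = (y - X @ beta_ols) ** 2
    # Smooth over the first non-constant regressor (local-constant estimate)
    t = X[:, 1] if X.shape[1] > 1 else X[:, 0]
    K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bandwidth) ** 2)
    sigma2 = (K @ r2) / K.sum(axis=1)
    W = 1.0 / sigma2
    beta_gls = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
    return beta_gls
```

On simulated data with error standard deviation growing in x, the weighted second stage recovers the regression coefficients without any heteroscedasticity test or parametric variance model.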
NASA Astrophysics Data System (ADS)
Bakoban, Rana A.
2017-08-01
The coefficient of variation (CV) has several applications in applied statistics. In this paper, we adopt Bayesian and non-Bayesian approaches to the estimation of the CV under type-II censored data from the extension exponential distribution (EED). Point and interval estimates of the CV are obtained for each of the maximum likelihood and parametric bootstrap techniques. The Bayesian approach, with the help of an MCMC method, is also presented. A real data set is presented and analyzed, and the results are used to assess the theoretical findings.
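One non-Bayesian piece of this toolkit, a point estimate of the CV with a percentile-bootstrap interval, can be sketched as follows. The censoring scheme and the EED likelihood are omitted; the function name is hypothetical and the sketch assumes complete (uncensored) data.

```python
import numpy as np

def cv_bootstrap_ci(data, n_boot=2000, alpha=0.05, seed=0):
    """Point estimate and percentile-bootstrap CI for the CV = sd / mean."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    cv = data.std(ddof=1) / data.mean()
    boots = np.empty(n_boot)
    for b in range(n_boot):
        s = rng.choice(data, size=len(data), replace=True)
        boots[b] = s.std(ddof=1) / s.mean()
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return cv, (lo, hi)
```

The parametric bootstrap of the paper differs only in resampling from the fitted EED rather than from the observed data.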
Agreement in functional assessment: graphic approaches to displaying respondent effects.
Haley, Stephen M; Ni, Pengsheng; Coster, Wendy J; Black-Schaffer, Randie; Siebens, Hilary; Tao, Wei
2006-09-01
The objective of this study was to examine the agreement between respondents on summary scores from items representing three functional content areas (physical and mobility, personal care and instrumental, applied cognition) within the Activity Measure for Postacute Care (AM-PAC). We compare proxy vs. patient report in both hospital and community settings as represented by intraclass correlation coefficients and two graphic approaches. The authors conducted a prospective cohort study of a convenience sample of adults (n = 47) receiving rehabilitation services either in hospital (n = 31) or community (n = 16) settings. In addition to using intraclass correlation coefficients (ICC) as indices of agreement, we applied two graphic approaches to serve as complements that help interpret the direction and magnitude of respondent disagreements. We created a "mountain plot" based on a cumulative distribution curve and a "survival-agreement plot" with step functions used in the analysis of survival data. ICCs on summary scores between patient and proxy report were: physical and mobility ICC = 0.92, personal care and instrumental ICC = 0.93, and applied cognition ICC = 0.77. Although combined respondent agreement was acceptable, graphic approaches helped interpret differences in separate analyses of clinician and family agreement. Graphic analyses allow for a simple interpretation of agreement data and may be useful in determining the meaningfulness of the amount and direction of inter-respondent variation.
Lu, Shao Hua; Li, Bao Qiong; Zhai, Hong Lin; Zhang, Xin; Zhang, Zhuo Yong
2018-04-25
Terahertz time-domain spectroscopy (THz-TDS) has been applied in many fields; however, it still encounters difficulties in the analysis of multicomponent mixtures because of severe spectral overlap. Here, an effective approach to quantitative analysis is proposed and applied to the determination of three amino acids in a foxtail millet substrate. Utilizing three parameters derived from the THz-TDS, images were constructed, and Tchebichef image moments were used to extract the information of the target components. Quantitative models were then obtained by stepwise regression. The correlation coefficients of leave-one-out cross-validation (R²loo-cv) were above 0.9595. For the external test set, the predictive correlation coefficients (R²p) were above 0.8026 and the root mean square errors of prediction (RMSEp) were below 1.2601. Compared with the traditional methods (PLS and N-PLS), our approach is more accurate, robust and reliable, and can be an excellent approach for quantifying multiple components with THz-TDS spectroscopy. Copyright © 2017 Elsevier Ltd. All rights reserved.
A theoretical study of electron multiplication coefficient in a cold-cathode Penning ion generator
NASA Astrophysics Data System (ADS)
Noori, H.; Ranjbar, A. H.; Rahmanipour, R.
2017-11-01
The discharge mechanism of a Penning ion generator (PIG) is strongly influenced by the electron ionization process. A theoretical approach is proposed to formulate the electron multiplication coefficient, M, of a PIG as a function of the axial magnetic field and the applied voltage. A numerical simulation was used to adjust the free parameters in the expression for M. Using the coefficient M, values of the effective secondary electron emission coefficient, γeff, were found to range from 0.09 to 0.22. In comparison with experimental results, the average value of γeff differs from the secondary emission coefficients of clean and dirty metals by factors of 1.4 and 0.5, respectively.
Determination of Scaled Wind Turbine Rotor Characteristics from Three Dimensional RANS Calculations
NASA Astrophysics Data System (ADS)
Burmester, S.; Gueydon, S.; Make, M.
2016-09-01
Previous studies have shown the importance of 3D effects when calculating the performance characteristics of a scaled-down turbine rotor [1-4]. In this paper the results of 3D RANS (Reynolds-Averaged Navier-Stokes) computations by Make and Vaz [1] are used to calculate 2D lift and drag coefficients. These coefficients are supplied as input to FAST (the Blade Element Momentum Theory (BEMT) tool from NREL), and the rotor characteristics (power and thrust coefficients) are then calculated using BEMT. This coupling of RANS and BEMT has previously been applied by other parties and is termed here the RANS-BEMT coupled approach. The approach is compared to measurements carried out in a wave basin at MARIN with Froude-scaled wind, and to the direct 3D RANS computation. Data from both a model-scale and a full-scale wind turbine are used for validation and verification. The flow around a turbine blade at full scale has a more 2D character than the flow around a blade at model scale (Make and Vaz [1]). Since BEMT assumes 2D flow behaviour, the results of the RANS-BEMT coupled approach agree better with the results of the CFD (Computational Fluid Dynamics) simulation at full scale than at model scale.
An Efficient Image Compressor for Charge Coupled Devices Camera
Li, Jin; Xing, Fei; You, Zheng
2014-01-01
Recently, discrete wavelet transform (DWT) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images contain a large amount of complex texture and contour information, so projecting them onto the DWT basis produces a large number of large-amplitude high-frequency coefficients, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity posttransform coupled with compressive sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a posttransform drawn from a pair of bases is applied to the DWT coefficients. The pair comprises the DCT basis and the Hadamard basis, which are used at high and low bit rates, respectively. The best posttransform is selected by an ℓp-norm-based approach. The posttransform is considered as the sparse representation stage of CS, and the posttransform coefficients are resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder, and its performance is comparable to that of JPEG2000 at low bit rates without the excessive implementation complexity of JPEG2000. PMID:25114977
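The ℓp-norm selection step can be illustrated in one dimension: apply each candidate basis to a block and keep the one whose coefficients are sparser under an ℓp measure (p < 1 rewards sparsity). A toy sketch, not the paper's full 2-D coder:

```python
import math

def dct2(x):
    """Naive (unnormalised) 1-D DCT-II."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def hadamard(x):
    """Unnormalised fast Walsh-Hadamard transform (len(x) must be a power of 2)."""
    y = list(x)
    h = 1
    while h < len(y):
        for i in range(0, len(y), 2 * h):
            for j in range(i, i + h):
                y[j], y[j + h] = y[j] + y[j + h], y[j] - y[j + h]
        h *= 2
    return y

def lp_norm(c, p=0.5):
    """lp 'norm' with p < 1, small when the coefficients are sparse."""
    return sum(abs(v) ** p for v in c) ** (1 / p)

block = [10, 10, 10, 10, -10, -10, -10, -10]  # square-wave-like block
candidates = {"dct": dct2(block), "hadamard": hadamard(block)}
best = min(candidates, key=lambda k: lp_norm(candidates[k]))
```

For this square-wave block the Hadamard transform concentrates all energy in a single coefficient, so the ℓp criterion selects it over the DCT.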
Determination of sedimentation coefficients for small peptides.
Schuck, P; MacPhee, C E; Howlett, G J
1998-01-01
Direct fitting of sedimentation velocity data with numerical solutions of the Lamm equations has been exploited to obtain sedimentation coefficients for single solutes under conditions where solvent and solution plateaus are either not available or are transient. The calculated evolution was initialized with the first experimental scan and nonlinear regression was employed to obtain best-fit values for the sedimentation and diffusion coefficients. General properties of the Lamm equations as data analysis tools were examined. This method was applied to study a set of small peptides containing amphipathic heptad repeats with the general structure Ac-YS-(AKEAAKE)nGAR-NH2, n = 2, 3, or 4. Sedimentation velocity analysis indicated single sedimenting species with sedimentation coefficients (s(20,w) values) of 0.37, 0.45, and 0.52 S, respectively, in good agreement with sedimentation coefficients predicted by hydrodynamic theory. The described approach can be applied to synthetic boundary and conventional loading experiments, and can be extended to analyze sedimentation data for both large and small macromolecules in order to define shape, heterogeneity, and state of association. PMID:9449347
NASA Astrophysics Data System (ADS)
Sun, Y.; Ditmar, P.; Riva, R.
2016-12-01
Time-varying gravity field solutions of the GRACE satellite mission have enabled observation of Earth's mass transport on a monthly basis since 2002. One of the remaining challenges is how to complement these solutions with sufficiently accurate estimates of the very low-degree spherical harmonic coefficients, particularly the degree-1 coefficients and C20. An absence or inaccurate estimation of these coefficients may result in strong biases in mass transport estimates. Variations in the degree-1 coefficients reflect geocenter motion, and variations in the C20 coefficient describe changes in the Earth's dynamic oblateness (ΔJ2). In this study, we developed a novel methodology to estimate monthly variations in the degree-1 and C20 coefficients by combining GRACE data with oceanic mass anomalies (the combination approach). Unlike the method of Swenson et al. (2008), the proposed approach exploits noise covariance information from both input datasets and thus produces stochastically optimal solutions. A numerical simulation study is carried out to verify the correctness and performance of the proposed approach. We demonstrate that solutions obtained with the proposed approach have a significantly higher quality than those of the method of Swenson et al. Finally, we apply the proposed approach to real monthly GRACE solutions. To evaluate the obtained results, we calculate mass transport time series over selected regions where minimal mass anomalies are expected. A clear reduction in the RMS of the mass transport time series (more than 50%) is observed there when the degree-1 and C20 coefficients obtained with the proposed approach are used. In particular, the seasonal pattern in the mass transport time series disappears almost entirely. The traditional approach (degree-1 coefficients based on Swenson et al. (2008) and C20 based on SLR data), in contrast, does not reduce that RMS, or even increases it (e.g., over the Sahara desert).
We further show that the degree-1 variations play a major role in the observed improvement. At the same time, using the C20 solutions obtained with the combination approach yields a similar accuracy of mass anomaly estimates to that based on SLR analysis. The computed degree-1 and C20 coefficients will be made publicly available.
Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong
2016-07-01
Computer-based diagnosis of Alzheimer's disease can be performed through analysis of the functional and structural changes in the brain. Multispectral image fusion combines complementary information, while discarding surplus information, to achieve a single image that encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT, followed by dimensionality reduction using a modified Principal Component Analysis algorithm on the low-frequency coefficients. Further, the high-frequency coefficients are enhanced using a non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: phase congruency is applied to the low-frequency coefficients, and a combination of directive contrast and normalized Shannon entropy is applied to the high-frequency coefficients. The superiority of the fusion response is demonstrated by comparisons with other state-of-the-art fusion approaches (in terms of various fusion metrics).
NASA Technical Reports Server (NTRS)
Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong
2011-01-01
Optical sensors aboard Earth-orbiting satellites, such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS), assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor's Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit by observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation-method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
Elsayed, Mustafa M A; Vierl, Ulrich; Cevc, Gregor
2009-06-01
To date, potentiometric lipid membrane-water partition coefficient studies have neglected electrostatic interactions, which leads to incorrect results. We herein show how to account properly for such interactions in potentiometric data analysis. We conducted potentiometric titration experiments to determine the lipid membrane-water partition coefficients of four illustrative drugs: bupivacaine, diclofenac, ketoprofen and terbinafine. We then analyzed the results conventionally and with an improved analytical approach that considers Coulombic electrostatic interactions. The new analytical approach delivers robust partition coefficient values. In contrast, the conventional data analysis yields apparent partition coefficients of the ionized drug forms that depend on the experimental conditions (mainly the lipid-drug ratio and the bulk ionic strength). This is due to changing electrostatic effects originating from bound drug and/or lipid charges. A membrane comprising 10 mol-% mono-charged molecules in a 150 mM (monovalent) electrolyte solution yields results that differ by a factor of 4 from those for uncharged membranes. Allowance for Coulombic electrostatic interactions is a prerequisite for accurate and reliable determination of lipid membrane-water partition coefficients of ionizable drugs from potentiometric titration data. The same conclusion applies to all analytical methods involving drug binding to a surface.
Closed-form confidence intervals for functions of the normal mean and standard deviation.
Donner, Allan; Zou, G Y
2012-08-01
Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
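The "recovered variance" idea can be sketched for the upper Bland-Altman limit of agreement, x̄ + 1.96s: compute separate confidence limits for the mean and for 1.96s, then combine the squared distances to those limits (the MOVER construction). The chi-square quantile below uses the Wilson-Hilferty approximation so the sketch stays standard-library only; this is an illustration of the general approach, not the paper's exact formulas:

```python
import math
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-square quantile function."""
    z = NormalDist().inv_cdf(p)
    return df * (1 - 2 / (9 * df) + z * math.sqrt(2 / (9 * df))) ** 3

def mover_upper_loa(mean, sd, n, conf=0.95):
    """Approximate CI for the upper limit of agreement mean + 1.96*sd via MOVER."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    # separate CIs: normal-based for the mean, chi-square-based for 1.96*sd
    m_lo = mean - z * sd / math.sqrt(n)
    m_hi = mean + z * sd / math.sqrt(n)
    s_lo = 1.96 * sd * math.sqrt((n - 1) / chi2_quantile(0.5 + conf / 2, n - 1))
    s_hi = 1.96 * sd * math.sqrt((n - 1) / chi2_quantile(0.5 - conf / 2, n - 1))
    theta = mean + 1.96 * sd
    lower = theta - math.sqrt((mean - m_lo) ** 2 + (1.96 * sd - s_lo) ** 2)
    upper = theta + math.sqrt((m_hi - mean) ** 2 + (s_hi - 1.96 * sd) ** 2)
    return lower, upper

lo, up = mover_upper_loa(mean=0.5, sd=2.0, n=50)
```

All quantities have closed forms, which is the practical appeal highlighted in the abstract.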
Load-based approaches for modelling visual clarity in streams at regional scale.
Elliott, A H; Davies-Colley, R J; Parshotam, A; Ballantine, D
2013-01-01
Reduction of visual clarity in streams by diffuse sources of fine sediment is a cause of water quality impairment in New Zealand and internationally. In this paper we introduce the concept of a load of optical cross section (LOCS), which can be used for load-based management of light-attenuating substances and for water quality models that are based on mass accounting. In this approach, the beam attenuation coefficient (units of m(-1)) is estimated from the inverse of the visual clarity (units of m) measured with a black disc. This beam attenuation coefficient can also be considered as an optical cross section (OCS) per volume of water, analogous to a concentration. The instantaneous 'flux' of cross section is obtained from the attenuation coefficient multiplied by the water discharge, and this can be accumulated over time to give an accumulated 'load' of cross section (LOCS). Moreover, OCS is a conservative quantity, in the sense that the OCS of two combined water volumes is the sum of the OCS of the individual water volumes (barring effects such as coagulation, settling, or sorption). The LOCS can be calculated for a water quality station using rating curve methods applied to measured time series of visual clarity and flow. This approach was applied to the sites in New Zealand's National Rivers Water Quality Network (NRWQN). Although the attenuation coefficient follows roughly a power relation with flow at some sites, more flexible loess rating curves are required at other sites. The hybrid mechanistic-statistical catchment model SPARROW (SPAtially Referenced Regressions On Watershed attributes), which is based on a mass balance for mean annual load, was then applied to the NRWQN dataset. Preliminary results from this model are presented, highlighting the importance of factors related to erosion, such as rainfall, slope, hardness of catchment rock types, and the influence of pastoral development on the load of optical cross section.
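The LOCS bookkeeping itself is simple arithmetic: invert visual clarity to get the beam attenuation coefficient, multiply by discharge for the instantaneous flux of optical cross section, and accumulate over time. A sketch with made-up observations:

```python
# visual clarity y (m) and discharge Q (m^3/s) sampled at times t (s); values invented
t = [0, 3600, 7200, 10800]
clarity = [2.0, 0.5, 0.25, 1.0]       # black-disc visibility, m
discharge = [5.0, 20.0, 40.0, 10.0]   # m^3/s

atten = [1.0 / y for y in clarity]                  # beam attenuation c ~ 1/y, m^-1
flux = [c * q for c, q in zip(atten, discharge)]    # instantaneous OCS flux, m^2/s

# accumulate the load of optical cross section (LOCS) by the trapezoidal rule
locs = sum((flux[i] + flux[i + 1]) / 2 * (t[i + 1] - t[i])
           for i in range(len(t) - 1))   # -> 742500.0 m^2
```

In practice the abstract's rating-curve methods replace this direct summation when clarity is not measured continuously.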
Sell, Andrew; Fadaei, Hossein; Kim, Myeongsub; Sinton, David
2013-01-02
Predicting carbon dioxide (CO₂) security and capacity in sequestration requires knowledge of CO₂ diffusion into reservoir fluids. In this paper we demonstrate a microfluidic approach to measuring the mutual diffusion coefficient of carbon dioxide in water and brine. The approach enables the formation of fresh CO₂-liquid interfaces; the resulting diffusion is quantified by imaging the fluorescence quenching of a pH-dependent dye and subsequent analyses. This method was applied to study the effects of site-specific variables (CO₂ pressure and salinity levels) on the diffusion coefficient. In contrast to established macro-scale pressure-volume-temperature cell methods, which require large sample volumes and testing periods of hours to days, this approach requires only microliters of sample, provides results within minutes, and isolates diffusive mass transport from convective effects. The measured diffusion coefficient of CO₂ in water was constant (1.86 (± 0.26) × 10⁻⁹ m²/s) over the range of pressures tested (5-50 bar) at 26 °C, in agreement with existing models. The effects of salinity were measured with solutions of 0-5 M NaCl, over which the diffusion coefficient varied by up to a factor of 3. These experimental data support existing theory and demonstrate the applicability of this method for reservoir-specific testing.
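For intuition about why a microfluidic geometry suffices, a one-dimensional estimate using the reported coefficient shows the CO₂ front penetrates well under a millimetre in a minute. This is the standard erfc solution for diffusion into a semi-infinite medium from a fresh interface, a back-of-envelope sketch rather than the authors' analysis:

```python
import math

D = 1.86e-9  # m^2/s, the CO2-in-water diffusion coefficient reported above

def conc(x, t, D):
    """Relative concentration for 1-D diffusion from a fresh interface at x = 0."""
    return math.erfc(x / (2 * math.sqrt(D * t)))

# depth at which the relative concentration falls to ~0.5 after 60 s
t = 60.0
x_half = 0.4769 * 2 * math.sqrt(D * t)   # erfc(0.4769) ~ 0.5, so ~0.3 mm
```

Sub-millimetre penetration on a minutes timescale is exactly the length scale a microchannel resolves, consistent with the abstract's claim of fast, small-volume measurements.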
Band-edge absorption coefficients from photoluminescence in semiconductor multiple quantum wells
NASA Technical Reports Server (NTRS)
Kost, Alan; Zou, Yao; Dapkus, P. D.; Garmire, Elsa; Lee, H. C.
1989-01-01
A novel approach to determining absorption coefficients in thin films using luminescence is described. The technique avoids many of the difficulties typically encountered in measurements of thin samples (Fabry-Perot effects, for example) and can be applied to a variety of materials. The absorption edge of GaAs/AlGaAs multiple quantum well structures, with quantum well widths ranging from 54 to 193 Å, is examined. Urbach (1953) parameters and excitonic linewidths are tabulated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lanza, Mathieu; Lique, François, E-mail: francois.lique@univ-lehavre.fr
The determination of hyperfine-structure-resolved excitation cross sections and rate coefficients due to H₂ collisions is required to interpret astronomical spectra. In this paper, we present several theoretical approaches to compute these data: an almost exact recoupling approach and approximate sudden methods. We apply these different approaches to the HCl–H₂ collisional system in order to evaluate their respective accuracy. HCl–H₂ hyperfine-structure-resolved cross sections and rate coefficients are then computed using the recoupling and approximate sudden methods. As expected, the approximate sudden approaches are more accurate when the collision energy increases, and the results suggest that these approaches work better for the para-H₂ than for the ortho-H₂ colliding partner. For the first time, we present HCl–H₂ hyperfine-structure-resolved rate coefficients, computed here for temperatures ranging from 5 to 300 K. The usual Δj₁ = ΔF₁ propensity rules are observed for the hyperfine transitions. The new rate coefficients will significantly help the interpretation of interstellar HCl emission lines observed with current and future telescopes. We expect that these new data will allow a better determination of the HCl abundance in the interstellar medium, which is crucial to understanding interstellar chlorine chemistry.
Output MSE and PSNR prediction in DCT-based lossy compression of remote sensing images
NASA Astrophysics Data System (ADS)
Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem
2017-10-01
The amount and size of remote sensing (RS) images acquired by modern systems are so large that the data have to be compressed in order to be transferred, stored, and disseminated. Lossy compression is increasingly popular in such situations, but it has to be applied carefully, keeping the introduced distortions at an acceptable level so that valuable information contained in the data is not lost. The introduced losses therefore have to be controlled and predicted, which is problematic for many coders. In this paper, we analyze the possibility of predicting the mean square error or, equivalently, the PSNR for coders based on the discrete cosine transform (DCT), applied either to single-channel RS images or to multichannel data in a component-wise manner. The proposed approach is based on the direct dependence between the distortions introduced by DCT coefficient quantization and the losses in the compressed data. A further innovation is the possibility of employing only a limited number (percentage) of blocks for which the DCT coefficients have to be calculated. This accelerates prediction and makes it considerably faster than the compression itself. There are two other advantages of the proposed approach. First, it is applicable to both uniform and non-uniform quantization of DCT coefficients. Second, the approach is quite general, since it works for several analyzed DCT-based coders. The simulation results are obtained for standard test images and then verified for real-life RS data.
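The core of such a predictor can be sketched in one dimension: quantize the orthonormal DCT coefficients of a sampled subset of blocks and, by Parseval's relation, read the DCT-domain quantization error directly as the predicted spatial MSE. This toy version assumes uniform scalar quantization and 1-D blocks of synthetic data, unlike the real coders analyzed in the paper:

```python
import math
import random

N = 8  # block length

def dct(x):
    """Orthonormal 1-D DCT-II (Parseval: coefficient MSE equals sample MSE)."""
    return [math.sqrt(2 / N) * ((1 / math.sqrt(2)) if k == 0 else 1.0) *
            sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def quantize(c, step):
    """Uniform scalar quantization of DCT coefficients."""
    return [step * round(v / step) for v in c]

rng = random.Random(42)
signal = [rng.randint(0, 255) for _ in range(8 * 256)]
blocks = [signal[i:i + N] for i in range(0, len(signal), N)]

step = 16.0
errs = []
for b in blocks[::4]:          # use only 25% of the blocks, per the acceleration idea
    c = dct(b)
    errs.extend((v - w) ** 2 for v, w in zip(c, quantize(c, step)))

mse_pred = sum(errs) / len(errs)                  # predicted spatial MSE
psnr_pred = 10 * math.log10(255 ** 2 / mse_pred)  # predicted output PSNR, dB
```

Because the prediction never performs entropy coding or the inverse transform, it is far cheaper than actually compressing the image, which is the point of the approach.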
NASA Astrophysics Data System (ADS)
Baccar, D.; Söffker, D.
2017-11-01
Acoustic emission (AE) is a suitable method for monitoring the health of composite structures in real time. However, AE-based failure mode identification and classification are still complex to apply, because AE waves are generally released simultaneously from all AE-emitting damage sources. Hence, advanced signal processing techniques must be used in combination with pattern recognition approaches. In this paper, AE signals generated from a laminated carbon fiber reinforced polymer (CFRP) subjected to an indentation test are examined and analyzed. A new pattern recognition approach, involving a number of processing steps and able to be implemented in real time, is developed. Unlike common classification approaches, here only CWT coefficients are extracted as relevant features. First, the Continuous Wavelet Transform (CWT) is applied to the AE signals. Then, dimensionality reduction using Principal Component Analysis (PCA) is carried out on the coefficient matrices. The PCA-based feature distribution is analyzed using Kernel Density Estimation (KDE), allowing the determination of a specific pattern for each fault-specific AE signal. Moreover, the waveform and frequency content of the AE signals are examined in depth and compared with fundamental assumptions reported in this field. A correlation between the identified patterns and failure modes is achieved. The introduced method improves damage classification and can be used as a non-destructive evaluation tool.
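A compressed sketch of the CWT-plus-PCA feature step, using a Ricker wavelet in place of the paper's unspecified mother wavelet and synthetic bursts in place of real AE signals, might look like:

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet for a simple real-valued CWT."""
    t = np.arange(points) - (points - 1) / 2
    A = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
    return A * (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt(sig, scales, wlen=64):
    """CWT coefficient matrix: one convolution per scale."""
    return np.array([np.convolve(sig, ricker(wlen, a), mode="same")
                     for a in scales])

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)
# two toy "AE bursts" with different dominant frequencies (stand-ins for failure modes)
burst_a = np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 0.5) / 0.05) ** 2)
burst_b = np.sin(2 * np.pi * 150 * t) * np.exp(-((t - 0.5) / 0.05) ** 2)
signals = [burst_a + 0.05 * rng.standard_normal(512) for _ in range(5)] + \
          [burst_b + 0.05 * rng.standard_normal(512) for _ in range(5)]

scales = np.arange(1, 31)
feats = np.array([cwt(s, scales).ravel() for s in signals])

# PCA by SVD on the centred CWT coefficient matrix
X = feats - feats.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * S   # PCA scores; the leading component separates the two burst types
```

In the paper the score distributions are then smoothed with KDE to obtain one density pattern per failure mode; here the first principal component alone already separates the two synthetic classes.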
Electron transport in electrically biased inverse parabolic double-barrier structure
NASA Astrophysics Data System (ADS)
M, Bati; S, Sakiroglu; I, Sokmen
2016-05-01
A theoretical study of resonant tunneling is carried out for an inverse parabolic double-barrier structure subjected to an external electric field. The tunneling transmission coefficient and the density of states are analyzed using the non-equilibrium Green's function approach based on the finite difference method. It is found that the resonant peak of the transmission coefficient, which is unity in the symmetric case, is reduced under the applied electric field and depends strongly on the variation of the structure parameters.
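For a flavour of the method, a 1-D tight-binding chain (the discrete analogue of a finite-difference Hamiltonian) with analytic lead self-energies gives the NEGF transmission in a few lines; the barrier profile and all parameters below are hypothetical stand-ins, not the paper's values:

```python
import numpy as np

def transmission(E, potential, t=1.0):
    """NEGF transmission through a 1-D tight-binding (finite-difference) chain."""
    N = len(potential)
    H = (np.diag(np.asarray(potential, dtype=complex))
         + np.diag([-t] * (N - 1), 1) + np.diag([-t] * (N - 1), -1))
    # analytic retarded self-energy of a semi-infinite 1-D lead (|E| < 2t band)
    sigma = E / 2 - 1j * np.sqrt(t ** 2 - (E / 2) ** 2)
    SL = np.zeros((N, N), complex); SL[0, 0] = sigma
    SR = np.zeros((N, N), complex); SR[-1, -1] = sigma
    G = np.linalg.inv(E * np.eye(N) - H - SL - SR)   # retarded Green's function
    GL = 1j * (SL - SL.conj().T)                     # lead broadening matrices
    GR = 1j * (SR - SR.conj().T)
    return float(np.real(np.trace(GL @ G @ GR @ G.conj().T)))

def inverse_parabola(x, centre, width, height):
    """Truncated inverted parabola, mimicking the paper's barrier shape."""
    return np.maximum(height * (1 - ((x - centre) / width) ** 2), 0.0)

x = np.arange(40, dtype=float)
barrier = inverse_parabola(x, 12, 4, 0.6) + inverse_parabola(x, 27, 4, 0.6)
T_clean = transmission(0.1, np.zeros(40))   # no barrier: transmission is unity
T_double = transmission(0.1, barrier)       # symmetric double barrier
```

An applied field would be modelled by adding a linear ramp to `potential`, which breaks the symmetry and pulls the resonant peak below unity, as the abstract describes.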
NASA Astrophysics Data System (ADS)
Alawadi, Wisam; Al-Rekabi, Wisam S.; Al-Aboodi, Ali H.
2018-03-01
The Shiono and Knight Method (SKM) is widely used to predict the lateral distribution of depth-averaged velocity and boundary shear stress for flows in compound channels. Three calibrating coefficients need to be estimated to apply the SKM, namely the eddy viscosity coefficient (λ), the friction factor (f) and the secondary flow coefficient (k). Several tested methods can satisfactorily be used to estimate λ and f. However, the calibration of the secondary flow coefficient k to account correctly for secondary flow effects is still problematic. In this paper, the calibration of secondary flow coefficients is established by employing two approaches to estimate correct values of k for simulating an asymmetric compound channel with different side slopes of the internal wall. The first approach is based on Abril and Knight (2004), who suggest fixed values for the main channel and floodplain regions. In the second approach, the equations developed by Devi and Khatua (2017), which relate the variation of the secondary flow coefficients to the relative depth (β) and width ratio (α), are used. The results indicate that the calibration method developed by Devi and Khatua (2017) is a better choice for calibrating the secondary flow coefficients than the first approach, which assumes a fixed value of k for different flow depths. The results also indicate that a boundary condition based on shear force continuity can successfully be used for simulating rectangular compound channels, while continuity of the depth-averaged velocity and its gradient is the accepted boundary condition in simulations of trapezoidal compound channels. However, the SKM performance in predicting the boundary shear stress over the shear layer region may not be improved by only imposing suitably calibrated values of the secondary flow coefficients, because of the difficulty of modelling the complex interaction that develops between the flows in the main channel and on the floodplain in this region.
NASA Astrophysics Data System (ADS)
Noori, H.; Ranjbar, A. H.; Mahjour-Shafiei, M.
2017-11-01
A cold-cathode Penning ion generator (PIG) has been developed in our laboratory to study the interaction of charged particles with matter. The ignition voltage was measured in the presence of an axial magnetic field in the range of 460-580 G. The measurements were performed with stainless steel cathodes in argon gas at a pressure of 4 × 10⁻² mbar. A PIC-MCC (particle-in-cell, Monte Carlo collision) technique was used to calculate the electron multiplication coefficient M for various strengths of the axial magnetic field and applied voltage. An approach based on the coefficient M and experimental values of the secondary electron emission coefficient γ was proposed to determine the ignition voltages theoretically. Applying the values of the secondary coefficient γ leads to an average value of γM(V, B) = 1.05 ± 0.03 at ignition of the PIG, which satisfies the proposed ignition criterion. Thus, the ion-induced secondary electrons emitted from the cathode make the dominant contribution to the self-sustaining of the discharge process in a PIG.
Nedea, S V; van Steenhoven, A A; Markvoort, A J; Spijker, P; Giordano, D
2014-05-01
The influence of the gas-surface interactions of a dilute gas confined between two parallel walls on heat flux predictions is investigated using a combined Monte Carlo (MC) and molecular dynamics (MD) approach. The accommodation coefficients are computed from the temperatures of incident and reflected molecules in molecular dynamics and used as effective coefficients in Maxwell-like boundary conditions in Monte Carlo simulations. Hydrophobic and hydrophilic wall interactions are studied, and the effect of the gas-surface interaction potential on the heat flux and other characteristic parameters, such as density and temperature, is shown. The heat flux dependence on the accommodation coefficient is shown for different fluid-wall mass ratios. We find that the accommodation coefficient increases considerably when the mass ratio is decreased. An effective map of the heat flux as a function of the accommodation coefficient is given, and we show that MC heat flux predictions using Maxwell boundary conditions based on the accommodation coefficient give good results when compared to pure molecular dynamics predictions. The accommodation coefficients computed for a dilute gas for different gas-wall interaction parameters and mass ratios are transferred to compute heat flux predictions for a dense gas. Comparisons of the heat fluxes derived using explicit MD, MC with Maxwell-like boundary conditions based on the accommodation coefficients, and pure Maxwell boundary conditions are discussed. A map of the heat flux dependence on the accommodation coefficients for a dense gas, and the effective accommodation coefficients for different gas-wall interactions, are given. Finally, this approach is applied to study the gas-surface interactions of argon and xenon molecules on a platinum surface. The derived accommodation coefficients are compared with experimental values.
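The accommodation coefficient referred to above is commonly defined from the incident, reflected, and wall temperatures; a minimal sketch of that definition and the corresponding Maxwell-like reflected temperature, with illustrative numbers only:

```python
def accommodation(T_incident, T_reflected, T_wall):
    """Thermal accommodation coefficient: 1 = full accommodation, 0 = specular."""
    return (T_incident - T_reflected) / (T_incident - T_wall)

def maxwell_reflected(T_incident, T_wall, alpha):
    """Maxwell-like boundary condition: reflected temperature for a given alpha."""
    return T_incident - alpha * (T_incident - T_wall)

alpha = accommodation(400.0, 340.0, 300.0)        # -> 0.6
T_back = maxwell_reflected(400.0, 300.0, alpha)   # recovers 340.0
```

The MD stage of the paper's approach supplies the incident/reflected temperatures; the MC stage then uses the resulting alpha in the boundary condition.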
An instrumental variable random-coefficients model for binary outcomes
Chesher, Andrew; Rosen, Adam M
2014-01-01
In this paper, we study a random-coefficients model for a binary outcome. We allow for the possibility that some or even all of the explanatory variables are arbitrarily correlated with the random coefficients, thus permitting endogeneity. We assume the existence of observed instrumental variables Z that are jointly independent of the random coefficients, although we place no structure on the joint determination of the endogenous variable X and instruments Z, as would be required for a control function approach. The model fits within the spectrum of generalized instrumental variable models, and we thus apply identification results from our previous studies of such models to the present context, demonstrating their use. Specifically, we characterize the identified set for the distribution of random coefficients in the binary response model with endogeneity via a collection of conditional moment inequalities, and we investigate the structure of these sets by way of numerical illustration. PMID:25798048
AllergenFP: allergenicity prediction by descriptor fingerprints.
Dimitrov, Ivan; Naneva, Lyudmila; Doytchinova, Irini; Bangov, Ivan
2014-03-15
Allergenicity, like antigenicity and immunogenicity, is a property encoded both linearly and non-linearly; therefore, alignment-based approaches are not able to identify this property unambiguously. A novel alignment-free, descriptor-based fingerprint approach is presented here and applied to identify allergens and non-allergens. The approach was implemented as a four-step algorithm. Initially, the protein sequences are described by amino acid principal properties such as hydrophobicity, size, relative abundance, and helix- and β-strand-forming propensities. Then, the generated strings of different lengths are converted into vectors of equal length by auto- and cross-covariance (ACC). The vectors are transformed into binary fingerprints and compared in terms of the Tanimoto coefficient. The approach was applied to a set of 2427 known allergens and 2427 non-allergens and identified 88% of them correctly, with a Matthews correlation coefficient of 0.759. The descriptor fingerprint approach presented here is universal: it could be applied to any classification problem in computational biology. The set of E-descriptors is able to capture the main structural and physicochemical properties of the amino acids building the proteins. The ACC transformation overcomes the main problem in alignment-based comparative studies arising from the different lengths of the aligned protein sequences. The conversion of protein ACC values into binary descriptor fingerprints allows similarity search and classification. The algorithm described in the present study was implemented in a specially designed Web site, named AllergenFP (FP stands for FingerPrint). AllergenFP is written in Python, with a GUI in HTML. It is freely accessible at http://ddg-pharmfac.net/AllergenFP. idoytchinova@pharmfac.net or ivanbangov@shu-bg.net.
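The ACC-to-fingerprint-to-Tanimoto pipeline can be sketched with a single descriptor (a hypothetical hydrophobicity scale standing in for the E-descriptors) and auto-covariance only; the sequences and thresholding rule are illustrative, not the paper's:

```python
import statistics

# hypothetical hydrophobicity scale (illustrative values only)
HYDRO = {"A": 1.8, "K": -3.9, "E": -3.5, "G": -0.4, "Y": -1.3, "S": -0.8, "R": -4.5}

def auto_cov(seq, max_lag=4):
    """Auto-covariance transform: variable-length sequence -> fixed-length vector."""
    v = [HYDRO[a] for a in seq]
    n = len(v)
    m = sum(v) / n
    return [sum((v[i] - m) * (v[i + lag] - m) for i in range(n - lag)) / (n - lag)
            for lag in range(1, max_lag + 1)]

def fingerprint(vec, thresholds):
    """Binarize an ACC vector against per-position thresholds."""
    return [1 if x > t else 0 for x, t in zip(vec, thresholds)]

def tanimoto(f1, f2):
    """Tanimoto coefficient between two binary fingerprints."""
    both = sum(a & b for a, b in zip(f1, f2))
    either = sum(a | b for a, b in zip(f1, f2))
    return both / either if either else 1.0

seqs = ["AKEAAKEAKEAAKE", "AKEAAKEAKEAAKEAKEAAKE", "GYSRGYSRGYSR"]
accs = [auto_cov(s) for s in seqs]
thresholds = [statistics.median(col) for col in zip(*accs)]
fps = [fingerprint(v, thresholds) for v in accs]
sim = tanimoto(fps[0], fps[1])
```

Note how sequences of different lengths all map to four-component vectors, which is the property the abstract credits for avoiding alignment.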
Electromagnetic Inverse Methods and Applications for Inhomogeneous Media Probing and Synthesis.
NASA Astrophysics Data System (ADS)
Xia, Jake Jiqing
The electromagnetic inverse scattering problems concerned in this thesis are to find unknown inhomogeneous permittivity and conductivity profiles in a medium from scattering data. Both analytical and numerical methods are studied in the thesis. The inverse methods can be applied to geophysical medium probing, non-destructive testing, medical imaging, optical waveguide synthesis and material characterization. An introduction is given in Chapter 1. The first part of the thesis presents inhomogeneous media probing. The Riccati equation approach is discussed in Chapter 2 for a one-dimensional planar profile inversion problem. Two types of the Riccati equations are derived and distinguished. New renormalized formulae based on inverting one specific type of the Riccati equation are derived. Relations between the inverse methods of Green's function, the Riccati equation and the Gel'fand-Levitan-Marchenko (GLM) theory are studied. In Chapter 3, the renormalized source-type integral equation (STIE) approach is formulated for inversion of cylindrically inhomogeneous permittivity and conductivity profiles. The advantages of the renormalized STIE approach are demonstrated in numerical examples. The cylindrical profile inversion problem has an application to borehole inversion. In Chapter 4 the renormalized STIE approach is extended to a planar case where the two background media are different. Numerical results have shown fast convergence. This formulation is applied to inversion of underground soil moisture profiles in remote sensing. The second part of the thesis presents the synthesis problem of inhomogeneous dielectric waveguides using the electromagnetic inverse methods. As a particular example, the rational function representation of reflection coefficients in the GLM theory is used. The GLM method is reviewed in Chapter 5. Relations between modal structures and transverse reflection coefficients of an inhomogeneous medium are established in Chapter 6. 
A stratified medium model is used to derive the guidance condition and the reflection coefficient. Results obtained in Chapter 6 provide the physical foundation for applying the inverse methods for the waveguide design problem. In Chapter 7, a global guidance condition for continuously varying medium is derived using the Riccati equation. It is further shown that the discrete modes in an inhomogeneous medium have the same wave vectors as the poles of the transverse reflection coefficient. An example of synthesizing an inhomogeneous dielectric waveguide using a rational reflection coefficient is presented. A summary of the thesis is given in Chapter 8. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.).
NASA Astrophysics Data System (ADS)
Kundu, Arpan; Alrefae, Majed A.; Fisher, Timothy S.
2017-03-01
Using a semiclassical Boltzmann transport equation approach, we derive analytical expressions for electric and thermoelectric transport coefficients of graphene in the presence and absence of a magnetic field. Scattering due to acoustic phonons, charged impurities, and vacancies is considered in the model. Seebeck (Sxx) and Nernst (N) coefficients are evaluated as functions of carrier density, temperature, scatterer concentration, magnetic field, and induced band gap, and the results are compared to experimental data. Sxx is an odd function of Fermi energy, while N is an even function, as observed in experiments. The peak values of both coefficients are found to increase with decreasing scatterer concentration and increasing temperature. Furthermore, opening a band gap decreases N but increases Sxx. Applying a magnetic field introduces an asymmetry in the variation of Sxx with Fermi energy across the Dirac point. The formalism is more accurate and computationally efficient than the conventional Green's function approach used to model transport coefficients and can be used to explore transport properties of other materials with Dirac cones such as Weyl semimetals.
Artificial viscosity in Godunov-type schemes to cure the carbuncle phenomenon
NASA Astrophysics Data System (ADS)
Rodionov, Alexander V.
2017-09-01
This work presents a new approach for curing the carbuncle instability. The idea underlying the approach is to introduce some dissipation in the form of right-hand sides of the Navier-Stokes equations into the basic method of solving Euler equations; in so doing, we replace the molecular viscosity coefficient by the artificial viscosity coefficient and calculate heat conductivity assuming that the Prandtl number is constant. For the artificial viscosity coefficient we have chosen a formula that is consistent with the von Neumann and Richtmyer artificial viscosity, but has its specific features (extension to multidimensional simulations, introduction of a threshold compression intensity that restricts additional dissipation to the shock layer only). The coefficients and the expression for the characteristic mesh size in this formula are chosen from a large number of Quirk-type problem computations. The new cure for the carbuncle flaw has been tested on first-order schemes (Godunov, Roe, HLLC and AUSM+ schemes) as applied to one- and two-dimensional simulations on smooth structured grids. Its efficiency has been demonstrated on several well-known test problems.
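For reference, the classical one-dimensional von Neumann-Richtmyer artificial viscosity with which the chosen coefficient is consistent can be written as follows (textbook form only; the paper's multidimensional extension and threshold compression intensity are not reproduced here):

\[
q =
\begin{cases}
C_q\,\rho\,(\Delta x)^2 \left(\dfrac{\partial u}{\partial x}\right)^2, & \dfrac{\partial u}{\partial x} < 0,\\[4pt]
0, & \text{otherwise},
\end{cases}
\]

where \(C_q\) is a dimensionless constant, \(\rho\) the density and \(\Delta x\) the mesh size; the quadratic dependence on the velocity gradient concentrates the added dissipation in compression regions, i.e., in shock layers.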
Shear viscosity in monatomic liquids: a simple mode-coupling approach
NASA Astrophysics Data System (ADS)
Balucani, Umberto
The value of the shear-viscosity coefficient in fluids is controlled by the dynamical processes affecting the time decay of the associated Green-Kubo integrand, the stress autocorrelation function (SACF). These processes are investigated in monatomic liquids by means of a microscopic approach with a minimum use of phenomenological assumptions. In particular, mode-coupling effects (responsible for the presence in the SACF of a long-lasting 'tail') are accounted for by a simplified approach where the only requirement is knowledge of the structural properties. The theory readily yields quantitative predictions in its domain of validity, which comprises ordinary and moderately supercooled 'simple' liquids. The framework is applied to liquid Ar and Rb near their melting points, and quite satisfactory agreement with the simulation data is found for both the details of the SACF and the value of the shear-viscosity coefficient.
NASA Astrophysics Data System (ADS)
Ye, Huping; Li, Junsheng; Zhu, Jianhua; Shen, Qian; Li, Tongji; Zhang, Fangfang; Yue, Huanyin; Zhang, Bing; Liao, Xiaohan
2017-10-01
The absorption coefficient of water is an important bio-optical parameter for water optics and water color remote sensing. However, scattering correction is essential to obtain accurate in situ absorption coefficient values using the nine-wavelength absorption and attenuation meter AC9. The standard correction fails in Case 2 waters, where its assumption of zero absorption in the near-infrared (NIR) region does not hold, and it underestimates the absorption coefficient in the red region, which affects processes such as semi-analytical remote sensing inversion. In this study, the scattering contribution was evaluated by an exponential fitting approach using AC9 measurements at seven wavelengths (412, 440, 488, 510, 532, 555, and 715 nm), and the resulting scattering correction was applied. The correction was applied to representative in situ data of moderately turbid coastal water, highly turbid coastal water, eutrophic inland water, and turbid inland water. The results suggest that the absorption levels in the red and NIR regions are significantly higher than those obtained using standard scattering error correction procedures. Knowledge of the deviation between this method and the commonly used scattering correction methods will facilitate evaluation of the effect of different scattering-correction methods on satellite remote sensing of water constituents and on general optical research.
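The exponential-fitting idea can be sketched as follows (a hedged toy version, not the authors' exact procedure: we fit a scattering contribution of the assumed form e(λ) = A·exp(-S·λ) through log-linear least squares over the AC9 wavelengths):

```python
import math

def fit_exponential(lams, values):
    """Least-squares fit of ln(v) = ln(A) - S*lam; returns (A, S)."""
    n = len(lams)
    ys = [math.log(v) for v in values]
    xbar = sum(lams) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(lams, ys))
             / sum((x - xbar) ** 2 for x in lams))
    A = math.exp(ybar - slope * xbar)
    return A, -slope

# synthetic check: data generated from a known exponential is recovered
lams = [412, 440, 488, 510, 532, 555, 715]   # AC9 wavelengths (nm)
vals = [2.0 * math.exp(-0.005 * l) for l in lams]
A, S = fit_exponential(lams, vals)
print(A, S)
```

The fitted curve would then be subtracted from the apparent absorption to estimate the true absorption at each wavelength.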
Hispanic Population Growth and Rural Income Inequality
ERIC Educational Resources Information Center
Parrado, Emilio A.; Kandel, William A.
2010-01-01
We analyze the relationship between Hispanic population growth and changes in U.S. rural income inequality from 1990 through 2000. Applying comparative approaches used for urban areas we disentangle Hispanic population growth's contribution to inequality by comparing and statistically modeling changes in the family income Gini coefficient across…
Aoyagi, Miki; Nagata, Kenji
2012-06-01
The term algebraic statistics arises from the study of probabilistic models and techniques for statistical inference using methods from algebra and geometry (Sturmfels, 2009). The purpose of our study is to consider the generalization error and stochastic complexity in learning theory by using the log-canonical threshold in algebraic geometry. Such thresholds correspond to the main term of the generalization error in Bayesian estimation, which is called a learning coefficient (Watanabe, 2001a, 2001b). The learning coefficient serves to measure the learning efficiencies in hierarchical learning models. In this letter, we consider learning coefficients for Vandermonde matrix-type singularities by using a new approach: focusing on the generators of the ideal that defines the singularities. We give tight new bound values of learning coefficients for the Vandermonde matrix-type singularities, and the explicit values under certain conditions. By applying our results, we can show the learning coefficients of three-layered neural networks and normal mixture models.
Compaction trends of full stiffness tensor and fluid permeability in artificial shales
NASA Astrophysics Data System (ADS)
Beloborodov, Roman; Pervukhina, Marina; Lebedev, Maxim
2018-03-01
We present a methodology and describe a set-up that allows simultaneous acquisition of all five elastic coefficients of a transversely isotropic (TI) medium and its permeability in the direction parallel to the symmetry axis during mechanical compaction experiments. We apply the approach to synthetic shale samples and investigate the role of composition and applied stress on their elastic and transport properties. Compaction trends for the five elastic coefficients that fully characterize TI anisotropy of artificial shales are obtained for a porosity range from 40 per cent to 15 per cent. A linear increase of elastic coefficients with decreasing porosity is observed. The permeability acquired with the pressure-oscillation technique exhibits exponential decrease with decreasing porosity. Strong correlations are observed between an axial fluid permeability and seismic attributes, namely, VP/VS ratio and acoustic impedance, measured in the same direction. These correlations might be used to derive permeability of shales from seismic data given that their mineralogical composition is known.
Evaluating secular acceleration in geomagnetic field model GRIMM-3
NASA Astrophysics Data System (ADS)
Lesur, V.; Wardinski, I.
2012-12-01
Secular acceleration of the magnetic field is the rate of change of its secular variation. One of the main results of studying magnetic data collected by the German survey satellite CHAMP was the mapping of field acceleration and its evolution in time. Questions remain about the accuracy of the modeled acceleration and the effect of the applied regularization processes. We have evaluated to what extent the regularization affects the temporal variability of the Gauss coefficients. We also obtained results of temporal variability of the Gauss coefficients where alternative approaches to the usual smoothing norms have been applied for regularization. Except for the dipole term, the secular acceleration of the Gauss coefficients is fairly well described up to spherical harmonic degree 5 or 6. There is no clear evidence from observatory data that the spectrum of this acceleration is underestimated at the Earth's surface. Assuming a resistive mantle, the observed acceleration supports a characteristic time scale for the secular variation of the order of 11 years.
Modal phase measuring deflectometry
Huang, Lei; Xue, Junpeng; Gao, Bo; ...
2016-10-14
In this work, a model-based method is applied to phase measuring deflectometry; we name it modal phase measuring deflectometry. The height and slopes of the surface under test are represented by mathematical models and updated by optimizing the model coefficients to minimize the discrepancy between the reprojection in ray tracing and the actual measurement. The pose of the screen relative to the camera is pre-calibrated and further optimized together with the shape coefficients of the surface under test. Simulations and experiments are conducted to demonstrate the feasibility of the proposed approach.
Optimum filter-based discrimination of neutrons and gamma rays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amiri, Moslem; Prenosil, Vaclav; Cvachovec, Frantisek
2015-07-01
An optimum filter-based method for discrimination of neutrons and gamma-rays in a mixed radiation field is presented. The existing filter-based implementations of discriminators require sample pulse responses in advance of the experiment run to build the filter coefficients, which makes them less practical. Our novel technique creates the coefficients during the experiment and improves their quality gradually. Applied to several sets of mixed neutron and photon signals obtained through different digitizers using stilbene scintillator, this approach is analyzed and its discrimination quality is measured. (authors)
Standardization of domestic frying processes by an engineering approach.
Franke, K; Strijowski, U
2011-05-01
An approach was developed to enable better standardization of domestic frying of potato products. For this purpose, 5 domestic fryers differing in heating power and oil capacity were used. A well-defined frying process, using a highly standardized model product and a broad range of frying conditions, was carried out in these fryers, and the development of browning, an important quality parameter, was measured. Product-to-oil ratio, oil temperature, and frying time were varied. Quite different color changes were measured in the different fryers even though the same frying process parameters were applied. The specific energy consumption for water evaporation (spECWE) during frying, related to product amount, was determined for all frying processes to define an engineering parameter characterizing the frying process. A quasi-linear regression approach was applied to calculate this parameter from the frying process settings and fryer properties. The high significance of the regression coefficients and a coefficient of determination close to unity confirmed the suitability of this approach. Based on this regression equation, curves for standard frying conditions (SFC curves) were calculated, which describe the frying conditions required to obtain the same level of spECWE in the different domestic fryers. Comparison of browning results from the different fryers operated at conditions near the SFC curves confirmed the applicability of the approach. © 2011 Institute of Food Technologists®
Gilbert, Dorothea; Witt, Gesine; Smedes, Foppe; Mayer, Philipp
2016-06-07
Polymers are increasingly applied for the enrichment of hydrophobic organic chemicals (HOCs) from various types of samples and media in many analytical partitioning-based measuring techniques. We propose using polymers as a reference partitioning phase and introduce polymer-polymer partitioning as the basis for a deeper insight into partitioning differences of HOCs between polymers, calibrating analytical methods, and consistency checking of existing and calculation of new partition coefficients. Polymer-polymer partition coefficients were determined for polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), and organochlorine pesticides (OCPs) by equilibrating 13 silicones, including polydimethylsiloxane (PDMS) and low-density polyethylene (LDPE) in methanol-water solutions. Methanol as cosolvent ensured that all polymers reached equilibrium while its effect on the polymers' properties did not significantly affect silicone-silicone partition coefficients. However, we noticed minor cosolvent effects on determined polymer-polymer partition coefficients. Polymer-polymer partition coefficients near unity confirmed identical absorption capacities of several PDMS materials, whereas larger deviations from unity were indicated within the group of silicones and between silicones and LDPE. Uncertainty in polymer volume due to imprecise coating thickness or the presence of fillers was identified as the source of error for partition coefficients. New polymer-based (LDPE-lipid, PDMS-air) and multimedia partition coefficients (lipid-water, air-water) were calculated by applying the new concept of a polymer as reference partitioning phase and by using polymer-polymer partition coefficients as conversion factors. The present study encourages the use of polymer-polymer partition coefficients, recognizing that polymers can serve as a linking third phase for a quantitative understanding of equilibrium partitioning of HOCs between any two phases.
2013-01-01
Background The synthesis of information across microarray studies has been performed by combining statistical results of individual studies (as in a mosaic), or by combining data from multiple studies into a large pool to be analyzed as a single data set (as in a melting pot of data). Specific issues relating to data heterogeneity across microarray studies, such as differences within and between labs or differences among experimental conditions, could lead to equivocal results in a melting pot approach. Results We applied statistical theory to determine the specific effect of different means and heteroskedasticity across 19 groups of microarray data on the sign and magnitude of gene-to-gene Pearson correlation coefficients obtained from the pool of 19 groups. We quantified the biases of the pooled coefficients and compared them to the biases of correlations estimated by an effect-size model. Mean differences across the 19 groups were the main factor determining the magnitude and sign of the pooled coefficients, which showed largest values of bias as they approached ±1. Only heteroskedasticity across the pool of 19 groups resulted in less efficient estimations of correlations than did a classical meta-analysis approach of combining correlation coefficients. These results were corroborated by simulation studies involving either mean differences or heteroskedasticity across a pool of N > 2 groups. Conclusions The combination of statistical results is best suited for synthesizing the correlation between expression profiles of a gene pair across several microarray studies. PMID:23822712
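The paper's central finding — that mean differences across pooled groups dominate the pooled correlation — is easy to reproduce in a toy simulation (invented numbers; within each group the two "expression profiles" are independent, yet the pooled Pearson coefficient approaches 1):

```python
import random
import statistics

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(0)
x, y = [], []
for shift in (0.0, 5.0, 10.0):          # three groups with different means
    x += [shift + random.gauss(0, 1) for _ in range(50)]
    y += [shift + random.gauss(0, 1) for _ in range(50)]

r_pooled = pearson(x, y)
print(round(r_pooled, 2))               # far from 0 despite zero within-group correlation
```

A meta-analytic (effect-size) combination of the three within-group coefficients would instead stay near zero, which is the paper's argument for combining statistical results rather than pooling raw data.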
Genome-scale cluster analysis of replicated microarrays using shrinkage correlation coefficient.
Yao, Jianchao; Chang, Chunqi; Salmi, Mari L; Hung, Yeung Sam; Loraine, Ann; Roux, Stanley J
2008-06-18
Currently, clustering with some form of correlation coefficient as the gene similarity metric has become a popular method for profiling genomic data. The Pearson correlation coefficient and the standard deviation (SD)-weighted correlation coefficient are the two most widely-used correlations as the similarity metrics in clustering microarray data. However, these two correlations are not optimal for analyzing replicated microarray data generated by most laboratories. An effective correlation coefficient is needed to provide statistically sufficient analysis of replicated microarray data. In this study, we describe a novel correlation coefficient, shrinkage correlation coefficient (SCC), that fully exploits the similarity between the replicated microarray experimental samples. The methodology considers both the number of replicates and the variance within each experimental group in clustering expression data, and provides a robust statistical estimation of the error of replicated microarray data. The value of SCC is revealed by its comparison with two other correlation coefficients that are currently the most widely-used (Pearson correlation coefficient and SD-weighted correlation coefficient) using statistical measures on both synthetic expression data as well as real gene expression data from Saccharomyces cerevisiae. Two leading clustering methods, hierarchical and k-means clustering were applied for the comparison. The comparison indicated that using SCC achieves better clustering performance. Applying SCC-based hierarchical clustering to the replicated microarray data obtained from germinating spores of the fern Ceratopteris richardii, we discovered two clusters of genes with shared expression patterns during spore germination. Functional analysis suggested that some of the genetic mechanisms that control germination in such diverse plant lineages as mosses and angiosperms are also conserved among ferns. 
This study shows that SCC is an alternative to the Pearson correlation coefficient and the SD-weighted correlation coefficient, and is particularly useful for clustering replicated microarray data. This computational approach should be generally useful for proteomic data or other high-throughput analysis methodology.
Stefl, Martin; Kułakowska, Anna; Hof, Martin
2009-08-05
A new (to our knowledge) robust approach for the determination of lateral diffusion coefficients of weakly bound proteins is applied for the phosphatidylserine specific membrane interaction of bovine prothrombin. It is shown that z-scan fluorescence correlation spectroscopy in combination with pulsed interleaved dual excitation allows simultaneous monitoring of the lateral diffusion of labeled protein and phospholipids. Moreover, from the dependencies of the particle numbers on the axial sample positions at different protein concentrations phosphatidylserine-dependent equilibrium dissociation constants are derived confirming literature values. Increasing the amount of membrane-bound prothrombin retards the lateral protein and lipid diffusion, indicating coupling of both processes. The lateral diffusion coefficients of labeled lipids are considerably larger than the simultaneously determined lateral diffusion coefficients of prothrombin, which contradicts findings reported for the isolated N-terminus of prothrombin.
Lee, Kil Yong; Burnett, William C
A simple method for the direct determination of the air-loop volume in a RAD7 system, as well as the radon partition coefficient, was developed, allowing for an accurate measurement of the radon activity in any type of water. The air-loop volume may be measured directly using an external radon source and an empty bottle with a precisely measured volume. The partition coefficient and activity of radon in the water sample may then be determined via the RAD7 using the determined air-loop volume. Activity ratios, instead of absolute activities, were used to measure the air-loop volume and the radon partition coefficient. In order to verify this approach, we measured the radon partition coefficient in deionized water in the temperature range of 10-30 °C and compared the values to those calculated from the well-known Weigel equation. The results agreed to within 5% throughout the temperature range. We also applied the approach to measurement of the radon partition coefficient in synthetic saline water (0-75 ppt salinity) as well as tap water. The radon activity of the tap water sample was determined by this method as well as by the standard RAD-H2O and BigBottle RAD-H2O methods. The results show good agreement between this method and the standard methods.
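The benchmark used above is the Weigel equation for the radon partition (Ostwald) coefficient of deionized water. A small sketch of the commonly quoted form (quoted from memory; the function name is ours, and the expression should be checked against the original Weigel reference before quantitative use):

```python
import math

def weigel_partition_coefficient(t_celsius):
    """Radon water/air partition coefficient (dimensionless) for
    deionized water at temperature t_celsius, per the commonly
    quoted Weigel form."""
    return 0.105 + 0.405 * math.exp(-0.0502 * t_celsius)

for t in (10, 20, 30):                      # span of the verification range
    print(t, round(weigel_partition_coefficient(t), 3))
```

The coefficient falls with temperature, which is why the verification in the abstract spans 10-30 °C.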
A new approach for beam hardening correction based on the local spectrum distributions
NASA Astrophysics Data System (ADS)
Rasoulpour, Naser; Kamali-Asl, Alireza; Hemmati, Hamidreza
2015-09-01
The energy dependence of material absorption and the polychromatic nature of x-ray beams in Computed Tomography (CT) cause a phenomenon called "beam hardening". The purpose of this study is to provide a novel approach for Beam Hardening (BH) correction. This approach is based on the linear attenuation coefficients of Local Spectrum Distributions (LSDs) at various depths in a phantom. The proposed method includes two steps. First, the hardened spectra at various depths of the phantom (i.e., the LSDs) are estimated with the Expectation Maximization (EM) algorithm for an arbitrary thickness interval of known materials in the phantom. The performance of the LSD estimation technique is evaluated by applying random Gaussian noise to the transmission data. The linear attenuation coefficients with respect to the mean energies of the LSDs are then obtained. Second, a correction function based on the calculated attenuation coefficients is derived in order to correct the polychromatic raw data. Since a correction function is used to convert the polychromatic data to monochromatic data, the effect of BH in the proposed reconstruction should be reduced in comparison with polychromatic reconstruction. The proposed approach was assessed on phantoms comprising no more than two materials, but the correction function has been extended for use with phantoms constructed of more than two materials. The relative mean energy difference of the LSD estimates based on noise-free transmission data was less than 1.5%, and it remained acceptable when random Gaussian noise was applied to the transmission data. The cupping artifact in the proposed reconstruction method is effectively reduced, and the proposed reconstruction profile is more uniform than the polychromatic reconstruction profile.
Capture and dissociation in the complex-forming CH + H2 → CH2 + H, CH + H2 reactions.
González, Miguel; Saracibar, Amaia; Garcia, Ernesto
2011-02-28
The rate coefficients for the capture process CH + H2 → CH3 and the reactions CH + H2 → CH2 + H (abstraction) and CH + H2 → CH + H2 (exchange) have been calculated in the 200-800 K temperature range, using the quasiclassical trajectory (QCT) method and the most recent global potential energy surface. The reactions, which are of interest in combustion and in astrochemistry, proceed via the formation of long-lived CH3 collision complexes, and the three H atoms become equivalent. QCT rate coefficients for capture are in quite good agreement with experiments. However, an important zero point energy (ZPE) leakage problem occurs in the QCT calculations for the abstraction, exchange and inelastic exit channels. To account for this issue, a pragmatic but accurate approach has been applied, leading to a good agreement with experimental abstraction rate coefficients. Exchange rate coefficients have also been calculated using this approach. Finally, calculations employing QCT capture/phase space theory (PST) models have been carried out, leading to similar values for the abstraction rate coefficients as the QCT and previous quantum mechanical capture/PST methods. This suggests that QCT capture/PST models are a good alternative to the QCT method for this and similar systems.
Greer, Dennis H.
2012-01-01
Background and aims Grapevines growing in Australia are often exposed to very high temperatures and the question of how the gas exchange processes adjust to these conditions is not well understood. The aim was to develop a model of photosynthesis and transpiration in relation to temperature to quantify the impact of the growing conditions on vine performance. Methodology Leaf gas exchange was measured along the grapevine shoots in accordance with their growth and development over several growing seasons. Using a general linear statistical modelling approach, photosynthesis and transpiration were modelled against leaf temperature separated into bands and the model parameters and coefficients applied to independent datasets to validate the model. Principal results Photosynthesis, transpiration and stomatal conductance varied along the shoot, with early emerging leaves having the highest rates, but these declined as later emerging leaves increased their gas exchange capacities in accordance with development. The general linear modelling approach applied to these data revealed that photosynthesis at each temperature was additively dependent on stomatal conductance, internal CO2 concentration and photon flux density. The temperature-dependent coefficients for these parameters applied to other datasets gave a predicted rate of photosynthesis that was linearly related to the measured rates, with a 1 : 1 slope. Temperature-dependent transpiration was multiplicatively related to stomatal conductance and the leaf to air vapour pressure deficit and applying the coefficients also showed a highly linear relationship, with a 1 : 1 slope between measured and modelled rates, when applied to independent datasets. Conclusions The models developed for the grapevines were relatively simple but accounted for much of the seasonal variation in photosynthesis and transpiration. 
The goodness of fit in each case demonstrated that explicitly selecting leaf temperature as a model parameter, rather than including temperature intrinsically as is usually done in more complex models, was warranted. PMID:22567220
NASA Astrophysics Data System (ADS)
Wang, Haoqi; Chen, Jun; Brownjohn, James M. W.
2017-12-01
The spring-mass-damper (SMD) model with a pair of internal biomechanical forces is the simplest model of a walking pedestrian that represents his/her mechanical properties, and it can thus be used in vertical human-structure-interaction analysis. However, the values of SMD stiffness and damping, though very important, are typically taken from measurements on stationary people, owing to the lack of a parameter identification method for a walking pedestrian. This study adopts a step-by-step system identification approach known as the particle filter to simultaneously identify the stiffness, the damping coefficient, and the coefficients of the SMD model's biomechanical forces from ground reaction force (GRF) records. After a brief introduction to the SMD model, the proposed identification approach is explained in detail, with a focus on the theory of the particle filter and its integration with the SMD model. A numerical example first verifies the feasibility of the proposed approach, which is then applied to several experimental GRF records. Identification results demonstrate that the natural frequency and damping ratio of a walking pedestrian are not constant; their mean value and distribution depend on pacing frequency. The mean value of the first-order coefficient of the biomechanical force, which is expressed as a Fourier series, also has a linear relationship with pacing frequency. Higher-order coefficients do not show a clear relationship with pacing frequency but follow a log-normal distribution.
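The step-by-step flavour of the identification can be illustrated with a minimal bootstrap particle filter (a toy setup of our own, not the authors' SMD implementation: a constant stiffness-like parameter theta is recovered from noisy linear observations, with a small random-walk jitter so the particle cloud keeps exploring):

```python
import math
import random

random.seed(1)

theta_true = 4.0                      # assumed "true" parameter to recover
T, N = 200, 500                       # time steps, particles
signals = [random.uniform(0.5, 1.5) for _ in range(T)]
obs = [theta_true * s + random.gauss(0, 0.2) for s in signals]

particles = [random.uniform(0.0, 10.0) for _ in range(N)]   # flat prior
for s, y in zip(signals, obs):
    # propagate: random-walk jitter on the parameter estimate
    particles = [p + random.gauss(0, 0.02) for p in particles]
    # weight: Gaussian likelihood of the current observation
    w = [math.exp(-0.5 * ((y - p * s) / 0.2) ** 2) for p in particles]
    total = sum(w)
    w = [wi / total for wi in w]
    # resample proportionally to the weights
    particles = random.choices(particles, weights=w, k=N)

theta_hat = sum(particles) / N        # posterior mean estimate
print(theta_hat)
```

In the paper's setting the observation model is the SMD dynamics driven by the biomechanical forces and y is the measured GRF, but the propagate-weight-resample loop is the same.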
NASA Astrophysics Data System (ADS)
Li, Liangliang; Si, Yujuan; Jia, Zhenhong
2018-03-01
In this paper, a novel microscopy mineral image enhancement method based on adaptive thresholding in the non-subsampled shearlet transform (NSST) domain is proposed. First, the image is decomposed into one low-frequency sub-band and several high-frequency sub-bands. Second, gamma correction is applied to the low-frequency sub-band coefficients, and an improved adaptive threshold is adopted to suppress noise in the high-frequency sub-band coefficients. Third, the processed coefficients are reconstructed with the inverse NSST. Finally, an unsharp filter is used to enhance the details of the reconstructed image. Experimental results on various microscopy mineral images demonstrate that the proposed approach achieves better enhancement in terms of both objective and subjective metrics.
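The two per-coefficient operations can be sketched independently of the transform (the paper uses NSST and an improved adaptive threshold; here we show only plain gamma correction for a low-frequency band and standard soft thresholding for a high-frequency band, on invented coefficient lists):

```python
def gamma_correct(coeffs, gamma=0.7, peak=255.0):
    """Gamma correction on non-negative low-frequency coefficients."""
    return [peak * (c / peak) ** gamma for c in coeffs]

def soft_threshold(coeffs, t):
    """Shrink high-frequency coefficients toward zero by t (noise suppression)."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

low = [10.0, 100.0, 200.0]            # invented low-frequency coefficients
high = [-3.0, 0.5, 4.0]               # invented high-frequency coefficients
print([round(c, 1) for c in gamma_correct(low)])
print(soft_threshold(high, 1.0))      # [-2.0, 0.0, 3.0]
```

With gamma < 1 the dark end of the range is stretched (brightening), while soft thresholding zeroes small, noise-dominated coefficients and shrinks the rest.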
Kucza, Witold
2013-07-25
Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow are presented. The random walk (stochastic) and uniform dispersion (deterministic) models have been used to compute flow injection analysis (FIA) responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization method, respectively, have been applied to determine diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, thus validating both approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.
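A minimal version of the random-walk (stochastic) model: particles are advected by the parabolic Poiseuille profile and diffuse radially, with reflection at the axis and the wall. The 1-D radial walk is a simplification of a full cross-sectional walk, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters: tube radius a (m), mean velocity U (m/s),
# molecular diffusivity D (m^2/s); not taken from the paper.
a, U, D = 5e-4, 1e-2, 5e-10
dt, steps, n = 1e-3, 2000, 5000

r = a * np.sqrt(rng.random(n))            # uniform over the cross-section
z = np.zeros(n)
sig = np.sqrt(2.0 * D * dt)               # radial random-walk step size

for _ in range(steps):
    z += 2.0 * U * (1.0 - (r / a) ** 2) * dt   # Poiseuille advection
    r += rng.normal(0.0, sig, n)               # 1-D radial diffusion (simplified)
    r = np.abs(r)                              # reflect at the axis
    r = np.where(r > a, 2.0 * a - r, r)        # reflect at the wall

mean_z = z.mean()          # plug centroid, roughly U * steps * dt downstream
spread = z.std()           # axial dispersion of the plug
```

Fitting such simulated response curves to measured FIA peaks (e.g. with a genetic algorithm, as in the abstract) is what turns this forward model into a diffusion-coefficient estimator.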
NASA Astrophysics Data System (ADS)
Reddy, Ramakrushna; Nair, Rajesh R.
2013-10-01
This work deals with a methodology applied to seismic early warning systems, which are designed to provide real-time estimation of the magnitude of an event. We reappraise the work of Simons et al. (2006), who, on the basis of a wavelet approach, predicted a magnitude error of ±1. We verify and improve upon the methodology of Simons et al. (2006) by applying a support vector machine (SVM), a statistical learning method, to time-scale wavelet decompositions. We used data from 108 events in central Japan with magnitudes ranging from 3 to 7.4, recorded at KiK-net network stations at source-receiver distances of up to 150 km during the period 1998-2011. We applied a wavelet transform to the seismogram data and calculated scale-dependent threshold wavelet coefficients. These coefficients were then classified into low-magnitude and high-magnitude events by constructing a maximum-margin hyperplane between the two classes, which forms the essence of SVMs. Further, the classified events from both classes were picked up and linear regressions were fitted to determine the relationship between wavelet coefficient magnitude and earthquake magnitude, which in turn allowed us to estimate the earthquake magnitude of an event given its threshold wavelet coefficient. At wavelet scale number 7, we predicted the earthquake magnitude of an event within 2.7 s, meaning that a magnitude determination is available within 2.7 s after the initial onset of the P-wave. These results shed light on the application of SVMs as a way to choose the optimal regression function to estimate the magnitude from a few seconds of an incoming seismogram. This improves upon the approach of Simons et al. (2006), which uses an average of two regression functions to estimate the magnitude.
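The final regression step, estimating magnitude from a threshold wavelet coefficient, can be illustrated on synthetic data; the assumed log-linear growth of the coefficient with magnitude and all numbers below are stand-ins, and the SVM classification stage is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in: assume the scale-7 threshold coefficient grows
# roughly exponentially with magnitude, C ~ 10**(0.5*M), plus noise.
M = rng.uniform(3.0, 7.4, 108)
logC = 0.5 * M + rng.normal(0.0, 0.1, 108)

# Least-squares line logC = a*M + b, then invert it to predict M.
A = np.vstack([M, np.ones_like(M)]).T
(a, b), *_ = np.linalg.lstsq(A, logC, rcond=None)

def predict_magnitude(log_coeff):
    return (log_coeff - b) / a

M_hat = predict_magnitude(0.5 * 6.0)   # coefficient of a magnitude-6 event
```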
The biodegradation of organic contaminants in the subsurface has become a major focus of attention, in part, due to the tremendous interest in applying in situ biodegradation and natural attenuation approaches for site remediation. The biodegradation and trans...
NASA Astrophysics Data System (ADS)
Danaeifar, Mohammad; Granpayeh, Nosrat
2018-03-01
An analytical method is presented to analyze and synthesize bianisotropic metasurfaces. The equivalent parameters of metasurfaces in terms of meta-atom properties and other specifications of metasurfaces are derived. These parameters are related to electric, magnetic, and electromagnetic/magnetoelectric dipole moments of the bianisotropic media, and they can simplify the analysis of complicated and multilayer structures. A metasurface of split ring resonators is studied as an example demonstrating the proposed method. The optical properties of the meta-atom are explored, and the calculated polarizabilities are applied to find the reflection coefficient and the equivalent parameters of the metasurface. Finally, a structure consisting of two metasurfaces of the split ring resonators is provided, and the proposed analytical method is applied to derive the reflection coefficient. The validity of this analytical approach is verified by full-wave simulations which demonstrate good accuracy of the equivalent parameter method. This method can be used in the analysis and synthesis of bianisotropic metasurfaces with different materials and in different frequency ranges by considering electric, magnetic, and electromagnetic/magnetoelectric dipole moments.
Torrecilla, José S; García, Julián; García, Silvia; Rodríguez, Francisco
2011-03-04
The combination of lag-k autocorrelation coefficients (LCCs) and thermogravimetric analyzer (TGA) equipment is defined here as a tool to detect and quantify adulteration of extra virgin olive oil (EVOO) with refined olive (ROO), refined olive pomace (ROPO), sunflower (SO) or corn (CO) oils, when the concentration of the adulterating agent is less than 14%. The LCC is calculated from TGA scans of adulterated EVOO samples. Then, the standardized skewness of this coefficient is applied to classify pure and adulterated EVOO samples. In addition, this chaotic parameter is also used to quantify the concentration of adulterating agents, using linear correlations of LCCs with ROO, ROPO, SO or CO content in 462 adulterated EVOO samples. In the case of detection, more than 82% of adulterated samples were correctly classified. In the case of quantification of adulterant concentration, by an external validation process, the LCC/TGA approach estimates the adulterant concentration with a mean correlation coefficient (estimated versus real adulterant concentration) greater than 0.90 and a mean square error less than 4.9%. Copyright © 2011 Elsevier B.V. All rights reserved.
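The two statistics the method relies on can be computed directly; the exact LCC definition used by the authors is an assumption here.

```python
import numpy as np

def lag_k_autocorr(x, k=1):
    """Lag-k autocorrelation coefficient of a 1-D series (an assumption
    about the paper's LCC definition)."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    return (xc[:-k] @ xc[k:]) / (xc @ xc)

def standardized_skewness(x):
    """Sample skewness divided by its large-sample standard error."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    skew = (xc ** 3).mean() / (xc ** 2).mean() ** 1.5
    return skew / np.sqrt(6.0 / n)

# A smooth "TGA-like" decay has an LCC near 1; white noise near 0.
t = np.linspace(0.0, 1.0, 500)
smooth = np.exp(-3.0 * t)
noise = np.random.default_rng(4).normal(size=500)
```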
Gravity Field Recovery from the Cartwheel Formation by the Semi-analytical Approach
NASA Astrophysics Data System (ADS)
Li, Huishu; Reubelt, Tilo; Antoni, Markus; Sneeuw, Nico; Zhong, Min; Zhou, Zebing
2016-04-01
Past and current gravimetric satellite missions have contributed substantially to our knowledge of the Earth's gravity field. Nevertheless, several geoscience disciplines push for even higher requirements on the accuracy, homogeneity, and temporal and spatial resolution of the Earth's gravity field. Apart from better instruments or new observables, alternative satellite formations could improve the signal and error structure. Compared with other methods, one significant advantage of the semi-analytical approach is its effective pre-mission error assessment for gravity field missions. The semi-analytical approach builds a linear analytical relationship between the Fourier spectrum of the observables and the spherical harmonic spectrum of the gravity field. The spectral link between observables and gravity field parameters is given by the transfer coefficients, which constitute the observation model. In connection with a stochastic model, it can be used for pre-mission error assessment of gravity field missions. The cartwheel formation is formed by two satellites on elliptic orbits in the same plane. The time-dependent ranging is considered in the transfer coefficients via convolution, including the series expansion of the eccentricity functions. The transfer coefficients are applied to assess the error patterns in range-rate and range-acceleration observables caused by different orientations of the cartwheel. This work presents the isotropy and magnitude of the formal errors of the gravity field coefficients for different orientations of the cartwheel.
A numerical model for boiling heat transfer coefficient of zeotropic mixtures
NASA Astrophysics Data System (ADS)
Barraza Vicencio, Rodrigo; Caviedes Aedo, Eduardo
2017-12-01
Zeotropic mixtures never have the same liquid and vapor composition in liquid-vapor equilibrium. Also, the bubble and dew points are separated; this gap is called the glide temperature (Tglide). These characteristics have made such mixtures suitable for cryogenic Joule-Thomson (JT) refrigeration cycles: using zeotropic mixtures as the working fluid in JT cycles improves their performance by an order of magnitude. Optimization of JT cycles has gained substantial importance for cryogenic applications (e.g., gas liquefaction, cryosurgery probes, cooling of infrared sensors, cryopreservation, and biomedical samples). Heat exchanger design in these cycles is critical; consequently, the heat transfer coefficient and pressure drop of two-phase zeotropic mixtures are relevant. In this work, a methodology is applied to calculate local convective heat transfer coefficients based on the law-of-the-wall approach for turbulent flows. The flow and heat transfer characteristics of zeotropic mixtures in a heated horizontal tube are investigated numerically. The temperature profile and heat transfer coefficient for zeotropic mixtures of different bulk compositions are analysed. The numerical model has been developed and applied locally to fully developed two-phase annular flow in a duct with constant wall temperature. Numerical results have been obtained using this model, taking into account the continuity, momentum, and energy equations. Local heat transfer coefficient results are compared with available experimental data published by Barraza et al. (2016), and they show good agreement.
NASA Astrophysics Data System (ADS)
Liao, Meng; To, Quy-Dong; Léonard, Céline; Monchiet, Vincent
2018-03-01
In this paper, we use the molecular dynamics simulation method to study gas-wall boundary conditions. Discrete scattering information of gas molecules at the wall surface is obtained from collision simulations. The collision data can be used to identify the accommodation coefficients of parametric wall models such as the Maxwell and Cercignani-Lampis scattering kernels. Since these scattering kernels are based on a limited number of accommodation coefficients, we adopt non-parametric statistical methods to construct the kernel and overcome this limitation. Unlike parametric kernels, non-parametric kernels require no parameters (i.e. accommodation coefficients) and no predefined distribution. We also propose approaches to derive directly the Navier friction and Kapitza thermal resistance coefficients, as well as other interface coefficients associated with moment equations, from the non-parametric kernels. The methods are applied successfully to systems composed of CH4 or CO2 and graphite, which are of interest to the petroleum industry.
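The idea of identifying an accommodation coefficient from collision data can be sketched for the Maxwell kernel: synthetic collisions are generated with a known coefficient, and a tangential-momentum estimator recovers it. Velocity scales and the re-emission spread are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

alpha_true = 0.7          # assumed accommodation coefficient of the wall
n = 200_000
v_in = rng.normal(300.0, 100.0, n)     # incident tangential velocities (m/s)

# Maxwell kernel: with probability alpha the molecule thermalises
# (diffuse re-emission, zero mean tangential velocity), otherwise it
# reflects specularly and keeps its tangential velocity.
diffuse = rng.random(n) < alpha_true
v_out = np.where(diffuse, rng.normal(0.0, 150.0, n), v_in)

# TMAC estimator: fraction of mean tangential momentum lost at the wall.
alpha_hat = 1.0 - v_out.mean() / v_in.mean()
```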
Bulk diffusion in a kinetically constrained lattice gas
NASA Astrophysics Data System (ADS)
Arita, Chikashi; Krapivsky, P. L.; Mallick, Kirone
2018-03-01
In the hydrodynamic regime, the evolution of a stochastic lattice gas with symmetric hopping rules is described by a diffusion equation with density-dependent diffusion coefficient encapsulating all microscopic details of the dynamics. This diffusion coefficient is, in principle, determined by a Green-Kubo formula. In practice, even when the equilibrium properties of a lattice gas are analytically known, the diffusion coefficient cannot be computed except when a lattice gas additionally satisfies the gradient condition. We develop a procedure to systematically obtain analytical approximations for the diffusion coefficient for non-gradient lattice gases with known equilibrium. The method relies on a variational formula found by Varadhan and Spohn which is a version of the Green-Kubo formula particularly suitable for diffusive lattice gases. Restricting the variational formula to finite-dimensional sub-spaces allows one to perform the minimization and gives upper bounds for the diffusion coefficient. We apply this approach to a kinetically constrained non-gradient lattice gas in two dimensions, viz. to the Kob-Andersen model on the square lattice.
Dettmer, Jan; Dosso, Stan E; Holland, Charles W
2008-03-01
This paper develops a joint time/frequency-domain inversion for high-resolution single-bounce reflection data, with the potential to resolve fine-scale profiles of sediment velocity, density, and attenuation over small seafloor footprints (approximately 100 m). The approach utilizes sequential Bayesian inversion of time- and frequency-domain reflection data, employing ray-tracing inversion for reflection travel times and a layer-packet stripping method for spherical-wave reflection-coefficient inversion. Posterior credibility intervals from the travel-time inversion are passed on as prior information to the reflection-coefficient inversion. Within the reflection-coefficient inversion, parameter information is passed from one layer packet inversion to the next in terms of marginal probability distributions rotated into principal components, providing an efficient approach to (partially) account for multi-dimensional parameter correlations with one-dimensional, numerical distributions. Quantitative geoacoustic parameter uncertainties are provided by a nonlinear Gibbs sampling approach employing full data error covariance estimation (including nonstationary effects) and accounting for possible biases in travel-time picks. Posterior examination of data residuals shows the importance of including data covariance estimates in the inversion. The joint inversion is applied to data collected on the Malta Plateau during the SCARAB98 experiment.
Huang, Shi; MacKinnon, David P.; Perrino, Tatiana; Gallo, Carlos; Cruden, Gracelyn; Brown, C Hendricks
2016-01-01
Mediation analysis often requires larger sample sizes than main effect analysis to achieve the same statistical power. Combining results across similar trials may be the only practical option for increasing statistical power for mediation analysis in some situations. In this paper, we propose a method to estimate: 1) marginal means for mediation path a, the relation of the independent variable to the mediator; 2) marginal means for path b, the relation of the mediator to the outcome, across multiple trials; and 3) the between-trial-level variance-covariance matrix based on a bivariate normal distribution. We present the statistical theory and an R computer program to combine regression coefficients from multiple trials to estimate a combined mediated effect and confidence interval under a random effects model. The values of coefficients a and b, along with their standard errors from each trial, are the input for the method. This marginal-likelihood-based approach with Monte Carlo confidence intervals provides more accurate inference than the standard meta-analytic approach. We discuss computational issues, apply the method to two real-data examples, and make recommendations for the use of the method in different settings. PMID:28239330
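A sketch of the combination step: inverse-variance marginal means for a and b, then a Monte Carlo interval for the mediated effect a*b. For brevity this uses a fixed-effect simplification rather than the paper's random-effects model, and the per-trial numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Per-trial estimates of paths a and b with their standard errors
# (illustrative numbers, not the paper's data).
a  = np.array([0.30, 0.25, 0.35]);  se_a = np.array([0.05, 0.06, 0.05])
b  = np.array([0.40, 0.45, 0.38]);  se_b = np.array([0.07, 0.08, 0.06])

# Fixed-effect (inverse-variance) marginal means for a and b.
wa, wb = 1.0 / se_a**2, 1.0 / se_b**2
a_bar, b_bar = (wa * a).sum() / wa.sum(), (wb * b).sum() / wb.sum()
se_abar, se_bbar = wa.sum() ** -0.5, wb.sum() ** -0.5

# Monte Carlo confidence interval for the mediated effect a*b.
draws = rng.normal(a_bar, se_abar, 100_000) * rng.normal(b_bar, se_bbar, 100_000)
lo, hi = np.percentile(draws, [2.5, 97.5])
ab = a_bar * b_bar        # combined point estimate of the mediated effect
```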
NASA Astrophysics Data System (ADS)
Sadeghi, Arman
2018-03-01
Modeling of fluid flow in polyelectrolyte layer (PEL)-grafted microchannels is challenging due to their two-layer nature. Hence, the pertinent studies are limited only to circular and slit geometries for which matching the solutions for inside and outside the PEL is simple. In this paper, a simple variational-based approach is presented for the modeling of fully developed electroosmotic flow in PEL-grafted microchannels by which the whole fluidic area is considered as a single porous medium of variable properties. The model is capable of being applied to microchannels of a complex cross-sectional area. As an application of the method, it is applied to a rectangular microchannel of uniform PEL properties. It is shown that modeling a rectangular channel as a slit may lead to considerable overestimation of the mean velocity especially when both the PEL and electric double layer (EDL) are thick. It is also demonstrated that the mean velocity is an increasing function of the fixed charge density and PEL thickness and a decreasing function of the EDL thickness and PEL friction coefficient. The influence of the PEL thickness on the mean velocity, however, vanishes when both the PEL thickness and friction coefficient are sufficiently high.
Algebraic approach to small-world network models
NASA Astrophysics Data System (ADS)
Rudolph-Lilith, Michelle; Muller, Lyle E.
2014-01-01
We introduce an analytic model for directed Watts-Strogatz small-world graphs and deduce an algebraic expression of its defining adjacency matrix. The latter is then used to calculate the small-world digraph's asymmetry index and clustering coefficient in an analytically exact fashion, valid nonasymptotically for all graph sizes. The proposed approach is general and can be applied to all algebraically well-defined graph-theoretical measures, thus allowing for an analytical investigation of finite-size small-world graphs.
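The algebraic route from an adjacency matrix to a clustering coefficient can be shown on the undirected ring-lattice substrate of the Watts-Strogatz model (the paper treats the directed case); for k = 2 neighbours per side the exact value is 0.5.

```python
import numpy as np

def ring_lattice(n, k=2):
    """Adjacency matrix of a ring where each node links to its k nearest
    neighbours on each side (the Watts-Strogatz substrate, before rewiring)."""
    A = np.zeros((n, n), dtype=int)
    for d in range(1, k + 1):
        idx = np.arange(n)
        A[idx, (idx + d) % n] = A[(idx + d) % n, idx] = 1
    return A

def global_clustering(A):
    """Global clustering coefficient computed algebraically:
    closed triples / connected triples, via powers of A."""
    deg = A.sum(axis=1)
    closed = np.trace(np.linalg.matrix_power(A, 3))   # 6 * number of triangles
    triples = (deg * (deg - 1)).sum()                 # ordered 2-paths
    return closed / triples

C = global_clustering(ring_lattice(40, k=2))   # exact value is 0.5 for k=2
```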
Wang, Yong; Ma, Xiaolei; Liu, Yong; Gong, Ke; Henricakson, Kristian C.; Xu, Maozeng; Wang, Yinhai
2016-01-01
This paper proposes a two-stage algorithm to simultaneously estimate the origin-destination (OD) matrix, link choice proportions, and dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared errors (RMSE) of the estimated OD demand and link flows are used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and to produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of the methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers' route choice behavior. PMID:26761209
Borrmann effect in resonant diffraction of X-rays
NASA Astrophysics Data System (ADS)
Oreshko, A. P.
2013-08-01
A dynamical theory of resonant diffraction (occurring when the energy of the incident radiation is close to the energy of an absorption edge of an element in the composition of a given substance) of synchrotron X-rays is developed in the two-wave approximation in the coplanar Laue geometry for large grazing angles in perfect crystals. A sharp decrease in the absorption coefficient of the substance when the diffraction conditions are simultaneously satisfied (the Borrmann effect) is demonstrated, and the theoretical and first experimental results are compared. The calculations reveal the possibility of applying this approach to analyzing the quadrupole-quadrupole contribution to the absorption coefficient.
Miyamoto, Shuichi; Atsuyama, Kenji; Ekino, Keisuke; Shin, Takashi
2018-01-01
The isolation of useful microbes is one of the traditional approaches to lead generation in drug discovery. As an effective technique for microbe isolation, we recently developed a multidimensional diffusion-based gradient culture system for microbes. To enhance the utility of the system, it is favorable to know the diffusion coefficients of nutrients such as sugars in the culture medium beforehand. We have therefore built a simple and convenient experimental system that uses agar gel to observe diffusion. Next, we performed computer simulations of the experimental diffusion system, based on random-walk concepts, and derived correlation formulas that relate observable diffusion data to diffusion coefficients. Finally, we applied these correlation formulas to our experimentally determined diffusion data to estimate the diffusion coefficients of sugars. Our values for these coefficients agree reasonably well with values published in the literature. The effectiveness of our simple technique, which has elucidated the diffusion coefficients of some molecules for which values are rarely reported (e.g., galactose, trehalose, and glycerol), is demonstrated by the strong correspondence between the literature values and those obtained in our experiments.
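The random-walk idea behind the simulations can be sketched directly: the step variance sets the diffusion coefficient, which is then recovered from the mean squared displacement. Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# 1-D random walk mimicking molecular diffusion: the step variance per
# time step dt fixes the diffusion coefficient through MSD(t) = 2*D*t.
D_true = 5e-10        # m^2/s, order of magnitude for a sugar in water
dt, steps, n = 1.0, 500, 4000
x = np.cumsum(rng.normal(0.0, np.sqrt(2.0 * D_true * dt), (steps, n)), axis=0)

t = dt * np.arange(1, steps + 1)
msd = (x ** 2).mean(axis=1)          # mean squared displacement over particles
D_est = (t @ msd) / (2.0 * t @ t)    # least-squares fit of MSD = 2*D*t
```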
Braun, Andreas Christian; Koch, Barbara
2016-10-01
Monitoring the impacts of land-use practices is of particular importance with regard to biodiversity hotspots in developing countries. Here, conserving the high level of unique biodiversity is challenged by limited possibilities for data collection on site. Especially for such scenarios, assisting biodiversity assessments with remote sensing has proven useful. Remote sensing techniques can be applied to interpolate between biodiversity assessments taken in situ. Through this approach, estimates of biodiversity for entire landscapes can be produced, relating land-use intensity to biodiversity conditions. Such maps are a valuable basis for developing biodiversity conservation plans. Several approaches have been published so far to interpolate local biodiversity assessments using remote sensing data. In the following, a new approach is proposed. Instead of inferring biodiversity using environmental variables or the variability of spectral values, a hypothesis-based approach is applied: empirical knowledge about biodiversity in relation to land use is formalized and applied as ascription rules for image data. The method is exemplified for a large study site (over 67,000 km²) in central Chile, where the forest industry heavily impacts plant diversity. The proposed approach yields a correlation coefficient of 0.73 and produces a convincing estimate of regional biodiversity. The framework is broad enough to be applied to other study sites.
Use of the Budyko Framework to Estimate the Virtual Water Content in Shijiazhuang Plain, North China
NASA Astrophysics Data System (ADS)
Zhang, E.; Yin, X.
2017-12-01
One of the most challenging steps in analyzing the virtual water content (VWC) of agricultural crops is properly assessing the volume of consumptive water use (CWU) for crop production. In practice, CWU is considered equivalent to the crop evapotranspiration (ETc). Following the crop coefficient method, ETc can be calculated under standard or non-standard conditions by multiplying the reference evapotranspiration (ET0) by one or a few coefficients. However, when actual crop growing conditions deviate from standard conditions, accurately determining the coefficients remains a complicated process that requires extensive field experimental data. Based on the regional surface water-energy balance, this research integrates the Budyko framework into the traditional crop coefficient approach to simplify coefficient determination. This new method enables us to assess agricultural VWC at the regional scale based only on hydrometeorological data and agricultural statistics. To demonstrate the new method, we apply it to the Shijiazhuang Plain, an agricultural irrigation area in the North China Plain. The VWC of winter wheat and summer maize is calculated, and we further subdivide VWC into blue and green water components. Compared with previous studies of this area, VWC calculated by the Budyko-based crop coefficient approach uses less data and agrees well with earlier research. This new method may thus serve as a more convenient tool for assessing VWC.
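The Budyko step can be illustrated with Fu's common parameterization of the curve; the shape parameter and the input values below are illustrative, not calibrated to the Shijiazhuang Plain.

```python
def fu_budyko(P, PET, omega=2.6):
    """Fu's form of the Budyko curve: mean-annual actual ET from
    precipitation P and potential ET (PET). The shape parameter omega
    is a commonly used default, not a calibrated value."""
    phi = PET / P                  # aridity index
    return P * (1.0 + phi - (1.0 + phi ** omega) ** (1.0 / omega))

# Illustrative values in mm/yr (not Shijiazhuang data): a semi-arid site.
ET = fu_budyko(P=500.0, PET=1000.0)
```

By construction the result respects both the water limit (ET < P) and the energy limit (ET < PET), which is what makes the curve useful as a stand-in for detailed crop-coefficient corrections.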
Chiarelli, Antonio M.; Maclin, Edward L.; Low, Kathy A.; Fantini, Sergio; Fabiani, Monica; Gratton, Gabriele
2017-01-01
Near infrared (NIR) light has been widely used for measuring changes in hemoglobin concentration in the human brain (functional NIR spectroscopy, fNIRS). fNIRS is based on the differential measurement and estimation of absorption perturbations, which, in turn, are based on correctly estimating the absolute parameters of light propagation. To do so, it is essential to accurately characterize the baseline optical properties of tissue (absorption and reduced scattering coefficients). However, because of the diffusive properties of the medium, separate determination of absorption and scattering across the head is challenging. The effective attenuation coefficient (EAC), which is proportional to the geometric mean of absorption and reduced scattering coefficients, can be estimated in a simpler fashion by multidistance light decay measurements. EAC mapping could be of interest for the scientific community because of its absolute information content, and because light propagation is governed by the EAC for source–detector distances exceeding 1 cm, which sense depths extending beyond the scalp and skull layers. Here, we report an EAC mapping procedure that can be applied to standard fNIRS recordings, yielding topographic maps with 2- to 3-cm resolution. Application to human data indicates the importance of venous sinuses in determining regional EAC variations, a factor often overlooked. PMID:28466026
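The multidistance estimate of the EAC can be sketched as a slope fit: in the diffusion regime the detected intensity falls off roughly as exp(-EAC·ρ)/ρ², so the slope of ln(ρ²·I) versus ρ gives -EAC. Numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

mu_eff_true = 1.2     # 1/cm, a plausible head-tissue effective attenuation
rho = np.linspace(1.5, 3.5, 9)                 # source-detector distances (cm)
I = np.exp(-mu_eff_true * rho) / rho ** 2      # diffusion-regime decay
I *= 1.0 + rng.normal(0.0, 0.01, rho.size)     # 1% measurement noise

# Slope of ln(rho^2 * I) versus rho gives -EAC.
y = np.log(rho ** 2 * I)
slope, _ = np.polyfit(rho, y, 1)
eac = -slope
```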
NASA Astrophysics Data System (ADS)
Warliani, Resti; Muslim, Setiawan, Wawan
2017-05-01
This study aims to determine the improvement in conceptual understanding of senior high school students through the 7E Learning Cycle with a technology-based constructivist teaching (TBCT) approach. The study uses a pretest-posttest control group design. The participants were 67 eleventh-grade high school students in Garut city, divided into a control class and an experiment class. The experiment class applied the 7E Learning Cycle with the TBCT approach, and the control class applied the 7E Learning Cycle with a constructivist teaching (CT) approach. Data were collected with a mechanical wave concept test of 22 questions, whose reliability coefficient was found to be 0.86. The findings show that the gain in understanding in the experiment class (0.51) was higher than that in the control class (0.33).
Campacci, Natalia; de Lima, Juliana O; Carvalho, André L; Michelli, Rodrigo D; Haikel, Rafael; Mauad, Edmundo; Viana, Danilo V; Melendez, Matias E; Vazquez, Fabiana de L; Zanardo, Cleyton; Reis, Rui M; Rossi, Benedito M; Palmero, Edenir I
2017-12-01
One of the challenges for Latin American countries is to include in their healthcare systems technologies that can be applied to hereditary cancer detection and management. The aim of this study was to create and validate a questionnaire to identify individuals at possible risk for hereditary cancer predisposition syndromes (HCPS), using different strategies in a Cancer Prevention Service in Brazil. The primary screening questionnaire (PSQ) was developed to identify families at risk for HCPS. The PSQ was validated using discrimination measures, and reproducibility was estimated through the kappa coefficient. Patients with at least one affirmative answer had their pedigree drawn using one of three interview approaches: in person, by telephone, or by letter. These approaches were also validated; kappa and intraclass correlation coefficients were used to analyze the reproducibility of the data with respect to the presence of clinical criteria for HCPS. The PSQ was applied to a convenience sample of 20,000 women, of whom 3121 (15.6%) answered at least one question affirmatively and 1938 had their pedigrees drawn. The PSQ showed sensitivity and specificity of 94.4% and 75%, respectively, and a kappa of 0.64. The strategies for pedigree drawing had reproducibility coefficients of 0.976 and 0.850 for the telephone and letter approaches, respectively. Pedigree analysis allowed us to identify 465 individuals (24.0%) fulfilling at least one clinical criterion for HCPS. The PSQ fulfills its function, allowing the identification of families at risk for HCPS. The use of alternative screening methods may reduce the number of excluded at-risk individuals/families who live in locations where oncogenetic services are not established. © 2017 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.
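The validation statistics quoted above (sensitivity, specificity, kappa) can be computed from a confusion table; the counts below are invented for illustration.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-table counts."""
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(tp, fn, tn, fp):
    """Cohen's kappa: agreement between screening result and reference,
    corrected for chance agreement."""
    n = tp + fn + tn + fp
    po = (tp + tn) / n                                      # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance
    return (po - pe) / (1 - pe)

# Hypothetical counts: 45 true positives, 5 false negatives,
# 40 true negatives, 10 false positives.
sens, spec = sens_spec(45, 5, 40, 10)
kappa = cohens_kappa(45, 5, 40, 10)
```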
Regional Spherical Harmonic Magnetic Modeling from Near-Surface and Satellite-Altitude Anomalies
NASA Technical Reports Server (NTRS)
Kim, Hyung Rae; von Frese, Ralph R. B.; Taylor, Patrick T.
2013-01-01
Compiled near-surface data and satellite crustal magnetic measurements are modeled with a regionally concentrated spherical harmonic representation technique over Australia and Antarctica. Global crustal magnetic anomaly studies have used spherical harmonic analysis to represent the Earth's crustal magnetic field. This global approach, however, is best applied where the data are uniformly distributed over the entire Earth. Satellite observations generally meet this requirement, but unequally distributed data cannot easily be accommodated in global modeling. Even for satellite observations, because errors spread over the globe, data smoothing is inevitable in global spherical harmonic representations. In addition, global high-resolution modeling requires a great number of global spherical harmonic coefficients to represent crustal magnetic anomalies regionally, whereas a smaller number of localized spherical coefficients suffices. We compared the global and regional approaches, including a case where errors propagated outside the region of interest. For observations from the upcoming Swarm constellation, regional modeling will allow the production of a smaller number of spherical coefficients relevant to the region of interest.
Analysis of diffusion and binding in cells using the RICS approach.
Digman, Michelle A; Gratton, Enrico
2009-04-01
The movement of macromolecules in cells is assumed to occur either through active transport or by diffusion. However, determination of diffusion coefficients in cells using fluctuation methods or FRAP frequently gives diffusion coefficients that are orders of magnitude smaller than those measured for the same macromolecule in solution. It is assumed that the cell's internal viscosity is partially responsible for this decrease in the apparent diffusion. When the apparent diffusion is too slow to be due to cytoplasmic viscosity, weak binding of the macromolecules to immobile or quasi-immobile structures is assumed to be taking place. In this article, we derive equations for fitting RICS (raster image correlation spectroscopy) data in cells to a model that includes transient binding to immobile structures, and we show that under some conditions, the spatio-temporal correlation provided by the RICS approach can distinguish the processes of diffusion and weak binding. We apply the method to determine diffusion in the cytoplasm and binding of Focal Adhesion Kinase-EGFP to adhesions in MEF cells.
Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William
2014-03-01
The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC(0-∞) and of any NCA parameter or derivation based on it. To assess the performance of the proposed method, 1,000 simulated datasets were generated under different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC(0-∞) values and the tissue-to-plasma AUC(0-∞) ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied for illustration to laboratory data from a mouse brain distribution study with a serial sacrifice design. The Bayesian NCA approach is accurate and precise in point estimation of AUC(0-∞) and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application in the case study yielded a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach to sparse data analysis provides statistical inference on the variability of AUC(0-∞)-based parameters, such as the partition coefficient and drug targeting index, so that comparison of these parameters following destructive sampling becomes statistically feasible.
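For context, the classic (non-Bayesian) NCA estimate of AUC(0-∞) combines a trapezoidal rule to the last sample with C_last/λz extrapolation. A hedged sketch on synthetic mono-exponential data follows; the Bayesian machinery of the paper is not reproduced here.

```python
import math

def auc_0_inf(t, c, n_tail=3):
    """Classic NCA estimate of AUC(0-inf): linear trapezoid to the last
    sample, then extrapolate with C_last / lambda_z, where lambda_z is
    the terminal slope from a log-linear fit of the last n_tail points."""
    auc_last = sum((t[i + 1] - t[i]) * (c[i + 1] + c[i]) / 2.0
                   for i in range(len(t) - 1))
    tt, lc = t[-n_tail:], [math.log(x) for x in c[-n_tail:]]
    n = len(tt)
    mt, ml = sum(tt) / n, sum(lc) / n
    slope = (sum((a - mt) * (b - ml) for a, b in zip(tt, lc))
             / sum((a - mt) ** 2 for a in tt))
    lambda_z = -slope
    return auc_last + c[-1] / lambda_z

# mono-exponential decay C(t) = 100*exp(-0.1 t): true AUC(0-inf) = 1000
t = [0, 1, 2, 4, 8, 12, 24]
c = [100 * math.exp(-0.1 * x) for x in t]
auc = auc_0_inf(t, c)
```

The linear trapezoid slightly overestimates a convex decay on sparse grids, which is one motivation for log-trapezoidal and model-based (here, Bayesian) alternatives.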
Wave Period and Coastal Bathymetry Estimations from Satellite Images
NASA Astrophysics Data System (ADS)
Danilo, Celine; Melgani, Farid
2016-08-01
We present an approach for wave period and coastal water depth estimation. The approach, based on wave observations, is entirely independent of ancillary data and can in principle be applied to either SAR or optical images. To demonstrate its feasibility, we apply our method to more than 50 Sentinel-1A images of the Hawaiian Islands, well known for their long waves. Six wave buoys are available to compare our results with in-situ measurements. The results on Sentinel-1A images show that half of the images were unsuitable for applying the method (no swell, or wavelengths too small to be captured by the SAR). On the other half, 78% of the estimated wave periods are in accordance with the buoy measurements. In addition, we present preliminary results for the estimation of coastal water depth on a Landsat-8 image (with characteristics close to Sentinel-2A). With a squared correlation coefficient of 0.7 against ground-truth measurements, this approach shows promising results for monitoring coastal bathymetry.
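Depth retrieval from an observed wavelength and period typically rests on inverting the linear (Airy) dispersion relation; the sketch below makes that assumption, and the authors' exact inversion may differ.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def depth_from_dispersion(wavelength_m, period_s, tol=1e-6):
    """Invert the linear dispersion relation omega^2 = g*k*tanh(k*h)
    for water depth h, given a wavelength and period observed in imagery.
    Bisection on h; assumes linear (Airy) wave theory."""
    k = 2.0 * math.pi / wavelength_m
    omega2 = (2.0 * math.pi / period_s) ** 2
    f = lambda h: G * k * math.tanh(k * h) - omega2
    lo, hi = 1e-3, 1e4
    if f(hi) < 0:  # period/wavelength pair violates the deep-water limit
        return None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# round-trip check: synthesize a period from a known 15 m depth, invert it
k = 2.0 * math.pi / 100.0                                   # 100 m wavelength
T = 2.0 * math.pi / math.sqrt(G * k * math.tanh(k * 15.0))  # consistent period
h_est = depth_from_dispersion(100.0, T)
```

In deep water tanh(kh) saturates at 1, so the inversion loses depth sensitivity; that is why bathymetry retrieval only works where the swell "feels" the bottom.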
NASA Technical Reports Server (NTRS)
Brooke, D.; Vondrasek, D. V.
1978-01-01
The aerodynamic influence coefficients calculated using an existing linear theory program were used to modify the pressures calculated using impact theory. Application of the combined approach to several wing-alone configurations shows that it gives improved predictions of the local pressures and loadings over either linear theory or impact theory alone. The approach not only removes most of the shortcomings of the individual methods, as applied in the Mach 4 to 8 range, but also provides the basis for an inverse design procedure applicable to high-speed configurations.
Subspace techniques to remove artifacts from EEG: a quantitative analysis.
Teixeira, A R; Tome, A M; Lang, E W; Martins da Silva, A
2008-01-01
In this work we discuss and apply projective subspace techniques to both multichannel and single-channel recordings. The single-channel approach is based on singular spectrum analysis (SSA), and the multichannel approach uses the extended infomax algorithm implemented in the open-source toolbox EEGLAB. Both approaches are evaluated using artificial mixtures of a set of selected EEG signals. The latter were selected visually to contain, as the dominant activity, one of the characteristic bands of an electroencephalogram (EEG). The evaluation is performed in both the time and frequency domains, using correlation coefficients and the coherence function, respectively.
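The single-channel SSA step can be sketched generically as embed, truncate the SVD, and average anti-diagonals back into a time series. This is a textbook SSA outline, not the paper's implementation.

```python
import numpy as np

def ssa_denoise(x, window, n_components):
    """Single-channel projective subspace filtering via singular spectrum
    analysis: build the trajectory matrix, keep the leading SVD components,
    then Hankelize (average anti-diagonals) back to a 1-D signal."""
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):          # anti-diagonal averaging (Hankelization)
        out[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return out / counts

rng = np.random.default_rng(0)
t = np.arange(500)
clean = np.sin(2 * np.pi * t / 50)          # stand-in for a dominant EEG band
noisy = clean + 0.5 * rng.standard_normal(500)
denoised = ssa_denoise(noisy, window=60, n_components=2)
```

A pure sinusoid occupies a rank-2 subspace of the trajectory matrix, so two components suffice here; real EEG requires choosing the subspace dimension more carefully.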
Prediction of a service demand using combined forecasting approach
NASA Astrophysics Data System (ADS)
Zhou, Ling
2017-08-01
Forecasting facilitates cutting operational and management costs while ensuring the service level of a logistics service provider. Our case study investigates how to forecast short-term logistics demand for a less-than-truckload (LTL) carrier. A combined approach draws on several forecasting methods simultaneously instead of a single method; it can offset the weakness of one forecasting method with the strength of another, which can improve prediction accuracy. The main issues in combined forecast modeling are how to select the methods to combine and how to determine the weight coefficients among them. The selection principles are that each method should suit the forecasting problem itself and that the methods should differ in category as much as possible. Based on these principles, exponential smoothing, ARIMA, and neural networks are chosen to form the combined approach. A least-squares technique is then employed to determine the optimal weight coefficients among the forecasting methods. Simulation results show the advantage of the combined approach over the three single methods. The work helps managers select a prediction method in practice.
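Finding combination weights by least squares can be sketched as a linear regression of the observed series on the individual forecasts. The forecast columns below are synthetic stand-ins for the three methods named in the abstract.

```python
import numpy as np

def combination_weights(forecasts, actual):
    """Least-squares weights for a linear combination of individual
    forecasts (columns of `forecasts`) fitted to the observed series."""
    w, *_ = np.linalg.lstsq(forecasts, actual, rcond=None)
    return w

# toy example: the truth is an exact blend of two hypothetical methods
rng = np.random.default_rng(1)
f1 = rng.normal(100, 10, 60)     # e.g. exponential-smoothing forecasts
f2 = f1 + rng.normal(0, 5, 60)   # e.g. ARIMA forecasts
y = 0.7 * f1 + 0.3 * f2          # observed demand, an exact 70/30 blend
F = np.column_stack([f1, f2])
w = combination_weights(F, y)
combined = F @ w
```

In practice one often adds a sum-to-one constraint on the weights; the unconstrained fit shown here is the simplest variant of the least-squares idea.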
Classification of EEG Signals Based on Pattern Recognition Approach.
Amin, Hafeez Ullah; Mumtaz, Wajid; Subhani, Ahmad Rauf; Saad, Mohamad Naufal Mohamad; Malik, Aamir Saeed
2017-01-01
Feature extraction is an important step in the process of electroencephalogram (EEG) signal classification. The authors propose a "pattern recognition" approach that discriminates EEG signals recorded during different cognitive conditions. Wavelet-based features, such as multi-resolution decompositions into detail and approximation coefficients as well as relative wavelet energy, were computed. Extracted relative wavelet energy features were normalized to zero mean and unit variance and then optimized using Fisher's discriminant ratio (FDR) and principal component analysis (PCA). A high-density (128-channel) EEG dataset was used to validate the proposed method on two classification conditions: (1) EEG signals recorded during complex cognitive tasks using the Raven's Advanced Progressive Matrices (RAPM) test; and (2) EEG signals recorded during a baseline task (eyes open). Classifiers such as K-nearest neighbors (KNN), Support Vector Machine (SVM), Multi-layer Perceptron (MLP), and Naïve Bayes (NB) were then employed. Outcomes yielded 99.11% accuracy via the SVM classifier for approximation coefficients (A5) of low frequencies ranging from 0 to 3.90 Hz. Accuracy rates for detail coefficients (D5), derived from the 3.90-7.81 Hz sub-band, were 98.57% and 98.39% for SVM and KNN, respectively. Accuracy rates for the MLP and NB classifiers were comparable at 97.11-89.63% and 91.60-81.07% for A5 and D5 coefficients, respectively. In addition, the proposed approach was also applied to a public dataset for classification of two cognitive tasks and achieved comparable classification results, i.e., 93.33% accuracy with KNN. The proposed scheme yielded significantly higher classification performances using machine learning classifiers compared to extant quantitative feature extraction methods. These results suggest that the proposed feature extraction method reliably classifies EEG signals recorded during cognitive tasks with a high degree of accuracy.
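Relative wavelet energy per decomposition level can be illustrated with a plain Haar DWT. The paper's wavelet family is not specified here, so Haar is an assumption made purely for the sketch.

```python
import numpy as np

def haar_rwe(signal, levels):
    """Relative wavelet energy per decomposition level using a plain Haar
    DWT. Returns normalized energies for [D1..D_levels, A_levels], which
    sum to 1 (Haar is orthonormal, so total energy is preserved)."""
    a = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        if len(a) % 2:                       # pad to even length if needed
            a = np.append(a, a[-1])
        d = (a[0::2] - a[1::2]) / np.sqrt(2)  # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)  # approximation coefficients
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))
    energies = np.array(energies)
    return energies / energies.sum()

# slow oscillation: energy should concentrate in the coarse approximation
t = np.arange(1024)
rwe = haar_rwe(np.sin(2 * np.pi * t / 256), levels=5)
```

For a slow sinusoid the final approximation band (analogous to A5 in the abstract) should carry nearly all the normalized energy.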
Yang, Huan; Goudeli, Eirini; Hogan, Christopher J.
2018-04-24
In gas phase synthesis systems, clusters form and grow via condensation, in which a monomer binds to an existing cluster. While a hard-sphere equation is frequently used to predict the condensation rate coefficient, this equation neglects the influences of potential interactions and cluster internal energy on the condensation process. Here, we present a collision rate theory-molecular dynamics simulation approach to calculate condensation probabilities and condensation rate coefficients; we use this approach to examine atomic condensation onto 6-56 atom Au and Mg clusters. The probability of condensation depends upon the initial relative velocity (v) between atom and cluster and the initial impact parameter (b). In all cases there is a well-defined region of b-v space where condensation is highly probable, and outside of which the condensation probability drops to zero. For Au clusters with more than 10 atoms, we find that at gas temperatures in the 300-1200 K range, the condensation rate coefficient exceeds the hard-sphere rate coefficient by a factor of 1.5-2.0. Conversely, for Au clusters with 10 or fewer atoms, and for 14-atom and 28-atom Mg clusters, as cluster equilibration temperature increases the condensation rate coefficient drops to values below the hard-sphere rate coefficient. Calculations also yield the self-dissociation rate coefficient, which is found to vary considerably with gas temperature. Finally, calculation results reveal that grazing (high b) atom-cluster collisions at elevated velocity (>1000 m s^-1) can result in the colliding atom rebounding (bounce) from the cluster surface or binding while another atom dissociates (replacement). In conclusion, the presented method can be applied in developing rate equations to predict material formation and growth rates in vapor phase systems.
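The hard-sphere baseline that the computed rate coefficients are compared against is a standard kinetic-theory formula. A sketch follows, with hypothetical collision radii and a 10-atom Au cluster mass; the values are illustrative, not taken from the paper.

```python
import math

def hard_sphere_rate(r1_m, r2_m, m1_kg, m2_kg, T_K):
    """Hard-sphere collision rate coefficient (m^3/s):
    k_HS = pi*(r1+r2)^2 * sqrt(8*kB*T / (pi*mu)), with mu the reduced mass.
    The paper's point is that real condensation deviates from this baseline."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    mu = m1_kg * m2_kg / (m1_kg + m2_kg)
    sigma = math.pi * (r1_m + r2_m) ** 2
    return sigma * math.sqrt(8.0 * k_B * T_K / (math.pi * mu))

# hypothetical Au atom (r ~ 1.44 Angstrom) onto a 10-atom Au cluster at 300 K
amu = 1.66053906660e-27  # kg per atomic mass unit
k_hs = hard_sphere_rate(1.44e-10, 5e-10, 197 * amu, 10 * 197 * amu, 300.0)
```

The reported enhancement factors of 1.5-2.0 for larger Au clusters would multiply this baseline value.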
Multiple imputation for cure rate quantile regression with censored data.
Wu, Yuanshan; Yin, Guosheng
2017-03-01
The main challenge in the context of cure rate analysis is that one never knows whether censored subjects are cured or uncured, or whether they are susceptible or insusceptible to the event of interest. Considering the susceptible indicator as missing data, we propose a multiple imputation approach to cure rate quantile regression for censored data with a survival fraction. We develop an iterative algorithm to estimate the conditionally uncured probability for each subject. By utilizing this estimated probability and Bernoulli sample imputation, we can classify each subject as cured or uncured, and then employ the locally weighted method to estimate the quantile regression coefficients with only the uncured subjects. Repeating the imputation procedure multiple times and taking an average over the resultant estimators, we obtain consistent estimators for the quantile regression coefficients. Our approach relaxes the usual global linearity assumption, so that we can apply quantile regression to any particular quantile of interest. We establish asymptotic properties for the proposed estimators, including both consistency and asymptotic normality. We conduct simulation studies to assess the finite-sample performance of the proposed multiple imputation method and apply it to a lung cancer study as an illustration.
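The impute-then-average loop can be illustrated with Bernoulli draws of the uncured indicator. For brevity, the statistic below is a subset mean rather than the paper's locally weighted quantile-regression fit; the data and probabilities are synthetic.

```python
import numpy as np

def mi_uncured_mean(values, p_uncured, n_imputations=50, seed=0):
    """Multiple-imputation sketch: draw each subject's 'uncured' label
    from Bernoulli(p_uncured), compute the statistic of interest on the
    imputed uncured subset, and average across imputations.  (The paper
    estimates quantile-regression coefficients; a mean is used here
    purely to illustrate the impute-then-average loop.)"""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_imputations):
        uncured = rng.random(len(values)) < p_uncured
        if uncured.any():
            estimates.append(values[uncured].mean())
    return float(np.mean(estimates))

# toy data: truly uncured subjects center at 2, cured subjects at 0
rng = np.random.default_rng(1)
vals = np.concatenate([rng.normal(2, 0.1, 500), rng.normal(0, 0.1, 500)])
p = np.concatenate([np.full(500, 0.95), np.full(500, 0.05)])
est = mi_uncured_mean(vals, p)
```

Averaging over imputations is what stabilizes the estimator; a single Bernoulli draw would inherit the full misclassification noise.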
Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.
2012-01-01
In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods. PMID:22929924
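The maxima-based feature extraction can be sketched as: find the main local maxima, then score each by its amplitude damped by its distance to the dominant maximum. The specific damping formula below is an assumption for illustration; the abstract does not give a closed form.

```python
import numpy as np

def maxima_coefficients(signal, n_peaks=5):
    """Assign a coefficient to each main local maximum from its amplitude
    and its distance to the signal's dominant maximum, as a fixed-length
    feature vector for a downstream classifier (e.g. gradient boosting).
    The amplitude/distance weighting here is a hypothetical choice."""
    s = np.asarray(signal, dtype=float)
    # indices of interior local maxima
    idx = np.where((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]))[0] + 1
    if len(idx) == 0:
        return np.zeros(n_peaks)
    idx = idx[np.argsort(s[idx])[::-1]][:n_peaks]  # strongest first
    main = idx[0]
    span = max(len(s) - 1, 1)
    coeffs = s[idx] / (1.0 + np.abs(idx - main) / span)  # damp by distance
    out = np.zeros(n_peaks)
    out[:len(coeffs)] = coeffs
    return out

# synthetic CDP-like trace: a dominant peak and a smaller secondary one
t = np.linspace(0, 1, 400)
cdp = np.exp(-((t - 0.3) / 0.05) ** 2) + 0.5 * np.exp(-((t - 0.7) / 0.05) ** 2)
feat = maxima_coefficients(cdp)
```

Padding to a fixed length keeps the feature vector compatible with tree-based classifiers regardless of how many maxima a given potential contains.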
Determination of the frictional coefficient of the implant-antler interface: experimental approach.
Hasan, Istabrak; Keilig, Ludger; Staat, Manfred; Wahl, Gerhard; Bourauel, Christoph
2012-10-01
The structural similarity of reindeer antler to human bone permits studying the osseointegration of dental implants in the jawbone. As friction is one of the major factors with a significant influence on the initial stability of immediately loaded dental implants, it is essential to determine the frictional coefficient of the implant-antler interface. In this study, the kinetic frictional forces at the implant-antler interface were measured experimentally using an optomechanical setup and a stepping motor controller under different axial loads and sliding velocities. The corresponding mean values of the static and kinetic frictional coefficients were within the ranges of 0.5-0.7 and 0.3-0.5, respectively. An increase in the frictional forces with increasing applied axial loads was registered. The measurements showed evidence of a decrease in the magnitude of the frictional coefficient with increasing sliding velocity. The results of this study provide a sound basis for choosing the frictional coefficient to be used in finite element contact analyses of antler specimens.
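The underlying Coulomb estimate of a frictional coefficient from measured tangential and normal forces is simply their ratio. A trivial sketch with hypothetical readings in the paper's reported kinetic range:

```python
def friction_coefficient(friction_force_N, normal_load_N):
    """Coulomb estimate mu = F_t / F_n from a sliding test.  Illustrative
    only; the cited experiment measured forces optomechanically under
    several axial loads and sliding velocities."""
    if normal_load_N <= 0:
        raise ValueError("normal load must be positive")
    return friction_force_N / normal_load_N

# hypothetical readings: 4 N tangential force under a 10 N axial load
mu_k = friction_coefficient(4.0, 10.0)
```

The observed load and velocity dependence means a single mu is itself an approximation, which is why the paper reports ranges rather than one value.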
NASA Astrophysics Data System (ADS)
Chen, Zhangqi; Liu, Zi-Kui; Zhao, Ji-Cheng
2018-05-01
Diffusion coefficients of seven binary systems (Ti-Mo, Ti-Nb, Ti-Ta, Ti-Zr, Zr-Mo, Zr-Nb, and Zr-Ta) at 1200 °C, 1000 °C, and 800 °C were experimentally determined using three Ti-Mo-Nb-Ta-Zr diffusion multiples. Electron probe microanalysis (EPMA) was performed to collect concentration profiles in the binary diffusion regions. Forward simulation analysis (FSA) was then applied to extract both impurity and interdiffusion coefficients in the Ti-rich and Zr-rich parts of the bcc phase. Excellent agreement between our results and most of the literature data validates this high-throughput approach of combining FSA with diffusion multiples to obtain a large amount of systematic diffusion data, which will help establish the diffusion (mobility) databases for the design and development of biomedical and structural Ti alloys.
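As a first check on measured concentration profiles, the constant-D error-function solution for a diffusion couple is commonly used. A sketch follows; FSA itself, which handles concentration-dependent diffusivity, is not reproduced, and the D and anneal time are hypothetical.

```python
import math

def diffusion_couple_profile(x_m, t_s, D_m2s, c_left=1.0, c_right=0.0):
    """Concentration across a semi-infinite diffusion couple with a
    constant interdiffusion coefficient: the thin-interface erf solution,
    c(x,t) = (cL+cR)/2 - (cL-cR)/2 * erf(x / (2*sqrt(D*t)))."""
    mid = 0.5 * (c_left + c_right)
    half = 0.5 * (c_left - c_right)
    return mid - half * math.erf(x_m / (2.0 * math.sqrt(D_m2s * t_s)))

# after a hypothetical 100 h anneal with D = 1e-15 m^2/s
D, t = 1e-15, 100 * 3600.0
c0 = diffusion_couple_profile(0.0, t, D)       # interface stays at the mean
c_far = diffusion_couple_profile(1e-4, t, D)   # far field approaches c_right
```

Fitting the measured EPMA profile width against this solution gives a quick order-of-magnitude estimate of D before a full forward simulation.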
Identification of structural damage using wavelet-based data classification
NASA Astrophysics Data System (ADS)
Koh, Bong-Hwan; Jeong, Min-Joong; Jung, Uk
2008-03-01
Predicted time-history responses from a finite-element (FE) model provide a baseline map in which damage locations are clustered and classified by extracted damage-sensitive wavelet coefficients, such as vertical energy threshold (VET) positions with large silhouette statistics. Likewise, the measured data from the damaged structure are decomposed and rearranged according to the most dominant positions of the wavelet coefficients. Having projected the coefficients onto the baseline map, the true location of damage can be identified by investigating the level of closeness between the measurements and the predictions. The statistical confidence of the baseline map improves as the number of prediction cases increases. The simulation results of damage detection in a truss structure show that the approach proposed in this study can be successfully applied to locating structural damage even in the presence of a considerable amount of process and measurement noise.
D'Elia, M.; Edwards, H. C.; Hu, J.; ...
2018-01-18
Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation methods [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162-C193]. However, critical to the success of this approach when applied to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble when applying iterative linear solvers to parameterized and stochastic linear systems. In this paper we explore sample grouping strategies for local adaptive stochastic collocation methods applied to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen-Loève expansions. Finally, we demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.
An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory
Yen, Chung-Cheng; Guymon, Gary L.
1990-01-01
An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which requires prior knowledge only of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is generally valid only for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.
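Rosenblueth's two-point estimate evaluates the model at the 2^n corners mu_i ± sigma_i with equal weights, which is why its cost grows rapidly past a handful of uncertain variables. A sketch for uncorrelated, symmetric inputs follows; the toy model g is hypothetical, not the paper's groundwater model.

```python
import itertools
import math

def two_point_estimate(g, means, cvs):
    """Rosenblueth's two-point estimate for uncorrelated, symmetric
    inputs: evaluate the model at every mu_i +/- sigma_i corner
    (2^n points, equal weights) and return the output mean and
    coefficient of variation."""
    sigmas = [m * cv for m, cv in zip(means, cvs)]
    vals = [g(*[m + s * e for m, s, e in zip(means, sigmas, signs)])
            for signs in itertools.product((-1.0, 1.0), repeat=len(means))]
    mean = sum(vals) / len(vals)
    var = sum(v * v for v in vals) / len(vals) - mean * mean
    return mean, math.sqrt(max(var, 0.0)) / mean

# toy head model h = Q/(K*b): uncertain conductivity K and thickness b
mean_h, cv_h = two_point_estimate(lambda K, b: 100.0 / (K * b),
                                  means=[10.0, 2.0], cvs=[0.1, 0.1])
```

With n uncertain variables the method costs 2^n model runs, versus thousands for Monte Carlo, which matches the paper's remark that it pays off only for fewer than about eight variables.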
Dynamic characteristics of stay cables with inerter dampers
NASA Astrophysics Data System (ADS)
Shi, Xiang; Zhu, Songye
2018-06-01
This study systematically investigates the dynamic characteristics of a stay cable with an inerter damper installed close to one end of the cable. The interest in applying inerter dampers to stay cables is partially inspired by the superior damping performance of negative stiffness dampers in the same application. A comprehensive parametric study on two major parameters, namely the inertance and damping coefficients, is conducted using analytical and numerical approaches. An inerter damper can be optimized for one vibration mode of a stay cable by generating identical wave numbers in two adjacent modes. An optimal design approach is proposed for inerter dampers installed on stay cables, and the corresponding optimal inertance and damping coefficients are summarized for different damper locations and modes of interest. Inerter dampers can offer better damping performance than conventional viscous dampers for the target mode that requires optimization; however, the additional damping ratios they provide in other vibration modes are relatively limited.
Jung, Kwanghee; Takane, Yoshio; Hwang, Heungsun; Woodward, Todd S
2016-06-01
We extend dynamic generalized structured component analysis (GSCA) to enhance its data-analytic capability in structural equation modeling of multi-subject time series data. Time series data of multiple subjects are typically hierarchically structured, where time points are nested within subjects who are in turn nested within a group. The proposed approach, named multilevel dynamic GSCA, accommodates the nested structure in time series data. Explicitly taking the nested structure into account, the proposed method allows investigating subject-wise variability of the loadings and path coefficients by looking at the variance estimates of the corresponding random effects, as well as fixed loadings between observed and latent variables and fixed path coefficients between latent variables. We demonstrate the effectiveness of the proposed approach by applying the method to the multi-subject functional neuroimaging data for brain connectivity analysis, where time series data-level measurements are nested within subjects.
Haltrin, V I
1998-06-20
A self-consistent variant of the two-flow approximation that takes into account strong anisotropy of light scattering in seawater of finite depth and arbitrary turbidity is presented. To achieve an appropriate accuracy, this approach uses experimental dependencies between the downward and total mean cosines. It calculates irradiances, diffuse attenuation coefficients, and diffuse reflectances in waters with arbitrary values of the scattering, backscattering, and attenuation coefficients. It also takes into account arbitrary conditions of illumination and reflection from a bottom with a Lambertian albedo. This theory can be used for the calculation of apparent optical properties in both open and coastal oceanic waters, lakes, and rivers. It can also be applied to other types of absorbing and scattering media such as paints, photographic emulsions, and biological tissues.
Characteristics of tuneable optical filters using optical ring resonator with PCF resonance loop
NASA Astrophysics Data System (ADS)
Shalmashi, K.; Seraji, F. E.; Mersagh, M. R.
2012-05-01
A theoretical analysis of a tuneable optical filter is presented by proposing an optical ring resonator (ORR) that uses photonic crystal fiber (PCF) as the resonance loop. The influence of the characteristic parameters of the PCF on the filter response is analyzed under the steady-state condition of the ORR. It is shown that the tuneability of the filter is mainly achieved by changing the modulation frequency of the light signal applied to the resonator. The analyses show that the sharpness and the depth of the filter response are controlled by parameters such as the amplitude modulation index of the applied field, the coupling coefficient of the ORR, and the hole spacing and air-filling ratio of the PCF. When the transmission coefficient of the loop approaches the coupling coefficient, the filter response sharpens strongly with the PCF parameters. The depth and the full width at half maximum (FWHM) of the response depend strongly on the number of field circulations in the resonator loop. With the proposed tuneability scheme for the optical filter, we achieved an FWHM of ~1.55 nm. The obtained results may be utilized in designing optical add/drop filters used in WDM communication systems.
NASA Astrophysics Data System (ADS)
Kelly, B.; Chelsky, A.; Bulygina, E.; Roberts, B. J.
2017-12-01
Remote sensing techniques have become valuable tools for researchers, providing the capability to measure and visualize important parameters without the need for time- or resource-intensive sampling trips. Relationships between dissolved organic carbon (DOC), colored dissolved organic matter (CDOM) and spectral data have been used to remotely sense DOC concentrations in riverine systems; however, this approach has not been applied to the northern Gulf of Mexico (GoM) and needs to be tested to determine how accurate these relationships are in riverine-dominated shelf systems. In April, July, and October 2017 we sampled surface water from 80+ sites over an area of 100,000 km2 along the Louisiana-Texas shelf in the northern GoM. DOC concentrations were measured on filtered water samples using a Shimadzu TOC-VCSH analyzer with standard techniques. Additionally, DOC concentrations were estimated from CDOM absorption coefficients of filtered water samples on a UV-Vis spectrophotometer using a modification of the methods of Fichot and Benner (2011). These values were regressed against Landsat visible-band spectral data for the same locations to establish a relationship between the spectral data and the CDOM absorption coefficients. This allowed us to spatially map CDOM absorption coefficients in the Gulf of Mexico using the Landsat spectral data in GIS. We then used a multiple linear regression model to derive DOC concentrations from the CDOM absorption coefficients and applied those to our map. This study provides an evaluation of the viability of scaling up CDOM absorption coefficient and remote-sensing derived estimates of DOC concentrations to the scale of the LA-TX shelf ecosystem.
Fluctuation-enhanced electric conductivity in electrolyte solutions.
Péraud, Jean-Philippe; Nonaka, Andrew J; Bell, John B; Donev, Aleksandar; Garcia, Alejandro L
2017-10-10
We analyze the effects of an externally applied electric field on thermal fluctuations for a binary electrolyte fluid. We show that the fluctuating Poisson-Nernst-Planck (PNP) equations for charged multispecies diffusion coupled with the fluctuating fluid momentum equation result in enhanced charge transport via a mechanism distinct from the well-known enhancement of mass transport that accompanies giant fluctuations. Although the mass and charge transport occurs by advection by thermal velocity fluctuations, it can macroscopically be represented as electrodiffusion with renormalized electric conductivity and a nonzero cation-anion diffusion coefficient. Specifically, we predict a nonzero cation-anion Maxwell-Stefan coefficient proportional to the square root of the salt concentration, a prediction that agrees quantitatively with experimental measurements. The renormalized or effective macroscopic equations are different from the starting PNP equations, which contain no cross-diffusion terms, even for rather dilute binary electrolytes. At the same time, for infinitely dilute solutions the renormalized electric conductivity and renormalized diffusion coefficients are consistent and the classical PNP equations with renormalized coefficients are recovered, demonstrating the self-consistency of the fluctuating hydrodynamics equations. Our calculations show that the fluctuating hydrodynamics approach recovers the electrophoretic and relaxation corrections obtained by Debye-Hückel-Onsager theory, while elucidating the physical origins of these corrections and generalizing straightforwardly to more complex multispecies electrolytes. Finally, we show that strong applied electric fields result in anisotropically enhanced "giant" velocity fluctuations and reduced fluctuations of salt concentration.
A TBA approach to thermal transport in the XXZ Heisenberg model
NASA Astrophysics Data System (ADS)
Zotos, X.
2017-10-01
We show that the thermal Drude weight and magnetothermal coefficient of the 1D easy-plane Heisenberg model can be evaluated by an extension of the Bethe ansatz thermodynamics formulation of Takahashi and Suzuki (1972 Prog. Theor. Phys. 48 2187). These quantities were obtained earlier by the quantum transfer matrix method (Klümper 1999 Z. Phys. B 91 507). Furthermore, this approach can be applied to the study of the far-out-of-equilibrium energy current generated at the interface between two semi-infinite chains held at different temperatures.
Doppler-shift estimation of flat underwater channel using data-aided least-square approach
NASA Astrophysics Data System (ADS)
Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing
2015-06-01
In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated; hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only a flat-fading channel. First, the theoretical received sequence is composed from the training symbols. Next, the least-squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. An iterative approach is then applied to solve the least-squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao lower bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium- to high-SNR cases.
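A minimal sketch of this kind of data-aided least-squares estimator, under simplifying assumptions not taken from the paper (single-path flat channel, Doppler modeled as a pure frequency offset in cycles per sample, invented QPSK-like training): for each candidate Doppler value the LS-optimal complex gain has a closed form, so only the Doppler coefficient needs to be searched.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 128
n = np.arange(N)
s = rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)  # known training
a_true = 0.8 * np.exp(0.3j)    # complex channel gain
fd_true = 0.01                 # Doppler offset, cycles/sample
r = a_true * s * np.exp(2j * np.pi * fd_true * n)
r = r + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))  # noise

def ls_doppler(r, s, grid):
    """For each candidate Doppler the LS gain is closed-form, so the
    search runs over the Doppler coefficient only."""
    best_err, best_fd, best_a = np.inf, None, None
    for fd in grid:
        s_fd = s * np.exp(2j * np.pi * fd * n)
        a = np.vdot(s_fd, r) / np.vdot(s_fd, s_fd)   # LS-optimal gain
        err = np.sum(np.abs(r - a * s_fd) ** 2)      # residual energy
        if err < best_err:
            best_err, best_fd, best_a = err, fd, a
    return best_fd, best_a

fd_hat, a_hat = ls_doppler(r, s, np.linspace(-0.02, 0.02, 2001))
print(fd_hat, a_hat)   # close to 0.01 and 0.8*exp(0.3j)
```

The grid search stands in for the paper's iterative inner loop; any 1-D minimizer (golden section, Newton) could replace it.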
Quasi-Newton methods for parameter estimation in functional differential equations
NASA Technical Reports Server (NTRS)
Brewer, Dennis W.
1988-01-01
A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.
Pérez-Payá, E; Porcar, I; Gómez, C M; Pedrós, J; Campos, A; Abad, C
1997-08-01
A thermodynamic approach is proposed to quantitatively analyze the binding isotherms of peptides to model membranes as a function of one adjustable parameter, the actual peptide charge in solution z(p)+. The main features of this approach are a theoretical expression for the partition coefficient calculated from the molar free energies of the peptide in the aqueous and lipid phases, an equation proposed by S. Stankowski [(1991) Biophysical Journal, Vol. 60, p. 341] to evaluate the activity coefficient of the peptide in the lipid phase, and the Debye-Hückel equation that quantifies the activity coefficient of the peptide in the aqueous phase. To assess the validity of this approach we have studied, by means of steady-state fluorescence spectroscopy, the interaction of basic amphipathic peptides such as melittin and its dansylcadaverine analogue (DNC-melittin), as well as a new fluorescent analogue of substance P, SP (DNC-SP) with neutral phospholipid membranes. A consistent quantitative analysis of each binding curve was achieved. The z(p)+ values obtained were always found to be lower than the physical charge of the peptide. These z(p)+ values can be rationalized by considering that the peptide charged groups are strongly associated with counterions in buffer solution at a given ionic strength. The partition coefficients theoretically derived using the z(p)+ values were in agreement with those deduced from the Gouy-Chapman formalism. Ultimately, from the z(p)+ values the molar free energies for the free and lipid-bound states of the peptides have been calculated.
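The Debye-Hückel ingredient of the approach can be illustrated directly. The sketch below uses the extended Debye-Hückel form log10(γ) = -A z² √I / (1 + √I) with the standard A ≈ 0.509 (mol/L)^(-1/2) for water at 25 °C; the charges and ionic strength are hypothetical, chosen only to show why a counterion-reduced effective charge z(p)+ changes the aqueous activity coefficient so strongly.

```python
import math

def debye_huckel_log_gamma(z, ionic_strength, A=0.509):
    """Extended Debye-Hückel law: log10(gamma) = -A z^2 sqrt(I)/(1 + sqrt(I))."""
    s = math.sqrt(ionic_strength)
    return -A * z * z * s / (1.0 + s)

# Compare a peptide treated with its full formal charge against one whose
# effective charge is reduced by counterion association (hypothetical values).
gamma_full = 10 ** debye_huckel_log_gamma(z=5, ionic_strength=0.1)
gamma_eff = 10 ** debye_huckel_log_gamma(z=3, ionic_strength=0.1)
print(gamma_full, gamma_eff)   # the reduced charge gives gamma much closer to 1
```

Because the exponent scales with z², even a modest reduction of the effective charge changes γ by orders of magnitude, which is why the fitted z(p)+ values matter so much in the binding analysis.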
NASA Astrophysics Data System (ADS)
Liu, Y. Y.; Xie, S. H.; Jin, G.; Li, J. Y.
2009-04-01
Magnetoelectric annealing is necessary to remove antiferromagnetic domains and induce macroscopic magnetoelectric effect in polycrystalline magnetoelectric materials, and in this paper, we study the effective magnetoelectric properties of perpendicularly annealed polycrystalline Cr2O3 using effective medium approximation. The effect of temperatures, grain aspect ratios, and two different types of orientation distribution function have been analyzed, and unusual material symmetry is observed when the orientation distribution function only depends on Euler angle ψ. Optimal grain aspect ratio and texture coefficient are also identified. The approach can be applied to analyze the microstructural field distribution and macroscopic properties of a wide range of magnetoelectric polycrystals.
Spectral approach to homogenization of hyperbolic equations with periodic coefficients
NASA Astrophysics Data System (ADS)
Dorodnyi, M. A.; Suslina, T. A.
2018-06-01
In L2(R^d; C^n), we consider selfadjoint strongly elliptic second-order differential operators A_ε with periodic coefficients depending on x/ε, ε > 0. We study the behavior of the operators cos(A_ε^{1/2} τ) and A_ε^{-1/2} sin(A_ε^{1/2} τ), τ ∈ R, for small ε. Approximations for these operators in the (H^s → L2)-operator norm with a suitable s are obtained. The results are used to study the behavior of the solution v_ε of the Cauchy problem for the hyperbolic equation ∂_τ² v_ε = -A_ε v_ε + F. The general results are applied to the acoustics equation and the system of elasticity theory.
Tribological behaviour and statistical experimental design of sintered iron-copper based composites
NASA Astrophysics Data System (ADS)
Popescu, Ileana Nicoleta; Ghiţă, Constantin; Bratu, Vasile; Palacios Navarro, Guillermo
2013-11-01
The sintered iron-copper based composites for automotive brake pads have a complex composition and should have good physical, mechanical and tribological characteristics. In this paper, we obtained frictional composites by the Powder Metallurgy (P/M) technique and characterized them from microstructural and tribological points of view. The morphology of the raw powders was determined by SEM, and the surfaces of the obtained sintered friction materials were analyzed by ESEM, EDS elemental and compo-image analyses. One lot of samples was tested on a "pin-on-disc" type wear machine under dry sliding conditions, at applied loads between 3.5 and 11.5 × 10^-1 MPa and relative speeds in the braking point between 12.5 and 16.9 m/s, at constant temperature. The other lot of samples was tested on an inertial test stand according to a methodology simulating the real conditions of dry friction, at a contact pressure of 2.5-3 MPa, at 300-1200 rpm. The most important characteristics required of sintered friction materials are a high and stable friction coefficient during braking and, for high durability in service, low wear, high corrosion resistance, high thermal conductivity, mechanical resistance, and thermal stability at elevated temperature. Because of the importance of the tribological characteristics (wear rate and friction coefficient) of sintered iron-copper based composites, we predicted the tribological behaviour through statistical analysis. For the first lot of samples, the response variables Yi (the wear rate and friction coefficient) were correlated with x1 and x2 (the coded values of applied load and relative speed in the braking point, respectively) using a linear factorial design approach. We obtained brake friction materials with improved wear resistance and high, stable friction coefficients.
The experimental data and the fitted linear regression equations show that the wear rate of the sintered composites increases with increasing applied load and relative speed, while under the same conditions the friction coefficients decrease slowly.
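A linear factorial design fit of the kind described above reduces to ordinary least squares on coded ±1 variables. The sketch below uses invented response values, not the paper's measurements:

```python
import numpy as np

# Coded 2^2 factorial design: x1 = applied load, x2 = relative speed,
# each at low (-1) / high (+1).  Wear-rate responses are hypothetical.
x1 = np.array([-1.0, 1.0, -1.0, 1.0])
x2 = np.array([-1.0, -1.0, 1.0, 1.0])
wear = np.array([2.0, 3.1, 2.8, 4.1])

X = np.column_stack([np.ones(4), x1, x2])    # model: Y = b0 + b1*x1 + b2*x2
b, *_ = np.linalg.lstsq(X, wear, rcond=None)
print(b)   # [3.0, 0.6, 0.45]: wear rises with both load and speed
```

Because the design columns are orthogonal, each coefficient is simply a scaled contrast of the responses; positive b1 and b2 reproduce the qualitative trend reported above.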
Determination of the diffusion coefficient and solubility of radon in plastics.
Pressyanov, D; Georgiev, S; Dimitrova, I; Mitev, K; Boshkova, T
2011-05-01
This paper describes a method for determination of the diffusion coefficient and the solubility of radon in plastics. The method is based on the absorption and desorption of radon in plastics. First, plastic specimens are exposed for a controlled time to reference (222)Rn concentrations. After exposure, the activity of the specimens is followed by HPGe gamma spectrometry. Using the mathematical algorithm described in this report and the decrease of activity as a function of time, the diffusion coefficient can be determined. In addition, if the reference (222)Rn concentration during the exposure is known, the solubility of radon can be determined. The algorithm has been experimentally applied to different plastics. The results show that this approach allows the specified quantities to be determined with rather high accuracy; depending on the quality of the counting equipment, it can be better than 10%.
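One hedged way to picture the "decrease of activity versus time" step: for a desorbing plane sheet, the long-time activity decays approximately as a single exponential whose rate is the radon decay constant plus the slowest diffusion-mode rate D π²/L², so a log-linear fit recovers D. This is a one-mode simplification, not the paper's full algorithm, and the sheet thickness and D below are illustrative.

```python
import numpy as np

LAMBDA_RN = np.log(2) / (3.8235 * 24 * 3600)   # 222Rn decay constant, 1/s

def diffusion_from_desorption(t, activity, thickness):
    """Long-time desorption of a plane sheet: A(t) ~ exp(-(lambda + D*pi^2/L^2)*t)
    once the slowest diffusion mode dominates, so a log-linear fit yields D."""
    slope = np.polyfit(t, np.log(activity), 1)[0]
    return (-slope - LAMBDA_RN) * thickness**2 / np.pi**2

# Synthetic check: build a decay curve from a known D and recover it.
D_true, L = 1e-12, 1e-3                        # m^2/s and a 1 mm sheet
t = np.linspace(0.0, 5 * 24 * 3600, 50)        # five days of follow-up
activity = np.exp(-(LAMBDA_RN + D_true * np.pi**2 / L**2) * t)
D_hat = diffusion_from_desorption(t, activity, L)
print(D_hat)   # recovers ~1e-12
```

With real gamma-spectrometry data the early multi-mode transient would have to be discarded or modeled, which is what the full algorithm handles.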
Bonhommeau, David A; Perret, Alexandre; Nuzillard, Jean-Marc; Cilindre, Clara; Cours, Thibaud; Alijah, Alexander; Liger-Belair, Gérard
2014-12-18
The diffusion coefficients of carbon dioxide (CO2) and ethanol (EtOH) in carbonated hydroalcoholic solutions and Champagne wines are evaluated as a function of temperature by classical molecular dynamics (MD) simulations and (13)C NMR spectroscopy measurements. The excellent agreement between theoretical and experimental diffusion coefficients suggests that ethanol is the main molecule, apart from water, responsible for the value of the CO2 diffusion coefficient in typical Champagne wines, a result that could likely be extended to most sparkling wines with similar ethanol concentrations. The CO2 and EtOH hydrodynamic radii deduced from viscometry measurements by applying the Stokes-Einstein relationship are found to be mostly constant and in close agreement with the MD predictions. The reliability of our approach should be of interest to physical chemists aiming to model transport phenomena in supersaturated aqueous solutions or water/alcohol mixtures.
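The Stokes-Einstein step mentioned above is a one-line relation, r = k_B T / (6 π η D). A sketch with illustrative values (the diffusion coefficient and viscosity below are plausible orders of magnitude, not the paper's measurements):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def hydrodynamic_radius(D, eta, T):
    """Stokes-Einstein relationship: r = k_B * T / (6 * pi * eta * D)."""
    return K_B * T / (6.0 * math.pi * eta * D)

# Illustrative values: D ~ 1.4e-9 m^2/s for a small solute in a
# hydroalcoholic solution of viscosity ~1.5 mPa.s at 293 K.
r = hydrodynamic_radius(D=1.4e-9, eta=1.5e-3, T=293.0)
print(r)   # ~1e-10 m: a sub-nanometre radius, as expected for a small solute
```

Holding r constant while T and η vary is exactly the consistency check the abstract describes between viscometry and the MD-predicted radii.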
Pumpe, Sebastian; Chemnitz, Mario; Kobelke, Jens; Schmidt, Markus A
2017-09-18
We present a monolithic fiber device that enables investigation of the thermo- and piezo-optical properties of liquids using straightforward broadband transmission measurements. The device is a directional mode coupler consisting of a multi-mode liquid core and a single-mode glass core, with pronounced coupling resonances whose wavelengths strongly depend on the operating temperature. We demonstrated the functionality and flexibility of our device for carbon disulfide, extending the current knowledge of its thermo-optic coefficient by 200 nm at 20 °C and, uniquely, to high temperatures. Moreover, our device allows measuring the piezo-optic coefficient of carbon disulfide, confirming results first obtained by Röntgen in 1891. Finally, we applied our approach to obtain the dispersion of the thermo-optic coefficients of benzene and tetrachloroethylene between 450 and 800 nm; for the latter, no such data had previously been available.
Atomistic simulations of carbon diffusion and segregation in liquid silicon
NASA Astrophysics Data System (ADS)
Luo, Jinping; Alateeqi, Abdullah; Liu, Lijun; Sinno, Talid
2017-12-01
The diffusivity of carbon atoms in liquid silicon and their equilibrium distribution between the silicon melt and crystal phases are key, but unfortunately not precisely known parameters for the global models of silicon solidification processes. In this study, we apply a suite of molecular simulation tools, driven by multiple empirical potential models, to compute diffusion and segregation coefficients of carbon at the silicon melting temperature. We generally find good consistency across the potential model predictions, although some exceptions are identified and discussed. We also find good agreement with the range of available experimental measurements of segregation coefficients. However, the carbon diffusion coefficients we compute are significantly lower than the values typically assumed in continuum models of impurity distribution. Overall, we show that currently available empirical potential models may be useful, at least semi-quantitatively, for studying carbon (and possibly other impurity) transport in silicon solidification, especially if a multi-model approach is taken.
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostically relevant information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of sub-blocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method improves compression performance and achieves a balance between the compression ratio and the visual quality of the image. PMID:23049544
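The codebook-training step of vector quantization can be sketched with plain K-means, a simplified stand-in for the energy-function-based modified K-means described above; the block dimension, codebook size, and data are invented.

```python
import numpy as np

def train_codebook(blocks, k, iters=20, seed=0):
    """Plain K-means codebook training for vector quantization."""
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), k, replace=False)].copy()
    for _ in range(iters):
        d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        idx = d.argmin(axis=1)                 # nearest codeword per block
        for j in range(k):                     # codeword <- cluster mean
            if np.any(idx == j):
                codebook[j] = blocks[idx == j].mean(axis=0)
    return codebook, idx

# Quantize 4-dimensional coefficient blocks with an 8-word codebook.
rng = np.random.default_rng(1)
blocks = rng.standard_normal((500, 4))
codebook, idx = train_codebook(blocks, k=8)
reconstructed = codebook[idx]                  # each block -> its codeword
mse = float(((blocks - reconstructed) ** 2).mean())
print(mse)   # quantization error well below the unit variance of the data
```

Compression comes from transmitting only the codeword indices plus the codebook; variable block sizes, as in the quadtree scheme above, trade index overhead against distortion per region.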
Speech Enhancement Using Gaussian Scale Mixture Models
Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.
2011-01-01
This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows them to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM, and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided a higher signal-to-noise ratio (SNR), while those reconstructed from the estimated log-spectra produced a lower word recognition error rate because the log-spectra better fit the inputs to the recognizer. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139
Porous Media Approach for Modeling Closed Cell Foam
NASA Technical Reports Server (NTRS)
Ghosn, Louis J.; Sullivan, Roy M.
2006-01-01
In order to minimize boil-off of the liquid oxygen and liquid hydrogen and to prevent the formation of ice on its exterior surface, the Space Shuttle External Tank (ET) is insulated using various low-density, closed-cell polymeric foams. Improved analysis methods for these foam materials are needed to predict the foam structural response and to help identify the foam fracture behavior in order to minimize foam shedding occurrences. This presentation describes a continuum-based approach to modeling the foam thermo-mechanical behavior that accounts for the cellular nature of the material and explicitly addresses the effect of the internal cell gas pressure. A porous media approach is implemented in a finite element framework to model the mechanical behavior of the closed-cell foam. The ABAQUS general-purpose finite element program is used to simulate the continuum behavior of the foam. The soil mechanics element is implemented to account for the cell internal pressure and its effect on the stress and strain fields; the pressure variation inside the closed cells is calculated using the ideal gas law. The soil mechanics element is compatible with an orthotropic material model, which captures the different behavior between the rise and in-plane directions of the foam. The porous media approach is applied to model the foam thermal strain and to calculate the foam effective coefficient of thermal expansion. The calculated coefficients of thermal expansion reproduced the measured thermal strain during heat-up from cryogenic temperature to room temperature in vacuum. The porous media approach was applied to a substrate insulated with one inch of foam and compared to a simple elastic solution without pore pressure. The approach is also applied to model the foam mechanical behavior during subscale laboratory experiments.
In these tests, a foam layer sprayed on a metal substrate is subjected to a temperature variation while the metal substrate is stretched to simulate the structural response of the tank during operation. The thermal expansion mismatch between the foam and the metal substrate and the thermal gradient in the foam layer cause high tensile stresses near the metal/foam interface that can lead to delamination.
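The ideal-gas-law step is simple enough to state directly: for a sealed cell of fixed volume, p/T is constant, so the trapped gas pressure tracks absolute temperature. A sketch with illustrative numbers (the reference conditions below are assumptions, not values from the presentation):

```python
def cell_pressure(p_ref, T_ref, T):
    """Ideal gas at constant cell volume: p/T is constant, so
    p = p_ref * T / T_ref."""
    return p_ref * T / T_ref

# Illustrative numbers: foam cells sealed near 1 atm at room temperature,
# then cooled toward liquid-hydrogen temperature.
p_cold = cell_pressure(p_ref=101325.0, T_ref=293.0, T=20.0)
print(p_cold)   # a few kPa: the trapped gas pressure collapses when cold
```

This large swing in internal pressure between cryogenic and ambient conditions is what the pore-pressure degree of freedom in the soil mechanics element captures and a plain elastic model misses.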
Stein, Paul C; di Cagno, Massimiliano; Bauer-Brandl, Annette
2011-09-01
In this work a new, accurate and convenient technique for the measurement of distribution coefficients and membrane permeabilities based on nuclear magnetic resonance (NMR) is described. This method is a novel implementation of localized NMR spectroscopy and enables the simultaneous analysis of the drug content in the octanol and water phases without separation. For validation of the method, the distribution coefficients at pH 7.4 of four active pharmaceutical ingredients (APIs), namely ibuprofen, ketoprofen, nadolol, and paracetamol (acetaminophen), were determined using a classical approach. These results were compared to the NMR experiments described in this work. For all substances, the distribution coefficients found with the two techniques coincided very well. Furthermore, the NMR experiments make it possible to follow the distribution of the drug between the phases as a function of position and time. Our results show that the technique, which is available on any modern NMR spectrometer, is well suited to the measurement of distribution coefficients. The experiments also provide new insight into the dynamics of the water-octanol interface itself and permit measurement of the interface permeability.
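The distribution coefficient itself is computed from the two phase concentrations, here imagined as volume-normalized NMR peak integrals; the numbers are hypothetical, not measurements from the study.

```python
import math

def log_d(c_octanol, c_water):
    """Distribution coefficient from the two phase concentrations:
    logD = log10([octanol] / [water])."""
    return math.log10(c_octanol / c_water)

# Hypothetical volume-normalized peak integrals for two drugs:
print(log_d(94.0, 2.5))   # positive logD: lipophilic
print(log_d(1.2, 36.0))   # negative logD: hydrophilic
```

Measuring both concentrations in a single spectrum, as the localized-NMR method does, removes the phase-separation step that dominates the error budget of the classical shake-flask approach.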
NASA Astrophysics Data System (ADS)
Abril, J. M.; Abdel-Aal, M. M.; Al-Gamal, S. A.; Abdel-Hay, F. A.; Zahar, H. M.
2000-04-01
In this paper we take advantage of two field tracing experiments carried out under the IAEA project EGY/07/002 to develop a modelling study of the dispersion of radioactive pollution in the Suez Canal. The experiments were accomplished by using rhodamine B as a tracer, and water samples were measured by luminescence spectrometry. The presence of natural luminescent particles in the canal waters limited the use of some field data. During the experiments, water levels, velocities, wind and other physical parameters were recorded to supply appropriate information for the modelling work. From this data set, the hydrodynamics of the studied area has been reasonably well described. We apply 1-D Gaussian and 2-D modelling approaches to predict the position and the spatial shape of the plume, and we study the use of different formulations for the dispersion coefficients. These dispersion coefficients are then applied in a 2-D hydrodynamic and dispersion model of the Bitter Lake to investigate different scenarios of accidental discharges.
NASA Astrophysics Data System (ADS)
Asgari, Ali; Dehestani, Pouya; Poruraminaie, Iman
2018-02-01
Shot peening is a well-known process for applying residual stress to the surface of industrial parts; the induced residual stress improves fatigue life. In this study, the effects of shot peening parameters such as shot diameter, shot speed, friction coefficient, and the number of impacts on the applied residual stress are evaluated. To assess the effect of these parameters, the shot peening process was first simulated by the finite element method. Then the effects of the process parameters on the residual stress were evaluated by the response surface method as a statistical approach. Finally, a robust model is presented to predict the maximum residual stress induced by the shot peening process in AISI 4340 steel, and the optimum parameters for the maximum residual stress are obtained. The results indicate that the effect of shot diameter on the induced residual stress increases with increasing shot speed. Also, increasing the friction coefficient does not always increase the residual stress.
Modeling rainfall-runoff process using soft computing techniques
NASA Astrophysics Data System (ADS)
Kisi, Ozgur; Shiri, Jalal; Tombul, Mustafa
2013-02-01
The rainfall-runoff process was modeled for a small catchment in Turkey, using 4 years (1987-1991) of rainfall and runoff measurements. The models used in the study were Artificial Neural Networks (ANNs), the Adaptive Neuro-Fuzzy Inference System (ANFIS) and Gene Expression Programming (GEP), which are Artificial Intelligence (AI) approaches. The applied models were trained and tested using various combinations of the independent variables. Goodness of fit was evaluated in terms of the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), coefficient of efficiency (CE) and scatter index (SI). A comparison was also made between these models and a traditional Multi Linear Regression (MLR) model. The study provides evidence that GEP (with RMSE=17.82 l/s, MAE=6.61 l/s, CE=0.72 and R2=0.978) is capable of modeling the rainfall-runoff process and is a viable alternative to the other applied artificial intelligence and MLR time-series methods.
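The goodness-of-fit statistics reported above are standard and easy to compute; a minimal sketch, with the helper name and sample values purely illustrative:

```python
import math

def fit_metrics(observed, predicted):
    """RMSE, MAE, and CE (Nash-Sutcliffe coefficient of efficiency)
    for paired observed/predicted series, computed in the standard way
    these metrics are usually defined. Hypothetical helper."""
    n = len(observed)
    mean_obs = sum(observed) / n
    errors = [o - p for o, p in zip(observed, predicted)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    # CE = 1 - SSE / SST: 1 is a perfect fit, 0 no better than the mean
    sse = sum(e * e for e in errors)
    sst = sum((o - mean_obs) ** 2 for o in observed)
    ce = 1.0 - sse / sst
    return rmse, mae, ce
```

A CE near 1 (as for the GEP model above, CE=0.72) indicates the model explains most of the runoff variance beyond a constant-mean baseline.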
Yoneoka, Daisuke; Henmi, Masayuki
2017-11-30
Recently, the number of clinical prediction models sharing the same regression task has increased in the medical literature. However, evidence synthesis methodologies that use the results of these regression models have not been sufficiently studied, particularly in meta-analysis settings where only regression coefficients are available. One of the difficulties lies in the differences between the categorization schemes of continuous covariates across different studies. In general, categorization methods using cutoff values are study specific across available models, even if they focus on the same covariates of interest. Differences in the categorization of covariates could lead to serious bias in the estimated regression coefficients and thus in subsequent syntheses. To tackle this issue, we developed synthesis methods for linear regression models with different categorization schemes of covariates. A 2-step approach to aggregate the regression coefficient estimates is proposed. The first step is to estimate the joint distribution of covariates by introducing a latent sampling distribution, which uses one set of individual participant data to estimate the marginal distribution of covariates with categorization. The second step is to use a nonlinear mixed-effects model with correction terms for the bias due to categorization to estimate the overall regression coefficients. Especially in terms of precision, numerical simulations show that our approach outperforms conventional methods, which only use studies with common covariates or ignore the differences between categorization schemes. The method developed in this study is also applied to a series of WHO epidemiologic studies on white blood cell counts. Copyright © 2017 John Wiley & Sons, Ltd.
Low-order aberration coefficients applied to design of telescopes with freeform surfaces
NASA Astrophysics Data System (ADS)
Stone, Bryan D.; Howard, Joseph M.
2017-09-01
As the number of smallsats and cubesats continues to increase [1], so does the interest in the space optics community in miniaturizing reflective optical instrumentation for these smaller platforms. Applications of smallsats are typically for the Earth observing community, but recently opportunities are being made available for planetary science, heliophysics and astrophysics concepts [2]. The smaller satellite platforms accommodate reduced instrument sizes, but the specifications imposed on the smaller optical systems, such as field of view and working f/#, are often the same, or even more challenging. To meet them, and to "fit in the box", it is necessary to employ additional degrees of freedom in the optical design. An effective strategy to reduce package size is to remove rotational symmetry constraints on the system layout, allowing it to minimize unused volume by applying rigid body tilts and decenters to mirrors. Requirements for faster systems and wider fields of view can be addressed by allowing optical surfaces to become "freeform" in shape, essentially removing rotational symmetry constraints on the mirrors themselves. This dual approach not only can reduce package size, but also can allow for increased fields of view with improved image quality. Tools were developed in the 1990s to compute low-order coefficients of the imaging properties of asymmetric tilted and decentered systems [3][4]. That approach was then applied to reflective systems with plane symmetry, where the coefficients were used to create closed-form constraints to reduce the number of degrees of freedom of the design space confronting the designer [5][6]. In this paper we describe the geometric interpretation of these coefficients for systems with a plane of symmetry, and discuss some insights that follow for the design of systems without closed-form constraints.
We use a common three-mirror design form example to help illustrate these concepts, and incorporate freeform surfaces for each mirror shape. In section II, we invoke the typical form of the wave aberration function taught in most texts on geometrical optics, and then recast it into a general form that no longer assumes rotational symmetry. A freeform surface definition for mirrors is then given, and the example three-mirror system used throughout this paper is introduced. In section III, the first-order coefficients of the plane symmetric system are discussed, followed by the second-order coefficients in section IV. In both discussions, the example system is perturbed to present the explicit form of the aberration coefficients laid out in section II, and plots are produced using optical design software. Finally, some concluding remarks are given in section V.
Poroelasticity-driven lubrication in hydrogel interfaces.
Reale, Erik R; Dunn, Alison C
2017-01-04
It is widely accepted that hydrogel surfaces are slippery and have low friction, but dynamic applied stresses alter the hydrogel composition at the interface as water is displaced. The induced osmotic imbalance of a compressed hydrogel that cannot swell to equilibrium should drive the resistance to slip against it. This paper demonstrates the driving role of poroelasticity in the friction of hydrogel-glass interfaces, specifically how poroelastic relaxation of hydrogels increases adhesion. We translate the work of adhesion into an effective surface energy density that increases with the duration of applied pressure from 10 to 50 mJ m⁻², as measured by micro-indentation. A model of the static friction coefficient is derived from an area-based rule of mixtures for the surface energies, and predicts the friction coefficient changes upon initiation of slip. For kinetic friction, the competition between duration of contact and relaxation time is quantified by a contacting Péclet number, Pe_C. A single length parameter on the scale of micrometers fits these two models to experimental micro-friction data. The models predict how short durations of applied pressure and faster sliding speeds do not disrupt interfacial hydration; this prevailing water maintains low friction. At low speeds, where interface drainage dominates, the osmotic suction works against slip, giving higher friction. The prediction of friction coefficients after adhesion characterization by micro-indentation makes use of the interplay between poroelasticity, adhesion, and friction. This approach provides a starting point for the prediction of, and design for, hydrogel interfacial friction.
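The abstract does not give the exact definition of the contacting Péclet number; a conventional form for a Péclet number comparing transport by sliding to poroelastic diffusion is v·L/D, sketched below with illustrative argument names.

```python
def contacting_peclet(speed, contact_length, diffusivity):
    """Contacting Peclet number Pe_C comparing the contact residence
    time (contact_length / speed) to the poroelastic relaxation time
    (contact_length^2 / diffusivity). This conventional form
    Pe = v * L / D is an assumption; the paper's exact definition
    is not stated in the abstract. Pe_C >> 1 means the contact moves
    too fast for interfacial drainage, preserving low friction."""
    return speed * contact_length / diffusivity
```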
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1994-01-01
The primary accomplishments of the project are as follows: (1) Using the transonic small perturbation equation as a flowfield model, the project demonstrated that the quasi-analytical method could be used to obtain aerodynamic sensitivity coefficients for airfoils at subsonic, transonic, and supersonic conditions for design variables such as Mach number, airfoil thickness, maximum camber, angle of attack, and location of maximum camber. It was established that the quasi-analytical approach was an accurate method for obtaining aerodynamic sensitivity derivatives for airfoils at transonic conditions and usually more efficient than the finite difference approach. (2) The usage of symbolic manipulation software to determine the appropriate expressions and computer coding associated with the quasi-analytical method for sensitivity derivatives was investigated. Using the three dimensional fully conservative full potential flowfield model, it was determined that symbolic manipulation along with a chain rule approach was extremely useful in developing a combined flowfield and quasi-analytical sensitivity derivative code capable of considering a large number of realistic design variables. (3) Using the three dimensional fully conservative full potential flowfield model, the quasi-analytical method was applied to swept wings (i.e. three dimensional) at transonic flow conditions. (4) The incremental iterative technique has been applied to the three dimensional transonic nonlinear small perturbation flowfield formulation, an equivalent plate deflection model, and the associated aerodynamic and structural discipline sensitivity equations; and coupled aeroelastic results for an aspect ratio three wing in transonic flow have been obtained.
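The finite-difference baseline against which the quasi-analytical derivatives were judged can be sketched in a few lines. This is a generic central-difference stencil, not the project's code; the flow solve is stubbed by an arbitrary function of one design variable.

```python
def fd_sensitivity(f, x, h=1e-6):
    """Central finite-difference aerodynamic sensitivity df/dx.
    `f` stands in for a (typically expensive) flow solve returning,
    e.g., a lift coefficient as a function of one design variable
    such as Mach number or maximum camber; names are illustrative.
    Each derivative costs two extra flow solves, which is why the
    quasi-analytical approach is usually more efficient."""
    return (f(x + h) - f(x - h)) / (2.0 * h)
```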
Statistical Analysis of the Uncertainty in Pre-Flight Aerodynamic Database of a Hypersonic Vehicle
NASA Astrophysics Data System (ADS)
Huh, Lynn
The objective of the present research was to develop a new method to derive the aerodynamic coefficients and the associated uncertainties for flight vehicles via post-flight inertial navigation analysis using data from the inertial measurement unit. Statistical estimates of vehicle state and aerodynamic coefficients are derived using Monte Carlo simulation. Trajectory reconstruction using the inertial navigation system (INS) is a simple and well used method. However, deriving realistic uncertainties in the reconstructed state and any associated parameters is not so straightforward. Extended Kalman filters, batch minimum variance estimation and other approaches have been used; however, these methods generally depend on assumed physical models, assumed statistical distributions (usually Gaussian), or have convergence issues for non-linear problems. The approach here assumes no physical models, is applicable to any statistical distribution, and does not have any convergence issues. The new approach obtains the statistics directly from a sufficient number of Monte Carlo samples using only the generally well known gyro and accelerometer specifications, and can be applied to systems of non-linear form and non-Gaussian distribution. When redundant data are available, the set of Monte Carlo simulations is constrained to satisfy the redundant data within the uncertainties specified for those additional data. The proposed method was applied to validate the uncertainty in the pre-flight aerodynamic database of the X-43A Hyper-X research vehicle. In addition to gyro and acceleration data, the actual flight data include redundant measurements of position and velocity from the global positioning system (GPS). Criteria derived from the blend of GPS and INS accuracy were used to select valid trajectories for statistical analysis.
The aerodynamic coefficients were derived from the selected trajectories either by a direct extraction method based on the equations of dynamics, or by querying the pre-flight aerodynamic database. After applying the proposed method to the X-43A Hyper-X research vehicle, it was found that 1) there were consistent differences between the aerodynamic coefficients from the pre-flight aerodynamic database and the post-flight analysis, 2) the pre-flight estimate of the pitching moment coefficients was significantly different from the post-flight analysis, 3) the type of distribution of the states from the Monte Carlo simulation was affected by that of the perturbation parameters, 4) the uncertainties in the pre-flight model were overestimated, 5) the range where the aerodynamic coefficients from the pre-flight aerodynamic database and the post-flight analysis are in closest agreement is between Mach *.* and *.*, and more data points may be needed between Mach * and ** in the pre-flight aerodynamic database, 6) the selection criterion for valid trajectories from the Monte Carlo simulations was mostly driven by the horizontal velocity error, 7) the selection criterion must be based on a reasonable model to ensure the validity of the statistics from the proposed method, and 8) the results from the proposed method applied to two different flights with identical geometry and similar flight profiles were consistent.
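The core Monte Carlo idea, perturbing the sensor record within its published specifications and reading statistics directly off the ensemble, can be reduced to a 1-D dead-reckoning toy. All parameter names are illustrative; the real analysis integrates full 6-DOF gyro and accelerometer data.

```python
import random

def monte_carlo_trajectories(accels, dt, n_samples, bias_sigma, seed=0):
    """Sketch of Monte Carlo state reconstruction: each sample perturbs
    the accelerometer record with a random bias drawn from the sensor
    specification, integrates to a final velocity, and the ensemble
    gives the statistics directly, with no assumed physical model or
    distribution. 1-D, Euler-integrated, and purely illustrative."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_samples):
        bias = rng.gauss(0.0, bias_sigma)
        v = 0.0
        for a in accels:
            v += (a + bias) * dt  # simple Euler integration of v' = a
        finals.append(v)
    mean = sum(finals) / n_samples
    var = sum((v - mean) ** 2 for v in finals) / (n_samples - 1)
    return mean, var
```

In the actual method, samples whose trajectories fall outside the GPS/INS accuracy bounds would be rejected before the statistics are taken.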
Fluctuation-enhanced electric conductivity in electrolyte solutions
Péraud, Jean-Philippe; Nonaka, Andrew J.; Bell, John B.; ...
2017-09-26
In this work, we analyze the effects of an externally applied electric field on thermal fluctuations for a binary electrolyte fluid. We show that the fluctuating Poisson–Nernst–Planck (PNP) equations for charged multispecies diffusion coupled with the fluctuating fluid momentum equation result in enhanced charge transport via a mechanism distinct from the well-known enhancement of mass transport that accompanies giant fluctuations. Although the mass and charge transport occurs by advection by thermal velocity fluctuations, it can macroscopically be represented as electrodiffusion with renormalized electric conductivity and a nonzero cation–anion diffusion coefficient. Specifically, we predict a nonzero cation–anion Maxwell–Stefan coefficient proportional to the square root of the salt concentration, a prediction that agrees quantitatively with experimental measurements. The renormalized or effective macroscopic equations are different from the starting PNP equations, which contain no cross-diffusion terms, even for rather dilute binary electrolytes. At the same time, for infinitely dilute solutions the renormalized electric conductivity and renormalized diffusion coefficients are consistent and the classical PNP equations with renormalized coefficients are recovered, demonstrating the self-consistency of the fluctuating hydrodynamics equations. Our calculations show that the fluctuating hydrodynamics approach recovers the electrophoretic and relaxation corrections obtained by Debye–Hückel–Onsager theory, while elucidating the physical origins of these corrections and generalizing straightforwardly to more complex multispecies electrolytes. Lastly, we show that strong applied electric fields result in anisotropically enhanced "giant" velocity fluctuations and reduced fluctuations of salt concentration.
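The square-root-of-concentration dependence that the fluctuating-hydrodynamics calculation recovers is the same scaling as the classical Kohlrausch law for molar conductivity. A minimal sketch, with coefficient values purely illustrative:

```python
import math

def molar_conductivity(lambda0, kohlrausch_k, concentration):
    """Kohlrausch square-root law, Lambda = Lambda0 - K * sqrt(c):
    the conductivity correction grows as sqrt(c), the same scaling
    as the electrophoretic/relaxation corrections of Debye-Hueckel-
    Onsager theory mentioned above. Argument names and any values
    used with this helper are illustrative, not from the paper."""
    return lambda0 - kohlrausch_k * math.sqrt(concentration)
```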
NASA Astrophysics Data System (ADS)
Belyaev, Andrey K.; Yakovleva, Svetlana A.
2017-12-01
Aims: A simplified model is derived for estimating rate coefficients for inelastic processes in low-energy collisions of heavy particles with hydrogen, in particular, the rate coefficients with high and moderate values. Such processes are important for non-local thermodynamic equilibrium modeling of cool stellar atmospheres. Methods: The derived method is based on the asymptotic approach for electronic structure calculations and the Landau-Zener model for nonadiabatic transition probability determination. Results: It is found that the rate coefficients are expressed via statistical probabilities and reduced rate coefficients. It is shown that the reduced rate coefficients for neutralization and ion-pair formation processes depend on single electronic bound energies of an atomic particle, while the reduced rate coefficients for excitation and de-excitation processes depend on two electronic bound energies. The reduced rate coefficients are calculated and tabulated as functions of electronic bound energies. The derived model is applied to barium-hydrogen ionic collisions. For the first time, rate coefficients are evaluated for inelastic processes in Ba+ + H and Ba2+ + H- collisions for all transitions between the states from the ground and up to and including the ionic state. Tables with calculated data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/608/A33
NASA Astrophysics Data System (ADS)
Belyaev, Andrey K.; Yakovleva, Svetlana A.
2017-10-01
Aims: We derive a simplified model for estimating atomic data on inelastic processes in low-energy collisions of heavy-particles with hydrogen, in particular for the inelastic processes with high and moderate rate coefficients. It is known that these processes are important for non-LTE modeling of cool stellar atmospheres. Methods: Rate coefficients are evaluated using a derived method, which is a simplified version of a recently proposed approach based on the asymptotic method for electronic structure calculations and the Landau-Zener model for nonadiabatic transition probability determination. Results: The rate coefficients are found to be expressed via statistical probabilities and reduced rate coefficients. It turns out that the reduced rate coefficients for mutual neutralization and ion-pair formation processes depend on single electronic bound energies of an atom, while the reduced rate coefficients for excitation and de-excitation processes depend on two electronic bound energies. The reduced rate coefficients are calculated and tabulated as functions of electronic bound energies. The derived model is applied to potassium-hydrogen collisions. For the first time, rate coefficients are evaluated for inelastic processes in K+H and K++H- collisions for all transitions from ground states up to and including ionic states. Tables with calculated data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/606/A147
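The Landau-Zener model named in both abstracts above gives the single-passage nonadiabatic transition probability at an avoided crossing in closed form. The sketch below uses atomic units and illustrative argument names; the published model combines these probabilities with asymptotic electronic-structure parameters.

```python
import math

HBAR = 1.0  # atomic units

def landau_zener_probability(coupling, velocity, slope_difference):
    """Single-passage Landau-Zener nonadiabatic transition probability
    P = exp(-2*pi*H12^2 / (hbar * v * |dF|)), where H12 is the
    off-diagonal coupling at the crossing, v the radial collision
    velocity, and dF the difference of diabatic potential slopes.
    Zero coupling gives P = 1 (purely diabatic passage)."""
    return math.exp(-2.0 * math.pi * coupling ** 2
                    / (HBAR * velocity * abs(slope_difference)))
```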
NASA Astrophysics Data System (ADS)
Mohamad, Firdaus; Wisnoe, Wirachman; Nasir, Rizal E. M.; Kuntjoro, Wahyu
2012-06-01
This paper discusses the contribution of split drag flaps to the yawing motion of a blended wing body (BWB) aircraft. The study used split drag flaps instead of a vertical tail and rudder to generate yawing moment. These features are installed near the tips of the wing. Yawing moment is generated by the combination of side and drag forces produced upon split drag flap deflection. The study is carried out using a Computational Fluid Dynamics (CFD) approach at low subsonic speed (Mach 0.1) with various sideslip angles (β) and total flap deflections (δT). For this research, the split drag flap deflections are varied up to ±30°. Data in terms of dimensionless coefficients such as the drag coefficient (CD), side force coefficient (CS) and yawing moment coefficient (Cn) were used to observe the effect of the split drag flaps. From the simulation results, the split drag flaps are shown to be effective from ±15° deflections, or 30° total deflection.
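The dimensionless coefficients above follow the standard nondimensionalization by dynamic pressure and reference geometry; for the yawing moment that convention is sketched below (the reference quantities are assumptions, not values from the study).

```python
def yaw_coefficient(moment, rho, speed, area, span):
    """Nondimensional yawing-moment coefficient Cn = N / (q * S * b),
    with dynamic pressure q = 0.5 * rho * V^2, reference area S, and
    span b: the standard convention that a reported Cn presumably
    follows. Argument values are illustrative."""
    q = 0.5 * rho * speed ** 2
    return moment / (q * area * span)
```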
NASA Astrophysics Data System (ADS)
Tovbin, Yu. K.
2017-08-01
The possibility of obtaining analytical estimates in a diffusion approximation of the times needed by nonequilibrium small bodies to relax to their equilibrium states based on knowledge of the mass transfer coefficient is considered. This coefficient is expressed as the product of the self-diffusion coefficient and the thermodynamic factor. A set of equations for the diffusion transport of mixture components is formulated, characteristic scales of the size of microheterogeneous phases are identified, and effective mass transfer coefficients are constructed for them. Allowing for the developed interface of coexisting and immiscible phases along with the porosity of solid phases is discussed. This approach can be applied to the diffusion equalization of concentrations of solid mixture components in many physicochemical systems: the mutual diffusion of components in multicomponent systems (alloys, semiconductors, solid mixtures of inert gases) and the mass transfer of an absorbed mobile component in the voids of a matrix consisting of slow components or a mixed composition of mobile and slow components (e.g., hydrogen in metals, oxygen in oxides, and the transfer of molecules through membranes of different natures, including polymeric).
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
NASA Astrophysics Data System (ADS)
Abo-Ezz, E. R.; Essa, K. S.
2016-04-01
A new linear least-squares approach is proposed to interpret magnetic anomalies of buried structures by using a new magnetic anomaly formula. This approach depends on solving different sets of algebraic linear equations in order to invert the depth (z), amplitude coefficient (K), and magnetization angle (θ) of buried structures from magnetic data. The utility and validity of the new approach has been demonstrated on various reliable synthetic data sets with and without noise. In addition, the method has been applied to field data sets from the USA and India. The best-fitted anomaly has been delineated by estimating the root-mean-square (rms) error. The approach is validated by comparing the obtained results with other available geological or geophysical information.
Model development for MODIS thermal band electronic cross-talk
NASA Astrophysics Data System (ADS)
Chang, Tiejun; Wu, Aisheng; Geng, Xu; Li, Yonghong; Brinkmann, Jake; Keller, Graziela; Xiong, Xiaoxiong (Jack)
2016-10-01
The MODerate-resolution Imaging Spectroradiometer (MODIS) has 36 bands, among them 16 thermal emissive bands covering a wavelength range from 3.8 to 14.4 μm. After 16 years of on-orbit operation, the electronic crosstalk of a few Terra MODIS thermal emissive bands has developed substantial issues, causing biases in the Earth view (EV) brightness temperature measurements and surface feature contamination. The crosstalk effects on band 27, with center wavelength at 6.7 μm, and band 29, at 8.5 μm, have increased significantly in recent years, affecting downstream products such as water vapor and cloud mask. The crosstalk can be observed in the nearly monthly scheduled lunar measurements, from which the crosstalk coefficients can be derived. However, most of the MODIS thermal bands saturate at lunar surface temperatures, so the development of an alternative approach is very helpful for verification. In this work, a physical model was developed to assess the crosstalk impact on calibration as well as on Earth view brightness temperature retrieval. This model was applied to Terra MODIS band 29 empirically for correction of Earth brightness temperature measurements. In the model development, the detector nonlinear response is considered. The impacts of the electronic crosstalk are assessed in two steps. The first step is to determine the impact on calibration using the on-board blackbody (BB). Due to the detector nonlinear response and the large background signal, both linear and nonlinear calibration coefficients are affected by the crosstalk from the sending bands; the crosstalk impact on the calibration coefficients was calculated. The second step is to calculate the effects on the Earth view brightness temperature retrieval, which include those from the affected calibration coefficients and the contamination of the Earth view measurements. This model links the measurement bias with the crosstalk coefficients, detector nonlinearity, and the ratio of Earth measurements between the sending and receiving bands.
The correction of the electronic crosstalk can be implemented empirically from the processed bias at different brightness temperatures, through two approaches. In the first, as in routine calibration assessment for thermal infrared bands, trending over selected Earth scenes is processed for all the detectors in a band and the band-averaged bias is derived for a given time period. In this case, the correction of an affected band can be made by regressing the model against the band-averaged bias, after which corrections for detector differences are applied. The second approach requires trending for individual detectors, and the bias for each detector is used for regression with the model. A test using the first approach was made for Terra MODIS band 29, with the biases derived from long-term trending of sea surface temperature and Dome-C surface temperature.
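To first order, an electronic-crosstalk correction subtracts coefficient-weighted contributions of the sending bands from the receiving band's signal. The sketch below shows only that linear step; the actual model above also folds in detector nonlinearity and the calibration-coefficient effects, which are omitted here, and the function is hypothetical.

```python
def correct_crosstalk(receiver, senders, coeffs):
    """First-order linear crosstalk correction for one receiving band:
    corrected[i] = receiver[i] - sum_j coeffs[j] * senders[j][i],
    where senders[j] is the co-registered signal of sending band j and
    coeffs[j] its crosstalk coefficient. Detector nonlinearity, which
    the paper's model includes, is deliberately left out of this sketch."""
    return [r - sum(c * s[i] for c, s in zip(coeffs, senders))
            for i, r in enumerate(receiver)]
```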
Radial mixing in turbomachines
NASA Astrophysics Data System (ADS)
Segaert, P.; Hirsch, Ch.; Deruyck, J.
1991-03-01
A method for computing the effects of radial mixing in a turbomachinery blade row has been developed. The method fits in the framework of a quasi-3D flow computation and hence is applied in a corrective fashion to through-flow distributions. The method takes into account both secondary flows and turbulent diffusion as possible sources of mixing. Secondary flow velocities determine the magnitude of the convection terms in the energy redistribution equation, while a turbulent diffusion coefficient determines the magnitude of the diffusion terms. Secondary flows are computed by solving a Poisson equation for a secondary streamfunction on a transversal S3-plane, whereby the right-hand-side axial vorticity is composed of different contributions, each associated with a particular flow region: inviscid core flow, end-wall boundary layers, profile boundary layers and wakes. The turbulent mixing coefficient is estimated by a semi-empirical correlation. Secondary flow theory is applied to the VUB cascade testcase and comparisons are made with the extensive experimental data available for this testcase. This comparison shows that the secondary flow computations yield reliable predictions of the secondary flow pattern, both qualitatively and quantitatively, taking into account the limitations of the model. However, the computations show that the use of a uniform mixing coefficient has to be replaced by a more sophisticated approach.
Bayesian wavelet PCA methodology for turbomachinery damage diagnosis under uncertainty
NASA Astrophysics Data System (ADS)
Xu, Shengli; Jiang, Xiaomo; Huang, Jinzhi; Yang, Shuhua; Wang, Xiaofang
2016-12-01
Centrifugal compressors often suffer various defects such as impeller cracking, resulting in forced outages of the entire plant. Damage diagnostics and condition monitoring of such turbomachinery systems have become increasingly important and powerful tools to prevent potential failure in components and reduce unplanned forced outages and maintenance costs, while improving reliability, availability and maintainability. This paper presents a probabilistic signal processing methodology for damage diagnostics using multiple time history data collected from different locations of a turbomachine, considering data uncertainty and multivariate correlation. The proposed methodology is based on the integration of three advanced state-of-the-art data mining techniques: discrete wavelet packet transform, Bayesian hypothesis testing, and probabilistic principal component analysis. The multiresolution wavelet analysis approach is employed to decompose a time series signal into different levels of wavelet coefficients. These coefficients represent multiple time-frequency resolutions of a signal. Bayesian hypothesis testing is then applied to each level of wavelet coefficients to remove possible imperfections. The ratio-of-posterior-odds Bayesian approach provides a direct means to assess whether there is imperfection in the decomposed coefficients, thus avoiding over-denoising. Power spectral density estimated by the Welch method is utilized to evaluate the effectiveness of the Bayesian wavelet cleansing method. Furthermore, the probabilistic principal component analysis approach is developed to reduce the dimensionality of multiple time series and to address multivariate correlation and data uncertainty for damage diagnostics.
The proposed methodology and generalized framework are demonstrated with a set of sensor data collected from a real-world centrifugal compressor with impeller cracks, through both time-series and contour analyses of the vibration signal and its principal components.
Rotationally invariant clustering of diffusion MRI data using spherical harmonics
NASA Astrophysics Data System (ADS)
Liptrot, Matthew; Lauze, François
2016-03-01
We present a simple approach to the voxelwise classification of brain tissue acquired with diffusion weighted MRI (DWI). The approach leverages the power of spherical harmonics to summarise the diffusion information, sampled at many points over a sphere, using only a handful of coefficients. We use simple features that are invariant to the rotation of the highly orientational diffusion data. This provides a way to directly classify voxels whose diffusion characteristics are similar yet whose primary diffusion orientations differ. Subsequent application of machine learning to the spherical harmonic coefficients may therefore permit classification of DWI voxels according to their inferred underlying fibre properties, whilst ignoring the specifics of orientation. After smoothing the apparent diffusion coefficient volumes, we apply a spherical harmonic transform, which models the multi-directional diffusion data as a collection of spherical basis functions. We use the derived coefficients as voxelwise feature vectors for classification. Using a simple Gaussian mixture model, we examined the classification performance for a range of sub-classes (3-20). The results were compared against existing alternatives for tissue classification, e.g. fractional anisotropy (FA) or the standard model used by Camino [1]. The approach was implemented on two publicly available datasets: an ex-vivo pig brain and an in-vivo human brain from the Human Connectome Project (HCP). We have demonstrated how a robust classification of DWI data can be performed without the need for a model reconstruction step. This avoids the potential confounds and uncertainty that such models may impose, and has the benefit of being computable directly from the DWI volumes. As such, the method could prove useful in subsequent pre-processing stages, such as model fitting, where it could inform about individual voxel complexities and improve model parameter choice.
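The rotation-invariant features the abstract describes can be illustrated by the standard per-degree power of spherical-harmonic coefficients: rotations only mix the m-orders within a fixed degree l, so the sum of squared coefficients per degree is unchanged by rotation. A minimal sketch (the packing convention and toy coefficients are assumptions for illustration):

```python
import numpy as np

def per_degree_power(coeffs, lmax):
    """Rotation-invariant features from real spherical-harmonic coefficients.

    `coeffs` holds c_{l,m} packed degree by degree,
    [c_{0,0}, c_{1,-1}, c_{1,0}, c_{1,1}, c_{2,-2}, ...],
    so degree l contributes 2l+1 entries. Rotations mix only the m's
    within a fixed l, so sum_m c_{l,m}^2 is invariant to rotation.
    """
    feats, start = [], 0
    for l in range(lmax + 1):
        n = 2 * l + 1
        feats.append(np.sum(coeffs[start:start + n] ** 2))
        start += n
    return np.array(feats)

# Toy check: lmax = 2 -> 1 + 3 + 5 = 9 coefficients, 3 features
c = np.arange(9, dtype=float)
f = per_degree_power(c, lmax=2)
```

These per-degree powers would then serve as the voxelwise feature vector fed to a classifier such as the Gaussian mixture model mentioned in the abstract.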
Smiga, Szymon; Fabiano, Eduardo
2017-11-15
We have developed a simplified coupled cluster (SCC) methodology, using the basic idea of scaled MP2 methods. The scheme has been applied to the coupled cluster doubles (CCD) equations and implemented in three different non-iterative variants. This new method (especially the SCCD[3] variant, which utilizes a spin-resolved formalism) has been found to be very efficient and to yield an accurate approximation of the reference CCD results for both total and interaction energies of different atoms and molecules. Furthermore, we demonstrate that the equations determining the scaling coefficients for the SCCD[3] approach can generate non-empirical SCS-MP2 scaling coefficients which are in good agreement with previous theoretical investigations.
NASA Astrophysics Data System (ADS)
Roy, Sabyasachi; Choudhury, D. K.
2014-03-01
The Nambu-Goto action for a bosonic string predicts the quark-antiquark potential V(r) = -γ/r + σr + μ0. The coefficient γ = π(d - 2)/24 is the coefficient of the Lüscher term γ/r, which depends upon the space-time dimension d. Recently, we developed meson wave functions in higher dimensions with this potential from the higher-dimensional Schrödinger equation, applying quantum mechanical perturbation theory with the Lüscher term treated both as parent and as perturbation. In this letter, we analyze the Isgur-Wise function for heavy-light mesons using these higher-dimensional wave functions and make a comparative study of the status of the perturbation technique in both cases.
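The quoted potential and the dimension-dependent Lüscher coefficient are easy to evaluate numerically. A minimal sketch (the string tension σ and offset μ0 below are illustrative placeholder values, not the paper's fitted parameters):

```python
import math

def luscher_gamma(d):
    """Luscher coefficient gamma = pi * (d - 2) / 24 for space-time dimension d."""
    return math.pi * (d - 2) / 24.0

def quark_antiquark_potential(r, d=4, sigma=0.18, mu0=0.0):
    """V(r) = -gamma/r + sigma*r + mu0 from the Nambu-Goto string picture.
    sigma and mu0 are illustrative, not values from the paper."""
    return -luscher_gamma(d) / r + sigma * r + mu0

g4 = luscher_gamma(4)  # pi/12 for ordinary 4D space-time
```

For d = 4 the coefficient reduces to the familiar π/12 of the static quark potential.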
Tests of Mediation: Paradoxical Decline in Statistical Power as a Function of Mediator Collinearity
Beasley, T. Mark
2013-01-01
Increasing the correlation between the independent variable and the mediator (the a coefficient) increases the effect size (ab) for mediation analysis; however, increasing a by definition increases collinearity in mediation models. As a result, the standard error of product tests increases. The variance inflation due to increases in a at some point outweighs the increase of the effect size (ab) and results in a loss of statistical power. This phenomenon also occurs with nonparametric bootstrapping approaches because the variance of the bootstrap distribution of ab approximates the variance expected from normal theory. Both variances increase dramatically when a exceeds the b coefficient, thus explaining the power decline with increases in a. Implications for statistical analysis and applied researchers are discussed. PMID:24954952
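The widening of the bootstrap distribution of ab as a grows can be reproduced in a small simulation. The sketch below is a generic mediation setup (X → M → Y with unit-variance normal errors; sample size, coefficient values, and bootstrap count are illustrative assumptions, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(0)

def boot_se_ab(a, b, n=200, nboot=500):
    """Simulate X -> M -> Y and bootstrap the indirect effect a*b."""
    x = rng.standard_normal(n)
    m = a * x + rng.standard_normal(n)
    y = b * m + rng.standard_normal(n)
    idx = rng.integers(0, n, size=(nboot, n))
    ab = np.empty(nboot)
    for k in range(nboot):
        xs, ms, ys = x[idx[k]], m[idx[k]], y[idx[k]]
        ones = np.ones_like(xs)
        # a from regressing M on X; b from regressing Y on M with X partialled out
        a_hat = np.linalg.lstsq(np.column_stack([xs, ones]), ms, rcond=None)[0][0]
        b_hat = np.linalg.lstsq(np.column_stack([ms, xs, ones]), ys, rcond=None)[0][0]
        ab[k] = a_hat * b_hat
    return ab.std()

se_small_a = boot_se_ab(a=0.2, b=0.4)
se_large_a = boot_se_ab(a=1.5, b=0.4)
```

With b held fixed, the bootstrap standard error of ab is noticeably larger for the larger a, matching the abstract's argument about variance inflation.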
Methods for determining the internal thrust of scramjet engine modules from experimental data
NASA Technical Reports Server (NTRS)
Voland, Randall T.
1990-01-01
Methods for calculating zero-fuel internal drag of scramjet engine modules from experimental measurements are presented. These methods include two control-volume approaches and a pressure and skin-friction integration. The three calculation techniques are applied to experimental data taken during tests of a version of the NASA parametric scramjet. The methods agree to within seven percent of the mean value of zero-fuel internal drag even though several simplifying assumptions are made in the analysis. The mean zero-fuel internal drag coefficient for this particular engine is calculated to be 0.150. The zero-fuel internal drag coefficient, combined with the change in engine axial force between fueled and unfueled operation, defines the internal thrust of the engine.
Wavelets, ridgelets, and curvelets for Poisson noise removal.
Zhang, Bo; Fadili, Jalal M; Starck, Jean-Luc
2008-07-01
In order to denoise Poisson count data, we introduce a variance stabilizing transform (VST) applied on a filtered discrete Poisson process, yielding a near Gaussian process with asymptotically constant variance. This new transform, which can be viewed as an extension of the Anscombe transform to filtered data, is simple, fast, and efficient in (very) low-count situations. We combine this VST with the filter banks of wavelets, ridgelets and curvelets, leading to multiscale VSTs (MS-VSTs) and nonlinear decomposition schemes. By doing so, the noise-contaminated coefficients of these MS-VST-modified transforms are asymptotically normally distributed with known variances. A classical hypothesis-testing framework is adopted to detect the significant coefficients, and a sparsity-driven iterative scheme properly reconstructs the final estimate. A range of examples shows the power of this MS-VST approach for recovering important structures of various morphologies in (very) low-count images. These results also demonstrate that the MS-VST approach is competitive relative to many existing denoising methods.
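The starting point the abstract extends is the classical Anscombe transform for unfiltered Poisson counts, which maps variance λ onto an approximately constant variance of 1. A minimal numeric sketch (the rate λ and sample size are illustrative):

```python
import numpy as np

def anscombe(x):
    """Classical Anscombe variance-stabilizing transform for Poisson counts:
    2*sqrt(x + 3/8) is approximately N(2*sqrt(lam), 1) for moderate lam."""
    return 2.0 * np.sqrt(x + 0.375)

rng = np.random.default_rng(1)
counts = rng.poisson(lam=20.0, size=200_000)
stabilized = anscombe(counts)
v = stabilized.var()  # should be close to 1 regardless of lam
```

The raw counts have variance λ = 20; after the transform the sample variance sits near 1, which is the property the MS-VST machinery generalizes to filtered data.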
NASA Astrophysics Data System (ADS)
Clément, A.; Laurens, S.
2011-07-01
The structural health monitoring of civil structures subjected to ambient vibrations is very challenging. The variations of environmental conditions and the difficulty of characterizing the excitation make damage detection a hard task. Auto-regressive (AR) model coefficients are often used as damage-sensitive features. The presented work compares the AR approach with a state-space feature formed by the Jacobian matrix of the dynamical process. Since damage detection can be formulated as a novelty detection problem, the Mahalanobis distance is applied to flag new points relative to an undamaged reference collection of feature vectors. Data from a concrete beam subjected to temperature variations and damaged by several static loadings are analyzed. The damage-sensitive features are found to be sensitive to temperature variations as well. However, the use of the Mahalanobis distance makes the detection of cracking possible with both features. Early damage (before cracking) is only revealed by the AR coefficients, with good sensitivity.
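The feature pipeline described here — AR coefficients as damage-sensitive features, scored by Mahalanobis distance against an undamaged reference set — can be sketched with numpy. The AR order, coefficient values, and sample counts below are illustrative assumptions, not the beam experiment's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ar2(a1, a2, n=4000):
    """Generate a stable AR(2) series driven by unit white noise."""
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.standard_normal()
    return x

def ar_coeffs(x, p):
    """Least-squares AR(p) fit: x[t] ~ sum_k a_k * x[t-k]."""
    y = x[p:]
    X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def mahalanobis(v, ref):
    """Distance of feature vector v from a reference feature collection."""
    mu = ref.mean(axis=0)
    cov = np.cov(ref, rowvar=False)
    d = v - mu
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

# Undamaged reference: AR(2) with coefficients (0.5, -0.3)
ref = np.array([ar_coeffs(simulate_ar2(0.5, -0.3), 2) for _ in range(30)])
d_healthy = mahalanobis(ar_coeffs(simulate_ar2(0.5, -0.3), 2), ref)
d_damaged = mahalanobis(ar_coeffs(simulate_ar2(0.9, -0.5), 2), ref)
```

A signal drawn from a different ("damaged") AR process scores a much larger distance than a held-out healthy signal, which is the novelty-detection logic of the paper.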
Thermoelectric properties of the LaCoO3-LaCrO3 system using a high-throughput combinatorial approach
NASA Astrophysics Data System (ADS)
Talley, K. R.; Barron, S. C.; Nguyen, N.; Wong-Ng, W.; Martin, J.; Zhang, Y. L.; Song, X.
2017-02-01
A combinatorial film of the LaCo1-xCrxO3 system was fabricated using LaCoO3 and LaCrO3 targets at the NIST Pulsed Laser Deposition (PLD) facility. As the ionic size of Cr3+ is greater than that of Co3+, the unit cell volume of the series increases with increasing x. Using a custom screening tool, the Seebeck coefficient of LaCo1-xCrxO3 reaches a measured maximum of 286 μV/K near the cobalt-rich end of the film library (x ≈ 0.49). The resistivity increases continuously with increasing x. The measured power factor, PF, of this series, which is related to the efficiency of energy conversion, also exhibits a maximum at the composition x ≈ 0.49, corresponding to the maximum of the Seebeck coefficient. Our results illustrate the efficiency of applying the high-throughput combinatorial technique to study thermoelectric materials.
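The power factor mentioned here follows the standard thermoelectric definition PF = S²/ρ. A minimal sketch using the reported peak Seebeck coefficient (the resistivity value is an illustrative assumption, not a measurement from the film library):

```python
def power_factor(seebeck_uV_per_K, resistivity_ohm_cm):
    """Thermoelectric power factor PF = S^2 / rho,
    returned in uW / (cm K^2)."""
    S = seebeck_uV_per_K * 1e-6           # convert uV/K -> V/K
    return S * S / resistivity_ohm_cm * 1e6   # W/(cm K^2) -> uW/(cm K^2)

# Reported peak Seebeck coefficient, with an assumed resistivity of 0.01 ohm*cm
pf = power_factor(286.0, 0.01)
```

Because PF depends quadratically on S but only inversely on ρ, a composition that maximizes S can maximize PF even while resistivity rises, consistent with both maxima occurring at x ≈ 0.49.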
Li, Jia; Lam, Edmund Y
2014-04-21
Mask topography effects need to be taken into consideration for a more accurate solution of source mask optimization (SMO) in advanced optical lithography. However, rigorous 3D mask models generally involve intensive computation and conventional SMO fails to manipulate the mask-induced undesired phase errors that degrade the usable depth of focus (uDOF) and process yield. In this work, an optimization approach incorporating pupil wavefront aberrations into SMO procedure is developed as an alternative to maximize the uDOF. We first design the pupil wavefront function by adding primary and secondary spherical aberrations through the coefficients of the Zernike polynomials, and then apply the conjugate gradient method to achieve an optimal source-mask pair under the condition of aberrated pupil. We also use a statistical model to determine the Zernike coefficients for the phase control and adjustment. Rigorous simulations of thick masks show that this approach provides compensation for mask topography effects by improving the pattern fidelity and increasing uDOF.
Expanding the calculation of activation volumes: Self-diffusion in liquid water
NASA Astrophysics Data System (ADS)
Piskulich, Zeke A.; Mesele, Oluwaseun O.; Thompson, Ward H.
2018-04-01
A general method for calculating the dependence of dynamical time scales on macroscopic thermodynamic variables from a single set of simulations is presented. The approach is applied to the pressure dependence of the self-diffusion coefficient of liquid water as a particularly useful illustration. It is shown how the activation volume associated with diffusion can be obtained directly from simulations at a single pressure, avoiding approximations that are typically invoked.
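The activation volume referred to here is conventionally defined from the pressure dependence of the diffusion coefficient, V* = -kB·T·(∂ ln D/∂P)_T. A minimal finite-difference sketch (the diffusion coefficients and pressures below are illustrative numbers, not the paper's simulation results):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def activation_volume(D1, D2, p1, p2, T):
    """Finite-difference estimate of V* = -kB*T * d(ln D)/dP, in m^3."""
    return -KB * T * (math.log(D2) - math.log(D1)) / (p2 - p1)

# Illustrative: D drops from 2.3e-9 to 2.2e-9 m^2/s as pressure rises
# from 1 bar to 1000 bar at 298 K.
V = activation_volume(2.3e-9, 2.2e-9, 1.0e5, 1.0e8, 298.0)
V_cm3_per_mol = V * 6.02214076e23 * 1e6  # per-molecule m^3 -> cm^3/mol
```

A diffusion coefficient that decreases with pressure yields a positive activation volume; the paper's contribution is obtaining this derivative directly from simulations at a single pressure rather than from such a finite difference.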
1992-01-01
…4.13] have been applied to their estimation. This approach has the advantages of sensitivity and of not requiring high purity and known structures… Chrom absorbance detector, and an Alltech Econosil C-18 (10 micrometer) column (4.6 mm × 25 cm with guard column). The mobile phase, HPLC-grade methanol… water partition coefficient or vice versa. The HPLC method is of similar precision and has the advantage that known structure and purity of the dye are…
Overall Equipment Effectiveness Implementation Criteria
NASA Astrophysics Data System (ADS)
Abramova, I. G.; Abramov, D. A.
2018-01-01
This article reviews methods applied in production control, focusing on the commonly used parameter OEE (Overall Equipment Effectiveness). Indicators of extensive and intensive use of equipment are considered; their purpose is comparison within the same type of production in an industry, and comparison of similar or different types of equipment in terms of capacity. These indicators, however, cannot reveal the reasons behind a machine's operation, whether productive, unproductive, or disturbed. The article therefore examines approaches to calculating indicators that characterize the direct operation of equipment. The machine load coefficient comes close to being an indicator of the efficiency of equipment use. Methods are analyzed through historically applied techniques such as stopwatch and motion studies, and the OEE index is compared with equipment performance indexes used in Russian practice. OEE comprises three components, each reflecting a historically used indicator: the equipment availability is close to the extensive-use index; the work efficiency can be compared with the equipment capacity characteristic; and the quality level reflects compliance with the manufacturing technology. It is shown that the equipment availability coefficient and the working-time compaction factor sum to one, as do the quality-level indicator and the defect (reject) coefficient. The measurability of these indicators makes it possible to predict the efficiency of the equipment.
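The three OEE components the article discusses combine as a simple product. A minimal sketch with an invented example shift (the planned time, ideal rate, and part counts are illustrative assumptions):

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness as the product of its three factors."""
    return availability * performance * quality

# Example shift: 8 h planned, 1 h downtime; ideal rate 100 parts/h;
# 630 parts produced in the 7 h of run time, of which 600 are good.
A = 7.0 / 8.0               # availability = run time / planned time
P = 630.0 / (7.0 * 100.0)   # performance = actual output / ideal output
Q = 600.0 / 630.0           # quality = good parts / total parts
score = oee(A, P, Q)        # 0.875 * 0.90 * 0.952... = 0.75
```

Each factor also illustrates the complement relations the article notes: availability plus the downtime share is one, and quality plus the defect share is one.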
Mirone, Alessandro; Brun, Emmanuel; Coan, Paola
2014-01-01
X-ray based Phase-Contrast Imaging (PCI) techniques have been demonstrated to enhance the visualization of soft tissues in comparison to conventional imaging methods. Nevertheless the delivered dose as reported in the literature of biomedical PCI applications often equals or exceeds the limits prescribed in clinical diagnostics. The optimization of new computed tomography strategies which include the development and implementation of advanced image reconstruction procedures is thus a key aspect. In this scenario, we implemented a dictionary learning method with a new form of convex functional. This functional contains, in addition to the usual sparsity-inducing and fidelity terms, a new term which forces similarity between overlapping patches in the superimposed regions. The functional depends on two free regularization parameters: a coefficient multiplying the sparsity-inducing L1 norm of the patch basis function coefficients, and a coefficient multiplying the L2 norm of the differences between patches in the overlapping regions. The solution is found by applying the iterative proximal gradient descent method with FISTA acceleration. The gradient is computed by calculating the projection of the solution and its error backprojection at each iterative step. We study the quality of the solution, as a function of the regularization parameters and noise, on synthetic data for which the solution is a priori known. We apply the method on experimental data in the case of Differential Phase Tomography. For this case we use an original approach which consists in using vectorial patches, each patch having two components: one per gradient component. The resulting algorithm, implemented in the European Synchrotron Radiation Facility tomography reconstruction code PyHST, has proven to be efficient and well-adapted to strongly reduce the required dose and the number of projections in medical tomography. PMID:25531987
Quantifying the Frictional Forces between Skin and Nonwoven Fabrics
Jayawardana, Kavinda; Ovenden, Nicholas C.; Cottenden, Alan
2017-01-01
When a compliant sheet of material is dragged over a curved surface of a body, the frictional forces generated can be many times greater than they would be for a planar interface. This phenomenon is known to contribute to the abrasion damage to skin often suffered by wearers of incontinence pads and bed/chairbound people susceptible to pressure sores. Experiments that attempt to quantify these forces often use a simple capstan-type equation to obtain a characteristic coefficient of friction. In general, the capstan approach assumes the ratio of applied tensions depends only on the arc of contact and the coefficient of friction, and ignores other geometric and physical considerations; this approach makes it straightforward to obtain explicitly a coefficient of friction from the tensions measured. In this paper, two mathematical models are presented that compute the material displacements and surface forces generated by, firstly, a membrane under tension in moving contact with a rigid obstacle and, secondly, a shell-membrane under tension in contact with a deformable substrate. The results show that, while the use of a capstan equation remains fairly robust in some cases, effects such as the curvature and flaccidness of the underlying body, and the mass density of the fabric can lead to significant variations in stresses generated in the contact region. Thus, the coefficient of friction determined by a capstan model may not be an accurate reflection of the true frictional behavior of the contact region. PMID:28321192
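The "simple capstan-type equation" the abstract refers to is the classical relation T_load = T_hold · e^(μθ) between the tensions on either side of a curved contact, which can be inverted to extract a coefficient of friction from measured tensions. A minimal sketch (the tension and wrap-angle values are illustrative):

```python
import math

def capstan_mu(T_hold, T_load, theta):
    """Invert the capstan equation T_load = T_hold * exp(mu * theta)
    to recover the coefficient of friction mu from measured tensions
    and the arc of contact theta (radians)."""
    return math.log(T_load / T_hold) / theta

# Synthetic measurement: mu = 0.3, half-turn wrap (theta = pi)
mu_true, theta = 0.3, math.pi
T_hold = 5.0
T_load = T_hold * math.exp(mu_true * theta)
mu_est = capstan_mu(T_hold, T_load, theta)
```

As the paper argues, this inversion ignores body curvature, substrate compliance, and fabric mass, so the recovered μ is only as meaningful as those assumptions.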
NASA Astrophysics Data System (ADS)
Abdel-Aziz, Omar; Abdel-Ghany, Maha F.; Nagi, Reham; Abdel-Fattah, Laila
2015-03-01
The present work is concerned with simultaneous determination of cefepime (CEF) and the co-administered drug, levofloxacin (LEV), in spiked human plasma by applying a new approach, Savitzky-Golay differentiation filters, and combined trigonometric Fourier functions to their ratio spectra. The different parameters associated with the calculation of Savitzky-Golay and Fourier coefficients were optimized. The proposed methods were validated and applied for determination of the two drugs in laboratory prepared mixtures and spiked human plasma. The results were statistically compared with reported HPLC methods and were found accurate and precise.
Na, X D; Zang, S Y; Wu, C S; Li, W L
2015-11-01
Knowledge of the spatial extent of forested wetlands is essential to many studies, including wetland functioning assessment, greenhouse gas flux estimation, and identification of suitable wildlife habitat. For discriminating forested wetlands from adjacent land cover types, researchers have resorted to image analysis techniques applied to numerous remotely sensed data. While these efforts have had some success, there is still no consensus on the optimal approach for mapping forested wetlands. To address this problem, we examined two machine learning approaches, random forest (RF) and K-nearest neighbor (KNN) algorithms, within both pixel-based and object-based classification frameworks. The RF and KNN algorithms were constructed using predictors derived from Landsat 8 imagery, Radarsat-2 advanced synthetic aperture radar (SAR), and topographical indices. The results show that the object-based classifications performed better than per-pixel classifications using the same algorithm (RF) in terms of overall accuracy, and the difference in their kappa coefficients is statistically significant (p<0.01). There were noticeable omissions of forested and herbaceous wetlands in the per-pixel classifications using the RF algorithm. For the object-based image analysis, there were also statistically significant differences (p<0.01) in kappa coefficient between results based on the RF and KNN algorithms. The object-based classification using RF provided a more visually adequate distribution of the land cover types of interest, while the object-based classifications using the KNN algorithm showed noticeable commissions of forested wetlands and omissions of agricultural land. This research shows that object-based classification with RF using optical, radar, and topographical data improves land cover mapping accuracy and provides a feasible approach to discriminating forested wetlands from other land cover types in forested areas.
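The kappa coefficient used to compare the classifiers is Cohen's kappa, which corrects the observed agreement of a confusion matrix for chance agreement. A minimal sketch with toy confusion matrices (the matrices are invented for illustration, not the study's results):

```python
import numpy as np

def cohens_kappa(cm):
    """Cohen's kappa from a square confusion matrix
    (rows: reference labels, columns: predicted labels)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                            # observed agreement
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n ** 2  # chance agreement
    return (po - pe) / (1.0 - pe)

perfect = cohens_kappa([[50, 0], [0, 50]])   # full agreement -> kappa = 1
chance = cohens_kappa([[25, 25], [25, 25]])  # chance-level -> kappa = 0
```

Significance tests between two classifiers, as in the study, then compare kappa estimates using their sampling variances.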
Anharmonic effects in the quantum cluster equilibrium method
NASA Astrophysics Data System (ADS)
von Domaros, Michael; Perlt, Eva
2017-03-01
The well-established quantum cluster equilibrium (QCE) model provides a statistical thermodynamic framework to apply high-level ab initio calculations of finite cluster structures to macroscopic liquid phases using the partition function. So far, the harmonic approximation has been applied throughout the calculations. In this article, we apply an important correction in the evaluation of the one-particle partition function and account for anharmonicity. Therefore, we implemented an analytical approximation to the Morse partition function and the derivatives of its logarithm with respect to temperature, which are required for the evaluation of thermodynamic quantities. This anharmonic QCE approach has been applied to liquid hydrogen chloride and cluster distributions, and the molar volume, the volumetric thermal expansion coefficient, and the isobaric heat capacity have been calculated. An improved description for all properties is observed if anharmonic effects are considered.
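The anharmonic correction enters through the vibrational partition function of a Morse oscillator, which (unlike the harmonic oscillator) has a finite number of bound states, E_n = ωe(n+1/2) - ωexe(n+1/2)². A minimal sketch by direct summation, with HCl-like spectroscopic constants as illustrative inputs (this is a one-mode toy, not the paper's QCE implementation, which uses an analytical approximation):

```python
import math

KB_CM = 0.6950348  # Boltzmann constant in cm^-1 per K

def morse_partition(we, wexe, T):
    """Vibrational partition function of a Morse oscillator by direct
    summation over its finite bound-state ladder,
    E_n = we*(n+1/2) - wexe*(n+1/2)^2 (energies in cm^-1),
    with the zero of energy placed at the zero-point level."""
    kT = KB_CM * T
    n_max = int(we / (2.0 * wexe) - 0.5)  # index of the last bound level
    e0 = we * 0.5 - wexe * 0.25
    z = 0.0
    for n in range(n_max + 1):
        e = we * (n + 0.5) - wexe * (n + 0.5) ** 2
        z += math.exp(-(e - e0) / kT)
    return z

# HCl-like illustrative constants: we ~ 2990 cm^-1, wexe ~ 52 cm^-1
z_anh = morse_partition(2990.0, 52.0, 300.0)
# Harmonic comparison with the same we, zero at the zero-point level
z_harm = 1.0 / (1.0 - math.exp(-2990.0 / (KB_CM * 300.0)))
```

Anharmonicity compresses the level spacing, so the Morse partition function slightly exceeds its harmonic counterpart at the same temperature, which is the direction of the corrections the paper evaluates for thermodynamic properties.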
NASA Astrophysics Data System (ADS)
Lv, Zheng; Xu, Jinglei; Mo, Jianwei
2017-12-01
A new method based on quasi two-dimensional supersonic flow and maximum thrust theory to design a three-dimensional nozzle while considering lateral expansion and geometric constraints is presented in this paper. To generate the configuration of the three-dimensional nozzle, the inviscid flowfield is calculated through the method of characteristics, and the reference temperature method is applied to correct the boundary layer thickness. The computational fluid dynamics approach is used to obtain the aerodynamic performance of the nozzle. Results show that the initial arc radius slightly influences the axial thrust coefficient, whereas the variations in the lateral expansion contour, the length and initial expansion angle of the lower cowl significantly affect the axial thrust coefficient. The three-dimensional nozzle designed by streamline tracing technique is also investigated for comparison to verify the superiority of the new method. The proposed nozzle shows increases in the axial thrust coefficient, lift, and pitching moment of 6.86%, 203.15%, and 642.86%, respectively, at the design point, compared with the nozzle designed by streamline tracing approach. In addition, the lateral expansion accounts for 22.46% of the entire axial thrust, while it has no contribution to the lift and pitching moment in the proposed nozzle.
H-, He-like recombination spectra - II. l-changing collisions for He Rydberg states
NASA Astrophysics Data System (ADS)
Guzmán, F.; Badnell, N. R.; Williams, R. J. R.; van Hoof, P. A. M.; Chatzikos, M.; Ferland, G. J.
2017-01-01
Cosmological models can be constrained by determining primordial abundances. Accurate predictions of the He I spectrum are needed to determine the primordial helium abundance to a precision of <1 per cent in order to constrain big bang nucleosynthesis models. Theoretical line emissivities at least this accurate are needed if this precision is to be achieved. In the first paper of this series, which focused on H I, we showed that differences in l-changing collisional rate coefficients predicted by three different theories can translate into 10 per cent changes in predictions for H I spectra. Here, we consider the more complicated case of He atoms, where low-l subshells are not energy degenerate. A criterion for deciding when the energy separation between l subshells is small enough to apply energy-degenerate collisional theories is given. Moreover, for certain conditions, the Bethe approximation originally proposed by Pengelly & Seaton is not sufficiently accurate. We introduce a simple modification of this theory which leads to rate coefficients which agree well with those obtained from pure quantal calculations using the approach of Vrinceanu et al. We show that the l-changing rate coefficients from the different theoretical approaches lead to differences of ~10 per cent in He I emissivities in simulations of H II regions using the spectral code CLOUDY.
An adaptive multi-moment FVM approach for incompressible flows
NASA Astrophysics Data System (ADS)
Liu, Cheng; Hu, Changhong
2018-04-01
In this study, a multi-moment finite volume method (FVM) based on block-structured adaptive Cartesian mesh is proposed for simulating incompressible flows. A conservative interpolation scheme following the idea of the constrained interpolation profile (CIP) method is proposed for the prolongation operation of the newly created mesh. A sharp immersed boundary (IB) method is used to model the immersed rigid body. A moving least squares (MLS) interpolation approach is applied for reconstruction of the velocity field around the solid surface. An efficient method for discretization of Laplacian operators on adaptive meshes is proposed. Numerical simulations on several test cases are carried out for validation of the proposed method. For the case of viscous flow past an impulsively started cylinder (Re = 3000, 9500), the computed surface vorticity coincides with the result of the body-fitted method. For the case of a fast pitching NACA 0015 airfoil at moderate Reynolds numbers (Re = 10000, 45000), the predicted drag coefficient (CD) and lift coefficient (CL) agree well with other numerical or experimental results. For 2D and 3D simulations of viscous flow past a pitching plate with prescribed motions (Re = 5000, 40000), the predicted CD, CL and CM (moment coefficient) are in good agreement with those obtained by other numerical methods.
Model Development for MODIS Thermal Band Electronic Crosstalk
NASA Technical Reports Server (NTRS)
Chang, Tiejun; Wu, Aisheng; Geng, Xu; Li, Yonghonh; Brinkman, Jake; Keller, Graziela; Xiong, Xiaoxiong
2016-01-01
The MODerate-resolution Imaging Spectroradiometer (MODIS) has 36 bands, among them 16 thermal emissive bands covering a wavelength range from 3.8 to 14.4 μm. After 16 years of on-orbit operation, the electronic crosstalk of a few Terra MODIS thermal emissive bands has developed substantial issues that cause biases in the Earth-view (EV) brightness temperature measurements and surface feature contamination. The crosstalk effects on band 27, with center wavelength at 6.7 μm, and band 29, at 8.5 μm, have increased significantly in recent years, affecting downstream products such as water vapor and cloud mask. The crosstalk effect is evident in the near-monthly scheduled lunar measurements, from which the crosstalk coefficients can be derived. The development of an alternative approach is very helpful for independent verification. In this work, a physical model was developed to assess the crosstalk impact on calibration as well as on Earth-view brightness temperature retrieval. This model was applied to Terra MODIS band 29 empirically to correct the Earth brightness temperature measurements. The model development takes the detectors' nonlinear response into account. The impact of the electronic crosstalk is assessed in two steps. The first step determines the impact on calibration using the on-board blackbody (BB). Due to the detectors' nonlinear response and large background signal, both linear and nonlinear coefficients are affected by the crosstalk from sending bands. The second step calculates the effects on the Earth-view brightness temperature retrieval, including those from the affected calibration coefficients and the contamination of Earth-view measurements. The model links the measurement bias with the crosstalk coefficients, the detector nonlinearity, and the ratio of Earth measurements between the sending and receiving bands. The correction of the electronic crosstalk can be implemented empirically from the processed bias at different brightness temperatures.
The implementation can be done through two approaches. In the first, as in routine calibration assessment for thermal infrared bands, trending over selected Earth scenes is processed for all detectors in a band, and the band-averaged bias is derived at a given time; the correction of an affected band is then made by regressing the model against the band-averaged bias, after which corrections for detector differences are applied. The second approach requires trending for individual detectors, and the bias for each detector is used in the regression with the model. A test using the first approach was made for Terra MODIS band 29, with biases derived from long-term trending of brightness temperature over ocean and Dome-C.
Zvereva, Alexandra; Kamp, Florian; Schlattl, Helmut; Zankl, Maria; Parodi, Katia
2018-05-17
Variance-based sensitivity analysis (SA) is described and applied to the radiation dosimetry model proposed by the Committee on Medical Internal Radiation Dose (MIRD) for the organ-level absorbed dose calculations in nuclear medicine. The uncertainties in the dose coefficients thus calculated are also evaluated. A Monte Carlo approach was used to compute first-order and total-effect SA indices, which rank the input factors according to their influence on the uncertainty in the output organ doses. These methods were applied to the radiopharmaceutical (S)-4-(3-18F-fluoropropyl)-L-glutamic acid (18F-FSPG) as an example. Since 18F-FSPG has 11 notable source regions, a 22-dimensional model was considered here, where 11 input factors are the time-integrated activity coefficients (TIACs) in the source regions and 11 input factors correspond to the sets of the specific absorbed fractions (SAFs) employed in the dose calculation. The SA was restricted to the foregoing 22 input factors. The distributions of the input factors were built based on TIACs of five individuals to whom the radiopharmaceutical 18F-FSPG was administered and six anatomical models, representing two reference, two overweight, and two slim individuals. The self-absorption SAFs were mass-scaled to correspond to the reference organ masses. The estimated relative uncertainties were in the range 10%-30%, with a minimum and a maximum for absorbed dose coefficients for urinary bladder wall and heart wall, respectively. The applied global variance-based SA enabled us to identify the input factors that have the highest influence on the uncertainty in the organ doses. With the applied mass-scaling of the self-absorption SAFs, these factors included the TIACs for absorbed dose coefficients in the source regions and the SAFs from blood as source region for absorbed dose coefficients in highly vascularized target regions.
For some combinations of proximal target and source regions, the corresponding cross-fire SAFs were found to have an impact. Global variance-based SA has been for the first time applied to the MIRD schema for internal dose calculation. Our findings suggest that uncertainties in computed organ doses can be substantially reduced by performing an accurate determination of TIACs in the source regions, accompanied by the estimation of individual source region masses along with the usage of an appropriate blood distribution in a patient's body and, in a few cases, the cross-fire SAFs from proximal source regions. © 2018 American Association of Physicists in Medicine.
Fellinger, Michael R.; Hector, Louis G.; Trinkle, Dallas R.
2016-10-28
Here, we present an efficient methodology for computing solute-induced changes in lattice parameters and elastic stiffness coefficients Cij of single crystals using density functional theory. We also introduce a solute strain misfit tensor that quantifies how solutes change lattice parameters due to the stress they induce in the host crystal. Solutes modify the elastic stiffness coefficients through volumetric changes and by altering chemical bonds. We compute each of these contributions to the elastic stiffness coefficients separately, and verify that their sum agrees with changes in the elastic stiffness coefficients computed directly using fully optimized supercells containing solutes. Computing the two elastic stiffness contributions separately is more computationally efficient and provides more information on solute effects than the direct calculations. We compute the solute dependence of polycrystalline averaged shear and Young's moduli from the solute dependence of the single-crystal Cij. We then apply this methodology to substitutional Al, B, Cu, Mn, Si solutes and octahedral interstitial C and N solutes in bcc Fe. Comparison with experimental data indicates that our approach accurately predicts solute-induced changes in the lattice parameter and elastic coefficients. The computed data can be used to quantify solute-induced changes in mechanical properties such as strength and ductility, and can be incorporated into mesoscale models to improve their predictive capabilities.
Assessing FAO-56 dual crop coefficients using eddy covariance flux partitioning
USDA-ARS?s Scientific Manuscript database
Current approaches to scheduling crop irrigation using reference evapotranspiration (ET0) recommend using a dual-coefficient approach using basal (Kcb) and soil (Ke) coefficients along with a stress coefficient (Ks) to model crop evapotranspiration (ETc), [e.g. ETc=(Ks*Kcb+Ke)*ET0]. However, determi...
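The bracketed relation in this record is directly computable; a minimal sketch in Python (the function name and example values are ours, not from the manuscript):

```python
def crop_et(ks, kcb, ke, et0):
    """FAO-56 dual-coefficient crop evapotranspiration:
    ETc = (Ks*Kcb + Ke) * ET0, where Kcb is the basal crop
    coefficient, Ke the soil evaporation coefficient, Ks the
    water-stress coefficient, and ET0 the reference ET (mm/day)."""
    return (ks * kcb + ke) * et0

# Illustrative mid-season values only (not measured data)
etc_unstressed = crop_et(1.0, 1.1, 0.1, 5.0)  # no water stress
etc_stressed = crop_et(0.5, 1.1, 0.1, 5.0)    # stress halves Ks
```

Note that stress enters only through Ks scaling the basal term, which is why Ke-driven soil evaporation persists even for a fully stressed crop.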
Continuous-time discrete-space models for animal movement
Hanks, Ephraim M.; Hooten, Mevin B.; Alldredge, Mat W.
2015-01-01
The processes influencing animal movement and resource selection are complex and varied. Past efforts to model behavioral changes over time used Bayesian statistical models with variable parameter space, such as reversible-jump Markov chain Monte Carlo approaches, which are computationally demanding and inaccessible to many practitioners. We present a continuous-time discrete-space (CTDS) model of animal movement that can be fit using standard generalized linear modeling (GLM) methods. This CTDS approach allows for the joint modeling of location-based as well as directional drivers of movement. Changing behavior over time is modeled using a varying-coefficient framework which maintains the computational simplicity of a GLM approach, and variable selection is accomplished using a group lasso penalty. We apply our approach to a study of two mountain lions (Puma concolor) in Colorado, USA.
NASA Technical Reports Server (NTRS)
Markey, Melvin F.
1959-01-01
A theory is derived for determining the loads and motions of a deeply immersed prismatic body. The method makes use of a two-dimensional water-mass variation and an aspect-ratio correction for three-dimensional flow. The equations of motion are generalized by using a mean value of the aspect-ratio correction and by assuming a variation of the two-dimensional water mass for the deeply immersed body. These equations lead to impact coefficients that depend on an approach parameter which, in turn, depends upon the initial trim and flight-path angles. Comparison of experiment with theory is shown at maximum load and maximum penetration for the flat-bottom (0 deg dead-rise angle) model with beam-loading coefficients from 36.5 to 133.7 over a wide range of initial conditions. A dead-rise angle correction is applied and maximum-load data are compared with theory for the case of a model with 30 deg dead-rise angle and beam-loading coefficients from 208 to 530.
NASA Astrophysics Data System (ADS)
Dymond, J. H.; Young, K. J.
1980-12-01
Viscosity coefficient measurements at saturation pressure are reported for n-hexane + n-hexadecane, n-hexane + n-octane + n-hexadecane, and n-hexane + n-octane + n-dodecane + n-hexadecane at temperatures from 283 to 378 K. The results show that the Congruence Principle applies to the molar excess Gibbs free energy of activation for flow, Δ*G^E, at temperatures other than 298 K. However, curves of Δ*G^E versus index number of the mixture are temperature dependent, and this must be taken into account for accurate prediction of mixture viscosity coefficients by this approach. The purely empirical equation of Grunberg and Nissan, ln η = x_1 ln η_1 + x_2 ln η_2 + x_1 x_2 G, which has the advantage of not involving molar volumes, satisfactorily reproduces the experimental results for the binary mixture, but G is definitely composition dependent.
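The Grunberg-Nissan relation quoted in this record can be evaluated directly; a sketch for a binary mixture (the function name and numeric values are ours, not the paper's data):

```python
import math

def grunberg_nissan_viscosity(x1, eta1, eta2, g):
    """Binary-mixture viscosity from the Grunberg-Nissan equation
    ln(eta) = x1*ln(eta1) + x2*ln(eta2) + x1*x2*G, with x2 = 1 - x1.
    G is the (composition-dependent) interaction parameter."""
    x2 = 1.0 - x1
    ln_eta = x1 * math.log(eta1) + x2 * math.log(eta2) + x1 * x2 * g
    return math.exp(ln_eta)

# Equimolar mixture with illustrative pure-component viscosities (mPa s)
eta_mix = grunberg_nissan_viscosity(0.5, 0.30, 3.0, 0.2)
```

With G = 0 the equation reduces to a mole-fraction-weighted geometric mean of the pure-component viscosities, which is why a nonzero, composition-dependent G is needed to fit real mixtures.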
Estimating consumer familiarity with health terminology: a context-based approach.
Zeng-Treitler, Qing; Goryachev, Sergey; Tse, Tony; Keselman, Alla; Boxwala, Aziz
2008-01-01
Effective health communication is often hindered by a "vocabulary gap" between language familiar to consumers and jargon used in medical practice and research. To present health information to consumers in a comprehensible fashion, we need to develop a mechanism to quantify health terms as being more likely or less likely to be understood by typical members of the lay public. Prior research has used approaches including syllable count, easy word list, and frequency count, all of which have significant limitations. In this article, we present a new method that predicts consumer familiarity using contextual information. The method was applied to a large query log data set and validated using results from two previously conducted consumer surveys. We measured the correlation between the survey result and the context-based prediction, syllable count, frequency count, and log normalized frequency count. The correlation coefficient between the context-based prediction and the survey result was 0.773 (p < 0.001), which was higher than the correlation coefficients between the survey result and the syllable count, frequency count, and log normalized frequency count (p ≤ 0.012). The context-based approach provides a good alternative to the existing term familiarity assessment methods.
Decentralized Adaptive Neural Output-Feedback DSC for Switched Large-Scale Nonlinear Systems.
Lijun Long; Jun Zhao
2017-04-01
In this paper, for a class of switched large-scale uncertain nonlinear systems with unknown control coefficients and unmeasurable states, a switched-dynamic-surface-based decentralized adaptive neural output-feedback control approach is developed. The proposed approach extends the classical dynamic surface control (DSC) technique from the nonswitched to the switched setting by designing switched first-order filters, which overcomes the problem of multiple "explosion of complexity." Also, a dual common coordinate transformation of all subsystems is exploited to avoid the individual coordinate transformations for subsystems that are required when applying the backstepping recursive design scheme. Nussbaum-type functions are utilized to handle the unknown control coefficients, and a switched neural network observer is constructed to estimate the unmeasurable states. Combining the average dwell time method with backstepping and the DSC technique, decentralized adaptive neural controllers of subsystems are explicitly designed. It is proved that the proposed approach guarantees semiglobal uniform ultimate boundedness of all signals in the closed-loop system under a class of switching signals with average dwell time, and convergence of the tracking errors to a small neighborhood of the origin. A two-inverted-pendulum system is provided to demonstrate the effectiveness of the proposed method.
Confidence bounds for normal and lognormal distribution coefficients of variation
Steve Verrill
2003-01-01
This paper compares the so-called exact approach for obtaining confidence intervals on normal distribution coefficients of variation to approximate methods. Approximate approaches were found to perform less well than the exact approach for large coefficients of variation and small sample sizes. Web-based computer programs are described for calculating confidence...
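The exact normal-theory interval this record compares against is not reproduced here; as a rough, assumption-light stand-in, the sample CV with a percentile-bootstrap interval can be sketched (names and the bootstrap approach are ours):

```python
import math
import random

def sample_cv(x):
    """Sample coefficient of variation: s / x-bar (sample sd, n-1 divisor)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / (n - 1)
    return math.sqrt(var) / mean

def cv_bootstrap_ci(x, level=0.95, n_boot=2000, seed=0):
    """Percentile-bootstrap confidence interval for the CV.
    This is an approximate method, not the paper's exact approach."""
    rng = random.Random(seed)
    reps = sorted(sample_cv([rng.choice(x) for _ in x]) for _ in range(n_boot))
    lo = reps[int((1 - level) / 2 * n_boot)]
    hi = reps[int((1 + level) / 2 * n_boot) - 1]
    return lo, hi
```

As the abstract notes, approximate intervals of this kind degrade for large CVs and small samples, which is the regime where the exact approach pays off.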
Sparse coding for flexible, robust 3D facial-expression synthesis.
Lin, Yuxu; Song, Mingli; Quynh, Dao Thi Phuong; He, Ying; Chen, Chun
2012-01-01
Computer animation researchers have been extensively investigating 3D facial-expression synthesis for decades. However, flexible, robust production of realistic 3D facial expressions is still technically challenging. A proposed modeling framework applies sparse coding to synthesize 3D expressive faces, using specified coefficients or expression examples. It also robustly recovers facial expressions from noisy and incomplete data. This approach can synthesize higher-quality expressions in less time than the state-of-the-art techniques.
Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N
2014-12-01
Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method is using generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with that of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least squares regression (gPCE-R). It was found that in general the gPCE yields similar conclusions as the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessary high order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of the gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.
USDA-ARS?s Scientific Manuscript database
Current approaches to scheduling crop irrigation using reference evapotranspiration (ET0) recommend using a dual-coefficient approach using basal (Kcb) and soil (Ke) coefficients along with a stress coefficient (Ks) to model crop evapotranspiration (ETc), [e.g. ETc=(Ks*Kcb+Ke)*ET0]. However, indepe...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Presser, Cary; Nazarian, Ashot; Conny, Joseph M.
2016-12-02
Absorptivity measurements with a laser-heating approach, referred to as the laser-driven thermal reactor (LDTR), were carried out in the infrared and applied at ambient (laboratory) nonreacting conditions to particle-laden filters from a three-wavelength (visible) particle/soot absorption photometer (PSAP). Here, the particles were obtained during the Biomass Burning Observation Project (BBOP) field campaign. The focus of this study was to determine the particle absorption coefficient from field-campaign filter samples using the LDTR approach, and compare results with other commercially available instrumentation (in this case with the PSAP, which has been compared with numerous other optical techniques).
Group theoretic approach for solving the problem of diffusion of a drug through a thin membrane
NASA Astrophysics Data System (ADS)
Abd-El-Malek, Mina B.; Kassem, Magda M.; Meky, Mohammed L. M.
2002-03-01
The transformation group theoretic approach is applied to study the diffusion process of a drug through a skin-like membrane which tends to partially absorb the drug. Two cases are considered for the diffusion coefficient. The application of one parameter group reduces the number of independent variables by one, and consequently the partial differential equation governing the diffusion process with the boundary and initial conditions is transformed into an ordinary differential equation with the corresponding conditions. The obtained differential equation is solved numerically using the shooting method, and the results are illustrated graphically and in tables.
Optimal Combinations of Diagnostic Tests Based on AUC.
Huang, Xin; Qin, Gengsheng; Fang, Yixin
2011-06-01
When several diagnostic tests are available, one can combine them to achieve better diagnostic accuracy. This article considers the optimal linear combination that maximizes the area under the receiver operating characteristic curve (AUC); the estimates of the combination's coefficients can be obtained via a nonparametric procedure. However, for estimating the AUC associated with the estimated coefficients, the apparent estimation by re-substitution is too optimistic. To adjust for the upward bias, several methods are proposed. Among them the cross-validation approach is especially advocated, and an approximated cross-validation is developed to reduce the computational cost. Furthermore, these proposed methods can be applied for variable selection to select important diagnostic tests. The proposed methods are examined through simulation studies and applications to three real examples. © 2010, The International Biometric Society.
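The re-substitution AUC that this abstract warns is optimistic can be written down in a few lines; a sketch of the empirical (Mann-Whitney) AUC of a fixed linear combination (the paper's coefficient-estimation and cross-validation steps are not shown, and all names here are ours):

```python
def auc_of_combination(coefs, cases, controls):
    """Empirical AUC (Mann-Whitney statistic) of the linear score
    s(x) = sum_k coefs[k] * x[k], comparing diseased cases against
    healthy controls; tied scores count 1/2."""
    def score(x):
        return sum(c * v for c, v in zip(coefs, x))
    s_cases = [score(x) for x in cases]
    s_ctrls = [score(x) for x in controls]
    wins = 0.0
    for a in s_cases:
        for b in s_ctrls:
            wins += 1.0 if a > b else (0.5 if a == b else 0.0)
    return wins / (len(s_cases) * len(s_ctrls))

# Toy two-marker data: every case outscores every control
auc = auc_of_combination((1.0, 1.0), [(2, 2), (3, 1)], [(0, 0), (1, 0)])
```

Evaluating this on the same data used to choose the coefficients is exactly the upwardly biased re-substitution estimate; held-out or cross-validated data correct for it.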
NASA Astrophysics Data System (ADS)
Majstorovic, J.; Rosat, S.; Lambotte, S.; Rogister, Y. J. G.
2017-12-01
Although there are numerous studies of 3D Earth density models, building an accurate one remains a challenging task. One procedure for refining global 3D Earth density models is based on unambiguous measurements of Earth's normal-mode eigenfrequencies. Obtaining unbiased eigenfrequency measurements requires dealing with time records of varying quality and, especially, with different noise sources, while standard approaches usually rely on signal-processing methods such as the Fourier transform. Here we present estimates of complex eigenfrequencies and structure coefficients for several modes below 1 mHz (0S2, 2S1, etc.). Our analysis proceeds in three steps. First, we use stacking methods to raise the modes of interest above the observed noise level; of the three methods tried, optimal sequence estimation outperformed both spherical harmonic stacking and the receiver strip method. Second, we apply an autoregressive method in the frequency domain to estimate the complex eigenfrequencies of the target modes. Third, we apply the phasor walkout method to test and confirm the eigenfrequencies. Before analyzing the time records, we evaluate how station distribution and noise levels affect the estimates of eigenfrequencies and structure coefficients, using synthetic seismograms computed for a realistic 3D Earth model that includes Earth's ellipticity and lateral heterogeneity. The synthetic seismograms are computed by means of normal-mode summation using self-coupling and cross-coupling of modes up to 1 mHz. Eventually, the methods tested on synthetic data are applied to long-period seismometer and superconducting gravimeter data recorded after six mega-earthquakes of magnitude greater than 8.3. Hence, we propose new estimates of structure coefficients that depend on the density variations.
Chen, Weiting; Zhao, Huijuan; Li, Tongxin; Yan, Panpan; Zhao, Kuanxin; Qi, Caixia; Gao, Feng
2017-08-08
Spatial frequency domain (SFD) measurement allows rapid and non-contact wide-field imaging of the tissue optical properties, and thus has become a potential tool for assessing physiological parameters and therapeutic responses during photodynamic therapy of skin diseases. The conventional SFD measurement requires a reference measurement within the same experimental scenario as the test one to calibrate the mismatch between the real measurements and the model predictions. Due to individual physical and geometrical differences among different tissues, organs, and patients, an ideal reference measurement might be unavailable in clinical trials. To address this problem, we present a reference-free SFD determination of the absorption coefficient that is based on the modulation transfer function (MTF) characterization. Instead of the absolute amplitude that is used in the conventional SFD approaches, we herein employ the MTF to characterize the propagation of the modulated light in tissues. With such a dimensionless relative quantity, the measurements can be naturally corresponded to the model predictions without calibrating the illumination intensity. By constructing a three-dimensional database that portrays the MTF as a function of the optical properties (both the absorption coefficient μ_a and the reduced scattering coefficient μ_s′) and the spatial frequency, a look-up table approach or a least-squares curve-fitting method is readily applied to recover the absorption coefficient from a single frequency or from multiple frequencies, respectively. Simulation studies have verified the feasibility of the proposed reference-free method and evaluated its accuracy in the absorption recovery. Experimental validations have been performed on homogeneous tissue-mimicking phantoms with μ_a ranging from 0.01 to 0.07 mm⁻¹ and μ_s′ = 1.0 or 2.0 mm⁻¹.
The results have shown maximum errors of 4.86% and 7% for μ_s′ = 1.0 mm⁻¹ and μ_s′ = 2.0 mm⁻¹, respectively. We have also presented quantitative ex vivo imaging of human lung cancer in a subcutaneous xenograft mouse model for further validation, and observed high absorption contrast in the tumor region. The proposed method can be applied to the rapid and accurate determination of the absorption coefficient, and better yet, in a reference-free way. We believe this reference-free strategy will facilitate the clinical translation of the SFD measurement to achieve enhanced intraoperative hemodynamic monitoring and personalized treatment planning in photodynamic therapy.
Quantifying Proportional Variability
Heath, Joel P.; Borowski, Peter
2013-01-01
Real quantities can undergo such a wide variety of dynamics that the mean is often a meaningless reference point for measuring variability. Despite their widespread application, techniques like the Coefficient of Variation are not truly proportional and exhibit pathological properties. The non-parametric measure Proportional Variability (PV) [1] resolves these issues and provides a robust way to summarize and compare variation in quantities exhibiting diverse dynamical behaviour. Instead of being based on deviation from an average value, variation is simply quantified by comparing the numbers to each other, requiring no assumptions about central tendency or underlying statistical distributions. While PV has been introduced before and has already been applied in various contexts to population dynamics, here we present a deeper analysis of this new measure, derive analytical expressions for the PV of several general distributions and present new comparisons with the Coefficient of Variation, demonstrating cases in which PV is the more favorable measure. We show that PV provides an easily interpretable approach for measuring and comparing variation that can be generally applied throughout the sciences, from contexts ranging from stock market stability to climate variation. PMID:24386334
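Following the pairwise definition of PV summarized above (as commonly stated in the literature; treat this as a sketch), the measure needs no reference to the mean:

```python
def proportional_variability(z):
    """Proportional Variability (PV): the average, over all pairs
    (i, j), of D_ij = 1 - min(z_i, z_j)/max(z_i, z_j).
    Assumes non-negative data; a pair of zeros contributes 0."""
    n = len(z)
    d_sum = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            hi, lo = max(z[i], z[j]), min(z[i], z[j])
            d_sum += 0.0 if hi == 0 else 1.0 - lo / hi
    return 2.0 * d_sum / (n * (n - 1))

pv_constant = proportional_variability([5, 5, 5])  # no variation
pv_spread = proportional_variability([1, 10])      # strong variation
```

Because each pair is compared by ratio rather than by deviation from a central value, PV stays meaningful for series whose mean is itself unstable.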
A Solar Radiation Parameterization for Atmospheric Studies. Volume 15
NASA Technical Reports Server (NTRS)
Chou, Ming-Dah; Suarez, Max J. (Editor)
1999-01-01
The solar radiation parameterization (CLIRAD-SW) developed at the Goddard Climate and Radiation Branch for application to atmospheric models is described. It includes absorption by water vapor, O3, O2, CO2, clouds, and aerosols and scattering by clouds, aerosols, and gases. Depending upon the nature of the absorption, different approaches are applied to different absorbers. In the ultraviolet and visible regions, the spectrum is divided into 8 bands, and a single O3 absorption coefficient and Rayleigh scattering coefficient are used for each band. In the infrared, the spectrum is divided into 3 bands, and the k-distribution method is applied for water vapor absorption. The flux reduction due to O2 is derived from a simple function, while the flux reduction due to CO2 is derived from precomputed tables. Cloud single-scattering properties are parameterized, separately for liquid drops and ice, as functions of water amount and effective particle size. A maximum-random approximation is adopted for the overlapping of clouds at different heights. Fluxes are computed using the Delta-Eddington approximation.
Dong, Chengzhi; Li, Kai; Jiang, Yuxi; Arola, Dwayne; Zhang, Dongsheng
2018-01-08
An optical system for measuring the coefficient of thermal expansion (CTE) of materials has been developed based on electronic speckle interferometry. In this system, the temperature can be varied from -60°C to 180°C with a Peltier device. A specific specimen geometry and an optical arrangement based on the Michelson interferometer are proposed to measure the deformation along two orthogonal axes due to temperature changes. The advantages of the system include its high sensitivity and stability over the whole range of measurement. The experimental setup and approach for estimating the CTE were validated using an aluminum alloy. Following this validation, the system was applied to characterizing the CTE of carbon fiber reinforced composite (CFRP) laminates. For the unidirectional fiber reinforced composites, the CTE varied with fiber orientation and exhibited anisotropic behavior. By stacking the plies with specific angles and order, the CTE of a specific CFRP was constrained to a low level with minimal variation over temperature. The optical system developed in this study can be applied to CTE measurement for engineering and natural materials with high accuracy.
Cuprate diamagnetism in the presence of a pseudogap: Beyond the standard fluctuation formalism
NASA Astrophysics Data System (ADS)
Boyack, Rufus; Chen, Qijin; Varlamov, A. A.; Levin, K.
2018-02-01
It is often claimed that among the strongest evidence for preformed-pair physics in the cuprates are the experimentally observed large values for the diamagnetic susceptibility and Nernst coefficient. These findings are most apparent in the underdoped regime, where a pseudogap is also evident. While the conventional (Gaussian) fluctuation picture has been applied to address these results, this preformed-pair approach omits the crucial effects of a pseudogap. In this paper we remedy this omission by computing the diamagnetic susceptibility and Nernst coefficient in the presence of a normal state gap. We find a large diamagnetic response for a range of temperatures much higher than the transition temperature. In particular, we report semiquantitative agreement with the measured diamagnetic susceptibility onset temperatures, over the entire range of hole dopings. Notable is the fact that at the lower critical doping of the superconducting dome, where the transition temperature vanishes and the pseudogap onset temperature remains large, the onset temperature for both diamagnetic and transverse thermoelectric transport coefficients tends to zero. Due to the importance attributed to the cuprate diamagnetic susceptibility and Nernst coefficient, this work helps to clarify the extent to which pairing fluctuations are a component of the cuprate pseudogap.
Agreement Analysis: What He Said, She Said Versus You Said.
Vetter, Thomas R; Schober, Patrick
2018-06-01
Correlation and agreement are 2 concepts that are widely applied in the medical literature and clinical practice to assess for the presence and strength of an association. However, because correlation and agreement are conceptually distinct, they require the use of different statistics. Agreement is a concept that is closely related to but fundamentally different from and often confused with correlation. The idea of agreement refers to the notion of reproducibility of clinical evaluations or biomedical measurements. The intraclass correlation coefficient is a commonly applied measure of agreement for continuous data. The intraclass correlation coefficient can be validly applied specifically to assess intrarater reliability and interrater reliability. As its name implies, the Lin concordance correlation coefficient is another measure of agreement or concordance. In undertaking a comparison of a new measurement technique with an established one, it is necessary to determine whether they agree sufficiently for the new to replace the old. Bland and Altman demonstrated that using a correlation coefficient is not appropriate for assessing the interchangeability of 2 such measurement methods. They in turn described an alternative approach, the since widely applied graphical Bland-Altman Plot, which is based on a simple estimation of the mean and standard deviation of differences between measurements by the 2 methods. In reading a medical journal article that includes the interpretation of diagnostic tests and application of diagnostic criteria, attention is conventionally focused on aspects like sensitivity, specificity, predictive values, and likelihood ratios. However, if the clinicians who interpret the test cannot agree on its interpretation and resulting typically dichotomous or binary diagnosis, the test results will be of little practical use. 
Such agreement between observers (interobserver agreement) about a dichotomous or binary variable is often reported as the kappa statistic. Assessing the interrater agreement between observers, in the case of ordinal variables and data, also has important biomedical applicability. Typically, this situation calls for use of the Cohen weighted kappa. Questionnaires, psychometric scales, and diagnostic tests are widespread and increasingly used by not only researchers but also clinicians in their daily practice. It is essential that these questionnaires, scales, and diagnostic tests have a high degree of agreement between observers. It is therefore vital that biomedical researchers and clinicians apply the appropriate statistical measures of agreement to assess the reproducibility and quality of these measurement instruments and decision-making processes.
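The unweighted kappa statistic mentioned above is a one-function computation; a minimal sketch for two raters and nominal labels (the Cohen weighted kappa for ordinal data is analogous but adds a disagreement-weight matrix):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement rate and p_e the chance agreement expected from the
    raters' marginal label frequencies."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e) if p_e != 1 else 1.0
```

Kappa of 0 means the raters agree no more often than chance given their marginals, which is why raw percent agreement alone can be badly misleading for imbalanced diagnoses.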
NASA Astrophysics Data System (ADS)
Bishop, Kevin P.; Roy, Pierre-Nicholas
2018-03-01
Free energy calculations are a crucial part of understanding chemical systems but are often computationally expensive for all but the simplest of systems. Various enhanced sampling techniques have been developed to improve the efficiency of these calculations in numerical simulations. However, the majority of these approaches have been applied using classical molecular dynamics. There are many situations where nuclear quantum effects impact the system of interest and a classical description fails to capture these details. In this work, path integral molecular dynamics has been used in conjunction with umbrella sampling, and it has been observed that correct results are only obtained when the umbrella sampling potential is applied to a single path integral bead post quantization. This method has been validated against a Lennard-Jones benchmark system before being applied to the more complicated water dimer system over a broad range of temperatures. Free energy profiles are obtained, and these are utilized in the calculation of the second virial coefficient as well as the change in free energy from the separated water monomers to the dimer. Comparisons to experimental and ground state calculation values from the literature are made for the second virial coefficient at higher temperature and the dissociation energy of the dimer in the ground state.
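For contrast with the path-integral treatment in this record, the purely classical second virial coefficient follows from a one-dimensional quadrature; a sketch in Lennard-Jones reduced units (grid parameters and function name are ours, and quantum corrections are deliberately ignored):

```python
import math

def b2_lennard_jones(t_red, r_max=12.0, n=24000):
    """Classical reduced second virial coefficient of the
    Lennard-Jones fluid, B2* = -2*pi * Int (exp(-u(r)/T*) - 1) r^2 dr,
    evaluated by the trapezoidal rule on (0, r_max]; lengths are in
    units of sigma, so B2* comes out in sigma^3."""
    dr = r_max / n
    total = 0.0
    for i in range(1, n + 1):
        r = i * dr
        u = 4.0 * (r ** -12 - r ** -6)   # LJ pair potential, reduced units
        f = math.exp(-u / t_red) - 1.0   # Mayer f-function; exp underflows to 0 at small r
        w = 0.5 if i == n else 1.0       # trapezoid end-point weight
        total += w * f * r * r * dr
    return -2.0 * math.pi * total
```

Comparing this classical curve with path-integral-based values at low temperature isolates the magnitude of the nuclear quantum effects the abstract is concerned with.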
Lucio, Francesco; Calamia, Elisa; Russi, Elvio; Marchetto, Flavio
2013-01-01
When using an electronic portal imaging device (EPID) for dosimetric verifications, the calibration of the sensitive area is of paramount importance. Two calibration methods are generally adopted: one, empirical, based on an external reference dosimeter or on multiple narrow-beam irradiations, and one based on simulation of the EPID response. In this paper we present an alternative approach based on an intercalibration procedure that is independent of external dosimeters and of simulations, and is quick and easy to perform. Each element of a detector matrix is characterized by a different gain; the aim of the calibration procedure is to relate the gain of each element to a reference one. The method that we used to compute the relative gains is based on recursive acquisitions with the EPID placed in different positions, assuming a constant fluence of the beam for subsequent deliveries. By applying an established procedure and analysis algorithm, the EPID calibration was repeated in several working conditions. Data show that both the photon energy and the presence of a medium between the source and the detector affect the calibration coefficients by less than 1%. The calibration coefficients were then applied to the acquired images, comparing the EPID dose images with films. Measurements were performed with an open field, placing the film at the level of the EPID. The standard deviation of the distribution of the point-to-point difference is 0.6%. An approach of this type for the EPID calibration has many advantages over the standard methods: it does not need an external dosimeter, it is not tied to the irradiation technique, and it is easy to implement in clinical practice. Moreover, it can be applied in case of transit or nontransit dosimetry, solving the problem of the EPID calibration independently of the dose reconstruction method. PACS number: 87.56.-v PMID:24257285
NASA Astrophysics Data System (ADS)
Indarsih, Indrati, Ch. Rini
2016-02-01
In this paper, we define the variance of fuzzy random variables through alpha levels. We prove a theorem showing that the variance of a fuzzy random variable is a fuzzy number. We consider a multi-objective linear programming (MOLP) problem with fuzzy random objective function coefficients and solve it by a variance approach. The approach transforms the MOLP problem with fuzzy random objective function coefficients into an MOLP problem with fuzzy objective function coefficients. By the weighted method, we then obtain a linear programming problem with fuzzy coefficients, which we solve by the simplex method for fuzzy linear programming.
Pulsational stabilities of a star in thermal imbalance - Comparison between the methods
NASA Technical Reports Server (NTRS)
Vemury, S. K.
1978-01-01
The stability coefficients for quasi-adiabatic pulsations of a model in thermal imbalance are evaluated using the dynamical energy (DE) approach, the total (kinetic plus potential) energy (TE) approach, and the small amplitude (SA) approaches. A comparison among the methods shows that two distinct stability coefficients can exist under conditions of thermal imbalance, as pointed out by Demaret. It is shown that both energy (DE and TE) approaches lead to one stability coefficient, while the SA approaches lead to another. The coefficient obtained through the energy approaches is identified as the one which determines the stability of the velocity amplitudes. For a prenova model with a thin hydrogen-burning shell in thermal imbalance, several radial modes are found to be unstable both in the radial displacements and in the velocity amplitudes. However, a new kind of pulsational instability also appears: while the radial displacements are unstable, the velocity amplitudes may be stabilized through the thermal imbalance terms.
Thin-film limit formalism applied to surface defect absorption.
Holovský, Jakub; Ballif, Christophe
2014-12-15
The thin-film limit is derived by a nonconventional approach, and the equations for transmittance, reflectance and absorptance are presented in a highly versatile and accurate form. In the thin-film limit the optical properties do not depend on the absorption coefficient, thickness and refractive index individually, but only on their product. We show that this formalism is applicable to the problem of an ultrathin defective layer, e.g. on top of a layer of amorphous silicon. We develop a new method for direct evaluation of the surface defective layer and the bulk defects. Applying this method to amorphous silicon on glass, we show that the surface defective layer differs from bulk amorphous silicon in terms of light soaking.
Abdel-Aziz, Omar; Abdel-Ghany, Maha F; Nagi, Reham; Abdel-Fattah, Laila
2015-03-15
The present work is concerned with simultaneous determination of cefepime (CEF) and the co-administered drug, levofloxacin (LEV), in spiked human plasma by applying a new approach, Savitzky-Golay differentiation filters, and combined trigonometric Fourier functions to their ratio spectra. The different parameters associated with the calculation of Savitzky-Golay and Fourier coefficients were optimized. The proposed methods were validated and applied for determination of the two drugs in laboratory prepared mixtures and spiked human plasma. The results were statistically compared with reported HPLC methods and were found accurate and precise. Copyright © 2014 Elsevier B.V. All rights reserved.
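The ratio-spectra plus Savitzky-Golay derivative step can be sketched in a few lines. The snippet below uses synthetic Gaussian bands as stand-ins for the CEF and LEV spectra (band positions, noise level and filter settings are illustrative assumptions, not the optimized parameters of the paper):

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)
wl = np.linspace(200.0, 400.0, 501)   # hypothetical wavelength axis, nm

# Synthetic absorption spectra of an analyte and an interferent.
analyte = np.exp(-((wl - 280.0) / 12.0) ** 2)
interferent = 0.6 * np.exp(-((wl - 310.0) / 20.0) ** 2) + 0.05
mixture = analyte + interferent + 0.005 * rng.standard_normal(wl.size)

# Ratio spectrum: divide the mixture by the interferent's spectrum; the
# interferent contributes a constant, which a first derivative removes.
ratio = mixture / interferent
d_ratio = savgol_filter(ratio, window_length=21, polyorder=3, deriv=1,
                        delta=wl[1] - wl[0])
```

The Savitzky-Golay filter both smooths the noise and differentiates in one pass, which is why the window length and polynomial order must be optimized jointly, as the abstract describes.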
Modeling and predicting historical volatility in exchange rate markets
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2017-04-01
Volatility modeling and forecasting of currency exchange rates is an important part of several business risk management tasks, including treasury risk management, derivatives pricing, and portfolio risk evaluation. The purpose of this study is to present a simple and effective approach for predicting the historical volatility of currency exchange rates. The approach is based on a limited set of technical indicators used as inputs to artificial neural networks (ANN). To show the effectiveness of the proposed approach, it was applied to forecast the US/Canada and US/Euro exchange rate volatilities. The forecasting results show that our simple approach outperformed the conventional GARCH and EGARCH models with different distribution assumptions, as well as the hybrid GARCH and EGARCH models combined with ANN, in terms of mean absolute error, mean squared error, and Theil's inequality coefficient. Because of its simplicity and effectiveness, the approach is promising for US currency volatility prediction tasks.
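The indicator-to-ANN pipeline can be sketched as follows, with a synthetic GARCH-like return series standing in for exchange-rate data and scikit-learn's MLPRegressor standing in for the paper's network; the indicator choices, architecture and volatility proxy are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Synthetic returns with clustered (GARCH-like) volatility.
n = 1500
sigma2 = np.empty(n)
r = np.empty(n)
sigma2[0] = 1e-4
for t in range(n):
    if t > 0:
        sigma2[t] = 1e-5 + 0.10 * r[t - 1] ** 2 + 0.85 * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# Simple "technical indicator" inputs: lagged absolute returns and
# short/long moving averages of absolute returns.
def features(t):
    return [abs(r[t - 1]), abs(r[t - 2]),
            np.mean(np.abs(r[t - 5:t])), np.mean(np.abs(r[t - 20:t]))]

X = np.array([features(t) for t in range(20, n)])
y = np.abs(r[20:])          # absolute return as a crude volatility proxy

split = 1200                # train on the first 1200 days, test on the rest
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
mae = float(np.mean(np.abs(pred - y[split:])))
```

In a real comparison the same out-of-sample window would also be scored with GARCH/EGARCH forecasts using the error measures named in the abstract.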
Mutual diffusion coefficients of heptane isomers in nitrogen: A molecular dynamics study
NASA Astrophysics Data System (ADS)
Chae, Kyungchan; Violi, Angela
2011-01-01
The accurate knowledge of transport properties of pure and mixture fluids is essential for the design of various chemical and mechanical systems that include fluxes of mass, momentum, and energy. In this study we determine the mutual diffusion coefficients of mixtures composed of heptane isomers and nitrogen using molecular dynamics (MD) simulations with fully atomistic intermolecular potential parameters, in conjunction with the Green-Kubo formula. The computed results were compared with the values obtained using the Chapman-Enskog (C-E) equation with Lennard-Jones (LJ) potential parameters derived from corresponding-states correlations: MD simulations predict a maximum difference of 6% among isomers, while the C-E equation predicts one of 3%, for the mutual diffusion coefficients in the temperature range 500-1000 K. The comparison of the two approaches implies that the corresponding-states principle can be applied to models that are only weakly affected by the anisotropy of the interaction potentials, and that large uncertainties will be incurred in its application to complex polyatomic molecules. The MD simulations successfully address the pure effect of molecular structure among isomers on mutual diffusion coefficients by revealing that the differences in the total mutual diffusion coefficients for the six mixtures are caused mainly by the heptane isomers. The cross-interaction potential parameters, collision diameter σ_12 and potential energy well depth ε_12, of the heptane isomer-nitrogen mixtures were also computed from the mutual diffusion coefficients.
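The Green-Kubo route to a diffusion coefficient integrates the velocity autocorrelation function (VACF). The sketch below uses a synthetic exponentially decaying VACF with a known analytic integral in place of one accumulated from MD trajectories (kT/m and the correlation time are assumed values):

```python
import numpy as np

# Green-Kubo: D = (1/3) * integral_0^inf <v(0) . v(t)> dt.
kT_over_m = 1.0                  # <v_x^2> per component, reduced units (assumption)
tau = 0.5                        # velocity correlation time (assumption)
t = np.linspace(0.0, 20.0, 20001)
vacf = 3.0 * kT_over_m * np.exp(-t / tau)    # three Cartesian components

dt = t[1] - t[0]
D = (np.sum(0.5 * (vacf[1:] + vacf[:-1])) * dt) / 3.0   # trapezoidal rule
# analytic value of the integral/3 is kT_over_m * tau = 0.5
```

In a real MD post-processing step the VACF (or, for mutual diffusion, the correlation of the interdiffusion current) is averaged over time origins before this integration.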
Santos, M V; Sansinena, M; Zaritzky, N; Chirife, J
BACKGROUND: Dry ice-ethanol bath (-78 degree C) have been widely used in low temperature biological research to attain rapid cooling of samples below freezing temperature. The prediction of cooling rates of biological samples immersed in dry ice-ethanol bath is of practical interest in cryopreservation. The cooling rate can be obtained using mathematical models representing the heat conduction equation in transient state. Additionally, at the solid cryogenic-fluid interface, the knowledge of the surface heat transfer coefficient (h) is necessary for the convective boundary condition in order to correctly establish the mathematical problem. The study was to apply numerical modeling to obtain the surface heat transfer coefficient of a dry ice-ethanol bath. A numerical finite element solution of heat conduction equation was used to obtain surface heat transfer coefficients from measured temperatures at the center of polytetrafluoroethylene and polymethylmetacrylate cylinders immersed in a dry ice-ethanol cooling bath. The numerical model considered the temperature dependence of thermophysical properties of plastic materials used. A negative linear relationship is observed between cylinder diameter and heat transfer coefficient in the liquid bath, the calculated h values were 308, 135 and 62.5 W/(m 2 K) for PMMA 1.3, PTFE 2.59 and 3.14 cm in diameter, respectively. The calculated heat transfer coefficients were consistent among several replicates; h in dry ice-ethanol showed an inverse relationship with cylinder diameter.
Analysis of Drafting Effects in Swimming Using Computational Fluid Dynamics
Silva, António José; Rouboa, Abel; Moreira, António; Reis, Victor Machado; Alves, Francisco; Vilas-Boas, João Paulo; Marinho, Daniel Almeida
2008-01-01
The purpose of this study was to determine the effect of drafting distance on the drag coefficient in swimming. A k-epsilon turbulent model was implemented in the commercial code Fluent® and applied to the fluid flow around two swimmers in a drafting situation. Numerical simulations were conducted for various distances between swimmers (0.5-8.0 m) and swimming velocities (1.6-2.0 m.s-1). The drag coefficient (Cd) was computed for each of the distances and velocities. We found that the drag coefficient of the leading swimmer decreased as the flow velocity increased. The relative drag coefficient of the back swimmer was lowest (about 56% of the leading swimmer) for the smallest inter-swimmer distance (0.5 m). This value increased progressively until the distance between swimmers reached 6.0 m, where the relative drag coefficient of the back swimmer was about 84% of the leading swimmer. The results indicated that the Cd of the back swimmer was equal to that of the leading swimmer at distances ranging from 6.45 to 8.90 m. We conclude that these distances allow the swimmers to be in the same hydrodynamic conditions during training and competitions. Key points: The drag coefficient of the leading swimmer decreased as the flow velocity increased. The relative drag coefficient of the back swimmer was lowest (about 56% of the leading swimmer) for the smallest inter-swimmer distance (0.5 m). The drag coefficients of both swimmers in drafting were equal at distances ranging between 6.45 m and 8.90 m, considering the different flow velocities. Numerical simulation techniques can be a good approach for analysing the fluid forces around objects in water, as happens in swimming. PMID:24150135
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kong, Dali; Zhang, Keke; Schubert, Gerald
2013-02-15
We present a new three-dimensional numerical method for calculating the non-spherical shape and internal structure of a model of a rapidly rotating gaseous body with a polytropic index of unity. The calculation is based on a finite-element method and accounts for the full effects of rotation. After validating the numerical approach against the asymptotic solution of Chandrasekhar that is valid only for a slowly rotating gaseous body, we apply it to models of Jupiter and a rapidly rotating, highly flattened star (α Eridani). In the case of Jupiter, the two-dimensional distributions of density and pressure are determined via a hybrid inverse approach by adjusting an a priori unknown coefficient in the equation of state until the model shape matches the observed shape of Jupiter. After obtaining the two-dimensional distribution of density, we then compute the zonal gravity coefficients and the total mass from the non-spherical model that takes full account of rotation-induced shape change. Our non-spherical model with a polytropic index of unity is able to produce the known mass of Jupiter with about 4% accuracy and the zonal gravitational coefficient J_2 of Jupiter with better than 2% accuracy, a reasonable result considering that there is only one parameter in the model. For α Eridani, we calculate its rotationally distorted shape and internal structure based on the observationally deduced rotation rate and size of the star by using a similar hybrid inverse approach. Our model of the star closely approximates the observed flattening.
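The validation step against a slowly rotating limit can itself be illustrated with the non-rotating n = 1 polytrope, whose Lane-Emden equation has the analytic solution θ(ξ) = sin(ξ)/ξ with first zero at ξ = π. The sketch below (a generic midpoint integrator, not the paper's finite-element scheme) recovers that zero numerically:

```python
import numpy as np

# Lane-Emden equation for polytropic index n = 1:
#   theta'' + (2/xi) * theta' + theta = 0,  theta(0) = 1, theta'(0) = 0.
def rhs(xi, y):
    theta, dtheta = y
    return np.array([dtheta, -2.0 * dtheta / xi - theta])

def first_zero(h=1e-4, xi_max=4.0):
    # start slightly off-centre with the series expansion theta ~ 1 - xi^2/6
    xi = h
    y = np.array([1.0 - h * h / 6.0, -h / 3.0])
    while xi < xi_max and y[0] > 0.0:
        k1 = rhs(xi, y)
        k2 = rhs(xi + 0.5 * h, y + 0.5 * h * k1)   # midpoint (RK2) step
        y = y + h * k2
        xi += h
    return xi

xi1 = first_zero()   # should be close to pi
```

The first zero sets the polytrope's surface radius, the quantity that rotation then distorts in the full three-dimensional model.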
Equivalent-circuit models for electret-based vibration energy harvesters
NASA Astrophysics Data System (ADS)
Phu Le, Cuong; Halvorsen, Einar
2017-08-01
This paper presents a complete analysis to build a tool for modelling electret-based vibration energy harvesters. The calculational approach includes all possible effects of fringing fields that may have significant impact on output power. The transducer configuration consists of two sets of metal strip electrodes on a top substrate that faces electret strips deposited on a bottom movable substrate functioning as a proof mass. Charge distribution on each metal strip is expressed by series expansion using Chebyshev polynomials multiplied by a reciprocal square-root form. The Galerkin method is then applied to extract all charge induction coefficients. The approach is validated by finite element calculations. From the analytic tool, a variety of connection schemes for power extraction in slot-effect and cross-wafer configurations can be lumped to a standard equivalent circuit with inclusion of parasitic capacitance. Fast calculation of the coefficients is also obtained by a proposed closed-form solution based on leading terms of the series expansions. The achieved analytical result is an important step for further optimisation of the transducer geometry and maximising harvester performance.
Gravity field of Jupiter’s moon Amalthea and the implication on a spacecraft trajectory
NASA Astrophysics Data System (ADS)
Weinwurm, Gudrun
2006-01-01
Before its final plunge into Jupiter in September 2003, GALILEO made a last 'visit' to one of Jupiter's moons - Amalthea. This final flyby of the spacecraft's successful mission occurred on November 5, 2002. In order to analyse the spacecraft data with respect to Amalthea's gravity field, interior models of the moon had to be provided. The method used for this approach is based on the numerical integration of infinitesimal volume elements of a three-axial ellipsoid in elliptic coordinates. To derive the gravity field coefficients of the body, the second method of Neumann was applied. Based on the spacecraft trajectory data provided by the Jet Propulsion Laboratory, GALILEO's velocity perturbations at closest approach could be calculated. The harmonic coefficients of Amalthea's gravity field have been derived up to degree and order six, for both homogeneous and physically reasonable heterogeneous cases. Based on these values, the impact on the trajectory of GALILEO was calculated and compared to existing Doppler data. Furthermore, predictions for future spacecraft flybys were derived. No two-way Doppler data were available during the flyby, and the harmonic coefficients of the gravity field are buried in the one-way Doppler noise. Nevertheless, the generated gravity field models reflect the most likely interior structure of the moon and can serve as a basis for further exploration of the Jovian system.
Lee, Nam-Jin; Kang, Chul-Goo
2015-01-01
A brake hardware-in-the-loop simulation (HILS) system for a railway vehicle is widely applied to estimate and validate braking performance in research studies and field tests. When we develop a simulation model for a full vehicle system, the characteristics of all components are generally properly simplified based on the understanding of each component’s purpose and interaction with other components. The friction coefficient between the brake disc and the pad used in simulations has been conventionally considered constant, and the effect of a variable friction coefficient is ignored with the assumption that the variability affects the performance of the vehicle braking very little. However, the friction coefficient of a disc pad changes significantly within a range due to environmental conditions, and thus, the friction coefficient can affect the performance of the brakes considerably, especially on the wheel slide. In this paper, we apply a variable friction coefficient and analyze its effects on a mechanical brake system of a railway vehicle. We introduce a mathematical formula for the variable friction coefficient in which the variable friction is represented by two variables and five parameters. The proposed formula is applied to real-time simulations using a brake HILS system, and the effectiveness of the formula is verified experimentally by testing the mechanical braking performance of the brake HILS system. PMID:26267883
Long-distance effects in B→ K^*ℓ ℓ from analyticity
NASA Astrophysics Data System (ADS)
Bobeth, Christoph; Chrzaszcz, Marcin; van Dyk, Danny; Virto, Javier
2018-06-01
We discuss a novel approach to systematically determine the dominant long-distance contribution to B→ K^*ℓ ℓ decays in the kinematic region where the dilepton invariant mass is below the open charm threshold. This approach provides the most consistent and reliable determination to date and can be used to compute Standard Model predictions for all observables of interest, including the kinematic region where the dilepton invariant mass lies between the J/ψ and the ψ (2S) resonances. We illustrate the power of our results by performing a New Physics fit to the Wilson coefficient C_9. This approach is systematically improvable from theoretical and experimental sides, and applies to other decay modes of the type B→ Vℓ ℓ , B→ Pℓ ℓ and B→ Vγ.
Siudem, Grzegorz; Fronczak, Agata; Fronczak, Piotr
2016-10-10
In this paper, we provide the exact expression for the coefficients in the low-temperature series expansion of the partition function of the two-dimensional Ising model on the infinite square lattice. This is equivalent to exact determination of the number of spin configurations at a given energy. With these coefficients, we show that the ferromagnetic-to-paramagnetic phase transition in the square lattice Ising model can be explained through equivalence between the model and the perfect gas of energy clusters model, in which the passage through the critical point is related to the complete change in the thermodynamic preferences on the size of clusters. The combinatorial approach reported in this article is very general and can be easily applied to other lattice models.
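The quantity at stake, the number of spin configurations at each energy, can be checked by brute force on a tiny lattice; the paper's combinatorial result gives such counts exactly for the infinite lattice. The sketch below enumerates all 2^9 states of a 3×3 periodic square lattice with J = 1:

```python
import numpy as np
from itertools import product
from collections import Counter

# Brute-force density of states of the 3x3 periodic square-lattice Ising model.
L = 3
bonds = [((i, j), ((i + 1) % L, j)) for i in range(L) for j in range(L)] + \
        [((i, j), (i, (j + 1) % L)) for i in range(L) for j in range(L)]

counts = Counter()
for spins in product([-1, 1], repeat=L * L):
    s = np.array(spins).reshape(L, L)
    E = -sum(s[a] * s[b] for a, b in bonds)   # H = -J * sum over bonds, J = 1
    counts[int(E)] += 1
```

The two ferromagnetic ground states sit at E = -18 (all 18 bonds satisfied), and flipping a single spin breaks four bonds, giving 18 configurations at E = -10; these are exactly the leading terms of the low-temperature expansion.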
Vasilyev, K N
2013-01-01
When developing new software products and adapting existing software, project leaders have to decide which functionalities to keep, adapt or develop. They have to consider that the cost of making errors during the specification phase is extremely high. In this paper a formalised approach is proposed that considers the main criteria for selecting new software functions. The application of this approach minimises the chances of making errors in selecting the functions to apply. Based on work on software development and support projects in the area of water resources and flood damage evaluation in economic terms at CH2M HILL (the developers of the flood modelling package ISIS), the author has defined seven criteria for selecting functions to be included in a software product. The approach is based on the evaluation of the relative significance of the functions to be included in the software product. Evaluation is achieved by considering each criterion and its weighting coefficient in turn and applying the method of normalisation. This paper includes a description of this new approach and examples of its application in the development of new software products in the area of water resources management.
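A weighted, normalised scoring of candidate functions against several criteria can be sketched as follows; the function names, criteria weights and raw scores below are illustrative inventions, not the paper's seven criteria:

```python
import numpy as np

# Hypothetical candidate functions and expert scores against four criteria
# (arbitrary, incommensurate scales: the point of normalisation).
functions = ["1D river solver", "2D floodplain solver",
             "damage calculator", "report generator"]
criteria_weights = np.array([0.4, 0.3, 0.2, 0.1])   # relative significance, sums to 1

raw = np.array([[8.0, 200.0, 3.0, 0.90],
                [9.0, 800.0, 4.0, 0.60],
                [6.0, 150.0, 5.0, 0.80],
                [4.0,  50.0, 2.0, 0.95]])

# Min-max normalise each criterion column to [0, 1] so scales are comparable.
norm = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
scores = norm @ criteria_weights
ranking = [functions[i] for i in np.argsort(scores)[::-1]]
```

The weighted sum then ranks the functions; changing the weights expresses different project priorities.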
NASA Technical Reports Server (NTRS)
Chaderjian, N. M.
1986-01-01
A computer code is under development whereby the thin-layer Reynolds-averaged Navier-Stokes equations are to be applied to realistic fighter-aircraft configurations. This transonic Navier-Stokes code (TNS) utilizes a zonal approach in order to treat complex geometries and satisfy in-core computer memory constraints. The zonal approach has been applied to isolated wing geometries in order to facilitate code development. Part 1 of this paper addresses the TNS finite-difference algorithm, zonal methodology, and code validation with experimental data. Part 2 of this paper addresses some numerical issues such as code robustness, efficiency, and accuracy at high angles of attack. Special free-stream-preserving metrics proved an effective way to treat H-mesh singularities over a large range of severe flow conditions, including strong leading-edge flow gradients, massive shock-induced separation, and stall. Furthermore, lift and drag coefficients have been computed for a wing up through CLmax. Numerical oil flow patterns and particle trajectories are presented both for subcritical and transonic flow. These flow simulations are rich with complex separated flow physics and demonstrate the efficiency and robustness of the zonal approach.
Ford, R M; Lauffenburger, D A
1992-05-01
An individual cell-based mathematical model of Rivero et al. provides a framework for determining values of the chemotactic sensitivity coefficient χ0, an intrinsic cell population parameter that characterizes the chemotactic response of bacterial populations. This coefficient can theoretically relate the swimming behavior of individual cells to the resulting migration of a bacterial population. When this model is applied to the commonly used capillary assay, an approximate solution can be obtained for a particular range of chemotactic strengths, yielding a very simple analytical expression for estimating the value of χ0, [formula: see text] from measurements of cell accumulation in the capillary, N, when attractant uptake is negligible. A_0 and A_∞ are the dimensionless attractant concentrations initially present at the mouth of the capillary and far into the capillary, respectively, which are scaled by K_d, the effective dissociation constant for receptor-attractant binding. D is the attractant diffusivity, and μ is the cell random motility coefficient. N_RM is the cell accumulation in the capillary in the absence of an attractant gradient, from which μ can be determined independently as μ = (π/4t)(N_RM/(π r² b_c))², with r the capillary tube radius and b_c the bacterial density initially in the chamber. When attractant uptake is significant, a slightly more involved procedure requiring a simple numerical integration becomes necessary. As an example, we apply this approach to quantitatively characterize, in terms of the chemotactic sensitivity coefficient χ0, data from Terracciano indicating enhanced chemotactic responses of Escherichia coli to galactose when cultured under growth-limiting galactose levels in a chemostat.
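The no-gradient formula for the random motility coefficient is simple enough to evaluate directly. The numbers below are hypothetical example values chosen only to show the units, not data from the paper:

```python
import numpy as np

# mu = (pi / (4 t)) * (N_RM / (pi r^2 b_c))^2, in SI units.
t = 3600.0       # assay duration, s (hypothetical)
r = 1.0e-4       # capillary radius, m, i.e. 0.1 mm (hypothetical)
b_c = 1.0e14     # bacterial density in the chamber, cells/m^3 = 1e8 cells/mL (hypothetical)
N_RM = 2.0e4     # cells accumulated without an attractant gradient (hypothetical)

mu = (np.pi / (4.0 * t)) * (N_RM / (np.pi * r ** 2 * b_c)) ** 2   # m^2/s
```

Inverting the same relation, N_RM = π r² b_c √(4 μ t / π), recovers the input count, which is a quick consistency check on the algebra.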
Wu, Xia; Zhu, Jian-Cheng; Zhang, Yu; Li, Wei-Min; Rong, Xiang-Lu; Feng, Yi-Fan
2016-08-25
Potential impact of lipid research has been increasingly realized both in disease treatment and prevention. An effective metabolomics approach based on ultra-performance liquid chromatography/quadrupole time-of-flight mass spectrometry (UPLC/Q-TOF-MS), together with multivariate statistical analysis, was applied to investigate the dynamic change of plasma phospholipid composition in early type 2 diabetic rats after treatment with an ancient prescription of Chinese medicine, Huang-Qi-San. The exported UPLC/Q-TOF-MS data of plasma samples were subjected to SIMCA-P and processed with the bioMark, mixOmics and Rcmdr packages in R. Clear score plots of the plasma sample groups, including the normal control group (NC), model group (MC), positive medicine control group (Flu) and Huang-Qi-San group (HQS), were achieved by principal component analysis (PCA), partial least-squares discriminant analysis (PLS-DA) and orthogonal partial least-squares discriminant analysis (OPLS-DA). Biomarkers were screened out using Student's t-test, principal component regression (PCR), partial least-squares regression (PLS) and an important-variable method (variable influence on projection, VIP). Structures of metabolites were identified and metabolic pathways were deduced via correlation coefficients. The relationship between compounds was explained by the correlation coefficient diagram, and the metabolic differences between similar compounds were illustrated. Based on the KEGG database, the biological significance of the identified biomarkers was described. The correlation coefficient was first applied to identify the structures and deduce the metabolic pathways of phospholipid metabolites, and the study provides a new methodological clue for further understanding the molecular mechanisms of metabolites in the course of Huang-Qi-San treatment of early type 2 diabetes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Coefficient Alpha: A Reliability Coefficient for the 21st Century?
ERIC Educational Resources Information Center
Yang, Yanyun; Green, Samuel B.
2011-01-01
Coefficient alpha is almost universally applied to assess reliability of scales in psychology. We argue that researchers should consider alternatives to coefficient alpha. Our preference is for structural equation modeling (SEM) estimates of reliability because they are informative and allow for an empirical evaluation of the assumptions…
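Coefficient alpha itself is a one-line computation, which is part of why it is so widely (over)used. A minimal implementation on synthetic single-factor data:

```python
import numpy as np

def cronbach_alpha(X):
    """Coefficient alpha for an items matrix X (rows = respondents, cols = items):
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Synthetic 5-item scale driven by one common factor plus noise.
rng = np.random.default_rng(3)
factor = rng.standard_normal(500)
items = factor[:, None] + 0.8 * rng.standard_normal((500, 5))
alpha = cronbach_alpha(items)
```

Alpha equals reliability only under restrictive assumptions (essential tau-equivalence, uncorrelated errors); the SEM-based estimates the abstract prefers make those assumptions explicit and testable.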
Crane, Paul K; Gibbons, Laura E; Jolley, Lance; van Belle, Gerald
2006-11-01
We present an ordinal logistic regression model for identification of items with differential item functioning (DIF) and apply this model to a Mini-Mental State Examination (MMSE) dataset. We employ item response theory ability estimation in our models. Three nested ordinal logistic regression models are applied to each item. Model testing begins with examination of the statistical significance of the interaction term between ability and the group indicator, consistent with nonuniform DIF. Then we turn our attention to the coefficient of the ability term in models with and without the group term. If including the group term has a marked effect on that coefficient, we declare that it has uniform DIF. We examined DIF related to language of test administration in addition to self-reported race, Hispanic ethnicity, age, years of education, and sex. We used PARSCALE for IRT analyses and STATA for ordinal logistic regression approaches. We used an iterative technique for adjusting IRT ability estimates on the basis of DIF findings. Five items were found to have DIF related to language. These same items also had DIF related to other covariates. The ordinal logistic regression approach to DIF detection, when combined with IRT ability estimates, provides a reasonable alternative for DIF detection. There appear to be several items with significant DIF related to language of test administration in the MMSE. More attention needs to be paid to the specific criteria used to determine whether an item has DIF, not just the technique used to identify DIF.
Past and projected trends of body mass index and weight status in South Australia: 2003 to 2019.
Hendrie, Gilly A; Ullah, Shahid; Scott, Jane A; Gray, John; Berry, Narelle; Booth, Sue; Carter, Patricia; Cobiac, Lynne; Coveney, John
2015-12-01
Functional data analysis (FDA) is a forecasting approach that, to date, has not been applied to obesity, and that may provide more accurate forecasting analysis to manage uncertainty in public health. This paper uses FDA to provide projections of Body Mass Index (BMI), overweight and obesity in an Australian population through to 2019. Data from the South Australian Monitoring and Surveillance System (January 2003 to December 2012, n=51,618 adults) were collected via telephone interview survey. FDA was conducted in four steps: 1) age-gender specific BMIs for each year were smoothed using a weighted regression; 2) the functional principal components decomposition was applied to estimate the basis functions; 3) an exponential smoothing state space model was used for forecasting the coefficient series; and 4) forecast coefficients were combined with the basis function. The forecast models suggest that between 2012 and 2019 average BMI will increase from 27.2 kg/m² to 28.0 kg/m² in males and 26.4 kg/m² to 27.6 kg/m² in females. The prevalence of obesity is forecast to increase by 6-7 percentage points by 2019 (to 28.7% in males and 29.2% in females). Projections identify age-gender groups at greatest risk of obesity over time. The novel approach will be useful to facilitate more accurate planning and policy development. © 2015 Public Health Association of Australia.
Chaurasia, Ashok; Harel, Ofer
2015-02-10
Tests for regression coefficients such as global, local, and partial F-tests are common in applied research. In the framework of multiple imputation, there are several papers addressing tests for regression coefficients. However, for simultaneous hypothesis testing, the existing methods are computationally intensive because they involve calculation with vectors and (inversion of) matrices. In this paper, we propose a simple method based on the scalar entity, coefficient of determination, to perform (global, local, and partial) F-tests with multiply imputed data. The proposed method is evaluated using simulated data and applied to suicide prevention data. Copyright © 2014 John Wiley & Sons, Ltd.
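The scalar building block of the proposal, a global F-test computed from the coefficient of determination alone, looks like this on a single complete dataset; pooling the scalar across multiple imputations is the paper's contribution and is not shown:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulated data: n observations, p predictors, two nonzero slopes.
n, p = 200, 3
X = rng.standard_normal((n, p))
y = 1.0 + X @ np.array([0.5, -0.3, 0.0]) + rng.standard_normal(n)

# OLS fit and R^2.
Xd = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Global F-test of H0: all slopes are zero, from the scalar R^2 only:
#   F = (R^2 / p) / ((1 - R^2) / (n - p - 1)).
F = (r2 / p) / ((1.0 - r2) / (n - p - 1))
p_value = stats.f.sf(F, p, n - p - 1)
```

Local and partial F-tests follow the same pattern by comparing the R² of nested models, which is what makes a scalar-based multiple-imputation combination attractive.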
Upgrading CCIR's foF2 maps using available ionosondes and genetic algorithms
NASA Astrophysics Data System (ADS)
Gularte, Erika; Carpintero, Daniel D.; Jaen, Juliana
2018-04-01
We have developed a new approach towards a new database of the ionospheric parameter fo F 2 . This parameter, being the frequency of the maximum of the ionospheric electronic density profile and its main modeller, is of great interest not only in atmospheric studies but also in the realm of radio propagation. The current databases, generated by CCIR (Committee Consultative for Ionospheric Radiowave propagation) and URSI (International Union of Radio Science), and used by the IRI (International Reference Ionosphere) model, are based on Fourier expansions and have been built in the 60s from the available ionosondes at that time. The main goal of this work is to upgrade the databases by using new available ionosonde data. To this end we used the IRI diurnal/spherical expansions to represent the fo F 2 variability, and computed its coefficients by means of a genetic algorithm (GA). In order to test the performance of the proposed methodology, we applied it to the South American region with data obtained by RAPEAS (Red Argentina para el Estudio de la Atmósfera Superior, i.e. Argentine Network for the Study of the Upper Atmosphere) during the years 1958-2009. The new GA coefficients provide a global better fit of the IRI model to the observed fo F 2 than the CCIR coefficients. Since the same formulae and the same number of coefficients were used, the overall integrity of IRI's typical ionospheric feature representation was preserved. The best improvements with respect to CCIR are obtained at low solar activities, at large (in absolute value) modip latitudes, and at night-time. The new method is flexible in the sense that can be applied either globally or regionally. It is also very easy to recompute the coefficients when new data is available. The computation of a third set of coefficients corresponding to days of medium solar activity in order to avoid the interpolation between low and high activities is suggested. 
The same procedure used for foF2 can be performed to obtain the ionospheric parameter M(3000)F2.
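The coefficient-fitting step can be illustrated on a toy version of the problem: a real-coded genetic algorithm searching for the coefficients of a small diurnal harmonic expansion that best fit noisy synthetic data. The expansion, the population size, and the GA operators below are illustrative stand-ins, not the actual IRI basis or the authors' GA settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy diurnal harmonic expansion (a stand-in for the IRI foF2 expansion):
# f(t) = c0 + c1*cos(w t) + c2*sin(w t) + c3*cos(2 w t) + c4*sin(2 w t)
w = 2 * np.pi / 24.0
t = np.linspace(0, 24, 97)
true_c = np.array([6.0, 1.5, -0.8, 0.4, 0.2])

def model(c, t):
    return (c[0] + c[1] * np.cos(w * t) + c[2] * np.sin(w * t)
                 + c[3] * np.cos(2 * w * t) + c[4] * np.sin(2 * w * t))

data = model(true_c, t) + rng.normal(0, 0.05, t.size)

def rmse(c):
    return np.sqrt(np.mean((model(c, t) - data) ** 2))

# Real-coded GA: tournament selection, blend crossover, Gaussian mutation.
pop = rng.uniform(-10, 10, size=(80, 5))
for gen in range(300):
    fit = np.array([rmse(ind) for ind in pop])
    # binary tournament selection of parents
    idx = rng.integers(0, 80, size=(80, 2))
    parents = pop[np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # blend crossover between pairs of parents
    alpha = rng.uniform(0, 1, size=(80, 1))
    children = alpha * parents + (1 - alpha) * parents[::-1]
    # Gaussian mutation, shrinking as generations pass
    children += rng.normal(0, 0.5 * 0.99 ** gen, children.shape)
    # elitism: carry the current best individual over unchanged
    children[0] = pop[np.argmin(fit)]
    pop = children

best = pop[np.argmin([rmse(ind) for ind in pop])]
print(np.round(best, 2))
```

The fitness here is simply the RMSE against the observations; in the paper's setting it would be the misfit of the expansion to the ionosonde foF2 data over the region of interest.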
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, X.; Florinski, V.
We present a new model that couples galactic cosmic-ray (GCR) propagation with magnetic turbulence transport and the MHD background evolution in the heliosphere. The model is applied to the problem of the formation of corotating interaction regions (CIRs) during the last solar minimum, in the period between 2007 and 2009. The numerical model simultaneously calculates the large-scale supersonic solar wind properties and its small-scale turbulent content from 0.3 au to the termination shock. Cosmic rays are then transported through the background thus computed, with diffusion coefficients derived from the solar wind turbulent properties, using a stochastic Parker approach. Our results demonstrate that GCR variations depend on the ratio of diffusion coefficients in the fast and slow solar winds. Stream interfaces inside the CIRs always lead to depressions of the GCR intensity. On the other hand, heliospheric current sheet (HCS) crossings do not appreciably affect GCR intensities in the model, which is consistent with observations under quiet solar wind conditions. Therefore, variations in diffusion coefficients associated with CIR stream interfaces are more important for GCR propagation than the drift effects of the HCS during a negative solar minimum.
Testing a single regression coefficient in high dimensional linear models
Lan, Wei; Zhong, Ping-Shou; Li, Runze; Wang, Hansheng; Tsai, Chih-Ling
2017-01-01
In linear regression models with high dimensional data, the classical z-test (or t-test) for testing the significance of each single regression coefficient is no longer applicable. This is mainly because the number of covariates exceeds the sample size. In this paper, we propose a simple and novel alternative by introducing the Correlated Predictors Screening (CPS) method to control for predictors that are highly correlated with the target covariate. Accordingly, the classical ordinary least squares approach can be employed to estimate the regression coefficient associated with the target covariate. In addition, we demonstrate that the resulting estimator is consistent and asymptotically normal even if the random errors are heteroscedastic. This enables us to apply the z-test to assess the significance of each covariate. Based on the p-value obtained from testing the significance of each covariate, we further conduct multiple hypothesis testing by controlling the false discovery rate at the nominal level. Then, we show that the multiple hypothesis testing achieves consistent model selection. Simulation studies and empirical examples are presented to illustrate the finite sample performance and the usefulness of the proposed method, respectively. PMID:28663668
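A minimal sketch of the idea, under a toy data-generating process: screen for the predictors most highly correlated with the target covariate, control for them in an ordinary least squares fit, and apply the classical z-test to the target coefficient. The screening size `d` and the simulation design are illustrative choices, not the paper's procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p = 200, 500                       # more covariates than samples
Z = rng.normal(size=(n, p))
Z[:, 1] = 0.8 * Z[:, 0] + 0.6 * rng.normal(size=n)   # x2 correlated with x1
beta = np.zeros(p)
beta[0], beta[1] = 1.0, 2.0
y = Z @ beta + rng.normal(size=n)

target = 0
# Correlated Predictors Screening: keep the d covariates most
# correlated with the target covariate and control for them.
d = 5
others = np.delete(np.arange(p), target)
corr = np.abs([np.corrcoef(Z[:, target], Z[:, j])[0, 1] for j in others])
controls = others[np.argsort(corr)[-d:]]

# Classical OLS on the target plus its screened controls.
X = np.column_stack([np.ones(n), Z[:, target], Z[:, controls]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)
z = coef[1] / np.sqrt(cov[1, 1])      # z-test on the target coefficient
pval = 2 * stats.norm.sf(abs(z))
print(round(coef[1], 2))
```

Without the screened control for the correlated predictor, the target coefficient would absorb part of its effect; with it, the estimate is close to the true value of 1.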
Tilt correction for intracavity mirror of laser with an unstable resonator
NASA Astrophysics Data System (ADS)
Zhang, Xiang; Xu, Bing; Yang, Wei
2005-12-01
The influence of intracavity tilt perturbations on the outcoupled mode of a confocal unstable resonator is analyzed. The intracavity mode properties and the Zernike aberration coefficients caused by misalignment of the intracavity mirrors are calculated theoretically. Experimental results on the relation between intracavity mirror misalignment and mode aberration, obtained with a Hartmann-Shack wavefront sensor, are presented. The results show that a tilt perturbation of the concave mirror has a more pronounced effect on the outcoupled beam quality than one of the convex mirror. For a large-Fresnel-number resonator, the tilt angle of an intracavity mirror has a close linear relationship with the extracavity Zernike tilt coefficient. The ratio of the tilt aberration coefficients approaches the magnification of the unstable resonator when equivalent perturbations are applied to the concave and convex mirrors respectively. Furthermore, the astigmatism and defocus aberrations also increase as the tilt aberration of the beam mode grows. Intracavity phase-correcting elements used in an unstable resonator should therefore be placed close to the concave mirror. Based on these results, an automatic control system for intracavity tilt aberration was established, and the aberration-correction results are presented and analyzed in detail.
Non-perturbative Approach to Equation of State and Collective Modes of the QGP
NASA Astrophysics Data System (ADS)
Liu, Shuai Y. F.; Rapp, Ralf
2018-01-01
We discuss a non-perturbative T-matrix approach to investigate the microscopic structure of the quark-gluon plasma (QGP), utilizing an effective Hamiltonian that includes both light- and heavy-parton degrees of freedom. The basic two-body interaction includes color-Coulomb and confining contributions in all available color channels, and is constrained by lattice-QCD data for the heavy-quark free energy. The in-medium T-matrices and parton spectral functions are computed self-consistently with full account of off-shell properties encoded in large scattering widths. We apply the T-matrices to calculate the equation of state (EoS) of the QGP, including a ladder resummation of the Luttinger-Ward functional using a matrix-log technique to account for the dynamical formation of bound states. It turns out that the latter become the dominant degrees of freedom in the EoS at low QGP temperatures, indicating a transition from parton to hadron degrees of freedom. The calculated spectral properties of one- and two-body states confirm this picture: large parton scattering rates dissolve the parton quasiparticle structures, while broad resonances start to form as the pseudocritical temperature is approached from above. Further calculations of transport coefficients reveal a small viscosity and a small heavy-quark diffusion coefficient.
Measuring monotony in two-dimensional samples
NASA Astrophysics Data System (ADS)
Kachapova, Farida; Kachapov, Ilias
2010-04-01
This note introduces a monotony coefficient as a new measure of monotone dependence in a two-dimensional sample. Some properties of this measure are derived. In particular, it is shown that the absolute value of the monotony coefficient for a two-dimensional sample lies between |r| and 1, where r is Pearson's correlation coefficient for the sample, and that the monotony coefficient equals 1 for any monotone increasing sample and -1 for any monotone decreasing sample. The article contains several examples demonstrating that the monotony coefficient is a more accurate measure of the degree of monotone dependence in a non-linear relationship than Pearson's, Spearman's, and Kendall's correlation coefficients. The monotony coefficient is a tool that can be applied to samples in order to find dependencies between random variables; it is especially useful for finding pairs of dependent variables in a large dataset of many variables. Undergraduate students in mathematics and science would benefit from learning and applying this measure of monotone dependence.
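The abstract does not give the formula for the monotony coefficient itself, but the comparison it draws can be reproduced with the classical coefficients it is measured against: for a monotone but strongly non-linear sample, the rank-based measures reach 1 while Pearson's r stays below it. A sketch using SciPy:

```python
import numpy as np
from scipy import stats

# Monotone increasing but strongly non-linear sample: y = x**3.
x = np.linspace(-2, 2, 41)
y = x ** 3

r, _ = stats.pearsonr(x, y)        # < 1: penalizes the non-linearity
rho, _ = stats.spearmanr(x, y)     # = 1: rank-based, sees the monotone increase
tau, _ = stats.kendalltau(x, y)    # = 1: concordance-based
print(round(r, 3))
```

By the property quoted above, the monotony coefficient for this sample would also equal 1, while lying between |r| and 1 in general.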
Ellwood, R; Stratoudaki, T; Sharples, S D; Clark, M; Somekh, M G
2014-03-01
The third-order elastic constants of a material are believed to be sensitive to residual stress, fatigue, and creep damage. The acoustoelastic coefficient is directly related to these third-order elastic constants. Several techniques have been developed to monitor the acoustoelastic coefficient using ultrasound. In this article, two techniques to impose stress on a sample are compared, one using the classical method of applying a static strain using a bending jig and the other applying a dynamic stress due to the presence of an acoustic wave. Results on aluminum samples are compared. Both techniques are found to produce similar values for the acoustoelastic coefficient. The dynamic strain technique however has the advantages that it can be applied to large, real world components, in situ, while ensuring the measurement takes place in the nondestructive, elastic regime.
Hybrid method for moving interface problems with application to the Hele-Shaw flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, T.Y.; Li, Zhilin; Osher, S.
In this paper, a hybrid approach which combines the immersed interface method with the level set approach is presented. The fast version of the immersed interface method is used to solve the differential equations, whose solutions and their derivatives may be discontinuous across the interfaces due to discontinuous coefficients and/or singular sources along the interfaces. The moving interfaces are then updated using the newly developed fast level set formulation, which involves computation only inside small tubes containing the interfaces. This method combines the advantages of the two approaches and gives a second-order Eulerian discretization for interface problems. Several key steps in the implementation are addressed in detail. The new approach is then applied to Hele-Shaw flow, an unstable flow involving two fluids of very different viscosity. 40 refs., 10 figs., 3 tabs.
Uncertainty Quantification in Simulations of Epidemics Using Polynomial Chaos
Santonja, F.; Chen-Charpentier, B.
2012-01-01
Mathematical models based on ordinary differential equations are a useful tool for studying the processes involved in epidemiology. Many models treat the parameters as deterministic variables, but in practice the transmission parameters present large variability, cannot be determined exactly, and it is necessary to introduce randomness. In this paper, we present an application of the polynomial chaos approach to epidemiological mathematical models based on ordinary differential equations with random coefficients. Taking into account the variability of the transmission parameters of the model, this approach allows us to obtain an auxiliary system of differential equations, which is then integrated numerically to obtain the first- and second-order moments of the output stochastic processes. A sensitivity analysis based on the polynomial chaos approach is also performed to determine which parameters have the greatest influence on the results. As an example, we apply the approach to an obesity epidemic model. PMID:22927889
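The flavor of the approach can be seen on a scalar toy problem. The paper derives an auxiliary ODE system for the expansion coefficients (an intrusive formulation); the sketch below instead computes the Hermite chaos coefficients non-intrusively by Gauss-Hermite quadrature for du/dt = -k·u with a Gaussian random rate k, and checks the first two moments against the exact lognormal values. The toy model and all names are illustrative.

```python
import math
import numpy as np

# Toy ODE du/dt = -k u, u(0) = 1, with random rate k ~ N(mu, sigma^2);
# its exact solution per realization is u(t; k) = exp(-k t).
mu, sigma, t = 1.0, 0.2, 1.0

# Probabilists' Gauss-Hermite rule: expectations over xi ~ N(0, 1).
nodes, weights = np.polynomial.hermite_e.hermegauss(12)
weights = weights / np.sqrt(2 * np.pi)
u = np.exp(-(mu + sigma * nodes) * t)          # solution at each node

# Project onto Hermite polynomials He_n, using E[He_n^2] = n!.
order = 4
coeffs = [np.sum(weights * u *
                 np.polynomial.hermite_e.HermiteE.basis(n)(nodes))
          / math.factorial(n) for n in range(order + 1)]

mean_pce = coeffs[0]
var_pce = sum(c ** 2 * math.factorial(n)
              for n, c in enumerate(coeffs) if n > 0)

# Exact moments of exp(-k t) for Gaussian k (lognormal) for comparison.
mean_exact = np.exp(-mu * t + 0.5 * (sigma * t) ** 2)
var_exact = np.exp(-2 * mu * t) * (np.exp(2 * (sigma * t) ** 2)
                                   - np.exp((sigma * t) ** 2))
print(round(mean_pce, 4), round(mean_exact, 4))
```

The low-order truncation already reproduces both moments to high accuracy because the chaos coefficients decay factorially for this smooth response.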
Sensitivity Analysis for Coupled Aero-structural Systems
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.
1999-01-01
A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
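The brute-force baseline in that comparison is plain central differencing over the design variables, at two full analyses per variable. A sketch with a cheap invented response function standing in for the expensive coupled aero-structural analysis:

```python
import numpy as np

def lift_coefficient(p):
    # Hypothetical smooth response standing in for an expensive coupled
    # aero-structural analysis; p = [alpha, thickness] (illustrative).
    alpha, thickness = p
    return 0.1 * alpha + 0.3 * np.sin(thickness) - 0.02 * alpha * thickness

def fd_gradient(f, p, h=1e-6):
    """Central finite differences: two evaluations of f per variable."""
    p = np.asarray(p, dtype=float)
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

p0 = np.array([2.0, 0.5])
print(fd_gradient(lift_coefficient, p0))
```

The cost scales as two analyses per design variable, which is what makes semi-analytic sensitivities such as the GSE/modal approach attractive once the number of design parameters grows.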
Milner, Allison; Aitken, Zoe; Kavanagh, Anne; LaMontagne, Anthony D; Petrie, Dennis
2016-11-01
This study investigated the extent to which psychosocial job stressors had lasting effects on a scaled measure of mental health. We applied econometric approaches to a longitudinal cohort to: (1) control for unmeasured individual effects; (2) assess the role of prior (lagged) exposures to job stressors on mental health and (3) assess the persistence of mental health. We used a panel study with 13 annual waves and applied fixed-effects, first-difference and fixed-effects Arellano-Bond models. The Short Form 36 (SF-36) Mental Health Component Summary score was the outcome variable, and the key exposures included job control, job demands, job insecurity and fairness of pay. Results from the Arellano-Bond models suggest that greater fairness of pay (β-coefficient 0.34, 95% CI 0.23 to 0.45), job control (β-coefficient 0.15, 95% CI 0.10 to 0.20) and job security (β-coefficient 0.37, 95% CI 0.32 to 0.42) were contemporaneously associated with better mental health. Similar results were found for the fixed-effects and first-difference models. The Arellano-Bond model also showed persistent effects of individual mental health, whereby individuals' previous reports of mental health were related to their reporting in subsequent waves. The estimated long-run impact of job demands on mental health increased after accounting for time-related dynamics, while impacts were more minimal for the other job stressor variables. Our results showed that the majority of the effects of psychosocial job stressors on a scaled measure of mental health are contemporaneous, except for job demands, where accounting for the lagged dynamics was important.
Rouse-Bueche Theory and The Calculation of The Monomeric Friction Coefficient in a Filled System
NASA Astrophysics Data System (ADS)
Martinetti, Luca; Macosko, Christopher; Bates, Frank
According to flexible chain theories of viscoelasticity, all relaxation and retardation times of a polymer melt (hence, any dynamic property such as the diffusion coefficient) depend on the monomeric friction coefficient, ζ0, i.e. the average drag force per monomer per unit velocity encountered by a Gaussian submolecule moving through its free-draining surroundings. Direct experimental access to ζ0 relies on the availability of a suitable polymer dynamics model. Thus far, no method has been suggested that is applicable to filled systems, such as filled rubbers or microphase-segregated A-B-A thermoplastic elastomers at temperatures where one of the blocks is glassy. Building upon the procedure proposed by Ferry for entangled and unfilled polymer melts, the Rouse-Bueche theory is applied to an undiluted triblock copolymer to extract ζ0 from the linear viscoelastic behavior in the rubber-glass transition region, and to estimate the size of Gaussian submolecules. At iso-free-volume conditions, the matrix monomeric friction coefficient obtained in this way is consistent with the corresponding value for the homopolymer melt. In addition, the characteristic Rouse dimensions are in good agreement with independent estimates based on the Kratky-Porod worm-like chain model. These results seem to validate the proposed approach for estimating ζ0 in a filled system. Although tested so far only on an A-B-A thermoplastic elastomer, the method may be extended and applied to filled homopolymers as well.
Jonker, Michiel T O
2016-06-01
Octanol-water partition coefficients (KOW) are widely used in fate and effects modeling of chemicals. Still, high-quality experimental KOW data are scarce, in particular for very hydrophobic chemicals. This hampers reliable assessment of several fate and effect parameters and the development and validation of new models. One reason for the limited availability of experimental values may be the challenging nature of KOW measurements. In the present study, KOW values for 13 polycyclic aromatic hydrocarbons were determined with the gold-standard "slow-stirring" method (log KOW 4.6-7.2). These values were then used as reference data for the development of an alternative method for measuring KOW. This approach combined slow stirring with equilibrium sampling of the extremely low aqueous concentrations by polydimethylsiloxane-coated solid-phase microextraction fibers, applying experimentally determined fiber-water partition coefficients. It resulted in KOW values matching the slow-stirring data very well. The method was therefore subsequently applied to a series of 17 moderately to extremely hydrophobic petrochemical compounds. The obtained KOW values spanned almost 6 orders of magnitude, with the highest value measuring 10^10.6. The present study demonstrates that the hydrophobicity domain within which experimental KOW measurements are possible can be extended with the help of solid-phase microextraction, and that experimentally determined KOW values can exceed the proposed upper limit of 10^9. Environ Toxicol Chem 2016;35:1371-1377. © 2015 SETAC.
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Guo, Ping
2017-10-01
Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model is derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. It can therefore solve ratio optimization problems with fuzzy parameters, and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions under different credibility levels and weight coefficients; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (λ and γ) indicates that the weight coefficient is the main factor for system efficiency, compared with the credibility level. These results can support reasonable irrigation water resources management and agricultural production.
Liger-Belair, Gérard; Topgaard, Daniel; Voisin, Cédric; Jeandet, Philippe
2004-05-11
In this paper, the transversal diffusion coefficient D⊥ of dissolved CO2 molecules through the wall of a hydrated cellulose fiber was estimated from the bulk liquid diffusion coefficient of dissolved CO2, modified by an obstruction factor. The porous network between the cellulose microfibrils of the fiber wall was assumed to be saturated with liquid. We used information from previous NMR experiments on the self-diffusion of water in cellulose fibers to reach an order of magnitude for the transversal diffusion coefficient of CO2 molecules through the fiber wall. A value of about D⊥ ≈ 0.2 D0 was proposed, D0 being the diffusion coefficient of CO2 molecules in the bulk liquid. Because most bubble nucleation sites in a glass poured with a carbonated beverage are cellulose fibers cast off from paper or cloth, which floated in from the surrounding air or remained from the wiping process, this result applies directly to the kinetics of carbon dioxide bubble formation in champagne and sparkling wines. If the cellulose fiber wall were impermeable to dissolved CO2 molecules, it was suggested, the kinetics of bubbling would be about three times slower than observed.
Ultrasensitivity and sharp threshold theorems for multisite systems
NASA Astrophysics Data System (ADS)
Dougoud, M.; Mazza, C.; Vinckenbosch, L.
2017-02-01
This work studies the ultrasensitivity of multisite binding processes where ligand molecules can bind to several binding sites. In particular, it considers recent models involving complex chemical reactions in allosteric phosphorylation processes and for transcription factors and nucleosomes competing for binding on DNA. New statistics-based formulas for the Hill coefficient and the effective Hill coefficient are provided, and necessary conditions for a system to be ultrasensitive are exhibited. It is first shown that the ultrasensitivity of binding processes can be approached using sharp-threshold theorems developed in applied probability theory and statistical mechanics for studying sharp threshold phenomena in reliability theory, random graph theory and percolation theory. Special classes of binding processes are then introduced and described as density-dependent birth and death processes. New precise large deviation results for the steady state distribution of the process are obtained, which show that switch-like ultrasensitive responses are strongly related to the multi-modality of the steady state distribution. Ultrasensitivity occurs if and only if the entropy of the dynamical system has more than one global minimum for some critical ligand concentration. In this case, the Hill coefficient is proportional to the number of binding sites, and the system is highly ultrasensitive. The classical effective Hill coefficient I is extended to a new cooperativity index I_q, for which we recommend computing a broad range of values of q instead of just the standard I = I_0.9 corresponding to the 10%-90% variation in the dose-response; this single choice can sometimes mislead the conclusion by failing to detect ultrasensitivity. This approach allows a better understanding of multisite ultrasensitive systems and provides new tools for the design of such systems.
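For a pure Hill dose-response, the classical effective Hill coefficient (the ln 81 formula based on the 10%-90% concentration range) recovers the true exponent, and a q-parametrized index of the same form is constant in q. The form of I_q below is an assumed natural generalization for illustration; the paper's exact definition may differ.

```python
import numpy as np
from scipy.optimize import brentq

# Hill dose-response with true Hill coefficient n_true and half-max K.
n_true, K = 3.0, 1.0
f = lambda x: x ** n_true / (K ** n_true + x ** n_true)

def ec(level):
    """Ligand concentration giving the fractional response `level`."""
    return brentq(lambda x: f(x) - level, 1e-9, 1e9)

def I_q(q):
    # Assumed generalization: reduces to the classical effective Hill
    # coefficient ln(81) / ln(EC90 / EC10) at q = 0.9, since 81 = (0.9/0.1)^2.
    return np.log((q / (1 - q)) ** 2) / np.log(ec(q) / ec(1 - q))

print(round(I_q(0.9), 3), round(I_q(0.99), 3))
```

For real multisite systems the index generally varies with q, which is exactly why the abstract recommends scanning a broad range of q rather than relying on the single value I_0.9.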
Limit Cycle Analysis Applied to the Oscillations of Decelerating Blunt-Body Entry Vehicles
NASA Technical Reports Server (NTRS)
Schoenenberger, Mark; Queen, Eric M.
2008-01-01
Many blunt-body entry vehicles have nonlinear dynamic stability characteristics that produce self-limiting oscillations in flight. Several different test techniques can be used to extract dynamic aerodynamic coefficients to predict this oscillatory behavior for planetary entry mission design and analysis. Most of these test techniques impose boundary conditions that alter the oscillatory behavior from that seen in flight. Three sets of test conditions, representing three commonly used test techniques, are presented to highlight these effects. Analytical solutions to the constant-coefficient planar equations-of-motion for each case are developed to show how the same blunt body behaves differently depending on the imposed test conditions. The energy equation is applied to further illustrate the governing dynamics. Then, the mean value theorem is applied to the energy rate equation to find the effective damping for an example blunt body with nonlinear, self-limiting dynamic characteristics. This approach is used to predict constant-energy oscillatory behavior and the equilibrium oscillation amplitudes for the various test conditions. These predictions are verified with planar simulations. The analysis presented provides an overview of dynamic stability test techniques and illustrates the effects of dynamic stability, static aerodynamics and test conditions on observed dynamic motions. It is proposed that these effects may be leveraged to develop new test techniques and refine test matrices in future tests to better define the nonlinear functional forms of blunt body dynamic stability curves.
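The self-limiting behavior described above can be reproduced with the simplest nonlinear damping model, a van der Pol-type equation whose damping is negative at small amplitude and positive at large amplitude. This is a generic stand-in, not the pitch-damping curve of any particular blunt body: a small disturbance grows and then saturates at a fixed oscillation amplitude.

```python
from scipy.integrate import solve_ivp

# Self-limiting pitch oscillation (van der Pol form): damping is
# destabilizing for |theta| < 1 and stabilizing for |theta| > 1.
mu, k = 0.2, 1.0

def eom(t, s):
    theta, omega = s
    return [omega, -mu * (theta ** 2 - 1.0) * omega - k * theta]

# Start from a small disturbance and integrate well past the transient.
sol = solve_ivp(eom, (0, 200), [0.05, 0.0], max_step=0.05, rtol=1e-8)
late = sol.y[0][sol.t > 150]          # discard the initial growth phase
print(round(late.max(), 2))
```

For small mu, the classical van der Pol limit-cycle amplitude is close to 2, independent of the initial disturbance, which is the constant-energy equilibrium oscillation the analysis above predicts from the energy rate equation.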
Model selection and Bayesian inference for high-resolution seabed reflection inversion.
Dettmer, Jan; Dosso, Stan E; Holland, Charles W
2009-02-01
This paper applies Bayesian inference, including model selection and posterior parameter inference, to inversion of seabed reflection data to resolve sediment structure at a spatial scale below the pulse length of the acoustic source. A practical approach to model selection is used, employing the Bayesian information criterion to decide on the number of sediment layers needed to sufficiently fit the data while satisfying parsimony to avoid overparametrization. Posterior parameter inference is carried out using an efficient Metropolis-Hastings algorithm for high-dimensional models, and results are presented as marginal-probability depth distributions for sound velocity, density, and attenuation. The approach is applied to plane-wave reflection-coefficient inversion of single-bounce data collected on the Malta Plateau, Mediterranean Sea, which indicate complex fine structure close to the water-sediment interface. This fine structure is resolved in the geoacoustic inversion results in terms of four layers within the upper meter of sediments. The inversion results are in good agreement with parameter estimates from a gravity core taken at the experiment site.
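The two ingredients, information-criterion model selection and Metropolis-Hastings posterior sampling, can be sketched on a toy problem: choosing a polynomial degree by BIC (standing in for the number of sediment layers) and then sampling the posterior of the noise level for the chosen model. The data, flat prior, and proposal scale are illustrative, not the paper's sampler.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "profile": quadratic trend plus noise; the model family is
# the polynomial degree (a stand-in for the number of sediment layers).
n = 200
x = np.linspace(0, 1, n)
y = 1.0 + 2.0 * x - 3.0 * x ** 2 + rng.normal(0, 0.1, n)

def bic(deg):
    resid = y - np.polyval(np.polyfit(x, y, deg), x)
    rss = resid @ resid
    k = deg + 2                       # coefficients plus noise variance
    return n * np.log(rss / n) + k * np.log(n)

best_deg = min(range(6), key=bic)     # parsimony: BIC penalizes extra terms

# Metropolis-Hastings over the noise s.d. for the selected model,
# with a flat prior (illustrative only).
resid = y - np.polyval(np.polyfit(x, y, best_deg), x)

def log_post(s):
    return -np.inf if s <= 0 else -n * np.log(s) - resid @ resid / (2 * s * s)

s, chain = 1.0, []
for _ in range(20000):
    prop = s + rng.normal(0, 0.02)    # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(s):
        s = prop                      # accept
    chain.append(s)
post = np.array(chain[5000:])         # discard burn-in
print(best_deg, round(post.mean(), 3))
```

The marginal posterior of the noise level concentrates near the true value of 0.1; in the paper the same machinery yields marginal-probability depth profiles for velocity, density, and attenuation.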
A model-updating procedure to simulate piezoelectric transducers accurately.
Piranda, B; Ballandras, S; Steichen, W; Hecart, B
2001-09-01
The use of numerical calculations based on finite element methods (FEM) has yielded significant improvements in the simulation and design of the piezoelectric transducers utilized in acoustic imaging. However, the ultimate precision of such models is directly controlled by the accuracy of the material characterization. The present work is dedicated to the development of a model-updating technique adapted to piezoelectric transducers. The updating process is applied using the experimental admittance of a given structure for which a finite element analysis is performed. The mathematical developments are reported and then applied to update the entries of a FEM of a two-layer structure (a PbZrTi (PZT) ridge glued on a backing) for which measurements were available. The efficiency of the proposed approach is demonstrated, yielding a new set of constants well adapted to predicting the structure's response accurately. An improvement of the proposed approach, consisting of updating the material coefficients against not only the admittance but also the impedance data, is finally discussed.
Liu, Sheng; Li, Changyi; Figiel, Jeffrey J.; ...
2015-04-27
In this paper, we report continuous, dynamic, reversible, and widely tunable lasing from 367 to 337 nm from single GaN nanowires (NWs) by applying hydrostatic pressure up to ~7 GPa. The GaN NW lasers, with heights of 4–5 μm and diameters ~140 nm, are fabricated using a lithographically defined two-step top-down technique. The wavelength tuning is caused by an increasing Γ direct bandgap of GaN with increasing pressure and is precisely controllable to subnanometer resolution. The observed pressure coefficients of the NWs are ~40% larger compared with GaN microstructures fabricated from the same material or from reported bulk GaN values, revealing a nanoscale-related effect that significantly enhances the tuning range using this approach. Finally, this approach can be generally applied to other semiconductor NW lasers to potentially achieve full spectral coverage from the UV to IR.
Functional Linear Model with Zero-value Coefficient Function at Sub-regions.
Zhou, Jianhui; Wang, Nae-Yuh; Wang, Naisyin
2013-01-01
We propose a shrinkage method to estimate the coefficient function in a functional linear regression model when the value of the coefficient function is zero within certain sub-regions. Besides identifying the null region in which the coefficient function is zero, we also aim to perform estimation and inference for the nonparametrically estimated coefficient function without over-shrinking its values. Our proposal consists of two stages. In stage one, the Dantzig selector is employed to provide an initial location of the null region. In stage two, we propose a group SCAD approach to refine the estimated location of the null region and to provide the estimation and inference procedures for the coefficient function. Our considerations have certain advantages in this functional setup. One goal is to reduce the number of parameters employed in the model. A one-stage procedure would need a large number of knots in order to precisely identify the zero-coefficient region, but the variation and estimation difficulties increase with the number of parameters. Owing to the additional refinement stage, we avoid this necessity and our estimator achieves superior numerical performance in practice. We show that our estimator enjoys the oracle property: it identifies the null region with probability tending to 1, and it achieves the same asymptotic normality for the estimated coefficient function on the non-null region as the functional linear model estimator does when the non-null region is known. Numerically, our refined estimator overcomes the shortcomings of the initial Dantzig estimator, which tends to under-estimate the absolute scale of non-zero coefficients. The performance of the proposed method is illustrated in simulation studies.
We apply the method in an analysis of data collected by the Johns Hopkins Precursors Study, where the primary interests are in estimating the strength of association between body mass index in midlife and the quality of life in physical functioning at old age, and in identifying the effective age ranges where such associations exist.
Structural interactions in ionic liquids linked to higher-order Poisson-Boltzmann equations
NASA Astrophysics Data System (ADS)
Blossey, R.; Maggs, A. C.; Podgornik, R.
2017-06-01
We present a derivation of generalized Poisson-Boltzmann equations starting from classical theories of binary fluid mixtures, employing an approach based on the Legendre transform as recently applied to the case of local descriptions of the fluid free energy. Under specific symmetry assumptions, and in the linearized regime, the Poisson-Boltzmann equation reduces to a phenomenological equation introduced by Bazant et al. [Phys. Rev. Lett. 106, 046102 (2011)], 10.1103/PhysRevLett.106.046102, whereby the structuring near the surface is determined by bulk coefficients.
Ali, Zulfiqar; Alsulaiman, Mansour; Muhammad, Ghulam; Elamvazuthi, Irraivan; Al-Nasheri, Ahmed; Mesallam, Tamer A; Farahat, Mohamed; Malki, Khalid H
2017-05-01
A large population around the world has voice complications. Various approaches for subjective and objective evaluation have been suggested in the literature. The subjective approach depends strongly on the experience and area of expertise of the clinician, and human error cannot be neglected; the objective, or automatic, approach is noninvasive. Automatic systems can provide complementary information that may be helpful for a clinician in the early screening of a voice disorder. At the same time, automatic systems can be deployed in remote areas, where a general practitioner can use them and refer the patient to a specialist to avoid complications that may be life-threatening. Many automatic systems for disorder detection have been developed by applying different types of conventional speech features, such as linear prediction coefficients, linear prediction cepstral coefficients, and Mel-frequency cepstral coefficients (MFCCs). This study aims to ascertain whether conventional speech features detect voice pathology reliably, and whether they can be correlated with voice quality. To investigate this, an automatic detection system based on MFCCs was developed, and three different voice disorder databases were used. The experimental results suggest that the accuracy of the MFCC-based system varies from database to database: the intra-database detection rate ranges from 72% to 95%, and the inter-database rate from 47% to 82%. The results suggest that conventional speech features are not correlated with voice quality, and hence are not reliable for pathology detection.
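A minimal MFCC front end of the kind such systems use can be sketched with NumPy and SciPy alone: frame the signal, take the power spectrum, pool it through a triangular mel filterbank, take logs, then apply a DCT. The frame length, hop, and filter counts below are common textbook defaults, not the paper's settings.

```python
import numpy as np
from scipy.fftpack import dct

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    inv = lambda m: 700 * (10 ** (m / 2595) - 1)
    pts = inv(np.linspace(mel(0), mel(sr / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge
    return fb

def mfcc(signal, sr, n_mfcc=13, frame=400, hop=160, n_fft=512):
    # Frame with a Hamming window, then take the per-frame power spectrum.
    frames = np.lib.stride_tricks.sliding_window_view(signal, frame)[::hop]
    frames = frames * np.hamming(frame)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Mel filterbank energies -> log -> DCT, keeping the low-order terms.
    energy = power @ mel_filterbank(26, n_fft, sr).T
    logmel = np.log(np.maximum(energy, 1e-12))
    return dct(logmel, type=2, axis=1, norm='ortho')[:, :n_mfcc]

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 300 * t)    # 1 s synthetic stand-in for a voice sample
feats = mfcc(tone, sr)
print(feats.shape)
```

Each row is the 13-coefficient feature vector for one 25 ms frame; in a detection system these rows would feed a classifier trained on pathological versus normal recordings.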
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, Paul R.; Boyce, Donald E.; Park, Jun-Sang
A robust methodology is presented to extract slip system strengths from lattice strain distributions for polycrystalline samples obtained from high-energy x-ray diffraction (HEXD) experiments with in situ loading. The methodology consists of matching the evolution of coefficients of a harmonic expansion of the distributions from simulation to the coefficients derived from measurements. Simulation results are generated via finite element simulations of virtual polycrystals that are subjected to the loading history applied in the HEXD experiments. Advantages of the methodology include: (1) its ability to utilize extensive data sets generated by HEXD experiments; (2) its ability to capture trends in distributions that may be noisy (both measured and simulated); and (3) its sensitivity to the ratios of the family strengths. The approach is used to evaluate the slip system strengths of Ti-6Al-4V using samples having relatively equiaxed grains. These strength estimates are compared to values in the literature.
A support vector machine approach for classification of welding defects from ultrasonic signals
NASA Astrophysics Data System (ADS)
Chen, Yuan; Ma, Hong-Wei; Zhang, Guang-Ming
2014-07-01
Defect classification is an important issue in ultrasonic non-destructive evaluation. A layered multi-class support vector machine (LMSVM) classification system, which combines multiple SVM classifiers through a layered architecture, is proposed in this paper. The proposed LMSVM classification system is applied to the classification of welding defects from ultrasonic test signals. The measured ultrasonic defect echo signals are first decomposed into wavelet coefficients by the wavelet packet transform. The energy of the wavelet coefficients at different frequency channels is used to construct the feature vectors. The bees algorithm (BA) is then used for feature selection and SVM parameter optimisation for the LMSVM classification system. The BA-based feature selection optimises the energy feature vectors. The optimised feature vectors are input to the LMSVM classification system for training and testing. Experimental results of classifying welding defects demonstrate that the proposed technique is highly robust, precise and reliable for ultrasonic defect classification.
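The feature-construction step described here, band energies of wavelet-packet coefficients, can be sketched with a plain Haar transform (the abstract does not name the mother wavelet; Haar is used purely for illustration):

```python
def haar_step(x):
    """One level of the orthonormal Haar transform: pairwise
    scaled sums (approximation) and differences (detail)."""
    s = 2 ** -0.5
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def wavelet_packet_energies(x, depth):
    """Decompose x (length 2^n) into 2^depth frequency bands by
    recursively splitting every band, and return the energy
    (sum of squares) of each band as a feature vector."""
    bands = [list(x)]
    for _ in range(depth):
        next_bands = []
        for b in bands:
            a, d = haar_step(b)
            next_bands.extend([a, d])
        bands = next_bands
    return [sum(c * c for c in b) for b in bands]
```

Because the Haar transform is orthonormal, the band energies sum to the total signal energy (Parseval), so the feature vector partitions the echo energy across frequency channels, which is exactly what an SVM can then separate.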
Monitoring Everglades freshwater marsh water level using L-band synthetic aperture radar backscatter
Kim, Jin-Woo; Lu, Zhong; Jones, John W.; Shum, C.K.; Lee, Hyongki; Jia, Yuanyuan
2014-01-01
The Florida Everglades plays a significant role in controlling floods, improving water quality, supporting ecosystems, and maintaining biodiversity in south Florida. Adaptive restoration and management of the Everglades requires the best information possible regarding wetland hydrology. We developed a new and innovative approach to quantify spatial and temporal variations in wetland water levels within the Everglades, Florida. We observed high correlations between water level measured at in situ gages and L-band SAR backscatter coefficients in the freshwater marsh, though C-band SAR backscatter has no close relationship with water level. Here we illustrate the complementarity of SAR backscatter coefficient differencing and interferometry (InSAR) for improved estimation of high-spatial-resolution water level variations in the Everglades. This technique has limitations when applied to swamp forests with dense vegetation cover, but we conclude that this new method is promising for future applications to wetland hydrology research.
The influence of trading volume on market efficiency: The DCCA approach
NASA Astrophysics Data System (ADS)
Sukpitak, Jessada; Hengpunya, Varagorn
2016-09-01
For a single market, the cross-correlation between market efficiency and trading volume, an indicator of market liquidity, is analysed. The study begins by creating a time series of market efficiency, obtained by applying the time-varying Hurst exponent with a one-year sliding window to daily closing prices. The time series of trading volume corresponding to the same time period is derived from a one-year moving average of daily trading volume. Subsequently, the detrended cross-correlation coefficient is employed to quantify the degree of cross-correlation between the two time series. It was found that the cross-correlation coefficients of all considered stock markets are close to 0 and clearly outside the range in which the correlation is considered significant at almost every time scale. The obtained results show that market liquidity, in terms of trading volume, has little effect on market efficiency.
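The detrended cross-correlation coefficient used above can be sketched as follows. This is a simplified estimator: it uses non-overlapping boxes with linear detrending of the integrated profiles (the common implementation slides the box window, but the structure is the same):

```python
def _detrended_residuals(profile, start, size):
    """Least-squares linear detrending of one box of an integrated profile."""
    t = list(range(size))
    z = profile[start:start + size]
    tm = sum(t) / size
    zm = sum(z) / size
    denom = sum((ti - tm) ** 2 for ti in t)
    slope = sum((ti - tm) * (zi - zm) for ti, zi in zip(t, z)) / denom
    intercept = zm - slope * tm
    return [zi - (intercept + slope * ti) for ti, zi in zip(t, z)]

def dcca_coefficient(x, y, box):
    """Detrended cross-correlation coefficient rho_DCCA at scale `box`:
    the detrended covariance normalized by the two detrended variances."""
    xm = sum(x) / len(x)
    ym = sum(y) / len(y)
    # integrated (cumulative-sum) profiles of the mean-removed series
    px, py, cx, cy = [], [], 0.0, 0.0
    for xi, yi in zip(x, y):
        cx += xi - xm
        cy += yi - ym
        px.append(cx)
        py.append(cy)
    fxy = fxx = fyy = 0.0
    nbox = 0
    for start in range(0, len(x) - box + 1, box):
        rx = _detrended_residuals(px, start, box)
        ry = _detrended_residuals(py, start, box)
        fxy += sum(a * b for a, b in zip(rx, ry)) / box
        fxx += sum(a * a for a in rx) / box
        fyy += sum(b * b for b in ry) / box
        nbox += 1
    return (fxy / nbox) / ((fxx / nbox) ** 0.5 * (fyy / nbox) ** 0.5)
```

Like an ordinary correlation coefficient, ρ_DCCA lies in [-1, 1]; values near 0 at all box scales are what the study reports for efficiency versus volume.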
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liangzhe Zhang; Anthony D. Rollett; Timothy Bartel
2012-02-01
A calibrated Monte Carlo (cMC) approach, which quantifies grain boundary kinetics within a generic setting, is presented. The influence of misorientation is captured by adding a scaling coefficient in the spin flipping probability equation, while the contribution of different driving forces is weighted using a partition function. The calibration process relies on the established parametric links between Monte Carlo (MC) and sharp-interface models. The cMC algorithm quantifies microstructural evolution under complex thermomechanical environments and remedies some of the difficulties associated with conventional MC models. After validation, the cMC approach is applied to quantify the texture development of polycrystalline materials with influences of misorientation and inhomogeneous bulk energy across grain boundaries. The results are in good agreement with theory and experiments.
Subcortical structure segmentation using probabilistic atlas priors
NASA Astrophysics Data System (ADS)
Gouttard, Sylvain; Styner, Martin; Joshi, Sarang; Smith, Rachel G.; Cody Hazlett, Heather; Gerig, Guido
2007-03-01
The segmentation of the subcortical structures of the brain is required for many forms of quantitative neuroanatomic analysis. The volumetric and shape parameters of structures such as the lateral ventricles, putamen, caudate, hippocampus, pallidus and amygdala are employed to characterize a disease or its evolution. This paper presents a fully automatic segmentation of these structures via non-rigid registration of a probabilistic atlas prior, alongside a comprehensive validation. Our approach is based on an unbiased diffeomorphic atlas with probabilistic spatial priors built from a training set of MR images with corresponding manual segmentations. The atlas building computes an average image along with transformation fields mapping each training case to the average image. These transformation fields are applied to the manually segmented structures of each case in order to obtain a probabilistic map on the atlas. When applying the atlas for automatic structural segmentation, an MR image is first intensity-inhomogeneity corrected, skull stripped and intensity calibrated to the atlas. Then the atlas image is registered to the image using an affine followed by a deformable registration matching the gray-level intensity. Finally, the registration transformation is applied to the probabilistic map of each structure, which is then thresholded at 0.5 probability. Using manual segmentations for comparison, measures of volumetric differences show high correlation with our results. Furthermore, the Dice coefficient, which quantifies the volumetric overlap, is higher than 62% for all structures and is close to 80% for the basal ganglia. The intraclass correlation coefficient computed on these same datasets shows a good inter-method correlation of the volumetric measurements. Using a dataset of a single patient scanned 10 times on 5 different scanners, reliability is shown with a coefficient of variation of less than 2 percent over the whole dataset.
Overall, these validation and reliability studies show that our method accurately and reliably segments almost all structures. Only the hippocampus and amygdala segmentations exhibit relatively low correlation with the manual segmentation in at least one of the validation studies, whereas they still show appropriate Dice overlap coefficients.
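The Dice overlap coefficient used for validation above has a one-line definition; a minimal sketch over voxel index sets:

```python
def dice_coefficient(auto_seg, manual_seg):
    """Dice overlap between two segmentations, each given as a set of
    voxel indices: 2|A & B| / (|A| + |B|). 1.0 means perfect overlap,
    0.0 means no overlap."""
    a, b = set(auto_seg), set(manual_seg)
    if not a and not b:
        return 1.0  # two empty segmentations agree trivially
    return 2.0 * len(a & b) / (len(a) + len(b))
```

For example, segmentations {(0,0), (0,1), (1,0)} and {(0,1), (1,0), (1,1)} share two voxels out of three each, giving a Dice coefficient of 2/3.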
Fast wavelet based algorithms for linear evolution equations
NASA Technical Reports Server (NTRS)
Engquist, Bjorn; Osher, Stanley; Zhong, Sifen
1992-01-01
A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin, which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations with spatially varying coefficients. A significant speedup over standard methods is obtained when the approach is applied to hyperbolic equations in one space dimension and parabolic equations in multiple dimensions.
Measuring dynamic oil film coefficients of sliding bearing
NASA Technical Reports Server (NTRS)
Feng, G.; Tang, X.
1985-01-01
A method is presented for determining the dynamic coefficients of bearing oil film. By varying the support stiffness and damping, eight dynamic coefficients of the bearing were determined. Simple and easy to apply, the method can be used in solving practical machine problems.
NASA Astrophysics Data System (ADS)
Camporesi, Roberto
2011-06-01
We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary: we assume only a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as the other more advanced approaches: the Laplace transform, linear systems, the general theory of linear equations with variable coefficients, and the variation of constants method. The approach presented here can be used in a first course on differential equations for science and engineering majors.
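As a concrete instance of the method described (my example, not the paper's): for y'' - 3y' + 2y = f(t), the operator factors as (D - 1)(D - 2), the impulsive response built from the roots 1 and 2 is g(t) = e^(2t) - e^t, and a particular solution is the convolution y(t) = ∫₀ᵗ g(t - s) f(s) ds:

```python
import math

def impulse_response(t):
    """Impulsive response of y'' - 3y' + 2y = f, whose characteristic
    roots (from the factorization (D - 1)(D - 2)) are 1 and 2."""
    return math.exp(2 * t) - math.exp(t)

def particular_solution(f, t, n=2000):
    """Particular solution y(t) = integral_0^t g(t - s) f(s) ds,
    evaluated with the trapezoidal rule on n subintervals."""
    h = t / n
    total = 0.5 * (impulse_response(t) * f(0.0) + impulse_response(0.0) * f(t))
    for k in range(1, n):
        s = k * h
        total += impulse_response(t - s) * f(s)
    return h * total
```

For f ≡ 1 the exact particular solution is (e^(2t) - 1)/2 - (e^t - 1), and substituting it back into y'' - 3y' + 2y indeed yields 1; the quadrature reproduces this value closely.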
Bartolino, James R.
2007-01-01
A numerical flow model of the Spokane Valley-Rathdrum Prairie aquifer currently (2007) being developed requires the input of values for areally-distributed recharge, a parameter that is often the most uncertain component of water budgets and ground-water flow models because it is virtually impossible to measure over large areas. Data from six active weather stations in and near the study area were used in four recharge-calculation techniques or approaches: the Langbein method, in which recharge is estimated on the basis of empirical data from other basins; a method developed by the U.S. Department of Agriculture (USDA), in which crop consumptive use and effective precipitation are first calculated and then subtracted from actual precipitation to yield an estimate of recharge; an approach developed as part of the Eastern Snake Plain Aquifer Model (ESPAM) Enhancement Project, in which recharge is calculated on the basis of precipitation-recharge relations from other basins; and an approach in which reference evapotranspiration is calculated by the Food and Agriculture Organization (FAO) Penman-Monteith equation, crop consumptive use is determined (using a single- or dual-coefficient approach), and recharge is calculated. Annual recharge calculated by the Langbein method for the six weather stations was 4 percent of annual mean precipitation, yielding the lowest values of the methods discussed in this report; however, the Langbein method can only be applied to annual time periods. Mean monthly recharge calculated by the USDA method ranged from 53 to 73 percent of mean monthly precipitation. Mean annual recharge ranged from 64 to 69 percent of mean annual precipitation. Separate mean monthly recharge calculations were made with the ESPAM method using initial input parameters to represent thin-soil, thick-soil, and lava-rock conditions. The lava-rock parameters yielded the highest recharge values and the thick-soil parameters the lowest.
For thin-soil parameters, calculated monthly recharge ranged from 10 to 29 percent of mean monthly precipitation and annual recharge ranged from 16 to 23 percent of mean annual precipitation. For thick-soil parameters, calculated monthly recharge ranged from 1 to 5 percent of mean monthly precipitation and mean annual recharge ranged from 2 to 4 percent of mean annual precipitation. For lava-rock parameters, calculated mean monthly recharge ranged from 37 to 57 percent of mean monthly precipitation and mean annual recharge ranged from 45 to 52 percent of mean annual precipitation. Single-coefficient (crop coefficient) FAO Penman-Monteith mean monthly recharge values were calculated for Spokane Weather Service Office (WSO) Airport, the only station for which the necessary meteorological data were available. Grass-referenced values of mean monthly recharge ranged from 0 to 81 percent of mean monthly precipitation and mean annual recharge was 21 percent of mean annual precipitation; alfalfa-referenced values of mean monthly recharge ranged from 0 to 85 percent of mean monthly precipitation and mean annual recharge was 24 percent of mean annual precipitation. Single-coefficient FAO Penman-Monteith calculations yielded a mean monthly recharge of zero during the eight warmest and driest months of the year (March-October). In order to refine the mean monthly recharge estimates, dual-coefficient (basal crop and soil evaporation coefficients) FAO Penman-Monteith dual-crop evapotranspiration and deep-percolation calculations were applied to daily values from the Spokane WSO Airport for January 1990 through December 2005. The resultant monthly totals display a temporal variability that is absent from the mean monthly values and demonstrate that the daily amount and timing of precipitation dramatically affect calculated recharge. 
The dual-coefficient FAO Penman-Monteith calculations were made for the remaining five stations using wind-speed values for Spokane WSO Airport and other assumptions regarding
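The single-coefficient step of the FAO Penman-Monteith approach described above, crop evapotranspiration ETc = Kc × ET0 with recharge taken as precipitation in excess of ETc, can be sketched as follows. This is a deliberate simplification that ignores soil-moisture carryover and runoff, and the numbers in the example are illustrative, not values from the report:

```python
def monthly_recharge(precip_mm, et0_mm, kc):
    """Single-coefficient recharge estimate: daily water in excess of
    crop consumptive use, recharge_i = max(0, P_i - Kc * ET0_i),
    summed over the month. Soil-moisture storage and runoff are
    ignored (simplification)."""
    return sum(max(0.0, p - kc * e) for p, e in zip(precip_mm, et0_mm))
```

With illustrative daily precipitation of [5, 0, 12, 0, 3] mm, daily ET0 of 2 mm, and Kc = 1.0, the excess water is 3 + 0 + 10 + 0 + 1 = 14 mm, showing why the daily amount and timing of precipitation dominate the calculated recharge.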
Microscopic medical image classification framework via deep learning and shearlet transform.
Rezaeilouyeh, Hadi; Mollahosseini, Ali; Mahoor, Mohammad H
2016-10-01
Cancer is the second leading cause of death in the US after cardiovascular disease. Image-based computer-aided diagnosis can assist physicians in efficiently diagnosing cancers at early stages. Existing computer-aided algorithms use hand-crafted features such as wavelet coefficients, co-occurrence matrix features, and recently, histograms of shearlet coefficients for classification of cancerous tissues and cells in images. These hand-crafted features often lack generalizability since every cancerous tissue and cell has a specific texture, structure, and shape. An alternative approach is to use convolutional neural networks (CNNs) to learn the most appropriate feature abstractions directly from the data and thereby sidestep the limitations of hand-crafted features. A framework for breast cancer detection and prostate Gleason grading using a CNN trained on images along with the magnitude and phase of shearlet coefficients is presented. In particular, we apply the shearlet transform to images and extract the magnitude and phase of the shearlet coefficients. Then we feed the shearlet features along with the original images to our CNN, which consists of multiple layers of convolution, max pooling, and fully connected layers. Our experiments show that using the magnitude and phase of shearlet coefficients as extra information to the network can improve the accuracy of detection and generalize better than state-of-the-art methods that rely on hand-crafted features. This study expands the application of deep neural networks into the field of medical image analysis, a difficult domain considering the limited medical data available for such analysis.
Non-Ideality in Solvent Extraction Systems: PNNL FY 2014 Status Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levitskaia, Tatiana G.; Chatterjee, Sayandev; Pence, Natasha K.
The overall objective of this project is to develop predictive modeling capabilities for advanced fuel cycle separation processes by gaining a fundamental quantitative understanding of non-ideality effects and speciation in relevant aqueous and organic solutions. Aqueous solutions containing actinides and lanthanides encountered during nuclear fuel reprocessing have high ionic strength and do not behave as ideal solutions. Activity coefficients must be calculated to take into account the deviation from ideality and predict their behavior. In FY 2012-2013, a convenient method for determining activity effects in aqueous electrolyte solutions was developed. Our initial experiments demonstrated that the water activity and osmotic coefficients of the electrolyte solutions can be accurately measured by the combination of two techniques, a water activity meter and vapor pressure osmometry (VPO). The water activity measurements have been conducted for binary lanthanide solutions over a wide concentration range for all lanthanides (La-Lu, with the exception of Pm). The osmotic coefficients and Pitzer parameters for each binary system were obtained by least-squares fitting of the water activity data. However, application of the Pitzer model for the quantitative evaluation of the activity effects in multicomponent mixtures is difficult due to the large number of required interaction parameters. In FY 2014, the applicability of the Bromley model for the determination of the Ln(NO3)3 activity coefficients was evaluated. New Bromley parameters for the binary Ln(NO3)3 electrolytes were obtained based on the available literature and our experimental data. This allowed for the accurate prediction of the Ln(NO3)3 activity coefficients for the binary Ln(NO3)3 electrolytes. This model was then successfully implemented for the determination of the Ln(NO3)3 activity coefficients in the ternary Nd(NO3)3/HNO3/H2O, Eu(NO3)3/HNO3/H2O, and Eu(NO3)3/NaNO3/H2O systems. The main achievement of this work is the verified pathway for the estimation of activity coefficients in multicomponent aqueous electrolyte systems. The accurate Bromley electrolyte contributions obtained in this work for the entire series of lanthanide(III) nitrates (except Pm) can be applied to predicting activity coefficients and non-ideality effects for multicomponent systems containing these species. This work also provides a proof-of-principle for extending the model to more complex multicomponent systems. Moreover, this approach can also be applied to actinide-containing electrolyte systems for determination of the activity coefficients in concentrated radioactive solutions.
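The appeal of the Bromley model over Pitzer, as the report notes, is that it needs a single ion-interaction parameter B per electrolyte. A sketch of the Bromley correlation at 25 °C follows; the NaCl B value in the example is a commonly quoted literature figure used only to exercise the formula, and the report's fitted lanthanide nitrate parameters are not reproduced here:

```python
import math

A_GAMMA = 0.511  # Debye-Hueckel limiting slope at 25 C, (kg/mol)^0.5

def bromley_log_gamma(z_plus, z_minus, ionic_strength, B):
    """Bromley correlation for log10 of the mean ionic activity
    coefficient of a binary electrolyte: a Debye-Hueckel term plus
    an extended term controlled by the single parameter B."""
    zz = abs(z_plus * z_minus)
    sqrt_i = math.sqrt(ionic_strength)
    dh = -A_GAMMA * zz * sqrt_i / (1.0 + sqrt_i)
    ext = ((0.06 + 0.6 * B) * zz * ionic_strength
           / (1.0 + 1.5 * ionic_strength / zz) ** 2)
    return dh + ext + B * ionic_strength

def mean_activity_coefficient(z_plus, z_minus, ionic_strength, B):
    """Mean ionic activity coefficient gamma_± from the Bromley model."""
    return 10.0 ** bromley_log_gamma(z_plus, z_minus, ionic_strength, B)
```

For NaCl at I = 0.1 mol/kg with B ≈ 0.0574 this gives γ± ≈ 0.78, near the tabulated experimental value, and the coefficient correctly tends to 1 at infinite dilution.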
Bano, Kiran; Kennedy, Gareth F; Zhang, Jie; Bond, Alan M
2012-04-14
The theory for large amplitude Fourier transformed ac voltammetry at a rotating disc electrode is described. Resolution of time domain data into dc and ac harmonic components reveals that the mass transport for the dc component is controlled by convective-diffusion, while the background-free higher order harmonic components are flow rate insensitive and mainly governed by linear diffusion. Thus, remarkable versatility is available: Levich behaviour of the dc component limiting current provides diffusion coefficient values, and access to higher harmonics allows fast electrode kinetics to be probed. Traditionally, two series of experiments (dc and ac voltammetry) have been required to extract these parameters; here, large amplitude ac voltammetry with RDE methodology is used to demonstrate that kinetic and diffusion coefficient information can be extracted from a single experiment. To demonstrate the power of this approach, theoretical and experimental comparisons of data obtained for the reversible [Ru(NH3)6](3+/2+) and quasi-reversible [Fe(CN)6](3-/4-) electron transfer processes are presented over a wide range of electrode rotation rates and with different concentrations and electrode materials. Excellent agreement of experimental and simulated data is achieved, which allows parameters such as the electron transfer rate, diffusion coefficient, uncompensated resistance and others to be determined using a strategically applied approach that takes into account the different levels of sensitivity of each parameter to the dc or the ac harmonics.
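The Levich behaviour invoked above relates the dc limiting current to the rotation rate, i_L = 0.620 n F A D^(2/3) ν^(-1/6) ω^(1/2) C, so a measured Levich slope yields the diffusion coefficient. A sketch with illustrative parameter values (a 3 mm diameter disc, aqueous viscosity), not the paper's data:

```python
F = 96485.0  # Faraday constant, C/mol

def levich_current(n, area, D, nu, omega, conc):
    """Levich limiting current i_L = 0.620 n F A D^(2/3) nu^(-1/6)
    omega^(1/2) C at a rotating disc electrode (SI units; omega in
    rad/s, conc in mol/m^3)."""
    return (0.620 * n * F * area * D ** (2.0 / 3.0)
            * nu ** (-1.0 / 6.0) * omega ** 0.5 * conc)

def diffusion_coefficient_from_levich(i_lim, n, area, nu, omega, conc):
    """Invert the Levich equation to recover the diffusion coefficient D
    from a measured limiting current."""
    d_two_thirds = i_lim / (0.620 * n * F * area
                            * nu ** (-1.0 / 6.0) * omega ** 0.5 * conc)
    return d_two_thirds ** 1.5
```

The round trip (current from D, then D back from the current) is exact, which is the sanity check one would apply before fitting real Levich-plot slopes.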
NASA Astrophysics Data System (ADS)
Zhang, Keke; Kong, D.; Schubert, G.; Anderson, J.
2012-10-01
An accurate calculation of the rotationally distorted shape and internal structure of Jupiter is required to understand the high-precision gravitational field that will be measured by the Juno spacecraft now on its way to Jupiter. We present a three-dimensional non-spherical numerical calculation of the shape and internal structure of a model of Jupiter with a polytropic index of unity. The calculation is based on a finite element method and accounts for the full effects of rotation. After validating the numerical approach against the asymptotic solution of Chandrasekhar (1933) that is valid only for a slowly rotating gaseous planet, we apply it to a model of Jupiter whose rapid rotation causes a significant departure from spherical geometry. The two-dimensional distribution of the density and the pressure within Jupiter is then determined via a hybrid inverse approach by matching the a priori unknown coefficient in the equation of state to the observed shape of Jupiter. After obtaining the two-dimensional distribution of Jupiter's density, we then compute the zonal gravity coefficients and the total mass from the non-spherical Jupiter model that takes full account of rotation-induced shape changes. Our non-spherical model with a polytrope of unit index is able to produce the known mass and zonal gravitational coefficients of Jupiter. Chandrasekhar, S. 1933, The equilibrium of distorted polytropes, MNRAS 93, 390
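For the polytrope of unit index used above, the spherically symmetric Lane-Emden equation θ'' + (2/ξ)θ' + θ = 0 has the closed-form solution θ = sin(ξ)/ξ, which makes a convenient check for any numerical scheme. This sketch uses a simple explicit integrator, not the authors' non-spherical finite-element method:

```python
def lane_emden_n1(xi_end, h=1e-4):
    """Integrate the n = 1 Lane-Emden equation
        theta'' + (2/xi) theta' + theta = 0,
    theta(0) = 1, theta'(0) = 0, by explicit Euler stepping,
    starting from the series expansion theta ~ 1 - xi^2/6 to avoid
    the coordinate singularity at xi = 0."""
    xi = 1e-6
    theta = 1.0 - xi ** 2 / 6.0
    dtheta = -xi / 3.0
    while xi < xi_end:
        d2 = -(2.0 / xi) * dtheta - theta
        theta += h * dtheta
        dtheta += h * d2
        xi += h
    return theta
```

At ξ = 2 the integration reproduces sin(2)/2 ≈ 0.455; the first zero of θ at ξ = π marks the dimensionless surface of the slowly rotating polytrope against which the full rotating solution is validated.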
Heddam, Salim
2014-11-01
The prediction of colored dissolved organic matter (CDOM) using artificial neural network approaches has received little attention in the past few decades. In this study, CDOM was modeled using generalized regression neural network (GRNN) and multiple linear regression (MLR) models as a function of water temperature (TE), pH, specific conductance (SC), and turbidity (TU). Evaluation of the prediction accuracy of the models is based on the root mean square error (RMSE), mean absolute error (MAE), coefficient of correlation (CC), and Willmott's index of agreement (d). The results indicated that GRNN can be applied successfully for the prediction of CDOM.
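The four evaluation statistics named above have standard definitions; a sketch, with Willmott's d in its usual form 1 - Σ(P-O)² / Σ(|P-Ō| + |O-Ō|)²:

```python
import math

def evaluation_metrics(obs, pred):
    """RMSE, MAE, Pearson correlation coefficient (CC), and Willmott's
    index of agreement (d) for paired observations and predictions."""
    n = len(obs)
    om = sum(obs) / n
    pm = sum(pred) / n
    rmse = math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / n)
    mae = sum(abs(p - o) for o, p in zip(obs, pred)) / n
    cov = sum((o - om) * (p - pm) for o, p in zip(obs, pred))
    cc = cov / math.sqrt(sum((o - om) ** 2 for o in obs)
                         * sum((p - pm) ** 2 for p in pred))
    # Willmott's d uses the observed mean in both deviation terms
    d = 1.0 - (sum((p - o) ** 2 for o, p in zip(obs, pred))
               / sum((abs(p - om) + abs(o - om)) ** 2
                     for o, p in zip(obs, pred)))
    return rmse, mae, cc, d
```

RMSE and MAE are in the units of the predicted quantity; CC and d are dimensionless, with d bounded by 1 for perfect agreement, which is why the four are reported together.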
A physiologically motivated sparse, compact, and smooth (SCS) approach to EEG source localization.
Cao, Cheng; Akalin Acar, Zeynep; Kreutz-Delgado, Kenneth; Makeig, Scott
2012-01-01
Here, we introduce a novel approach to the EEG inverse problem based on the assumption that the principal cortical sources of multi-channel EEG recordings may be assumed to be spatially sparse, compact, and smooth (SCS). To enforce these characteristics of solutions to the EEG inverse problem, we propose a correlation-variance model which factors a cortical source-space covariance matrix into the product of a pre-given correlation coefficient matrix and the square root of the diagonal variance matrix learned from the data under a Bayesian learning framework. We tested the SCS method using simulated EEG data with various SNRs and applied it to a real ECoG data set. We compare the results of SCS to those of an established SBL algorithm.
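The correlation-variance factorization described above corresponds, in the usual symmetric form, to Σ = D^(1/2) R D^(1/2), with R a fixed correlation matrix encoding spatial smoothness and D the diagonal of variances learned from the data. A sketch of that assembly (my reading of the factorization, using the symmetric convention):

```python
import math

def covariance_from_correlation(corr, variances):
    """Build a covariance matrix Sigma = D^(1/2) R D^(1/2) from a
    fixed correlation matrix R (spatial prior) and a vector of
    learned per-source variances (diagonal of D)."""
    s = [math.sqrt(v) for v in variances]
    n = len(variances)
    return [[s[i] * corr[i][j] * s[j] for j in range(n)]
            for i in range(n)]
```

For R = [[1, 0.5], [0.5, 1]] and variances [4, 9] this yields [[4, 3], [3, 9]]: the diagonal reproduces the learned variances while the off-diagonal carries the pre-given correlation, so only the variances remain to be estimated in the Bayesian update.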
Iterative approach as alternative to S-matrix in modal methods
NASA Astrophysics Data System (ADS)
Semenikhin, Igor; Zanuccoli, Mauro
2014-12-01
The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, the iterative approach potentially enables a reduction of the computational time required to solve Maxwell's equations by Eigenmode Expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are as a rule computed by the scattering matrix (S-matrix) approach or similar techniques requiring on the order of M^3 operations. In this work we consider alternatives to the S-matrix technique which are based on pure iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of the M^3-order calculations on the overall time, and in some cases even reducing the number of arithmetic operations to M^2, by applying iterative techniques is discussed. Numerical results are illustrated to discuss the validity and potential of the proposed approaches.
The recurrence coefficients of semi-classical Laguerre polynomials and the fourth Painlevé equation
NASA Astrophysics Data System (ADS)
Filipuk, Galina; Van Assche, Walter; Zhang, Lun
2012-05-01
We show that the coefficients of the three-term recurrence relation for orthogonal polynomials with respect to a semi-classical extension of the Laguerre weight satisfy the fourth Painlevé equation when viewed as functions of one of the parameters in the weight. We compare different approaches to derive this result, namely, the ladder operators approach, the isomonodromy deformations approach and combining the Toda system for the recurrence coefficients with a discrete equation. We also discuss a relation between the recurrence coefficients for the Freud weight and the semi-classical Laguerre weight and show how it arises from the Bäcklund transformation of the fourth Painlevé equation.
NASA Astrophysics Data System (ADS)
Siripatana, Adil; Mayo, Talea; Sraj, Ihab; Knio, Omar; Dawson, Clint; Le Maitre, Olivier; Hoteit, Ibrahim
2017-08-01
Bayesian estimation/inversion is commonly used to quantify and reduce modeling uncertainties in coastal ocean models, especially in the framework of parameter estimation. Based on Bayes' rule, the posterior probability distribution function (pdf) of the estimated quantities is obtained conditioned on available data. It can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation approach, which is heavily exploited in large-dimensional state estimation problems. The advantage of data assimilation schemes over MCMC-type methods arises from their ability to algorithmically accommodate a large number of uncertain quantities without a significant increase in computational requirements. However, only approximate estimates are generally obtained by this approach, due to the restrictive Gaussian prior and noise assumptions that are generally imposed in these methods. This contribution aims at evaluating the effectiveness of an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model with respect to the Manning's n coefficients. Based on a realistic framework of observation system simulation experiments (OSSEs), we apply an ensemble Kalman filter and the MCMC method employing a surrogate of ADCIRC constructed by a non-intrusive PC expansion for evaluating the likelihood, and test both approaches under identical scenarios. We study the sensitivity of the estimated posteriors with respect to the parameters of the inference methods, including ensemble size, inflation factor, and PC order.
A full analysis of both methods, in the context of the coastal ocean model, suggests that an ensemble Kalman filter with an appropriate ensemble size and well-tuned inflation provides reliable mean estimates and uncertainties of the Manning's n coefficients compared to the full posterior distributions inferred by MCMC.
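The ensemble Kalman analysis step at the core of the comparison can be sketched for a single uncertain parameter. This is a deliberately reduced, deterministic-update form (no observation perturbations) with a hypothetical linear observation operator standing in for the ADCIRC forward model:

```python
def enkf_parameter_update(ensemble, observe, y, obs_var):
    """One ensemble Kalman analysis step for a scalar parameter.
    `observe` maps a parameter value to its predicted observation;
    the Kalman gain is built from ensemble (co)variances. The update
    omits observation perturbations for simplicity."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    h = [observe(m) for m in ensemble]
    hmean = sum(h) / n
    cov_mh = sum((m - mean) * (hi - hmean)
                 for m, hi in zip(ensemble, h)) / (n - 1)
    var_h = sum((hi - hmean) ** 2 for hi in h) / (n - 1)
    gain = cov_mh / (var_h + obs_var)
    return [m + gain * (y - hi) for m, hi in zip(ensemble, h)]
```

With a hypothetical stage-parameter map h(n) = 50n, a prior ensemble of Manning-like values centred on 0.030, and an observation generated at n = 0.032, the analysis mean moves toward 0.032 and the ensemble spread contracts, which is the behaviour the OSSE study tunes via ensemble size and inflation.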
A Tactile Sensor Using Piezoresistive Beams for Detection of the Coefficient of Static Friction
Okatani, Taiyu; Takahashi, Hidetoshi; Noda, Kentaro; Takahata, Tomoyuki; Matsumoto, Kiyoshi; Shimoyama, Isao
2016-01-01
This paper reports on a tactile sensor using piezoresistive beams for detection of the coefficient of static friction merely by pressing the sensor against an object. The sensor chip is composed of three pairs of piezoresistive beams arranged in parallel and embedded in an elastomer; this sensor is able to measure the vertical and lateral strains of the elastomer. The coefficient of static friction is estimated from the ratio of the fractional resistance changes corresponding to the sensing elements of vertical and lateral strains when the sensor is in contact with an object surface. We applied a normal force on the sensor surface through objects with coefficients of static friction ranging from 0.2 to 1.1. The fractional resistance changes corresponding to vertical and lateral strains were proportional to the applied force. Furthermore, the relationship between these responses changed according to the coefficients of static friction. The experimental result indicated the proposed sensor could determine the coefficient of static friction before a global slip occurs.
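The estimation step implied above, mapping the ratio of lateral to vertical fractional resistance change onto μ, can be sketched as a two-point linear calibration. All numbers here are hypothetical; the paper establishes the ratio-to-μ relationship empirically:

```python
def friction_calibration(ref1, ref2):
    """Return a linear map from the (lateral/vertical) fractional
    resistance-change ratio to the static friction coefficient,
    calibrated on two reference surfaces given as (ratio, mu) pairs.
    A hypothetical two-point calibration, not the paper's fit."""
    (r1, mu1), (r2, mu2) = ref1, ref2
    slope = (mu2 - mu1) / (r2 - r1)
    return lambda ratio: mu1 + slope * (ratio - r1)

def resistance_ratio(d_r_lateral, d_r_vertical):
    """Ratio of the two sensing responses; both inputs are fractional
    resistance changes dR/R, each proportional to the applied force."""
    return d_r_lateral / d_r_vertical
```

For example, calibrating on hypothetical reference surfaces (ratio 0.1, μ = 0.2) and (ratio 0.55, μ = 1.1) and measuring a ratio of 0.3 yields an estimated μ of about 0.6, without any slip having occurred.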
Mathematical Model for a Simplified Calculation of the Input Momentum Coefficient for AFC Purposes
NASA Astrophysics Data System (ADS)
Hirsch, Damian; Gharib, Morteza
2016-11-01
Active Flow Control (AFC) is an emerging technology which aims at enhancing the aerodynamic performance of flight vehicles (i.e., to save fuel). A viable AFC system must consider the limited resources available on a plane for attaining performance goals. A higher performance goal (i.e., airplane incremental lift) demands a higher fluidic input (i.e., mass flow rate). Therefore, the key requirement for a successful and practical design is to minimize power input while maximizing performance to achieve design targets. One of the most used design parameters is the input momentum coefficient Cμ. The difficulty associated with Cμ lies in obtaining the parameters for its calculation. In the literature two main approaches can be found, both of which have their own disadvantages (assumptions, difficult measurements). A new, much simpler calculation approach will be presented that is based on a mathematical model that can be applied to most jet designs (i.e., steady or sweeping jets). The model's assumptions will be justified theoretically as well as experimentally. Furthermore, the model's capabilities are exploited to give new insight into the AFC technology and its physical limitations. Supported by Boeing.
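A common definition of the momentum coefficient, jet momentum flux normalized by free-stream dynamic pressure times a reference area, Cμ = ṁ U_jet / (q∞ S), can be written down directly. The values in the example are illustrative, and the talk's simplified model for obtaining ṁ and U_jet is not reproduced here:

```python
def momentum_coefficient(m_dot, u_jet, rho_inf, u_inf, s_ref):
    """Input momentum coefficient
        C_mu = (m_dot * U_jet) / (0.5 * rho_inf * U_inf^2 * S_ref)
    with m_dot the jet mass flow rate (kg/s), u_jet the jet exit
    velocity (m/s), and s_ref the reference area (m^2)."""
    q_inf = 0.5 * rho_inf * u_inf ** 2  # free-stream dynamic pressure
    return m_dot * u_jet / (q_inf * s_ref)
```

For a 0.01 kg/s jet at 100 m/s in a 30 m/s free stream (sea-level density, 0.5 m² reference area) this gives Cμ ≈ 0.0036, illustrating why the jet exit velocity, the hard quantity to measure, dominates the fluidic input budget.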
A new adaptive multiple modelling approach for non-linear and non-stationary systems
NASA Astrophysics Data System (ADS)
Chen, Hao; Gong, Yu; Hong, Xia
2016-07-01
This paper proposes a novel adaptive multiple-modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window and apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
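The RLS weight adaptation applied to each selected sub-model can be sketched in its scalar form, one coefficient per sub-model; the sum-to-one combination step and sub-model selection are omitted from this sketch:

```python
def rls_scalar(pairs, lam=0.99, p0=1e3):
    """Recursive least squares for a single coefficient w in y ~ w*x,
    with forgetting factor lam (lam < 1 discounts old data, suiting
    non-stationary systems). Returns the final estimate."""
    w, p = 0.0, p0
    for x, y in pairs:
        gain = p * x / (lam + x * p * x)
        w += gain * (y - w * x)       # correct by the prediction error
        p = (p - gain * x * p) / lam  # update the inverse-correlation term
    return w
```

On noiseless data generated by y = 2x the estimate converges to 2 within a few samples; the forgetting factor lets the coefficient track drift when the underlying system changes.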
A biomechanical triphasic approach to the transport of nondilute solutions in articular cartilage.
Abazari, Alireza; Elliott, Janet A W; Law, Garson K; McGann, Locksley E; Jomha, Nadr M
2009-12-16
Biomechanical models for biological tissues such as articular cartilage generally contain an ideal, dilute solution assumption. In this article, a biomechanical triphasic model of cartilage is described that includes nondilute treatment of concentrated solutions such as those applied in vitrification of biological tissues. The chemical potential equations of the triphasic model are modified and the transport equations are adjusted for the volume fraction and frictional coefficients of the solutes that are not negligible in such solutions. Four transport parameters, i.e., water permeability, solute permeability, diffusion coefficient of solute in solvent within the cartilage, and the cartilage stiffness modulus, are defined as four degrees of freedom for the model. Water and solute transport in cartilage were simulated using the model and predictions of average concentration increase and cartilage weight were fit to experimental data to obtain the values of the four transport parameters. As far as we know, this is the first study to formulate the solvent and solute transport equations of nondilute solutions in the cartilage matrix. It is shown that the values obtained for the transport parameters are within the ranges reported in the available literature, which confirms the proposed model approach.
A Bayesian estimate of the concordance correlation coefficient with skewed data.
Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir
2015-01-01
The concordance correlation coefficient (CCC) is one of the most popular scaled indices used to evaluate agreement. Most commonly, it is used under the assumption that the data are normally distributed. This assumption, however, does not hold for skewed data sets. While methods for the estimation of the CCC of skewed data sets have been introduced and studied, the Bayesian approach and its comparison with the previous methods have been lacking. In this study, we propose a Bayesian method for the estimation of the CCC of skewed data sets and compare it with the best method previously investigated. The proposed method has certain advantages. It tends to outperform the best method studied before when the variation of the data is mainly from the random subject effect instead of error. Furthermore, it allows for greater flexibility in application by enabling incorporation of missing data, confounding covariates, and replications, which was not considered previously. The superiority of this new approach is demonstrated using simulation as well as real-life biomarker data sets used in an electroencephalography clinical study. The implementation of the Bayesian method is accessible through the Comprehensive R Archive Network. Copyright © 2015 John Wiley & Sons, Ltd.
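For reference, the quantity the Bayesian estimator targets is Lin's sample CCC, which penalises both lack of correlation and systematic location/scale shifts. A minimal sketch (function name is mine):

```python
import numpy as np

def ccc(x, y):
    """Sample concordance correlation coefficient (Lin, 1989):
    rho_c = 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2).
    Equals 1 only for perfect agreement y = x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```

Unlike the Pearson correlation, a constant offset between the two raters lowers the CCC even though the correlation stays at 1.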
Rathbun, R.E.; Tai, D.Y.
1988-01-01
The two-film model is often used to describe the volatilization of organic substances from water. This model assumes uniformly mixed water and air phases separated by thin films of water and air in which mass transfer is by molecular diffusion. Mass-transfer coefficients for the films, commonly called film coefficients, are related through the Henry's law constant and the model equation to the overall mass-transfer coefficient for volatilization. The films are modeled as two resistances in series, resulting in additive resistances. The two-film model and the concept of additivity of resistances were applied to experimental data for acetone and t-butyl alcohol. Overall mass-transfer coefficients for the volatilization of acetone and t-butyl alcohol from water were measured in the laboratory in a stirred constant-temperature bath. Measurements were completed for six water temperatures, each at three water mixing conditions. Wind speed was constant at about 0.1 meter per second for all experiments. Oxygen absorption coefficients were measured simultaneously with the measurement of the acetone and t-butyl alcohol mass-transfer coefficients. Gas-film coefficients for acetone, t-butyl alcohol, and water were determined by measuring the volatilization fluxes of the pure substances over a range of temperatures. Henry's law constants were estimated from data from the literature. The combination of high resistance in the gas film for solutes with low values of the Henry's law constant has not been studied previously. Calculation of the liquid-film coefficients for acetone and t-butyl alcohol from measured overall mass-transfer and gas-film coefficients, estimated Henry's law constants, and the two-film model equation resulted in physically unrealistic, negative liquid-film coefficients for most of the experiments at the medium and high water mixing conditions. 
An analysis of the two-film model equation showed that when the percentage resistance in the gas film is large and the gas-film resistance approaches the overall resistance in value, the calculated liquid-film coefficient becomes extremely sensitive to errors in the Henry's law constant. The negative coefficients were attributed to this sensitivity and to errors in the estimated Henry's law constants. Liquid-film coefficients for the absorption of oxygen were correlated with the stirrer Reynolds number and the Schmidt number. Application of this correlation with the experimental conditions and a molecular-diffusion coefficient adjustment resulted in values of the liquid-film coefficients for both acetone and t-butyl alcohol within the range expected for all three mixing conditions. Comparison of Henry's law constants calculated from these film coefficients and the experimental data with the constants calculated from literature data showed that the differences were small relative to the errors reported in the literature as typical for the measurement or estimation of Henry's law constants for hydrophilic compounds such as ketones and alcohols. Temperature dependence of the mass-transfer coefficients was expressed in two forms. The first, based on thermodynamics, assumed the coefficients varied as the exponential of the reciprocal absolute temperature. The second empirical approach assumed the coefficients varied as the exponential of the absolute temperature. Both of these forms predicted the temperature dependence of the experimental mass-transfer coefficients with little error for most of the water temperature range likely to be found in streams and rivers. Liquid-film and gas-film coefficients for acetone and t-butyl alcohol were similar in value. However, depending on water mixing conditions, overall mass-transfer coefficients for acetone were from two to four times larger than the coefficients for t-butyl alcohol. 
This difference in behavior of the coefficients resulted because the Henry's law constant for acetone was about three times larger than that of
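The sensitivity described above can be illustrated with a small sketch of the two-film resistance balance, 1/K_OL = 1/k_l + 1/(H·k_g), where H is the dimensionless Henry's law constant. The numerical values below are illustrative only, not the paper's measurements; they are chosen so that the gas film carries most of the overall resistance, the regime where a modest error in H flips the back-solved liquid-film coefficient negative.

```python
def liquid_film_coefficient(K_ol, k_g, H):
    """Back-solve the liquid-film coefficient k_l from the two-film model
        1/K_ol = 1/k_l + 1/(H * k_g),
    with H the dimensionless Henry's law constant. k_l goes negative
    whenever the estimated gas-film resistance 1/(H*k_g) exceeds the
    measured overall resistance 1/K_ol."""
    r_liquid = 1.0 / K_ol - 1.0 / (H * k_g)
    return 1.0 / r_liquid

# Illustrative numbers (not the paper's data): ~90% of the overall
# resistance in the gas film.
K_ol = 1.0e-6        # overall mass-transfer coefficient, m/s
k_g = 1.0e-2         # gas-film coefficient, m/s
H = 1.111e-4         # dimensionless Henry's law constant
k_l = liquid_film_coefficient(K_ol, k_g, H)          # positive
k_l_bad = liquid_film_coefficient(K_ol, k_g, 0.88 * H)  # a ~12% error
# in H pushes the gas-film resistance past the overall resistance,
# producing a physically unrealistic negative k_l.
```

This is exactly the failure mode the analysis attributes the negative coefficients to: as the gas-film resistance approaches the overall resistance, the difference of two nearly equal reciprocals amplifies any error in the Henry's law constant.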
Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A
2016-09-06
Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, enable minimizing the simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) which enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variations during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimation of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach. It has been shown that both approaches yield comparable results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Ion radial diffusion in an electrostatic impulse model for stormtime ring current formation
NASA Technical Reports Server (NTRS)
Chen, Margaret W.; Schulz, Michael; Lyons, Larry R.; Gorney, David J.
1992-01-01
Two refinements to the quasi-linear theory of ion radial diffusion are proposed and examined analytically with simulations of particle trajectories. The resonance-broadening correction by Dungey (1965) is applied to the quasi-linear diffusion theory by Faelthammar (1965) for an individual model storm. Quasi-linear theory is then applied to the mean diffusion coefficients resulting from simulations of particle trajectories in 20 model storms. The correction for drift-resonance broadening results in quasi-linear diffusion coefficients with discrepancies from the corresponding simulated values that are reduced by a factor of about 3. Further reductions in the discrepancies are noted following the averaging of the quasi-linear diffusion coefficients, the simulated coefficients, and the resonance-broadened coefficients for the 20 storms. Quasi-linear theory provides good descriptions of particle transport for a single storm but performs even better in conjunction with the present ensemble-averaging.
Free response approach in a parametric system
NASA Astrophysics Data System (ADS)
Huang, Dishan; Zhang, Yueyue; Shao, Hexi
2017-07-01
In this study, a new approach to predict the free response in a parametric system is investigated. It is proposed in the special form of a trigonometric series with an exponentially decaying function of time, based on the concept of frequency splitting. By applying harmonic balance, the parametric vibration equation is transformed into an infinite set of homogeneous linear equations, from which the principal oscillation frequency can be computed, and all coefficients of harmonic components can be obtained. With initial conditions, arbitrary constants in a general solution can be determined. To analyze the computational accuracy and consistency, an approach error function is defined, which is used to assess the computational error in the proposed approach and in the standard numerical approach based on the Runge-Kutta algorithm. Furthermore, an example of a dynamic model of airplane wing flutter on a turbine engine is given to illustrate the applicability of the proposed approach. Numerical solutions show that the proposed approach exhibits high accuracy in mathematical expression, and it is valuable for theoretical research and engineering applications of parametric systems.
NASA Astrophysics Data System (ADS)
Ben Shabat, Yael; Shitzer, Avraham
2012-07-01
Facial heat exchange convection coefficients were estimated from experimental data in cold and windy ambient conditions applicable to wind chill calculations. Measured facial temperature datasets made available to this study originated from 3 separate studies involving 18 male and 6 female subjects. Most of these data were for a -10°C ambient environment and wind speeds in the range of 0.2 to 6 m s⁻¹. Additional single experiments were for -5°C, 0°C and 10°C environments and wind speeds in the same range. Convection coefficients were estimated for all these conditions by means of a numerical facial heat exchange model, applying properties of biological tissues and a typical facial diameter of 0.18 m. Estimation was performed by adjusting the guessed convection coefficients in the computed facial temperatures, while comparing them to measured data, to obtain a satisfactory fit (r² > 0.98 in most cases). In one of the studies, heat flux meters were additionally used. Convection coefficients derived from these meters closely approached the estimated values for only the male subjects. They differed significantly, by about 50%, when compared to the estimated female subjects' data. Regression analysis was performed for just the -10°C ambient temperature, and the range of experimental wind speeds, due to the limited availability of data for other ambient temperatures. The regressed equation was assumed in the form of the equation underlying the "new" wind chill chart. Regressed convection coefficients, which closely duplicated the measured data, were consistently higher than those calculated by this equation, except for one single case. The estimated and currently used convection coefficients are shown to diverge exponentially from each other, as wind speed increases. 
This finding casts considerable doubts on the validity of the convection coefficients that are used in the computation of the "new" wind chill chart and their applicability to humans in cold and windy environments.
NASA Technical Reports Server (NTRS)
Herbst, E.; Leung, C. M.
1986-01-01
In order to incorporate large ion-polar neutral rate coefficients into existing gas phase reaction networks, it is necessary to utilize simplified theoretical treatments because of the significant number of rate coefficients needed. The authors have used two simple theoretical treatments: the locked dipole approach of Moran and Hamill for linear polar neutrals and the trajectory scaling approach of Su and Chesnavich for nonlinear polar neutrals. The former approach is suitable for linear species because in the interstellar medium these are rotationally relaxed to a large extent and the incoming charged reactants can lock their dipoles into the lowest energy configuration. The latter approach is a better approximation for nonlinear neutral species, in which rotational relaxation is normally less severe and the incoming charged reactants are not as effective at locking the dipoles. The treatments are in reasonable agreement with more detailed long range theories and predict an inverse square root dependence on kinetic temperature for the rate coefficient. Compared with the locked dipole method, the trajectory scaling approach results in rate coefficients smaller by a factor of approximately 2.5.
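The two treatments above can be sketched in dimensionless form, writing the rate enhancement k/k_Langevin as a function of x = μ_D/√(2αk_BT), where μ_D is the neutral's dipole moment and α its polarizability. The Su-Chesnavich fit constants below are as commonly quoted in the astrochemistry literature; the locked-dipole factor follows from adding the fully aligned dipole term to the Langevin rate. Both should be read as a sketch, not as the paper's exact expressions.

```python
import math

def su_chesnavich_ratio(x):
    """Trajectory-scaling factor k/k_Langevin as a function of
    x = mu_D / sqrt(2 * alpha * kB * T) (Su & Chesnavich 1982 fit,
    constants as commonly quoted)."""
    if x < 2.0:
        return (x + 0.5090) ** 2 / 10.526 + 0.9754
    return 0.4767 * x + 0.6200

def locked_dipole_ratio(x):
    """Locked-dipole factor k/k_Langevin for the same x; the dipole
    term here exceeds the trajectory-scaling one by a factor of
    2 / (0.4767 * sqrt(pi)) ~ 2.4 in the large-x limit."""
    return 1.0 + 2.0 * x / math.sqrt(math.pi)
```

Since x scales as T^(-1/2), both expressions reproduce the inverse-square-root temperature dependence of the rate coefficient in the dipole-dominated (large-x) limit, and their ratio approaches the factor of roughly 2.5 quoted in the abstract.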
NASA Astrophysics Data System (ADS)
Song, Qi; Song, Y. D.; Cai, Wenchuan
2011-09-01
Although the backstepping control design approach has been widely utilised in many practical systems, little effort has been made in applying this useful method to train systems. The main purpose of this paper is to apply this popular control design technique to speed and position tracking control of high-speed trains. By integrating adaptive control with backstepping control, we develop a control scheme that is able to address not only the traction and braking dynamics ignored in most existing methods, but also the uncertain friction and aerodynamic drag forces arising from uncertain resistance coefficients. As such, the resultant control algorithms are able to achieve high-precision train position and speed tracking under varying railway operating conditions, as validated by theoretical analysis and numerical simulations.
NASA Astrophysics Data System (ADS)
Zhan, Zhigang; Wei, Huajiang; Jin, Ying
2015-02-01
Laser irradiation is considered to be a promising innovative technology that has been developed in an attempt to increase transdermal drug delivery. In this study, a near-infrared CW diode laser (785 nm) was applied to increase the permeability of glycerol solutions in human skin in vivo and improve the optical clearing efficacy. Results show that for both 15%v/v and 30%v/v glycerol, the permeability coefficient increased significantly if the detected area of the skin tissue was treated with laser irradiation before optical clearing agents (OCAs) were applied. This study, based on an optical coherence tomography imaging technique and the optical clearing effect, shows laser irradiation to be a new approach for enhancing the penetration of OCAs and accelerating the rate of transdermal drug delivery.
NASA Astrophysics Data System (ADS)
Camporesi, Roberto
2016-01-01
We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order based on the factorization of the differential operator. The approach is elementary: we assume only a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of the other more advanced approaches: Laplace transform, linear systems, the general theory of linear equations with variable coefficients, and variation of parameters. The approach presented here can be used in a first course on differential equations for science and engineering majors.
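The core of the impulsive response method can be sketched numerically for a second-order equation: the particular solution with zero initial conditions is the convolution of the forcing term with the impulsive response h, which solves the homogeneous equation with h(0) = 0, h'(0) = 1. This is an illustrative sketch for the distinct-root case, not the paper's derivation; names and the quadrature choice are mine.

```python
import numpy as np

def particular_solution(a, b, f, t):
    """Particular solution of y'' + a*y' + b*y = f(t) with
    y(0) = y'(0) = 0, as the convolution of f with the impulsive
    response h. For distinct roots r1, r2 of s^2 + a*s + b = 0:
        h(t) = (exp(r1*t) - exp(r2*t)) / (r1 - r2).
    """
    r1, r2 = np.roots([1.0, a, b])

    def h(tau):
        return np.real((np.exp(r1 * tau) - np.exp(r2 * tau)) / (r1 - r2))

    y = np.zeros_like(t)
    for i in range(1, len(t)):
        s = t[: i + 1]
        g = h(t[i] - s) * f(s)                       # integrand of the convolution
        y[i] = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(s))  # trapezoid rule
    return y
```

For example, y'' + y = 1 has impulsive response h(t) = sin t, so the convolution gives y(t) = 1 - cos t, which the numerical sketch reproduces to quadrature accuracy.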
Apparent-Strain Correction for Combined Thermal and Mechanical Testing
NASA Technical Reports Server (NTRS)
Johnson, Theodore F.; O'Neil, Teresa L.
2007-01-01
Combined thermal and mechanical testing requires that the total strain be corrected for the coefficient of thermal expansion mismatch between the strain gage and the specimen, or apparent strain, when the temperature varies while a mechanical load is being applied. Collecting data for an apparent-strain test becomes problematic as the specimen size increases. If the test specimen cannot be placed in a variable-temperature test chamber to generate apparent-strain data with no mechanical loads, coupons can be used to generate the required data. The coupons, however, must have the same strain gage type, coefficient of thermal expansion, and constraints as the specimen to be useful. Obtaining apparent-strain data at temperatures lower than -320 F is challenging due to the difficulty of maintaining steady-state, uniform temperatures on a given specimen. Equations to correct for apparent strain in a real-time fashion, and data from apparent-strain tests for composite and metallic specimens over a temperature range from -450 F to +250 F, are presented in this paper. Three approaches to extrapolate apparent-strain data from -320 F to -430 F are presented and compared to the measured apparent-strain data. The first two approaches use a subset of the apparent-strain curves between -320 F and 100 F to extrapolate to -430 F, while the third approach extrapolates the apparent-strain curve over the temperature range of -320 F to +250 F to -430 F. The first two approaches are superior to the third, but the use of either of the first two approaches is contingent upon the degree of non-linearity of the apparent-strain curve.
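The coupon-based correction described above can be sketched as follows: fit a smooth curve to the no-load coupon data (apparent strain vs. temperature), then subtract it from the in-test readings at the measured temperatures. The polynomial order and all names are illustrative assumptions, not the paper's equations.

```python
import numpy as np

def apparent_strain_correction(coupon_T, coupon_eps, T_meas, eps_meas):
    """Correct measured total strain for apparent (thermal) strain.

    coupon_T, coupon_eps : no-load coupon apparent-strain data
    T_meas, eps_meas     : temperatures and total strains during the test

    Fits a cubic to the coupon data and subtracts the predicted
    apparent strain, leaving the mechanical strain.
    """
    p = np.polyfit(coupon_T, coupon_eps, deg=3)
    return eps_meas - np.polyval(p, T_meas)
```

The fit only helps, of course, if the coupon matches the specimen's gage type, thermal expansion coefficient, and constraints, as the abstract emphasizes.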
NASA Astrophysics Data System (ADS)
Bashir, Usman; Yu, Yugang; Hussain, Muntazir; Zebende, Gilney F.
2016-11-01
This paper investigates the dynamics of the relationship between foreign exchange markets and stock markets through time-varying co-movements. In this sense, we analyzed monthly time series for Latin American countries for the period from 1991 to 2015. Furthermore, we apply Granger causality to verify the direction of causality between foreign exchange and stock markets, and the detrended cross-correlation approach (ρDCCA) for co-movements at different time scales. Our empirical results suggest a positive cross-correlation between exchange rate and stock price for all Latin American countries. The findings reveal two clear patterns of correlation. First, Brazil and Argentina have positive correlation in both short and long time frames. Second, the remaining countries are negatively correlated at shorter time scales, gradually moving to positive. This paper contributes to the field in three ways. First, we verified the co-movements of exchange rate and stock prices that were rarely discussed in previous empirical studies. Second, the ρDCCA coefficient is a robust and powerful methodology to measure the cross-correlation when dealing with the non-stationarity of time series. Third, most of the studies employed one or two time scales using co-integration and vector autoregressive approaches. Not much is known about the co-movements at varying time scales between foreign exchange and stock markets. The ρDCCA coefficient facilitates the understanding of its explanatory depth.
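The ρDCCA coefficient used above can be sketched directly: integrate both (mean-removed) series into profiles, detrend them in boxes of size s, and normalise the detrended covariance by the two detrended variances (Zebende's construction). This minimal version uses non-overlapping boxes and linear detrending; names and those simplifications are mine.

```python
import numpy as np

def rho_dcca(x, y, s):
    """Detrended cross-correlation coefficient rho_DCCA(s):
    F2_dcca / (F_dfa_x * F_dfa_y) at box size s. Lies in [-1, 1]
    like an ordinary correlation coefficient."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    X = np.cumsum(x - x.mean())   # integrated profiles
    Y = np.cumsum(y - y.mean())
    n_boxes = len(x) // s
    t = np.arange(s)
    f_xx = f_yy = f_xy = 0.0
    for k in range(n_boxes):
        xs = X[k * s:(k + 1) * s]
        ys = Y[k * s:(k + 1) * s]
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)  # linear detrend
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f_xx += np.mean(rx * rx)
        f_yy += np.mean(ry * ry)
        f_xy += np.mean(rx * ry)
    return f_xy / np.sqrt(f_xx * f_yy)
```

Sweeping s over a range of box sizes yields the scale-dependent co-movement profile the paper exploits.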
Cong, Fengyu; Lin, Qiu-Hua; Astikainen, Piia; Ristaniemi, Tapani
2014-10-30
It is well-known that data of event-related potentials (ERPs) conform to the linear transform model (LTM). For group-level ERP data processing using principal/independent component analysis (PCA/ICA), ERP data of different experimental conditions and different participants are often concatenated. It is theoretically assumed that different experimental conditions and different participants possess the same LTM. However, how to validate this assumption has seldom been reported in terms of signal processing methods. When ICA decomposition is globally optimized for ERP data of one stimulus, we obtain the ratio between two coefficients mapping a source in the brain to two points on the scalp. Based on such a ratio, we defined a relative mapping coefficient (RMC). If RMCs between two conditions for an ERP are not significantly different in practice, the mapping coefficients of this ERP between the two conditions are statistically identical. We examined whether the same LTM of ERP data could be applied for two different stimulus types of fearful and happy facial expressions. They were used in an ignore oddball paradigm in adult human participants. We found no significant difference in LTMs (based on ICASSO) of N170 responses to the fearful and the happy faces in terms of RMCs of N170. We found no existing methods that allow such a straightforward comparison. The proposed RMC in light of ICA decomposition is an effective approach for validating the similarity of LTMs of ERPs between experimental conditions. This is very fundamental to applying group-level PCA/ICA to process ERP data. Copyright © 2014 Elsevier B.V. All rights reserved.
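The RMC idea can be sketched as a ratio of mixing-matrix entries. The point of using ratios is that ICA recovers each column of the mixing matrix only up to an arbitrary sign and scale, and the ratio cancels exactly that ambiguity. The function below is an illustrative sketch, not the authors' pipeline; normalising by the largest-magnitude channel is my own choice of reference.

```python
import numpy as np

def relative_mapping_coefficients(A, comp):
    """Relative mapping coefficients (RMCs) of one ICA component:
    ratios of its mixing coefficients across channels, taken relative
    to the largest-magnitude channel.

    A    : (channels x components) mixing matrix for one condition
    comp : index of the component (e.g. the one carrying N170)
    """
    col = A[:, comp]
    ref = col[np.argmax(np.abs(col))]
    return col / ref
```

If two conditions share the same LTM, their RMC vectors for a given component agree even when the ICA decompositions differ by sign or scale, which is what makes the between-condition comparison well-posed.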
NASA Astrophysics Data System (ADS)
González-Llana, Arturo; González-Bárcena, David; Pérez-Grande, Isabel; Sanz-Andrés, Ángel
2018-07-01
The selection of the extreme thermal environmental conditions (albedo coefficient and Earth infrared radiation) for the thermal design of stratospheric balloon missions is usually based on the methodologies applied in space missions. However, the particularities of stratospheric balloon missions, such as the much higher residence time of the balloon payload over a determined area, make necessary an approach centered on the actual environment the balloon is going to encounter, in terms of geographic area and season of flight. In this sense, this work is focused on stratospheric balloon missions circumnavigating the North Pole during the summer period. Pairs of albedo and Earth infrared radiation satellite data restricted to this area and season of interest have been treated statistically. Furthermore, the environmental conditions leading to the extreme temperatures of the payload depend in turn on the surface finish, and more particularly on the ratio between the solar absorptance and the infrared emissivity α/ε. A simple but representative thermal model of a balloon and its payload has been set up in order to identify the pairs of albedo coefficient and Earth infrared radiation leading to extreme temperatures for each value of α/ε.
Argenti, Fabrizio; Bianchi, Tiziano; Alparone, Luciano
2006-11-01
In this paper, a new despeckling method based on undecimated wavelet decomposition and maximum a posteriori (MAP) estimation is proposed. Such a method relies on the assumption that the probability density function (pdf) of each wavelet coefficient is generalized Gaussian (GG). The major novelty of the proposed approach is that the parameters of the GG pdf are taken to be space-varying within each wavelet frame. Thus, they may be adjusted to spatial image context, not only to scale and orientation. Since the MAP equation to be solved is a function of the parameters of the assumed pdf model, the variance and shape factor of the GG function are derived from the theoretical moments, which depend on the moments and joint moments of the observed noisy signal and on the statistics of speckle. The solution of the MAP equation yields the MAP estimate of the wavelet coefficients of the noise-free image. The restored SAR image is synthesized from such coefficients. Experimental results, carried out on both synthetic speckled images and true SAR images, demonstrate that MAP filtering can be successfully applied to SAR images represented in the shift-invariant wavelet domain, without resorting to a logarithmic transformation.
Limb Correction of Polar-Orbiting Imagery for the Improved Interpretation of RGB Composites
NASA Technical Reports Server (NTRS)
Jedlovec, Gary J.; Elmer, Nicholas
2016-01-01
Red-Green-Blue (RGB) composite imagery combines information from several spectral channels into one image to aid in the operational analysis of atmospheric processes. However, infrared channels are adversely affected by the limb effect, the result of an increase in optical path length of the absorbing atmosphere between the satellite and the earth as viewing zenith angle increases. This paper reviews a newly developed technique to quickly correct for limb effects in both clear and cloudy regions using latitudinally and seasonally varying limb correction coefficients for real-time applications. These limb correction coefficients account for the increase in optical path length in order to produce limb-corrected RGB composites. The improved utility of a limb-corrected Air Mass RGB composite from the application of this approach is demonstrated using Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) imagery. However, the limb correction can be applied to any polar-orbiting sensor infrared channels, provided the proper limb correction coefficients are calculated. Corrected RGB composites provide multiple advantages over uncorrected RGB composites, including increased confidence in the interpretation of RGB features, improved situational awareness for operational forecasters, and the ability to use RGB composites from multiple sensors jointly to increase the temporal frequency of observations.
Influence of temperature and charge effects on thermophoresis of polystyrene beads.
Syshchyk, Olga; Afanasenkau, Dzmitry; Wang, Zilin; Kriegs, Hartmut; Buitenhuis, Johan; Wiegand, Simone
2016-12-01
We study the thermodiffusion behavior of spherical polystyrene beads with a diameter of 25 nm by infrared thermal diffusion forced Rayleigh scattering (IR-TDFRS). Similar beads were used to investigate the radial dependence of the Soret coefficient by different authors. While Duhr and Braun (Proc. Natl. Acad. Sci. U.S.A. 104, 9346 (2007)) observed a quadratic radial dependence, Braibanti et al. (Phys. Rev. Lett. 100, 108303 (2008)) found a linear radial dependence of the Soret coefficient. We demonstrated that special care needs to be taken to obtain reliable thermophoretic data, because the measurements are very sensitive to surface properties. The colloidal particles were characterized by transmission electron microscopy, and dynamic light scattering (DLS) experiments were performed. We carried out systematic thermophoretic measurements as a function of temperature, buffer and surfactant concentration. The temperature dependence was analyzed using an empirical formula. To describe the Debye length dependence we used a theoretical model by Dhont. The resulting surface charge density is in agreement with previous literature results. Finally, we analyze the dependence of the Soret coefficient on the concentration of the anionic surfactant sodium dodecyl sulfate (SDS), applying an empirical thermodynamic approach accounting for chemical contributions.
A New Approach to Galaxy Morphology. I. Analysis of the Sloan Digital Sky Survey Early Data Release
NASA Astrophysics Data System (ADS)
Abraham, Roberto G.; van den Bergh, Sidney; Nair, Preethi
2003-05-01
In this paper we present a new statistic for quantifying galaxy morphology based on measurements of the Gini coefficient of galaxy light distributions. This statistic is easy to measure and is commonly used in econometrics to measure how wealth is distributed in human populations. When applied to galaxy images, the Gini coefficient provides a quantitative measure of the inequality with which a galaxy's light is distributed among its constituent pixels. We measure the Gini coefficient of local galaxies in the Early Data Release of the Sloan Digital Sky Survey and demonstrate that this quantity is closely correlated with measurements of central concentration, but with significant scatter. This scatter is almost entirely due to variations in the mean surface brightness of galaxies. By exploring the distribution of galaxies in the three-dimensional parameter space defined by the Gini coefficient, central concentration, and mean surface brightness, we show that all nearby galaxies lie on a well-defined two-dimensional surface (a slightly warped plane) embedded within a three-dimensional parameter space. By associating each galaxy sample with the equation of this plane, we can encode the morphological composition of the entire SDSS g*-band sample using the following three numbers: {22.451, 5.366, 7.010}. The i*-band sample is encoded as {22.149, 5.373, 7.627}.
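As a minimal illustration of the statistic itself (not the authors' measurement pipeline, which involves image segmentation and sky subtraction), the Gini coefficient of a set of pixel fluxes can be computed from the sorted values:

```python
def gini(values):
    """Gini coefficient of non-negative values (e.g. pixel fluxes).

    Uses the sorted-value identity
        G = sum_i (2i - n - 1) * x_i / (n * sum(x)),  x sorted ascending.
    Returns 0 for perfect equality; approaches 1 when one pixel holds
    all the light.
    """
    x = sorted(values)
    n = len(x)
    total = sum(x)
    if n == 0 or total == 0:
        return 0.0
    return sum((2 * i - n - 1) * v for i, v in enumerate(x, start=1)) / (n * total)

print(gini([1, 1, 1, 1]))  # 0.0   (light spread evenly)
print(gini([0, 0, 0, 4]))  # 0.75  (maximum for n = 4 is (n-1)/n)
```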
Binzoni, T; Leung, T S; Rüfenacht, D; Delpy, D T
2006-01-21
Based on quasi-elastic scattering theory (and random walk on a lattice approach), a model of laser-Doppler flowmetry (LDF) has been derived which can be applied to measurements in large tissue volumes (e.g. when the interoptode distance is >30 mm). The model holds for a semi-infinite medium and takes into account the transport-corrected scattering coefficient and the absorption coefficient of the tissue, and the scattering coefficient of the red blood cells. The model holds for anisotropic scattering and for multiple scattering of the photons by the moving scatterers of finite size. In particular, it has also been possible to take into account the simultaneous presence of both Brownian and pure translational movements. An analytical and simplified version of the model has also been derived and its validity investigated, for the case of measurements in human skeletal muscle tissue. It is shown that at large optode spacing it is possible to use the simplified model, taking into account only a 'mean' light pathlength, to predict the blood flow related parameters. It is also demonstrated that the 'classical' blood volume parameter, derived from LDF instruments, may not represent the actual blood volume variations when the investigated tissue volume is large. The simplified model does not need knowledge of the tissue optical parameters and thus should allow the development of very simple and cost-effective LDF hardware.
Improving the prospects of cleavage-based nanopore sequencing engines
NASA Astrophysics Data System (ADS)
Brady, Kyle T.; Reiner, Joseph E.
2015-08-01
Recently proposed methods for DNA sequencing involve the use of cleavage-based enzymes attached to the opening of a nanopore. The idea is that DNA interacting with either an exonuclease or polymerase protein will lead to a small molecule being cleaved near the mouth of the nanopore, and subsequent entry into the pore will yield information about the DNA sequence. The prospects for this approach seem promising, but it has been shown that diffusion related effects impose a limit on the capture probability of molecules by the pore, which limits the efficacy of the technique. Here, we revisit the problem with the goal of optimizing the capture probability via a step decrease in the nucleotide diffusion coefficient between the pore and bulk solutions. It is shown through random walk simulations and a simplified analytical model that decreasing the molecule's diffusion coefficient in the bulk relative to its value in the pore increases the nucleotide capture probability. Specifically, we show that at sufficiently high applied transmembrane potentials (≥100 mV), increasing the potential by a factor f is equivalent to decreasing the diffusion coefficient ratio Dbulk/Dpore by the same factor f. This suggests a promising route toward implementation of cleavage-based sequencing protocols. We also discuss the feasibility of forming a step function in the diffusion coefficient across the pore-bulk interface.
Self-organization of developing embryo using scale-invariant approach
Tiraihi, Ali; Tiraihi, Mujtaba; Tiraihi, Taki
2011-01-01
Background: Self-organization is a fundamental feature of living organisms at all hierarchical levels from molecule to organ. It has also been documented in developing embryos. Methods: In this study, a scale-invariant power law (SIPL) method has been used to study self-organization in developing embryos. The SIPL coefficient was calculated using a centro-axial skew symmetrical matrix (CSSM) generated by entering the components of the Cartesian coordinates; for each component, one CSSM was generated. A basic square matrix (BSM) was constructed and the determinant was calculated in order to estimate the SIPL coefficient. This was applied to developing C. elegans during early stages of embryogenesis. The power law property of the method was evaluated using the straight line and Koch curve and the results were consistent with fractal dimensions (fd). Diffusion-limited aggregation (DLA) was used to validate the SIPL method. Results and conclusion: The fractal dimensions of both the straight line and Koch curve showed consistency with the SIPL coefficients, which indicated the power law behavior of the SIPL method. The results showed that the ABp sublineage had a higher SIPL coefficient than EMS, indicating that ABp is more organized than EMS. The fd determined using DLA was higher in ABp than in EMS and its value was consistent with type 1 cluster formation, while that in EMS was consistent with type 2. PMID:21635789
DOE Office of Scientific and Technical Information (OSTI.GOV)
AllamehZadeh, Mostafa, E-mail: dibaparima@yahoo.com
A Quadratic Neural Networks (QNNs) model has been developed for the seismic source classification problem at regional distances, using ARMA coefficient determination by Artificial Neural Networks (ANNs). We have devised a supervised neural system to discriminate between earthquakes and chemical explosions with filter coefficients obtained from windowed P-wave phase spectra (15 s). First, we preprocess the recorded signals to cancel out instrumental and attenuation site effects and obtain a compact representation of seismic records. Second, we use a QNNs system to obtain ARMA coefficients for feature extraction in the discrimination problem. The derived coefficients are then applied to the neural system for training and classification. In this study, we explore the possibility of using single-station three-component (3C) covariance matrix traces from a priori-known explosion sites (learning) for automatically recognizing subsequent explosions from the same site. The results have shown that this feature extraction gives the best classifier for seismic signals and performs significantly better than other classification methods. The events tested include 36 chemical explosions at the Semipalatinsk test site in Kazakhstan and 61 earthquakes (mb = 5.0-6.5) recorded by the Iranian National Seismic Network (INSN). 100% correct decisions were obtained between site explosions and some of the non-site events. The above approach to event discrimination is very flexible, as we can combine several 3C stations.
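The QNN-based ARMA estimation cannot be reconstructed from this summary; as a sketch of the underlying idea of compressing a signal into model coefficients, a plain AR(p) model can be fitted by least squares (pure-Python illustration, not the authors' method):

```python
def fit_ar(signal, order):
    """Least-squares AR(p) fit: x[t] ~ sum_k a[k] * x[t-k-1].

    Builds the lagged design matrix, forms the normal equations, and
    solves them by Gaussian elimination with partial pivoting.
    """
    p = order
    rows = [[signal[t - k - 1] for k in range(p)] for t in range(p, len(signal))]
    y = [signal[t] for t in range(p, len(signal))]
    # Normal equations A a = b.
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yt for r, yt in zip(rows, y)) for i in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for j in range(c, p):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    a = [0.0] * p
    for r in range(p - 1, -1, -1):
        a[r] = (b[r] - sum(A[r][j] * a[j] for j in range(r + 1, p))) / A[r][r]
    return a

# Recover the coefficients of a known AR(2) recurrence exactly.
xs = [1.0, 0.5]
for _ in range(60):
    xs.append(1.5 * xs[-1] - 0.7 * xs[-2])
print(fit_ar(xs, 2))  # ~ [1.5, -0.7]
```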
Test Reliability at the Individual Level
Hu, Yueqin; Nesselroade, John R.; Erbacher, Monica K.; Boker, Steven M.; Burt, S. Alexandra; Keel, Pamela K.; Neale, Michael C.; Sisk, Cheryl L.; Klump, Kelly
2016-01-01
Reliability has a long history as one of the key psychometric properties of a test. However, a given test might not measure people equally reliably. Test scores from some individuals may have considerably greater error than others. This study proposed two approaches using intraindividual variation to estimate test reliability for each person. A simulation study suggested that the parallel tests approach and the structural equation modeling approach recovered the simulated reliability coefficients. Then in an empirical study, where forty-five females were measured daily on the Positive and Negative Affect Schedule (PANAS) for 45 consecutive days, separate estimates of reliability were generated for each person. Results showed that reliability estimates of the PANAS varied substantially from person to person. The methods provided in this article apply to tests measuring changeable attributes and require repeated measures across time on each individual. This article also provides a set of parallel forms of PANAS. PMID:28936107
The determination of the elastodynamic fields of an ellipsoidal inhomogeneity
NASA Technical Reports Server (NTRS)
Fu, L. S.; Mura, T.
1983-01-01
The determination of the elastodynamic fields of an ellipsoidal inhomogeneity is studied in detail via the eigenstrain approach. A complete formulation and a treatment of both types of eigenstrains for equivalence between the inhomogeneity problem and the inclusion problem are given. This approach is shown to be mathematically identical to other approaches such as the direct volume integral formulation. Expanding the eigenstrains and applied strains in the polynomial form in the position vector and satisfying the equivalence conditions at every point, the governing simultaneous algebraic equations for the unknown coefficients in the eigenstrain expansion are derived. The elastodynamic field outside an ellipsoidal inhomogeneity in a linear elastic isotropic medium is given as an example. The angular and frequency dependence of the induced displacement field, as well as the differential and total cross sections are formally given in series expansion form for the case of uniformly distributed eigenstrains.
NASA Astrophysics Data System (ADS)
Movshovitz, N.; Fortney, J. J.; Helled, R.; Hubbard, W. B.; Mankovich, C.; Thorngren, D.; Wahl, S. M.; Militzer, B.; Durante, D.
2017-12-01
The external gravity field of a planetary body is determined by the distribution of mass in its interior. Therefore, a measurement of the external field, properly interpreted, tells us about the interior density profile, ρ(r), which in turn can be used to constrain the composition in the interior and thereby learn about the formation mechanism of the planet. Recently, very high precision measurements of the gravity coefficients for Saturn have been made by the radio science instrument on the Cassini spacecraft during its Grand Finale orbits. The resulting coefficients come with an associated uncertainty. The task of matching a given density profile to a given set of gravity coefficients is relatively straightforward, but the question of how to best account for the uncertainty is not. In essentially all prior work on matching models to gravity field data, inferences about planetary structure have rested on assumptions regarding the imperfectly known H/He equation of state and on the assumption of an adiabatic interior. Here we wish to vastly expand the phase space of such calculations. We present a framework for describing all the possible interior density structures of a Jovian planet constrained by a given set of gravity coefficients and their associated uncertainties. Our approach is statistical. We produce a random sample of ρ(a) curves drawn from the underlying (and unknown) probability distribution of all curves, where ρ is the density on an interior level surface with equatorial radius a. Since the resulting set of density curves is a random sample, that is, curves appear with frequency proportional to the likelihood of their being consistent with the measured gravity, we can compute probability distributions for any quantity that is a function of ρ, such as central pressure, oblateness, core mass and radius, etc.
Our approach is also Bayesian, in that it can utilize any prior assumptions about the planet's interior, as necessary, without being overly constrained by them. We apply this approach to produce a sample of Saturn interior models based on gravity data from Grand Finale orbits and discuss their implications.
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1992-01-01
Research conducted during the period from July 1991 through December 1992 is covered. A method based upon the quasi-analytical approach was developed for computing the aerodynamic sensitivity coefficients of three dimensional wings in transonic and subsonic flow. In addition, the method computes for comparison purposes the aerodynamic sensitivity coefficients using the finite difference approach. The accuracy and validity of the methods are currently under investigation.
Penalized spline estimation for functional coefficient regression models.
Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan
2010-04-01
The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning different penalty λ accordingly. We demonstrate the proposed approach by both simulation examples and a real data application.
NASA Technical Reports Server (NTRS)
1977-01-01
A class of signal processors suitable for the reduction of radar scatterometer data in real time was developed. The systems were applied to the reduction of single-polarized 13.3 GHz scatterometer data and provided a real-time output of radar scattering coefficient as a function of incidence angle. It was proposed that a system for processing C-band radar data be constructed to support scatterometer systems currently under development. The establishment of a feasible design approach to the development of this processor system utilizing microprocessor technology was emphasized.
Gazzillo, Domenico
2011-03-28
For fluids of molecules with short-ranged hard-sphere-Yukawa (HSY) interactions, it is proven that the Noro-Frenkel "extended law of corresponding states" cannot be applied down to the vanishing attraction range, since the exact HSY second virial coefficient diverges in such a limit. It is also shown that, besides Baxter's original approach, a fully correct alternative definition of "adhesive hard spheres" can be obtained by taking the vanishing-range-limit (sticky limit) not of a Yukawa tail, as is commonly done, but of a slightly different potential with a logarithmic-Yukawa attraction.
Generalized epidemic process on modular networks.
Chung, Kihong; Baek, Yongjoo; Kim, Daniel; Ha, Meesoon; Jeong, Hawoong
2014-05-01
Social reinforcement and modular structure are two salient features observed in the spreading of behavior through social contacts. In order to investigate the interplay between these two features, we study the generalized epidemic process on modular networks with equal-sized finite communities and adjustable modularity. Using the analytical approach originally applied to clique-based random networks, we show that the system exhibits a bond-percolation type continuous phase transition for weak social reinforcement, whereas a discontinuous phase transition occurs for sufficiently strong social reinforcement. Our findings are numerically verified using the finite-size scaling analysis and the crossings of the bimodality coefficient.
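The bimodality coefficient used above to locate the transition can be sketched from sample moments; a simple population-moment form is shown below (published analyses often apply a finite-sample-corrected variant, so treat this as illustrative):

```python
def bimodality_coefficient(xs):
    """Population-moment form of the bimodality coefficient:
        b = (skewness**2 + 1) / kurtosis,  kurtosis = m4 / m2**2.
    Values above 5/9 (the uniform-distribution value) suggest bimodality.
    """
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    return (skew ** 2 + 1) / kurt

# Two well-separated, equal-weight clusters give a high coefficient.
print(bimodality_coefficient([0.0] * 50 + [1.0] * 50))  # 1.0
```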
Machine learning for many-body physics: The case of the Anderson impurity model
Arsenault, Louis-François; Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole; ...
2014-10-31
We applied machine learning methods in order to find the Green's function of the Anderson impurity model, a basic model system of quantum many-body condensed-matter physics. Furthermore, different methods of parametrizing the Green's function are investigated; a representation in terms of Legendre polynomials is found to be superior due to its limited number of coefficients and its applicability to state of the art methods of solution. The dependence of the errors on the size of the training set is determined. Our results indicate that a machine learning approach to dynamical mean-field theory may be feasible.
Tight-binding model for borophene and borophane
NASA Astrophysics Data System (ADS)
Nakhaee, M.; Ketabi, S. A.; Peeters, F. M.
2018-03-01
Starting from the simplified linear combination of atomic orbitals method in combination with first-principles calculations, we construct a tight-binding (TB) model in the two-centre approximation for borophene and hydrogenated borophene (borophane). The Slater and Koster approach is applied to calculate the TB Hamiltonian of these systems. We obtain expressions for the Hamiltonian and overlap matrix elements between different orbitals for the different atoms and present the SK coefficients in a nonorthogonal basis set. An anisotropic Dirac cone is found in the band structure of borophane. We derive a Dirac low-energy Hamiltonian and compare the Fermi velocities with that of graphene.
Determining attenuation properties of interfering fast and slow ultrasonic waves in cancellous bone.
Nelson, Amber M; Hoffman, Joseph J; Anderson, Christian C; Holland, Mark R; Nagatani, Yoshiki; Mizuno, Katsunori; Matsukawa, Mami; Miller, James G
2011-10-01
Previous studies have shown that interference between fast waves and slow waves can lead to observed negative dispersion in cancellous bone. In this study, the effects of overlapping fast and slow waves on measurements of the apparent attenuation as a function of propagation distance are investigated along with methods of analysis used to determine the attenuation properties. Two methods are applied to simulated data that were generated based on experimentally acquired signals taken from a bovine specimen. The first method uses a time-domain approach that was dictated by constraints imposed by the partial overlap of fast and slow waves. The second method uses a frequency-domain log-spectral subtraction technique on the separated fast and slow waves. Applying the time-domain analysis to the broadband data yields apparent attenuation behavior that is larger in the early stages of propagation and decreases as the wave travels deeper. In contrast, performing frequency-domain analysis on the separated fast waves and slow waves results in attenuation coefficients that are independent of propagation distance. Results suggest that features arising from the analysis of overlapping two-mode data may represent an alternate explanation for the previously reported apparent dependence on propagation distance of the attenuation coefficient of cancellous bone. © 2011 Acoustical Society of America
Plasma-assisted physical vapor deposition surface treatments for tribological control
NASA Technical Reports Server (NTRS)
Spalvins, Talivaldis
1990-01-01
In any mechanical or engineering system where contacting surfaces are in relative motion, adhesion, wear, and friction affect reliability and performance. With the advancement of space age transportation systems, the tribological requirements have dramatically increased. This is due to the optimized design, precision tolerance requirements, and high reliability expected for solid lubricating films in order to withstand hostile operating conditions (vacuum, high-low temperatures, high loads, and space radiation). For these problem areas the ion-assisted deposition/modification processes (plasma-based and ion beam techniques) offer the greatest potential for the synthesis of thin films and the tailoring of adherence and chemical and structural properties for optimized tribological performance. The present practices and new approaches of applying soft solid lubricant and hard wear resistant films to engineering substrates are reviewed. The ion bombardment treatments have increased film adherence, lowered friction coefficients, and enhanced wear life of the solid lubricating films such as the dichalcogenides (MoS2) and the soft metals (Au, Ag, Pb). Currently, sputtering is the preferred method of applying MoS2 films; and ion plating, the soft metallic films. Ultralow friction coefficients (less than 0.01) were achieved with sputtered MoS2. Further, new diamond-like carbon and BN lubricating films are being developed by using the ion assisted deposition techniques.
Theoretical Analysis of Drug Dissolution: I. Solubility and Intrinsic Dissolution Rate.
Shekunov, Boris; Montgomery, Eda Ross
2016-09-01
The first-principles approach presented in this work combines surface kinetics and convective diffusion modeling applied to compounds with pH-dependent solubility and in different dissolution media. This analysis is based on experimental data available for approximately 100 compounds of pharmaceutical interest. Overall, there is a linear relationship between the drug solubility and intrinsic dissolution rate expressed through the total kinetic coefficient of dissolution and dimensionless numbers defining the mass transfer regime. The contribution of surface kinetics appears to be significant, constituting on average ∼20% resistance to the dissolution flux in the compendial rotating disk apparatus at 100 rpm. The surface kinetics contribution becomes more dominant under conditions of fast laminar or turbulent flows or in cases when the surface kinetic coefficient may decrease as a function of solution composition or pH. Limitations of the well-known convective diffusion equation for the rotating disk by Levich are examined using direct computational modeling with simultaneous dissociation and acid-base reactions, in which the intrinsic dissolution rate is strongly dependent on the pH profile and solution ionic strength. It is shown that the concept of a diffusion boundary layer does not strictly apply to reacting/interacting species and that thin-film diffusion models cannot be used quantitatively in the general case. Copyright © 2016. Published by Elsevier Inc.
Kong, W W; Zhang, C; Liu, F; Gong, A P; He, Y
2013-08-01
The objective of this study was to examine the possibility of applying visible and near-infrared spectroscopy to the quantitative detection of the irradiation dose of irradiated milk powder. A total of 150 samples were used: 100 for the calibration set and 50 for the validation set. The samples were irradiated at 5 different dose levels in the dose range 0 to 6.0 kGy. Six different pretreatment methods were compared. The prediction results of full spectra given by linear and nonlinear calibration methods suggested that Savitzky-Golay smoothing and first derivative were suitable pretreatment methods in this study. Regression coefficient analysis was applied to select effective wavelengths (EWs). Fewer than 10 EWs were selected, and they are useful for portable detection instrument or sensor development. Partial least squares, extreme learning machine, and least squares support vector machine were used. The best prediction performance was achieved by the EW-extreme learning machine model with first-derivative spectra, with a correlation coefficient of 0.97 and a root mean square error of prediction of 0.844. This study provides a new approach for the fast detection of the irradiation dose of milk powder. The results could be helpful for quality detection and safety monitoring of milk powder. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Virtual screening by a new Clustering-based Weighted Similarity Extreme Learning Machine approach
Kudisthalert, Wasu
2018-01-01
Machine learning techniques are becoming popular in virtual screening tasks. One powerful machine learning algorithm is the Extreme Learning Machine (ELM), which has been applied to many applications and has recently been applied to virtual screening. We propose the Weighted Similarity ELM (WS-ELM), which is based on a single-layer feed-forward neural network in conjunction with 16 different similarity coefficients as activation function in the hidden layer. It is known that the performance of conventional ELM is not robust due to random weight selection in the hidden layer. Thus, we propose a Clustering-based WS-ELM (CWS-ELM) that deterministically assigns weights by utilising clustering algorithms, i.e. k-means clustering and support vector clustering. The experiments were conducted on one of the most challenging datasets, the Maximum Unbiased Validation Dataset, which contains 17 activity classes carefully selected from PubChem. The proposed algorithms were then compared with other machine learning techniques such as support vector machine, random forest, and similarity searching. The results show that CWS-ELM in conjunction with support vector clustering yields the best performance when utilised together with the Sokal/Sneath(1) coefficient. Furthermore, the ECFP_6 fingerprint presents the best results in our framework compared to the other types of fingerprints, namely ECFP_4, FCFP_4, and FCFP_6. PMID:29652912
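Two of the similarity coefficients of the kind used as activation functions above can be written over binary fingerprints represented as sets of on-bit indices (a sketch, not the paper's implementation):

```python
def tanimoto(fp1, fp2):
    """Tanimoto (Jaccard) similarity: a / (a + b + c), where a is the count
    of shared on-bits and b, c the on-bits unique to each fingerprint."""
    a = len(fp1 & fp2)
    return a / len(fp1 | fp2)

def sokal_sneath_1(fp1, fp2):
    """Sokal/Sneath(1): a / (a + 2b + 2c), which doubles the weight of
    mismatched bits relative to Tanimoto."""
    a = len(fp1 & fp2)
    b = len(fp1 - fp2)
    c = len(fp2 - fp1)
    return a / (a + 2 * (b + c))

f1, f2 = {1, 2, 3, 4}, {3, 4, 5}
print(tanimoto(f1, f2))        # 2/5  = 0.4
print(sokal_sneath_1(f1, f2))  # 2/8  = 0.25
```

Because mismatches are penalized twice, Sokal/Sneath(1) is never larger than Tanimoto on the same pair.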
A subgradient approach for constrained binary optimization via quantum adiabatic evolution
NASA Astrophysics Data System (ADS)
Karimi, Sahar; Ronagh, Pooya
2017-08-01
An outer approximation method has been proposed in the literature for solving the Lagrangian dual of a constrained binary quadratic programming problem via quantum adiabatic evolution. This should be an efficient prescription for solving the Lagrangian dual problem in the presence of an ideally noise-free quantum adiabatic system. However, current implementations of quantum annealing systems demand methods that are efficient at handling possible sources of noise. In this paper, we consider a subgradient method for finding an optimal primal-dual pair for the Lagrangian dual of a constrained binary polynomial programming problem. We then study the quadratic stable set (QSS) problem as a case study. We see that this method applied to the QSS problem can be viewed as an instance-dependent penalty-term approach that avoids large penalty coefficients. Finally, we report our experimental results of using the D-Wave 2X quantum annealer and conclude that our approach helps this quantum processor succeed more often in solving these problems compared with the usual penalty-term approaches.
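The flavor of a subgradient method for a Lagrangian dual can be illustrated on a toy separable binary problem (this is not the QSS instances or the annealer workflow): relax the constraint with a multiplier, solve the relaxation exactly, and step the multiplier along the constraint violation:

```python
def solve_dual(costs, k, step=0.2, iters=200):
    """Subgradient ascent on the Lagrangian dual of
        min c.x  s.t.  sum(x) >= k,  x binary.
    The relaxed problem L(x, lam) = c.x + lam*(k - sum(x)) is separable:
    x_i = 1 iff c_i - lam < 0. The constraint violation k - sum(x) is a
    subgradient of the dual function at lam.
    """
    lam = 0.0
    best = float("-inf")
    for _ in range(iters):
        x = [1 if c < lam else 0 for c in costs]
        dual_val = sum(c * xi for c, xi in zip(costs, x)) + lam * (k - sum(x))
        best = max(best, dual_val)
        g = k - sum(x)                 # subgradient at lam
        lam = max(0.0, lam + step * g) # project onto lam >= 0
    return best, lam

val, lam = solve_dual([3.0, 1.0, 4.0, 2.0], k=2)
print(val, lam)  # best dual value 3.0 (the two cheapest items cost 1 + 2)
```

Here the dual bound is tight because the LP relaxation of this toy instance has no integrality gap; the multiplier plays the role of an instance-dependent penalty coefficient.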
NASA Technical Reports Server (NTRS)
Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.
1976-01-01
Results of a study of the development of flutter modules applicable to automated structural design of advanced aircraft configurations, such as a supersonic transport, are presented. Automated structural design is restricted to automated sizing of the elements of a given structural model. It includes a flutter optimization procedure; i.e., a procedure for arriving at a structure with minimum mass for satisfying flutter constraints. Methods of solving the flutter equation and computing the generalized aerodynamic force coefficients in the repetitive analysis environment of a flutter optimization procedure are studied, and recommended approaches are presented. Five approaches to flutter optimization are explained in detail and compared. An approach to flutter optimization incorporating some of the methods discussed is presented. Problems related to flutter optimization in a realistic design environment are discussed and an integrated approach to the entire flutter task is presented. Recommendations for further investigations are made. Results of numerical evaluations, applying the five methods of flutter optimization to the same design task, are presented.
Cylindrically symmetric Green's function approach for modeling the crystal growth morphology of ice.
Libbrecht, K G
1999-08-01
We describe a front-tracking Green's function approach to modeling cylindrically symmetric crystal growth. This method is simple to implement, and with little computer power can adequately model a wide range of physical situations. We apply the method to modeling the hexagonal prism growth of ice crystals, which is governed primarily by diffusion along with anisotropic surface kinetic processes. From ice crystal growth observations in air, we derive measurements of the kinetic growth coefficients for the basal and prism faces as a function of temperature, for supersaturations near the water saturation level. These measurements are interpreted in the context of a model for the nucleation and growth of ice, in which the growth dynamics are dominated by the structure of a disordered layer on the ice surfaces.
Composite anion-exchangers modified with nanoparticles of hydrated oxides of multivalent metals
NASA Astrophysics Data System (ADS)
Maltseva, T. V.; Kolomiets, E. O.; Dzyazko, Yu. S.; Scherbakov, S.
2018-02-01
Organic-inorganic composite ion-exchangers based on anion-exchange resins have been obtained. Particles of one-component and two-component modifiers were embedded using an approach that allows purposeful control of the size of the embedded particles. The approach is based on the Ostwald-Freundlich equation, which was adapted to deposition in an ion-exchange matrix; the adapted equation was obtained experimentally. Hydrated oxides of zirconium and iron were applied as modifiers, and the concentrations of the reagents were varied. The embedded particles accelerate sorption, the rate of which is fitted by the model equation of pseudo-second-order chemical reactions. For sorption of arsenate ions from a very dilute solution (50 µg dm-3), the composites show higher distribution coefficients compared with the pristine resin.
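The Ostwald-Freundlich equation mentioned above links particle size to solubility. A minimal sketch of the textbook form ln(S_r/S_inf) = 2*gamma*V_m/(r*R*T) follows; the authors' experimentally adapted variant for the ion-exchange matrix is not reproduced here, and the parameter values in the test are hypothetical.

```python
import math

def ostwald_freundlich_ratio(gamma, v_m, r, temp):
    """Solubility enhancement S_r / S_inf for a particle of radius r (m),
    given surface energy gamma (J/m^2) and molar volume v_m (m^3/mol)."""
    R = 8.314  # gas constant, J mol^-1 K^-1
    return math.exp(2.0 * gamma * v_m / (r * R * temp))
```

Smaller particles are more soluble, which is the lever the deposition approach uses to control embedded particle size.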
Recent advances in statistical energy analysis
NASA Technical Reports Server (NTRS)
Heron, K. H.
1992-01-01
Statistical Energy Analysis (SEA) has traditionally been developed using a modal summation and averaging approach, which has led to the need for many restrictive SEA assumptions. The assumption of 'weak coupling' is particularly unacceptable when attempts are made to apply SEA to structural coupling. It is now believed that this assumption is more a consequence of the modal formulation than a necessary restriction of SEA itself. The present analysis ignores this restriction and describes a wave approach to the calculation of plate-plate coupling loss factors. Predictions based on this method are compared with results obtained from experiments using point excitation on one side of an irregular six-sided box structure. The conclusions show that the use and calculation of infinite transmission coefficients is the way forward for the development of a purely predictive SEA code.
New Insights into Signed Path Coefficient Granger Causality Analysis.
Zhang, Jian; Li, Chong; Jiang, Tianzi
2016-01-01
Granger causality analysis, a time series analysis technique derived from econometrics, has been applied in an ever-increasing number of publications in the field of neuroscience, including fMRI, EEG/MEG, and fNIRS. The present study focuses on the validity of "signed path coefficient Granger causality," a Granger-causality-derived analysis method that has been adopted by many fMRI studies in the last few years. This method generally estimates the causal effect among time series by an order-1 autoregression, and interprets a positive or negative coefficient as an "excitatory" or "inhibitory" influence. In the current work we conducted a series of computations on resting-state fMRI data and simulation experiments to illustrate that the signed path coefficient method is flawed and untenable: the autoregressive coefficients are not always consistent with the real causal relationships, which inevitably leads to erroneous conclusions. Overall, our findings suggest that the applicability of this kind of causality analysis is rather limited, and researchers should be more cautious in applying signed path coefficient Granger causality to fMRI data to avoid misinterpretation.
Rigorous theory of graded thermoelectric converters including finite heat transfer coefficients
NASA Astrophysics Data System (ADS)
Gerstenmaier, York Christian; Wachutka, Gerhard
2017-11-01
Maximization of thermoelectric (TE) converter performance with an inhomogeneous material and electric current distribution has been investigated in the previous literature neglecting thermal contact resistances to the heat reservoirs. The heat transfer coefficients (HTCs), defined as inverse thermal contact resistances per unit area, are thus infinite, whereas in reality parasitic thermal resistances, i.e., finite HTCs, are always present. Maximization of the generated electric power and of the cooling power in the refrigerator mode with respect to Seebeck coefficients and heat conductivity, for a given profile of the material's TE figure of merit Z, are mathematically ill-posed problems in the presence of infinite HTCs. As shown in this work, a fully self-consistent solution is possible for finite HTCs, and in many respects the results are fundamentally different. A previous theory for 3D devices is extended to include finite HTCs and is applied to 1D devices. For the heat conductivity profile, an infinite number of solutions exist leading to the same device performance. Cooling power maximization for finite HTCs in 1D leads to a strongly enhanced corresponding efficiency (coefficient of performance), whereas results with infinite HTCs lead to a non-monotonic temperature profile and a coefficient of performance tending to zero for the prescribed heat conductivities. For maximized generated electric power, the corresponding generator efficiency is nearly constant, independent of the finite HTC values. The maximized efficiencies in the generator and cooling modes are equal to the efficiencies for infinite HTCs, provided that the corresponding powers approach zero. These and further findings are condensed into four theorems in the conclusions.
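For orientation, the classical constant-property result that this work generalizes can be sketched: the maximum efficiency of a homogeneous TE generator with ideal (infinite-HTC) thermal contacts, expressed through the figure of merit Z. This is the standard textbook formula, not the paper's finite-HTC theory; the test values are hypothetical.

```python
import math

def carnot_eta(t_hot, t_cold):
    """Carnot limit for reservoirs at t_hot and t_cold (K)."""
    return 1.0 - t_cold / t_hot

def te_generator_eta_max(z, t_hot, t_cold):
    """Textbook maximum TE generator efficiency with ideal thermal contacts:
    eta = eta_Carnot * (sqrt(1+Z*Tm) - 1) / (sqrt(1+Z*Tm) + Tc/Th)."""
    t_mean = 0.5 * (t_hot + t_cold)
    m = math.sqrt(1.0 + z * t_mean)
    return carnot_eta(t_hot, t_cold) * (m - 1.0) / (m + t_cold / t_hot)
```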
NASA Astrophysics Data System (ADS)
Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi
2018-04-01
The attenuation of random noise is important for improving the signal-to-noise ratio (SNR). However, most conventional denoising methods require the noisy data to be sampled on a uniform grid, making them unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing noisy data from a non-uniform grid to a specified uniform grid is proposed. First, the denoising method is performed for every time slice extracted from the 3D noisy data along the source and receiver directions; then the 2D non-equispaced fast Fourier transform (NFFT) is introduced into the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated using the spectral projected-gradient algorithm for ℓ1-norm problems. Local threshold factors are then chosen for the uniform curvelet coefficients at each decomposition scale, yielding the effective curvelet coefficients for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. Examples with synthetic and real data demonstrate the effectiveness of the proposed approach for noise attenuation of non-uniformly sampled data compared with the conventional FDCT and wavelet-transform methods.
Raevsky, O A; Grigor'ev, V J; Raevskaja, O E; Schaper, K-J
2006-06-01
QSPR analyses of a data set containing experimental partition coefficients in the three systems octanol-water, water-gas, and octanol-gas for 98 chemicals have shown that it is possible to calculate any partition coefficient in the system 'gas phase/octanol/water' by three different approaches. (1) From experimental partition coefficients obtained in the corresponding two other subsystems; however, in many cases these data may not be available. (2) A traditional QSPR analysis based on, e.g., HYBOT descriptors (hydrogen-bond acceptor and donor factors, SigmaCa and SigmaCd, together with polarisability alpha, a steric bulk-effect descriptor), supplemented with substructural indicator variables. (3) A very promising approach combining the similarity concept with QSPR based on HYBOT descriptors: observed partition coefficients of the structurally nearest neighbours of a compound of interest are used, together with contributions arising from differences in alpha, SigmaCa, and SigmaCd values between the compound of interest and its nearest neighbour(s). In this investigation, highly significant relationships were obtained by approaches (1) and (3) for the octanol/gas-phase partition coefficient (log Log).
Standards for Standardized Logistic Regression Coefficients
ERIC Educational Resources Information Center
Menard, Scott
2011-01-01
Standardized coefficients in logistic regression analysis have the same utility as standardized coefficients in linear regression analysis. Although there has been no consensus on the best way to construct standardized logistic regression coefficients, there is now sufficient evidence to suggest a single best approach to the construction of a…
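One simple convention from this literature can be sketched: partial standardization, which rescales each logit coefficient by its predictor's standard deviation so that coefficients are comparable as per-SD changes in the log-odds. This is an illustration of the general idea, not necessarily the single best approach the abstract alludes to.

```python
import math

def stdev(v):
    """Sample standard deviation (n-1 denominator)."""
    m = sum(v) / len(v)
    return math.sqrt(sum((u - m) ** 2 for u in v) / (len(v) - 1))

def partially_standardized(b, x_columns):
    """Scale each logistic coefficient b_j by the SD of its predictor column,
    leaving the outcome on the logit scale (one common convention)."""
    return [bj * stdev(xj) for bj, xj in zip(b, x_columns)]
```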
Using wave intensity analysis to determine local reflection coefficient in flexible tubes.
Li, Ye; Parker, Kim H; Khir, Ashraf W
2016-09-06
It has been shown that reflected waves affect the shape and magnitude of the arterial pressure waveform, and that reflected waves have physiological and clinical prognostic value. In general the reflection coefficient is defined as the ratio of the energy of the reflected to the incident wave. Since pressure has the units of energy per unit volume, arterial reflection coefficients are traditionally defined as the ratio of the reflected to the incident pressure. We demonstrate that this approach may be prone to inaccuracies when applied locally. One of the main objectives of this work is to examine the possibility of using wave intensity, which has units of energy flux per unit area, to determine the reflection coefficient. We used an in vitro experimental setting with a single inlet tube joined to a second tube with different properties to form a single reflection site. The second tube was long enough to ensure that reflections from its outlet did not obscure the interactions of the initial wave. We generated an approximately half-sinusoidal wave at the inlet of the tube and took measurements of pressure and flow along the tube. We calculated the reflection coefficient using wave intensity (R_dI and R_dI^0.5) and wave energy (R_I and R_I^0.5) as well as the measured pressure (R_dP), and compared these results with the reflection coefficient calculated theoretically from the mechanical properties of the tubes. The experimental results show that the reflection coefficients determined by all the techniques we studied increased or decreased with distance from the reflection site, depending on the type of reflection. In our experiments, R_dP, R_dI^0.5 and R_I^0.5 are the most reliable parameters for measuring the mean reflection coefficient, whilst R_dI and R_I provide the best measure of the local reflection coefficient closest to the reflection site.
Additional work with bifurcations, tapered tubes and in vivo experiments is needed to further understand and validate the method and assess its potential clinical use. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
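The theoretical reflection coefficient referred to above follows from the characteristic impedances of the two tubes. A minimal sketch under standard 1-D wave assumptions; the values in the test are hypothetical, not the paper's tube properties.

```python
def characteristic_impedance(rho, c, area):
    """Z = rho*c/A for a fluid-filled elastic tube (rho: fluid density,
    c: wave speed, A: cross-sectional area)."""
    return rho * c / area

def reflection_coefficient(z1, z2):
    """Pressure reflection coefficient at the junction from tube 1 into tube 2."""
    return (z2 - z1) / (z2 + z1)
```

A stiffer or narrower downstream tube (larger Z2) gives a positive reflection; a perfect impedance match gives zero.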
Thompson, R.S.; Anderson, K.H.; Bartlein, P.J.
2008-01-01
The method of modern analogs is widely used to obtain estimates of past climatic conditions from paleobiological assemblages, yet despite its frequent use, the method involves so-far untested assumptions. We applied four analog approaches to a continental-scale set of bioclimatic and plant-distribution presence/absence data for North America to assess how well this method works under near-optimal modern conditions. For each point on the grid, we calculated the similarity between its vegetation assemblage and those of all other points on the grid (excluding nearby points). The climate of the points with the most similar vegetation was used to estimate the climate at the target grid point. Estimates based on the use of the Jaccard similarity coefficient had smaller errors than those based on the use of a new similarity coefficient, although the latter may be more robust because it does not assume that the "fossil" assemblage is complete. The results of these analyses indicate that presence/absence vegetation assemblages provide a valid basis for estimating bioclimates on the continental scale. However, the accuracy of the estimates is strongly tied to the number of species in the target assemblage, and the analog method is necessarily constrained to produce estimates that fall within the range of observed values. We applied the four modern analog approaches and the mutual overlap (or "mutual climatic range") method to estimate bioclimatic conditions represented by the plant macrofossil assemblage from a packrat midden of Last Glacial Maximum age from southern Nevada. In general, the estimation approaches produced similar results in regard to moisture conditions, but there was a greater range of estimates for growing-degree days. Despite its limitations, the modern analog technique can provide paleoclimatic reconstructions that serve as the starting point for the interpretation of past climatic conditions.
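The single-best-analog step described above can be sketched with the Jaccard coefficient on presence/absence assemblages. A toy illustration, not the authors' continental-scale pipeline; site names, taxa and climate values are invented.

```python
def jaccard(a, b):
    """Jaccard similarity of two presence/absence sets of taxa."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def analog_climate(target_taxa, grid):
    """Estimate the target's climate as that of the most similar modern
    assemblage; `grid` maps site -> (taxa, climate)."""
    best = max(grid.values(), key=lambda rec: jaccard(target_taxa, rec[0]))
    return best[1]
```

A fuller implementation would average the k best analogs and exclude sites near the target, as the abstract notes.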
Alexakis, Dimitrios D.; Mexis, Filippos-Dimitrios K.; Vozinaki, Anthi-Eirini K.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.
2017-01-01
A methodology for elaborating multi-temporal Sentinel-1 and Landsat 8 satellite images for estimating topsoil Soil Moisture Content (SMC) to support hydrological simulation studies is proposed. After pre-processing the remote sensing data, backscattering coefficient, Normalized Difference Vegetation Index (NDVI), thermal infrared temperature and incidence angle parameters are assessed for their potential to infer ground measurements of SMC, collected at the top 5 cm. A non-linear approach using Artificial Neural Networks (ANNs) is tested. The methodology is applied in Western Crete, Greece, where a SMC gauge network was deployed during 2015. The performance of the proposed algorithm is evaluated using leave-one-out cross validation and sensitivity analysis. ANNs prove to be the most efficient in SMC estimation yielding R2 values between 0.7 and 0.9. The proposed methodology is used to support a hydrological simulation with the HEC-HMS model, applied at the Keramianos basin which is ungauged for SMC. Results and model sensitivity highlight the contribution of combining Sentinel-1 SAR and Landsat 8 images for improving SMC estimates and supporting hydrological studies. PMID:28635625
Zhang, Zhi-Hai; Yuan, Jian-Hui; Guo, Kang-Xian
2018-04-25
Studies aimed at understanding the nonlinear optical (NLO) properties of the GaAs/Ga0.7Al0.3As Morse quantum well (QW) have focused on the intersubband optical absorption coefficients (OACs) and refractive index changes (RICs). These studies have taken two complementary approaches: (1) the compact-density-matrix approach and iterative method have been used to obtain the expressions of the OACs and RICs in the Morse QW; (2) finite difference techniques have been used to obtain the energy eigenvalues and their corresponding eigenfunctions of the GaAs/Ga0.7Al0.3As Morse QW under an applied magnetic field, hydrostatic pressure, and temperature. Our results show that the hydrostatic pressure and magnetic field have a significant influence on the position and magnitude of the resonant peaks of the nonlinear OACs and RICs. Simultaneously, a saturation case is observed in the total absorption spectrum, which is modulated by the hydrostatic pressure and magnetic field. The physical reasons have been analyzed in depth.
Distance correlation methods for discovering associations in large astrophysical databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martínez-Gómez, Elizabeth; Richards, Mercedes T.; Richards, Donald St. P., E-mail: elizabeth.martinez@itam.mx, E-mail: mrichards@astro.psu.edu, E-mail: richards@stat.psu.edu
2014-01-20
High-dimensional, large-sample astrophysical databases of galaxy clusters, such as the Chandra Deep Field South COMBO-17 database, provide measurements on many variables for thousands of galaxies and a range of redshifts. Current understanding of galaxy formation and evolution rests sensitively on relationships between different astrophysical variables; hence an ability to detect and verify associations or correlations between variables is important in astrophysical research. In this paper, we apply a recently defined statistical measure called the distance correlation coefficient, which can be used to identify new associations and correlations between astrophysical variables. The distance correlation coefficient applies to variables of any dimension, can be used to determine smaller sets of variables that provide equivalent astrophysical information, is zero only when variables are independent, and is capable of detecting nonlinear associations that are undetectable by the classical Pearson correlation coefficient. Hence, the distance correlation coefficient provides more information than the Pearson coefficient. We analyze numerous pairs of variables in the COMBO-17 database with the distance correlation method and with the maximal information coefficient. We show that the Pearson coefficient can be estimated with higher accuracy from the corresponding distance correlation coefficient than from the maximal information coefficient. For given values of the Pearson coefficient, the distance correlation method has a greater ability than the maximal information coefficient to resolve astrophysical data into highly concentrated horseshoe- or V-shapes, which enhances classification and pattern identification. These results are observed over a range of redshifts beyond the local universe and for galaxies from elliptical to spiral.
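The sample distance correlation has a short closed form built from double-centered pairwise distance matrices. A pure-Python sketch for 1-D samples follows (the Székely-Rizzo estimator; an illustration, not the authors' code); the test shows it detecting a symmetric quadratic association for which Pearson's r would be zero.

```python
import math

def _centered_dists(v):
    """Pairwise |vi - vj| matrix, double-centered by row, column and grand means."""
    n = len(v)
    d = [[abs(v[i] - v[j]) for j in range(n)] for i in range(n)]
    row = [sum(r) / n for r in d]
    grand = sum(row) / n
    return [[d[i][j] - row[i] - row[j] + grand for j in range(n)] for i in range(n)]

def distance_correlation(x, y):
    """Sample distance correlation of two 1-D samples; zero (in the population)
    iff the variables are independent, unlike Pearson's r."""
    n = len(x)
    A, B = _centered_dists(x), _centered_dists(y)
    dcov2 = sum(A[i][j] * B[i][j] for i in range(n) for j in range(n)) / n**2
    dvarx = sum(a * a for r in A for a in r) / n**2
    dvary = sum(b * b for r in B for b in r) / n**2
    return math.sqrt(dcov2 / math.sqrt(dvarx * dvary)) if dvarx * dvary > 0 else 0.0
```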
Distribution coefficients of rare earth ions in cubic zirconium dioxide
NASA Astrophysics Data System (ADS)
Romer, H.; Luther, K.-D.; Assmus, W.
1994-08-01
Cubic zirconium dioxide crystals are grown with the skull melting technique. The effective distribution coefficients for Nd^3+, Sm^3+ and Er^3+ as dopants are determined experimentally as a function of the crystal growth velocity. With the Burton-Prim-Slichter theory, the equilibrium distribution coefficients can be calculated. The distribution coefficients of all other trivalent rare earth ions can be estimated by applying the correlation with their ionic radii.
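The Burton-Prim-Slichter relation used above connects the effective and equilibrium distribution coefficients through the growth velocity. A minimal sketch of the standard form; the boundary-layer thickness and diffusivity values in the test are hypothetical.

```python
import math

def bps_keff(k0, v, delta, diff):
    """Burton-Prim-Slichter effective distribution coefficient:
    k0: equilibrium coefficient, v: growth velocity (m/s),
    delta: diffusion boundary-layer thickness (m), diff: solute diffusivity (m^2/s)."""
    return k0 / (k0 + (1.0 - k0) * math.exp(-v * delta / diff))
```

At vanishing growth velocity k_eff reduces to k0, and it approaches 1 at high velocity, which is how the measured k_eff(v) curve yields the equilibrium coefficient.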
NASA Astrophysics Data System (ADS)
Shaw, Jacob T.; Lidster, Richard T.; Cryer, Danny R.; Ramirez, Noelia; Whiting, Fiona C.; Boustead, Graham A.; Whalley, Lisa K.; Ingham, Trevor; Rickard, Andrew R.; Dunmore, Rachel E.; Heard, Dwayne E.; Lewis, Ally C.; Carpenter, Lucy J.; Hamilton, Jacqui F.; Dillon, Terry J.
2018-03-01
Gas-phase rate coefficients are fundamental to understanding atmospheric chemistry, yet experimental data are not available for the oxidation reactions of many of the thousands of volatile organic compounds (VOCs) observed in the troposphere. Here, a new experimental method is reported for the simultaneous study of reactions between multiple different VOCs and OH, the most important daytime atmospheric radical oxidant. This technique is based upon established relative rate concepts but has the advantage of a much higher throughput of target VOCs. By evaluating multiple VOCs in each experiment, and through measurement of the depletion in each VOC after reaction with OH, the OH + VOC reaction rate coefficients can be derived. Results from experiments conducted under controlled laboratory conditions were in good agreement with the available literature for the reaction of 19 VOCs, prepared in synthetic gas mixtures, with OH. This approach was used to determine a rate coefficient for the reaction of OH with 2,3-dimethylpent-1-ene for the first time; k = 5.7 (±0.3) × 10^-11 cm^3 molecule^-1 s^-1. In addition, a further seven VOCs had only two, or fewer, individual OH rate coefficient measurements available in the literature. The results from this work were in good agreement with those measurements. A similar dataset, at an elevated temperature of 323 (±10) K, was used to determine new OH rate coefficients for 12 aromatic, 5 alkane, 5 alkene and 3 monoterpene VOC + OH reactions. In OH relative reactivity experiments that used ambient air at the University of York, a large number of different VOCs were observed, of which 23 were positively identified. Due to difficulties with detection limits and fully resolving peaks, only 19 OH rate coefficients were derived from these ambient air samples, including 10 reactions for which data were previously unavailable at the elevated reaction temperature of T = 323 (±10) K.
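The relative rate concept referred to above can be sketched: when a reference compound of known OH rate coefficient is depleted alongside the target VOC by the same OH exposure, the unknown rate coefficient follows from the ratio of logarithmic depletions. A hedged illustration, not the authors' multi-VOC implementation; the numbers in the test are synthetic.

```python
import math

def relative_rate_k(k_ref, target0, target_t, ref0, ref_t):
    """Relative-rate estimate of an OH + VOC rate coefficient:
    k_target = k_ref * ln([T]0/[T]t) / ln([R]0/[R]t),
    valid when both compounds are removed only by reaction with OH."""
    return k_ref * math.log(target0 / target_t) / math.log(ref0 / ref_t)
```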
NASA Astrophysics Data System (ADS)
Grabtchak, Serge; Montgomery, Logan G.; Whelan, William M.
2014-05-01
We demonstrated the application of relative radiance-based continuous wave (cw) measurements for recovering absorption and scattering properties (the effective attenuation coefficient, the diffusion coefficient, the absorption coefficient and the reduced scattering coefficient) of bulk porcine muscle phantoms in the 650-900 nm spectral range. Both the side-firing fiber (the detector) and the fiber with a spherical diffuser at the end (the source) were inserted interstitially at predetermined locations in the phantom. The porcine phantoms were prostate-shaped, ~4 cm in diameter and ~3 cm in thickness, and made from porcine loin or tenderloin muscles. The described method was previously validated using the diffusion approximation on simulated and experimental radiance data obtained for a homogeneous Intralipid-1% liquid phantom. The approach required performing measurements in two locations in the tissue with different distances to the source. Measurements were performed on 21 porcine phantoms. Spectral dependences of the effective attenuation and absorption coefficients for the loin phantom deviated from the corresponding dependences for the tenderloin phantom for wavelengths <750 nm. The diffusion constant and the reduced scattering coefficient were very close for both phantom types. To quantify chromophore presence, the plot for the absorption coefficient was matched with a synthetic absorption spectrum constructed from deoxyhemoglobin, oxyhemoglobin and water. The closest match for the porcine loin spectrum was obtained with the following concentrations: 15.5 µM (±30% s.d.) Hb, 21 µM (±30% s.d.) HbO2 and 0.3 (±30% s.d.) fractional volume of water. The tenderloin absorption spectrum was best described by 30 µM (±30% s.d.) Hb, 19 µM (±30% s.d.) HbO2 and 0.3 (±30% s.d.) fractional volume of water. The higher concentration of Hb in tenderloin was consistent with the dark-red appearance of the tenderloin phantom.
The method can be applied to a number of biological tissues and organs for interstitial optical interrogation.
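The two-location measurement scheme described above can be sketched in the diffusion approximation, where the fluence from an isotropic point source falls off as exp(-mu_eff*r)/r, so mu_eff follows from measurements at two source-detector distances. Illustrative only; the paper works with relative radiance rather than this simplified fluence form, and the test values are synthetic.

```python
import math

def mu_eff_two_point(phi1, r1, phi2, r2):
    """Effective attenuation coefficient from fluence phi measured at two
    distances, assuming the point-source diffusion solution
    phi(r) ~ exp(-mu_eff * r) / r."""
    return math.log(phi1 * r1 / (phi2 * r2)) / (r2 - r1)
```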
Comparison of Satellite-based Basal and Adjusted Evapotranspiration for Several California Crops
NASA Astrophysics Data System (ADS)
Johnson, L.; Lund, C.; Melton, F. S.
2013-12-01
There is a continuing need to develop new sources of information on agricultural crop water consumption in the arid Western U.S. Pursuant to the California Water Conservation Act of 2009, for instance, the stakeholder community has developed a set of quantitative indicators involving measurement of evapotranspiration (ET) or crop consumptive use (Calif. Dept. Water Resources, 2012). Fraction of reference ET (or, crop coefficients) can be estimated from a biophysical description of the crop canopy involving green fractional cover (Fc) and height as per the FAO-56 practice standard of Allen et al. (1998). The current study involved 19 fields in California's San Joaquin Valley and Central Coast during 2011-12, growing a variety of specialty and commodity crops: lettuce, raisin, tomato, almond, melon, winegrape, garlic, peach, orange, cotton, corn and wheat. Most crops were on surface or subsurface drip, though micro-jet, sprinkler and flood were represented as well. Fc was retrospectively estimated every 8-16 days by optical satellite data and interpolated to a daily timestep. Crop height was derived as a capped linear function of Fc using published guideline maxima. These variables were used to generate daily basal crop coefficients (Kcb) per field through most or all of each respective growth cycle by the density coefficient approach of Allen & Pereira (2009). A soil water balance model for both topsoil and root zone, based on FAO-56 and using on-site measurements of applied irrigation and precipitation, was used to develop daily soil evaporation and crop water stress coefficients (Ke, Ks). Key meteorological variables (wind speed, relative humidity) were extracted from the California Irrigation Management Information System (CIMIS) for climate correction. Basal crop ET (ETcb) was then derived from Kcb using CIMIS reference ET. Adjusted crop ET (ETc_adj) was estimated by the dual coefficient approach involving Kcb, Ke, and incorporating Ks. 
Cumulative ETc_adj throughout each monitoring period was lower than cumulative ETcb for most crops, indicating that the effect of water stress tended to exceed that of soil evaporation relative to basal conditions. We present results from the analysis and discuss implications for the operational use of satellite-based Kcb and ETcb estimates for agricultural water resource management.
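The dual-coefficient calculation described above combines basal (Kcb), soil-evaporation (Ke) and stress (Ks) coefficients with reference ET as in the FAO-56 practice standard. A minimal sketch with hypothetical daily values:

```python
def et_basal(kcb, et0):
    """Basal crop ET: transpiration under standard, unstressed conditions."""
    return kcb * et0

def et_adjusted(kcb, ke, ks, et0):
    """FAO-56 dual-coefficient crop ET: stress-reduced basal transpiration
    (Ks*Kcb) plus soil evaporation (Ke), scaled by reference ET (mm/day)."""
    return (ks * kcb + ke) * et0
```

With Ks < 1, ETc_adj can fall below ETcb even when Ke > 0, which is the pattern reported for most crops above.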
On the diffusion of ferrocenemethanol in room-temperature ionic liquids: an electrochemical study.
Lovelock, Kevin R J; Ejigu, Andinet; Loh, Sook Fun; Men, Shuang; Licence, Peter; Walsh, Darren A
2011-06-07
The electrochemical behaviour of ferrocenemethanol (FcMeOH) has been studied in a range of room-temperature ionic liquids (RTILs) using cyclic voltammetry, chronoamperometry and scanning electrochemical microscopy (SECM). The diffusion coefficient of FcMeOH, measured using chronoamperometry, decreased with increasing RTIL viscosity. Analysis of the mass transport properties of the RTILs revealed that the Stokes-Einstein equation did not apply to our data. The "correlation length" was estimated from diffusion coefficient data and corresponded well to the average size of holes (voids) in the liquid, suggesting that a model in which the diffusing species jumps between holes in the liquid is appropriate in these liquids. Cyclic voltammetry at ultramicroelectrodes demonstrated that the ability to record steady-state voltammograms during ferrocenemethanol oxidation depended on the voltammetric scan rate, the electrode dimensions and the RTIL viscosity. Similarly, the ability to record steady-state SECM feedback approach curves depended on the RTIL viscosity, the SECM tip radius and the tip approach speed. Using 1.3 μm Pt SECM tips, steady-state SECM feedback approach curves were obtained in RTILs, provided that the tip approach speed was low enough to maintain steady-state diffusion at the SECM tip. Where tip-induced convection contributed significantly to the SECM tip current, this effect could be accounted for theoretically using mass transport equations that include diffusive and convective terms. Finally, the rate of heterogeneous electron transfer across the electrode/RTIL interface during ferrocenemethanol oxidation was estimated using SECM, and k0 was at least 0.1 cm s^-1 in one of the least viscous RTILs studied.
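The Stokes-Einstein relation tested above is D = kB*T/(6*pi*eta*r). A sketch useful for checking the inverse-viscosity scaling that the RTIL data deviated from; the hydrodynamic radius in the test is a hypothetical value, not a measured one.

```python
import math

def stokes_einstein_D(temp, eta, radius):
    """Stokes-Einstein diffusion coefficient (m^2/s) for a sphere of
    hydrodynamic radius `radius` (m) in a medium of viscosity eta (Pa s)."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return kB * temp / (6.0 * math.pi * eta * radius)
```

The relation predicts D proportional to 1/eta at fixed temperature; the paper's hole-hopping picture replaces this continuum assumption for viscous RTILs.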
Identifying the Oscillatory Mechanism of the Glucose Oxidase-Catalase Coupled Enzyme System.
Muzika, František; Jurašek, Radovan; Schreiberová, Lenka; Radojković, Vuk; Schreiber, Igor
2017-10-12
We provide experimental evidence of periodic and aperiodic oscillations in an enzymatic system of glucose oxidase-catalase in a continuous-flow stirred reactor coupled by a membrane with a continuous-flow reservoir supplied with hydrogen peroxide. To describe such dynamics, we formulate a detailed mechanism based on partial results in the literature. Finally, we introduce a novel method for estimation of unknown kinetic parameters. The method is based on matching experimental data at an oscillatory instability with stoichiometric constraints of the mechanism formulated by applying the stability theory of reaction networks. This approach has been used to estimate rate coefficients in the catalase part of the mechanism. Remarkably, model simulations show good agreement with the observed oscillatory dynamics, including apparently chaotic intermittent behavior. Our method can be applied to any reaction system with an experimentally observable dynamical instability.
Stress Optical Coefficient, Test Methodology, and Glass Standard Evaluation
2016-05-01
ARL-TN-0756, US Army Research Laboratory. Stress Optical Coefficient, Test Methodology, and Glass Standard Evaluation, by Clayton M Weiss (Oak Ridge Institute for Science and Education).
A Novel Approach to ECG Classification Based upon Two-Layered HMMs in Body Sensor Networks
Liang, Wei; Zhang, Yinlong; Tan, Jindong; Li, Yang
2014-01-01
This paper presents a novel approach to ECG signal filtering and classification. Unlike traditional techniques, which collect and process ECG signals with the patient lying still in a hospital bed, our proposed algorithm is intentionally designed for monitoring and classifying a patient's ECG signals in the free-living environment. The patients are equipped with wearable ambulatory devices throughout the day, which facilitates real-time heart attack detection. In ECG preprocessing, an integral-coefficient-band-stop (ICBS) filter is applied, which omits time-consuming floating-point computations. In addition, two-layered Hidden Markov Models (HMMs) are applied to achieve ECG feature extraction and classification. The periodic ECG waveforms are segmented into ISO intervals, P subwave, QRS complex and T subwave respectively in the first HMM layer, where an expert-annotation-assisted Baum-Welch algorithm is utilized in HMM modeling. The corresponding interval features are then selected and applied to categorize the ECG as normal or abnormal (PVC, APC) in the second HMM layer. To verify the effectiveness of our algorithm for abnormal signal detection, we have developed an ECG body sensor network (BSN) platform, whereby real-time ECG signals are collected, transmitted and displayed, and the corresponding classification outcomes are deduced and shown on the BSN screen. PMID:24681668
Experimental and numerical analysis of clamped joints in front motorbike suspensions
NASA Astrophysics Data System (ADS)
Croccolo, D.; de Agostinis, M.; Vincenzi, N.
2010-06-01
Clamped joints are shaft-hub connections used, for instance, in front motorbike suspensions to lock the steering plates to the legs and the legs to the wheel pin by means of one or two bolts. The preloading force produced during the tightening process must be evaluated accurately, since it has to lock the shaft safely without exceeding the yield point of the hub. First, friction coefficients were evaluated on ad-hoc designed specimens by applying the Design of Experiments approach: the applied tightening torque was precisely related to the imposed preloading force. Then, the tensile state of the clamps was evaluated both via FEM and by leveraging design formulae proposed by the authors as a function of the preloading force and the clamp geometry. Finally, the results were compared with those given by strain gauges applied to the tested clamps: the discrepancies between the numerical analyses, the design formulae, and the experimental results remain under a threshold of 10%.
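As a back-of-the-envelope companion to the torque-preload relation studied above, the common shortcut T = K·F·d (with a "nut factor" K lumping together the friction effects the authors measured separately) can be inverted for the preload. This is a generic textbook formula, not the authors' DOE-based friction model, and all numbers below are purely illustrative.

```python
# Hedged sketch: invert the shortcut torque relation T = K * F * d for the
# bolt preload F. K (nut factor) and the M8 diameter are assumed values.
def preload_from_torque(torque_nm, nut_factor, diameter_m):
    """Estimate bolt preload (N) from tightening torque (N*m)."""
    return torque_nm / (nut_factor * diameter_m)

# 20 N*m applied to an M8 bolt with an assumed K = 0.2
F = preload_from_torque(torque_nm=20.0, nut_factor=0.2, diameter_m=0.008)
# -> 12500 N of preload
```

In practice K varies strongly with lubrication and surface finish, which is precisely why the paper determines friction coefficients experimentally before relating torque to preload.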
Dehesh, Tania; Zare, Najaf; Ayatollahi, Seyyed Mohammad Taghi
2015-01-01
The univariate meta-analysis (UM) procedure, which provides a single overall result, has become increasingly popular. Neglecting concomitant covariates in the models, however, leads to a loss of treatment efficiency. Our aim was to propose four new approximation approaches for the covariance matrix of the coefficients, which is not readily available for the multivariate generalized least squares (MGLS) method as a multivariate meta-analysis approach. In a simulation study on the synthesis of Cox proportional hazards model coefficients, we evaluated the efficiency of four new approaches, zero correlation (ZC), common correlation (CC), estimated correlation (EC), and multivariate multilevel correlation (MMC), in terms of estimation bias, mean square error (MSE), and 95% probability coverage of the confidence interval (CI). Comparison of the simulation results on the MSE, bias, and CI of the estimated coefficients indicated that the MMC approach was the most accurate procedure, followed by EC, CC, and ZC. The precision ranking of the four approaches across all of the above settings was MMC ≥ EC ≥ CC ≥ ZC. This study highlights the advantages of MGLS meta-analysis over the UM approach. The results suggest using the MMC procedure to overcome the lack of information needed for a complete covariance matrix of the coefficients.
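The MGLS pooling step that the four covariance approximations feed into can be sketched in a few lines. This toy uses invented numbers and ZC-style diagonal within-study covariances (off-diagonals set to zero); it is not the paper's simulation setup.

```python
import numpy as np

# Hedged sketch of multivariate GLS pooling: each study i reports a
# coefficient vector b_i with covariance S_i, and the pooled estimate is
#   (sum S_i^-1)^-1 (sum S_i^-1 b_i).
# The ZC approximation used here fills S_i with variances only.
def mgls_pool(betas, covs):
    precision = sum(np.linalg.inv(S) for S in covs)
    weighted = sum(np.linalg.inv(S) @ b for b, S in zip(betas, covs))
    return np.linalg.solve(precision, weighted)

# two made-up studies, each reporting two Cox coefficients
b1, S1 = np.array([0.5, 1.0]), np.diag([0.04, 0.09])
b2, S2 = np.array([0.7, 0.8]), np.diag([0.04, 0.09])
pooled = mgls_pool([b1, b2], [S1, S2])
# equal covariances reduce the pooled estimate to the simple mean [0.6, 0.9]
```

With unequal covariances the pooling becomes a genuine precision-weighted average, and the quality of the assumed correlation structure (ZC vs CC vs EC vs MMC) directly affects the weights.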
Fast function-on-scalar regression with penalized basis expansions.
Reiss, Philip T; Huang, Lei; Mennes, Maarten
2010-01-01
Regression models for functional responses and scalar predictors are often fitted by means of basis functions, with quadratic roughness penalties applied to avoid overfitting. The fitting approach described by Ramsay and Silverman in the 1990s amounts to a penalized ordinary least squares (P-OLS) estimator of the coefficient functions. We recast this estimator as a generalized ridge regression estimator, and present a penalized generalized least squares (P-GLS) alternative. We describe algorithms by which both estimators can be implemented, with automatic selection of optimal smoothing parameters, in a more computationally efficient manner than has heretofore been available. We discuss pointwise confidence intervals for the coefficient functions, simultaneous inference by permutation tests, and model selection, including a novel notion of pointwise model selection. P-OLS and P-GLS are compared in a simulation study. Our methods are illustrated with an analysis of age effects in a functional magnetic resonance imaging data set, as well as a reanalysis of a now-classic Canadian weather data set. An R package implementing the methods is publicly available.
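The P-OLS estimator's ridge form can be illustrated with a minimal numerical sketch (my own toy version in Python, not the authors' R package): expand the response in a basis and add a quadratic roughness penalty on the coefficients.

```python
import numpy as np

# Hedged sketch of penalized OLS as generalized ridge regression:
#   argmin_c ||y - B c||^2 + lam * ||D c||^2
# with D the second-difference operator acting as a discrete roughness penalty.
def pols_fit(B, y, lam):
    k = B.shape[1]
    D = np.diff(np.eye(k), n=2, axis=0)        # (k-2) x k second differences
    return np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)

# toy example: noisy samples of a smooth curve on a small polynomial basis
x = np.linspace(0, 1, 60)
B = np.vstack([np.ones_like(x), x, x**2, x**3, x**4]).T
rng = np.random.default_rng(1)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
c_rough = pols_fit(B, y, lam=0.0)    # plain OLS
c_smooth = pols_fit(B, y, lam=10.0)  # heavier smoothing, larger data misfit
```

Raising `lam` trades data fidelity for smoothness; the automatic smoothing-parameter selection the paper describes chooses `lam` from the data rather than by hand.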
Dawson, Paul R.; Boyce, Donald E.; Park, Jun-Sang; ...
2017-10-15
A robust methodology is presented to extract slip system strengths from lattice strain distributions for polycrystalline samples obtained from high-energy x-ray diffraction (HEXD) experiments with in situ loading. The methodology consists of matching the evolution of coefficients of a harmonic expansion of the distributions from simulation to the coefficients derived from measurements. Simulation results are generated via finite element simulations of virtual polycrystals that are subjected to the loading history applied in the HEXD experiments. Advantages of the methodology include: (1) its ability to utilize extensive data sets generated by HEXD experiments; (2) its ability to capture trends in distributions that may be noisy (both measured and simulated); and (3) its sensitivity to the ratios of the family strengths. The approach is used to evaluate the slip system strengths of Ti-6Al-4V using samples having relatively equiaxed grains. These strength estimates are compared to values in the literature.
Selection of Optical Glasses Using Buchdahl's Chromatic Coordinate
NASA Technical Reports Server (NTRS)
Griffin, DeVon W.
1999-01-01
This investigation attempted to extend the method of reducing the size of glass catalogs to a global glass selection technique, with the hope of guiding glass catalog offerings. Buchdahl's development of optical aberration coefficients included a transformation of the variable in the dispersion equation from wavelength to a chromatic coordinate omega defined as omega = (lambda - lambda_0) / (1 + 2.5(lambda - lambda_0)), where lambda is the wavelength at which the dispersion is evaluated and lambda_0 is a base wavelength about which the expansion is performed. The advantage of this approach is that the dispersion equation may be written as a simple power series, permitting direct calculation of dispersion coefficients. While several promising examples were given, a systematic application of the technique to an entire glass catalog, with analysis of the resulting predictions, was not performed. The goal of this work was to apply the technique systematically to glasses in the Schott catalog and assess the quality of the predictions.
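The chromatic coordinate itself is a one-line transformation; a small sketch (wavelengths in micrometers, with the 2.5 constant exactly as quoted in the abstract) makes its behavior concrete.

```python
# Buchdahl's chromatic coordinate, as quoted above (wavelengths in um).
def chromatic_coordinate(lam, lam0):
    d = lam - lam0
    return d / (1.0 + 2.5 * d)

# example: the F and C spectral lines about the d line (0.5876 um) as base
w_F = chromatic_coordinate(0.4861, 0.5876)   # negative (blue side)
w_C = chromatic_coordinate(0.6563, 0.5876)   # positive (red side)
```

The coordinate vanishes at the base wavelength and compresses the blue end of the spectrum, which is what lets dispersion be written as a rapidly converging power series in omega.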
An equivalent dipole analysis of PZT ceramics and lead-free piezoelectric single crystals
NASA Astrophysics Data System (ADS)
Bell, Andrew J.
2016-04-01
The recently proposed Equivalent Dipole Model for describing the electromechanical properties of ionic solids in terms of 3 ions and 2 bonds has been applied to PZT ceramics and lead-free single crystal piezoelectric materials, providing analysis in terms of an effective ionic charge and the asymmetry of the interatomic force constants. For PZT it is shown that, as a function of composition across the morphotropic phase boundary, the dominant bond compliance peaks at 52% ZrO2. The stiffer of the two bonds shows little composition dependence with no anomaly at the phase boundary. The effective charge has a maximum value at 50% ZrO2, decreasing across the phase boundary region, but becoming constant in the rhombohedral phase. The single crystals confirm that both the asymmetry in the force constants and the magnitude of effective charge are equally important in determining the values of the piezoelectric charge coefficient and the electromechanical coupling coefficient. Both are apparently temperature dependent, increasing markedly on approaching the Curie temperature.
Boson expansion based on the extended commutator method in the Tamm-Dancoff representation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pedrocchi, V.G.; Tamura, T.
1983-07-01
Formal aspects of boson expansions in the Tamm-Dancoff representation are investigated in detail. This is carried out in the framework of the extended commutator method by solving in complete generality the coefficient equations, searching for Hermitian as well as non-Hermitian boson expansions. The solutions for the expansion coefficients are obtained in a new form, called the square root realization, which is then applied to carry out an analysis of the relationship between the type of expansion and the boson space in which the expansion is defined. It is shown that this new realization reduces to various well-known boson theories when the boson space is chosen in an appropriate manner. Further discussed, still on the basis of the square root realization, is the equivalence, on a practical level, of a few boson expansion approaches when the Tamm-Dancoff space is truncated to a single quadrupole collective component.
Spin-orbit torques from interfacial spin-orbit coupling for various interfaces
NASA Astrophysics Data System (ADS)
Kim, Kyoung-Whan; Lee, Kyung-Jin; Sinova, Jairo; Lee, Hyun-Woo; Stiles, M. D.
2017-09-01
We use a perturbative approach to study the effects of interfacial spin-orbit coupling in magnetic multilayers by treating the two-dimensional Rashba model in a fully three-dimensional description of electron transport near an interface. This formalism provides a compact analytic expression for current-induced spin-orbit torques in terms of unperturbed scattering coefficients, allowing computation of spin-orbit torques for various contexts, by simply substituting scattering coefficients into the formulas. It applies to calculations of spin-orbit torques for magnetic bilayers with bulk magnetism, those with interface magnetism, a normal-metal/ferromagnetic insulator junction, and a topological insulator/ferromagnet junction. It predicts a dampinglike component of spin-orbit torque that is distinct from any intrinsic contribution or those that arise from particular spin relaxation mechanisms. We discuss the effects of proximity-induced magnetism and insertion of an additional layer and provide formulas for in-plane current, which is induced by a perpendicular bias, anisotropic magnetoresistance, and spin memory loss in the same formalism.
An innovative approach to compensator design
NASA Technical Reports Server (NTRS)
Mitchell, J. R.
1972-01-01
The primary goal is to present a computer-aided compensator design technique for control systems from a frequency-domain point of view. The thesis behind this technique is to describe the open-loop frequency response by n discrete frequency points, which yield n functions of the compensator coefficients. Several of these functions are chosen so that the system specifications are properly portrayed; mathematical programming is then used to improve those functions whose values fall below minimum standards. To this end, several definitions for measuring the performance of a system in the frequency domain are given. Next, theorems governing the number of compensator coefficients necessary to make improvements in a given number of functions are proved. A mathematical programming tool for aiding in the solution of the problem is then developed, and generalized gradients of the constraints are derived for applying the constraint-improvement algorithm. Finally, the necessary theory is incorporated in a computer program called CIP (compensator improvement program).
Zhou, Jiawei; Zhu, Hangtian; Liu, Te-Huan; Song, Qichen; He, Ran; Mao, Jun; Liu, Zihang; Ren, Wuyang; Liao, Bolin; Singh, David J; Ren, Zhifeng; Chen, Gang
2018-04-30
Modern society relies on high charge mobility for efficient energy production and fast information technologies. The power factor of a material, the combination of its electrical conductivity and Seebeck coefficient, measures its ability to extract electrical power from temperature differences. Recent advancements in thermoelectric materials have achieved an enhanced Seebeck coefficient by manipulating the electronic band structure. However, this approach generally applies at relatively low conductivities, preventing the realization of exceptionally high power factors. In contrast, half-Heusler semiconductors have been shown to break through that barrier in a way that could not be explained. Here, we show that symmetry-protected orbital interactions can steer electron-acoustic phonon interactions towards high mobility. This high-mobility regime enables large power factors in half-Heuslers, well above the maximum measured values. We anticipate that our understanding will spark new routes to search for better thermoelectric materials, and to discover high electron mobility semiconductors for electronic and photonic applications.
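The power factor mentioned above is simply PF = S²σ; a one-line helper (SI units assumed, illustrative numbers) makes the scaling explicit.

```python
# Thermoelectric power factor PF = S^2 * sigma, in W m^-1 K^-2.
# S: Seebeck coefficient (V/K), sigma: electrical conductivity (S/m).
def power_factor(seebeck_v_per_k, conductivity_s_per_m):
    return seebeck_v_per_k ** 2 * conductivity_s_per_m

# illustrative values: S = 200 uV/K, sigma = 1e5 S/m
pf = power_factor(200e-6, 1e5)   # -> 4e-3 W m^-1 K^-2
```

The quadratic dependence on S is why band-structure engineering targets the Seebeck coefficient, while the linear factor σ is what the high-mobility regime described here boosts.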
Spin and charge thermopower effects in the ferromagnetic graphene junction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vahedi, Javad, E-mail: javahedi@gmail.com; Center for Theoretical Physics of Complex Systems, Institute for Basic Science; Barimani, Fattaneh
2016-08-28
Using a wave function matching approach and employing the Landauer-Buttiker formula, a ferromagnetic graphene junction with a temperature gradient across the system is studied. We calculate the thermally induced charge and spin currents as well as the thermoelectric voltage (Seebeck effect) in the linear and nonlinear regimes. Our calculations reveal that, due to electron-hole symmetry, the charge Seebeck coefficient of an undoped magnetic graphene is an odd function of chemical potential while the spin Seebeck coefficient is an even function, regardless of the temperature gradient and junction length. We have also found that, with accurate tuning of the external parameters, namely the exchange field and gate voltage, the temperature gradient across the junction drives a pure spin current without an accompanying charge current. Another important characteristic of thermoelectric transport, the thermally induced current in the nonlinear regime, is examined. Our main finding is that with increasing thermal gradient applied to the junction, the spin and charge thermovoltages decrease and even become zero for a nonzero temperature bias.
NASA Astrophysics Data System (ADS)
Filgueira, Ramón; Rosland, Rune; Grant, Jon
2011-11-01
Growth of Mytilus edulis was simulated using individual based models following both Scope For Growth (SFG) and Dynamic Energy Budget (DEB) approaches. These models were parameterized using independent studies and calibrated for each dataset by adjusting the half-saturation coefficient of the food ingestion function term, XK, a common parameter in both approaches related to feeding behavior. Auto-calibration was carried out using an optimization tool, which provides an objective way of tuning the model. Both approaches yielded similar performance, suggesting that although the basis for constructing the models is different, both can successfully reproduce M. edulis growth. The good performance of both models in different environments achieved by adjusting a single parameter, XK, highlights the potential of these models for (1) producing prospective analysis of mussel growth and (2) investigating mussel feeding response in different ecosystems. Finally, we emphasize that the convergence of two different modeling approaches via calibration of XK, indicates the importance of the feeding behavior and local trophic conditions for bivalve growth performance. Consequently, further investigations should be conducted to explore the relationship of XK to environmental variables and/or to the sophistication of the functional response to food availability with the final objective of creating a general model that can be applied to different ecosystems without the need for calibration.
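The calibration step described above, adjusting the single half-saturation coefficient XK of the ingestion term, can be sketched in miniature. This is not the authors' DEB/SFG code: a Holling type-II ingestion function and a plain grid search stand in for the full bioenergetic model and its optimization tool, and all numbers are synthetic.

```python
import numpy as np

# Hedged sketch: tune the half-saturation coefficient XK of a type-II
# ingestion term, ingestion = I_max * X / (XK + X), by minimizing squared
# error against observations. A grid search replaces the paper's optimizer.
def ingestion(X, XK, I_max=1.0):
    return I_max * X / (XK + X)

def calibrate_XK(food, observed, grid=np.linspace(0.1, 10.0, 991)):
    sse = [np.sum((ingestion(food, xk) - observed) ** 2) for xk in grid]
    return grid[int(np.argmin(sse))]

food = np.array([0.5, 1.0, 2.0, 4.0])     # food concentration (arbitrary units)
observed = ingestion(food, XK=2.0)        # synthetic "observations"
best_XK = calibrate_XK(food, observed)    # recovers the true XK = 2.0
```

The point the abstract makes is exactly this structure: because XK is the only free parameter, the same calibration machinery transfers between the SFG and DEB formulations and between ecosystems.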
NASA Technical Reports Server (NTRS)
Quigley, Hervey C.; Anderson, Seth B.; Innis, Robert C.
1960-01-01
A flight investigation has been conducted to study how pilots use the high lift available with blowing-type boundary-layer control applied to the leading- and trailing-edge flaps of a 45 deg. swept-wing airplane. The study includes documentation of the low-speed handling qualities as well as the pilots' evaluations of the landing-approach characteristics. All the pilots who flew the airplane considered it more comfortable to fly at low speeds than any other F-100 configuration they had flown. The major improvements noted were the reduced stall speed, the improved longitudinal stability at high lift, and the reduction in low-speed buffet. The study has shown the minimum comfortable landing-approach speeds are between 120.5 and 126.5 knots compared to 134 for the airplane with a slatted leading edge and the same trailing-edge flap. The limiting factors in the pilots' choices of landing-approach speeds were the limits of ability to control flight-path angle, lack of visibility, trim change with thrust, low static directional stability, and sluggish longitudinal control. Several of these factors were found to be associated with the high angles of attack, between 13 deg. and 15 deg., required for the low approach speeds. The angle of attack for maximum lift coefficient was 28 deg.
NASA Astrophysics Data System (ADS)
Baatz, D.; Kurtz, W.; Hendricks Franssen, H. J.; Vereecken, H.; Kollet, S. J.
2017-12-01
Parameter estimation for physically based, distributed hydrological models becomes increasingly challenging with increasing model complexity. The number of parameters is usually large and the number of observations relatively small, which results in large uncertainties. Catchment tomography, a moving transmitter-receiver concept to estimate spatially distributed hydrological parameters, is presented here. In this concept, precipitation, highly variable in time and space, serves as a moving transmitter. In response to precipitation, runoff and stream discharge are generated along different paths and time scales, depending on surface and subsurface flow properties. Stream water levels are thus an integrated signal of upstream parameters, measured by stream gauges which serve as the receivers. These stream water level observations are assimilated into a distributed hydrological model, which is forced with high-resolution, radar-based precipitation estimates. Applying a joint state-parameter update with the Ensemble Kalman Filter, the spatially distributed Manning's roughness coefficient and saturated hydraulic conductivity are estimated jointly. The sequential data assimilation continuously integrates new information into the parameter estimation problem, especially during precipitation events; every precipitation event constrains the possible parameter space. Forward simulations are performed with ParFlow, a variably saturated subsurface and overland flow model, which is coupled to the Parallel Data Assimilation Framework for the data assimilation and the joint state-parameter update. In synthetic, 3-dimensional experiments including surface and subsurface flow, hydraulic conductivity and the Manning's coefficient are efficiently estimated with the catchment tomography approach.
A joint update of the Manning's coefficient and hydraulic conductivity tends to improve the parameter estimation compared to a single-parameter update, especially when the initial parameter ensembles are biased. The computational experiments additionally show up to which degree of spatial heterogeneity, and of uncertainty in the subsurface flow parameters, the Manning's coefficient and hydraulic conductivity can be estimated efficiently.
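The joint state-parameter update at the core of this approach is a standard stochastic Ensemble Kalman Filter analysis step applied to an augmented [state, parameter] vector. The toy below illustrates only that mechanism; the real system couples ParFlow to a parallel data-assimilation framework, and all numbers here are invented.

```python
import numpy as np

# Hedged sketch of a stochastic EnKF analysis step on an augmented
# [observed state, unobserved parameter] ensemble.
def enkf_update(ens, obs, obs_var, H):
    """ens: (n_ens, n_dim) ensemble; H: (n_obs, n_dim) observation operator."""
    n = ens.shape[0]
    A = ens - ens.mean(axis=0)
    Hens = ens @ H.T
    HA = Hens - Hens.mean(axis=0)
    P_xh = A.T @ HA / (n - 1)                          # cross covariance
    P_hh = HA.T @ HA / (n - 1) + obs_var * np.eye(H.shape[0])
    K = P_xh @ np.linalg.inv(P_hh)                     # Kalman gain
    perturbed = obs + np.sqrt(obs_var) * np.random.randn(n, H.shape[0])
    return ens + (perturbed - Hens) @ K.T

np.random.seed(0)
# toy: water level (observed) correlates with a log-parameter (unobserved),
# so observing the level also updates the parameter -- the "joint" update.
param = np.random.randn(500, 1)
level = 0.8 * param + 0.2 * np.random.randn(500, 1)
ens = np.hstack([level, param])
post = enkf_update(ens, obs=np.array([1.0]), obs_var=0.01,
                   H=np.array([[1.0, 0.0]]))
```

Because the ensemble carries the state-parameter cross covariance, every assimilated water-level observation pulls the unobserved roughness and conductivity fields toward values consistent with the gauges.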
Passive sampling methods for contaminated sediments: State of the science for organic contaminants
Lydy, Michael J; Landrum, Peter F; Oen, Amy MP; Allinson, Mayumi; Smedes, Foppe; Harwood, Amanda D; Li, Huizhen; Maruya, Keith A; Liu, Jingfu
2014-01-01
This manuscript surveys the literature on passive sampler methods (PSMs) used in contaminated sediments to assess the chemical activity of organic contaminants. The chemical activity in turn dictates the reactivity and bioavailability of contaminants in sediment. Approaches to measure specific binding of compounds to sediment components, for example, amorphous carbon or specific types of reduced carbon, and the associated partition coefficients are difficult to determine, particularly for native sediment. Thus, the development of PSMs that represent the chemical activity of complex compound–sediment interactions, expressed as the freely dissolved contaminant concentration in porewater (Cfree), offer a better proxy for endpoints of concern, such as reactivity, bioaccumulation, and toxicity. Passive sampling methods have estimated Cfree using both kinetic and equilibrium operating modes and used various polymers as the sorbing phase, for example, polydimethylsiloxane, polyethylene, and polyoxymethylene in various configurations, such as sheets, coated fibers, or vials containing thin films. These PSMs have been applied in laboratory exposures and field deployments covering a variety of spatial and temporal scales. A wide range of calibration conditions exist in the literature to estimate Cfree, but consensus values have not been established. The most critical criteria are the partition coefficient between water and the polymer phase and the equilibrium status of the sampler. In addition, the PSM must not appreciably deplete Cfree in the porewater. Some of the future challenges include establishing a standard approach for PSM measurements, correcting for nonequilibrium conditions, establishing guidance for selection and implementation of PSMs, and translating and applying data collected by PSMs. Integr Environ Assess Manag 2014;10:167–178. © 2014 The Authors. Integrated Environmental Assessment and Management published by Wiley Periodicals, Inc. on behalf of SETAC. 
PMID:24307344
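The equilibrium-mode estimate of the freely dissolved concentration described above reduces to dividing the measured polymer-phase concentration by the polymer-water partition coefficient; a minimal sketch (illustrative numbers, ignoring the kinetic and depletion corrections the review discusses):

```python
# Equilibrium passive sampling: Cfree = Cpolymer / Kpw.
# c_polymer: concentration in the polymer phase (ug/kg polymer);
# log_kpw: log10 of the polymer-water partition coefficient (L/kg).
def cfree_equilibrium(c_polymer_ug_per_kg, log_kpw):
    """Freely dissolved porewater concentration (ug/L)."""
    return c_polymer_ug_per_kg / 10 ** log_kpw

# e.g. 5000 ug/kg measured in PDMS with an assumed log Kpw of 5
c = cfree_equilibrium(5000.0, log_kpw=5.0)   # -> 0.05 ug/L
```

This is why the review flags the partition coefficient and the sampler's equilibrium status as the two most critical calibration criteria: any bias in Kpw propagates one-to-one into Cfree.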
Ren, Ji-Xia; Li, Cheng-Ping; Zhou, Xiu-Ling; Cao, Xue-Song; Xie, Yong
2017-08-22
Myeloid cell leukemia-1 (Mcl-1) is a validated and attractive target for cancer therapy. Over-expression of Mcl-1 in many cancers allows cancer cells to evade apoptosis and contributes to resistance to current chemotherapeutics. Here, we identified new Mcl-1 inhibitors using a multi-step virtual screening approach. First, based on two different ligand-receptor complexes, 20 pharmacophore models were established using both the 'Receptor-Ligand Pharmacophore Generation' method and a manual feature-building method, and then carefully validated against a test database. Pharmacophore-based virtual screening (PB-VS) was then performed with the 20 pharmacophore models. In addition, a docking study was used to predict the possible binding poses of compounds, and the docking parameters were optimized before performing docking-based virtual screening (DB-VS). Moreover, a 3D QSAR model was established from 55 aligned Mcl-1 inhibitors. The 55 inhibitors, which share the same scaffold, were docked into the Mcl-1 active site before alignment, and the resulting binding conformations were aligned. For the training set, the 3D QSAR model gave a correlation coefficient r^2 of 0.996; for the test set, the correlation coefficient r^2 was 0.812. The developed 3D QSAR model was therefore a good model, and it was applied for 3D QSAR-based virtual screening (QSARD-VS). After sequential filtering by the above three virtual screening methods, 23 potential inhibitors with novel scaffolds were identified. Furthermore, we discuss in detail the mapping of two potent compounds onto the pharmacophore models and the 3D QSAR model, and the interactions between these compounds and active site residues.
Measuring multivariate association and beyond
Josse, Julie; Holmes, Susan
2017-01-01
Simple correlation coefficients between two variables have been generalized in many ways to measure association between two matrices. Coefficients such as the RV coefficient, the distance covariance (dCov) coefficient, and kernel-based coefficients are used by different research communities. Scientists use these coefficients to test whether two random vectors are linked. Once such association has been ascertained through testing, a next step, often ignored, is to explore and uncover the underlying patterns of the association. This article surveys various measures of dependence between random vectors and tests of independence, and emphasizes the connections and differences between the various approaches. After providing definitions of the coefficients and associated tests, we present recent improvements that enhance their statistical properties and ease of interpretation. We summarize multi-table approaches and describe scenarios where the indices provide useful summaries of heterogeneous multi-block data. We illustrate these different strategies on several examples of real data and suggest directions for future research. PMID:29081877
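The RV coefficient, one of the matrix-association measures surveyed here, has a compact closed form: a scalar in [0, 1] that generalizes squared correlation to two column-centered matrices observed on the same n samples. A minimal sketch (my own implementation, not the survey's code):

```python
import numpy as np

# RV(X, Y) = tr(Sxy Sxy') / sqrt(tr(Sxx^2) tr(Syy^2))
# with Sxy = X'Y etc. computed on column-centered matrices.
def rv_coefficient(X, Y):
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Sxy = X.T @ Y
    num = np.trace(Sxy @ Sxy.T)
    den = np.sqrt(np.trace((X.T @ X) @ (X.T @ X)) *
                  np.trace((Y.T @ Y) @ (Y.T @ Y)))
    return num / den

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
Y = rng.standard_normal((50, 4))
rv_self = rv_coefficient(X, X)     # identical matrices -> exactly 1
rv_indep = rv_coefficient(X, Y)    # independent noise -> near 0
```

A permutation test on `rv_indep` (reshuffling the rows of Y) is the standard way to turn this descriptive coefficient into the independence test the article discusses.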
Semiclassical approaches to nuclear dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magner, A. G., E-mail: magner@kinr.kiev.ua; Gorpinchenko, D. V.; Bartel, J.
The extended Gutzwiller trajectory approach is presented for the semiclassical description of nuclear collective dynamics, in line with the main topics of the fruitful activity of V.G. Solovjov. Within the Fermi-liquid droplet model, the leptodermous effective surface approximation was applied to calculations of energies, sum rules, and transition densities for the neutron-proton asymmetry of the isovector giant-dipole resonance and found to be in good agreement with the experimental data. By using the Strutinsky shell correction method, the semiclassical collective transport coefficients, such as nuclear inertia, friction, stiffness, and moments of inertia, can be derived beyond the quantum perturbation approximation of the response function theory and the cranking model. The averaged particle-number dependences of the low-lying collective vibrational states are described in good agreement with the basic experimental data, mainly due to the enhancement of the collective inertia as compared to its irrotational flow value. Shell components of the moment of inertia are derived in terms of the periodic-orbit free-energy shell corrections. Good agreement between the semiclassical extended Thomas-Fermi moments of inertia with shell corrections and the quantum results is obtained for different nuclear deformations and particle numbers. Shell effects are shown to be exponentially damped out with increasing temperature in all the transport coefficients.
Nur-E-Alam, Mohammad; Belotelov, Vladimir; Alameh, Kamal
2018-01-01
This work is devoted to the physical vapor deposition synthesis and characterisation of bismuth- and lutetium-substituted ferrite-garnet thin-film materials for magneto-optic (MO) applications. The properties of garnet thin films sputtered from a target of nominal composition Bi0.9Lu1.85Y0.25Fe4.0Ga1O12 are studied. By measuring the optical transmission spectra at room temperature, the optical constants and accurate film thicknesses can be evaluated using Swanepoel's envelope method. The refractive index data are found to match very closely those derived from Cauchy's dispersion formula over the entire spectral range between 300 and 2500 nm. The optical absorption coefficient and extinction coefficient data are studied for both the as-deposited and annealed garnet thin-film samples. A new approach is applied to accurately derive the optical constants simultaneously with the physical layer thickness, employing custom-built spectrum-fitting software in conjunction with Swanepoel's envelope method. MO properties, such as the specific Faraday rotation, the MO figure of merit, and the MO swing factor, are also investigated for several annealed garnet-phase films. PMID:29789463
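The core of Swanepoel's envelope method is a closed-form expression for the film's refractive index from the transmission maxima and minima envelopes; a minimal sketch (standard textbook form of the formula, illustrative transmittance values, not the authors' fitting software):

```python
import math

# Swanepoel envelope estimate of film refractive index n in the
# weakly absorbing region:
#   N = 2 s (TM - Tm) / (TM Tm) + (s^2 + 1) / 2,  n = sqrt(N + sqrt(N^2 - s^2))
# TM, Tm: transmission envelope maximum and minimum at one wavelength;
# s: substrate refractive index.
def swanepoel_n(TM, Tm, s):
    N = 2.0 * s * (TM - Tm) / (TM * Tm) + (s * s + 1.0) / 2.0
    return math.sqrt(N + math.sqrt(N * N - s * s))

# illustrative envelope values on a glass substrate (s ~ 1.51)
n = swanepoel_n(TM=0.85, Tm=0.65, s=1.51)   # garnet-like n in the low 2s
```

A useful sanity check built into the formula: when the interference fringes vanish (TM = Tm), it returns n = s, i.e. the film becomes optically indistinguishable from the substrate.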
NASA Astrophysics Data System (ADS)
Guerrero, Massimo; Di Federico, Vittorio
2018-03-01
The use of acoustic techniques has become common for estimating suspended sediment in water environments. An emitted beam propagates into water, producing backscatter and attenuation that depend on the concentration and size distribution of the scattering particles. Unfortunately, the actual particle size distribution (PSD) may largely affect the accuracy of concentration quantification through the unknown coefficients of backscattering strength, ks2, and normalized attenuation, ζs. This issue was partially solved by applying the multi-frequency approach. Despite this possibility, a relevant scientific and practical question remains regarding the possibility of using acoustic methods to investigate poorly sorted sediment in the spectrum ranging from clay to fine sand. The aim of this study is to investigate the possibility of combining measurements of sound attenuation and backscatter to determine ζs for the suspended particles and the corresponding concentration. The proposed method is only moderately dependent on the actual PSD, thus relaxing the need for frequent calibrations to account for changes in the ks2 and ζs coefficients. Laboratory tests were conducted under controlled conditions to validate this measurement technique. With respect to existing approaches, the developed method more accurately estimates the concentration of suspended particles ranging from clay to fine sand and, at the same time, gives an indication of their actual PSD.
NASA Astrophysics Data System (ADS)
Guissart, Amandine; Bernal, Luis; Dimitriadis, Gregorios; Terrapon, Vincent
2015-11-01
The direct measurement of loads with a force balance can become challenging when the forces are small or when the body is moving. An alternative is the use of Particle Image Velocimetry (PIV) velocity fields to indirectly obtain the aerodynamic coefficients. This can be done with control volume approaches, which involve integrating the velocities, and fields derived from them, over a contour surrounding the studied body and its supporting surface. This work presents and discusses results obtained with two different methods: the direct use of the integral formulation of the Navier-Stokes equations and the so-called Noca method. The latter is a reformulation of the integral Navier-Stokes equations that eliminates the pressure term. Results obtained using the two methods are compared and the influence of different parameters is discussed. The methods are applied to PIV data obtained from water channel testing for the flow around a 16:1 plate. Two cases are considered: a static plate at high angle of attack and a large-amplitude imposed pitching motion. Two-dimensional PIV velocity fields are used to compute the aerodynamic forces. Direct measurements of dynamic loads are also carried out in order to assess the quality of the indirectly calculated coefficients.
Modeling Spatial Dependence of Rainfall Extremes Across Multiple Durations
NASA Astrophysics Data System (ADS)
Le, Phuong Dong; Leonard, Michael; Westra, Seth
2018-03-01
Determining the probability of a flood event in a catchment given that another flood has occurred in a nearby catchment is useful in the design of infrastructure such as road networks that have multiple river crossings. These conditional flood probabilities can be estimated by calculating conditional probabilities of extreme rainfall and then transforming rainfall to runoff through a hydrologic model. Each catchment's hydrological response times are unlikely to be the same, so in order to estimate these conditional probabilities one must consider the dependence of extreme rainfall both across space and across critical storm durations. To represent these types of dependence, this study proposes a new approach for combining extreme rainfall across different durations within a spatial extreme value model using max-stable process theory. This is achieved in a stepwise manner. The first step defines a set of common parameters for the marginal distributions across multiple durations. The parameters are then spatially interpolated to develop a spatial field. Storm-level dependence is represented through the max-stable process for rainfall extremes across different durations. The dependence model shows a reasonable fit between the observed pairwise extremal coefficients and the theoretical pairwise extremal coefficient function across all durations. The study demonstrates how the approach can be applied to develop conditional maps of the return period and return level across different durations.
Li, Zuoping; Alonso, Jorge E; Kim, Jong-Eun; Davidson, James S; Etheridge, Brandon S; Eberhardt, Alan W
2006-09-01
Three-dimensional finite element (FE) models of human pubic symphyses were constructed from computed tomography image data of one male and one female cadaver pelvis. The pubic bones, interpubic fibrocartilaginous disc and four pubic ligaments were segmented semi-automatically and meshed with hexahedral elements using automatic mesh generation schemes. A two-term viscoelastic Prony series, determined by curve-fitting results of compressive creep experiments, was used to model the rate-dependent effects of the interpubic disc and the pubic ligaments. Three-parameter Mooney-Rivlin material coefficients were calculated for the discs using a heuristic FE approach based on average experimental joint compression data. Similarly, a transversely isotropic hyperelastic material model was applied to the ligaments to capture average tensile responses. Linear elastic isotropic properties were assigned to bone. The applicability of the resulting models was tested in bending simulations in four directions and in tensile tests at varying load rates. The model-predicted results correlated reasonably with the joint bending stiffnesses and rate-dependent tensile responses measured in experiments, supporting the validity of the estimated material coefficients and the overall modeling approach. This study represents an important and necessary step in the eventual development of biofidelic pelvis models to investigate symphysis response under high-energy impact conditions, such as motor vehicle collisions.
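Fitting a two-term Prony series to creep or relaxation data becomes an ordinary linear least-squares problem once the two relaxation times are held fixed. A minimal sketch under that assumption follows; the relaxation times and coefficients are made up for illustration, not the paper’s measured values.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_prony(times, g_data, tau1, tau2):
    """Least-squares fit of G(t) = g_inf + g1*exp(-t/tau1) + g2*exp(-t/tau2)
    with the two relaxation times held fixed (the fit is then linear)."""
    X = [[1.0, math.exp(-t / tau1), math.exp(-t / tau2)] for t in times]
    # Normal equations: (X^T X) beta = X^T y
    XtX = [[sum(r[a] * r[b] for r in X) for b in range(3)] for a in range(3)]
    Xty = [sum(X[i][a] * g_data[i] for i in range(len(X))) for a in range(3)]
    return solve(XtX, Xty)

# Synthetic relaxation data from known (illustrative) coefficients:
times = [0.1 * i for i in range(100)]
g = [0.4 + 0.35 * math.exp(-t / 0.5) + 0.25 * math.exp(-t / 3.0) for t in times]
g_inf, g1, g2 = fit_prony(times, g, 0.5, 3.0)
```

In practice the relaxation times would themselves be optimized (e.g., by an outer nonlinear search), but the inner linear solve shown here is the workhorse of such a fit.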
A Summary of Selected Data: DSDP Legs 1-19,
1980-09-01
Densities of some common minerals are listed in Harms and Choquette (1965); attenuation coefficients for additional minerals may be applied in the future when the exact quantitative mineralogy becomes available. The density calculation accounts for the weights of water, dry sediment and salt. These measurements were used to obtain a "ball park" answer for a particular sediment type, which may have a different attenuation coefficient than that of calcite.
Halliday, David M; Senik, Mohd Harizal; Stevenson, Carl W; Mason, Rob
2016-08-01
The ability to infer network structure from multivariate neuronal signals is central to computational neuroscience. Directed network analyses typically use parametric approaches based on auto-regressive (AR) models, where networks are constructed from estimates of AR model parameters. However, the validity of using low-order AR models for neurophysiological signals has been questioned. A recent article introduced a non-parametric approach to estimate directionality in bivariate data; non-parametric approaches are free from concerns over model validity. We extend the non-parametric framework to include measures of directed conditional independence, using scalar measures that decompose the overall partial correlation coefficient summatively by direction, and a set of functions that decompose the partial coherence summatively by direction. A time domain partial correlation function allows both time and frequency views of the data to be constructed. The conditional independence estimates are conditioned on a single predictor. The framework is applied to simulated cortical neuron networks and mixtures of Gaussian time series data with known interactions, and to experimental data consisting of local field potential recordings from bilateral hippocampus in anaesthetised rats. The framework offers a novel non-parametric alternative for estimating directed interactions in multivariate neuronal recordings, with increased flexibility in dealing with both spike train and time series data. Copyright © 2016 Elsevier B.V. All rights reserved.
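Conditioning on a single predictor, as the abstract describes, rests on the classical partial correlation formula. A small sketch on synthetic data follows (this is the undirected partial correlation only, not the paper’s directed decomposition): a common driver z induces a strong raw correlation between x and y that largely vanishes once z is conditioned out.

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def partial_corr(x, y, z):
    """Partial correlation of x and y conditioned on a single predictor z:
    r_xy.z = (r_xy - r_xz*r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2))."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Synthetic example: z drives both x and y; the perturbations are unrelated.
z = list(range(50))
x = [zi + 0.5 * math.sin(i) for i, zi in enumerate(z)]
y = [zi + 0.5 * math.cos(i) for i, zi in enumerate(z)]
```

Here `pearson(x, y)` is close to 1, while `partial_corr(x, y, z)` is near zero, which is the behaviour the conditional independence measures generalize by direction and frequency.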
Bayesian Meta-Analysis of Coefficient Alpha
ERIC Educational Resources Information Center
Brannick, Michael T.; Zhang, Nanhua
2013-01-01
The current paper describes and illustrates a Bayesian approach to the meta-analysis of coefficient alpha. Alpha is the most commonly used estimate of the reliability or consistency (freedom from measurement error) for educational and psychological measures. The conventional approach to meta-analysis uses inverse variance weights to combine…
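The conventional inverse-variance pooling that the Bayesian treatment is compared against can be sketched in a few lines. The study estimates and variances below are made up for illustration, not real meta-analytic data.

```python
def inverse_variance_pool(estimates, variances):
    """Conventional fixed-effect meta-analytic pooling: each study estimate
    is weighted by the reciprocal of its sampling variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_var = 1.0 / sum(weights)  # variance of the pooled estimate
    return pooled, pooled_var

# Illustrative coefficient-alpha estimates from three hypothetical studies:
alpha_hat, var = inverse_variance_pool([0.80, 0.90, 0.85], [0.010, 0.020, 0.025])
```

A Bayesian alternative replaces these fixed weights with a posterior over the study-level and population-level parameters, which is the subject of the paper.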
NASA Astrophysics Data System (ADS)
Ivanov, Sergey V.; Buzykin, Oleg G.
2016-12-01
A classical approach is applied to calculate pressure broadening coefficients of CO2 vibration-rotational spectral lines perturbed by Ar. Three types of spectra are examined: electric dipole (infrared) absorption, and isotropic and anisotropic Raman Q branches. Simple and explicit formulae of the classical impact theory are used along with exact 3D Hamilton equations for CO2-Ar molecular motion. The calculations utilize the vibrationally independent, highly accurate ab initio potential energy surface (PES) of Hutson et al. expanded in a Legendre polynomial series up to lmax = 24. A new, improved algorithm for classical rotational frequency selection is applied. The dependences of CO2 half-widths on rotational quantum number J up to J=100 are computed for temperatures between 77 and 765 K and compared with available experimental data as well as with the results of fully quantum dynamical calculations performed on the same PES. To make the picture complete, the predictions of two independent variants of the semi-classical Robert-Bonamy formalism for dipole absorption lines are included. This method, however, has demonstrated poor accuracy at almost all temperatures. In contrast, the classical broadening coefficients are in excellent agreement both with measurements and with quantum results at all temperatures. The classical impact theory in its present variant is capable of quickly and accurately producing the pressure broadening coefficients of spectral lines of linear molecules for any J value (including high Js) using a full-dimensional ab initio-based PES in cases where other computational methods are either extremely time consuming (like the quantum close coupling method) or give erroneous results (like semi-classical methods).
Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction
NASA Technical Reports Server (NTRS)
Kolodzijczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.
2013-01-01
The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity drops (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing the stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients derived in the fit of pixel time series to the CBV as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series which is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.
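Fitting a pixel time series to co-trending basis vectors is an ordinary least-squares problem. A minimal sketch with two synthetic basis vectors follows; the trend shapes and coefficients are illustrative inventions, not actual Kepler CBVs.

```python
import math

def fit_to_basis(y, v1, v2):
    """Least-squares coefficients of a time series y against two basis
    vectors v1, v2, via the 2x2 normal equations in closed form."""
    a11 = sum(a * a for a in v1)
    a22 = sum(b * b for b in v2)
    a12 = sum(a * b for a, b in zip(v1, v2))
    b1 = sum(a * c for a, c in zip(v1, y))
    b2 = sum(b * c for b, c in zip(v2, y))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

# Illustrative trend shapes standing in for co-trending basis vectors:
t = [i / 100 for i in range(200)]
cbv1 = [math.sin(2 * math.pi * x) for x in t]
cbv2 = [x for x in t]
# Noiseless synthetic "pixel" built from known coefficients (2, -3):
pixel = [2.0 * a - 3.0 * b for a, b in zip(cbv1, cbv2)]
c1, c2 = fit_to_basis(pixel, cbv1, cbv2)
```

In the pixel-level scheme described above, one such fit is performed per pixel, and the per-pixel coefficient maps are what get interpreted in terms of PRF derivatives.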
Profile-Based LC-MS Data Alignment—A Bayesian Approach
Tsai, Tsung-Heng; Tadesse, Mahlet G.; Wang, Yue; Ressom, Habtom W.
2014-01-01
A Bayesian alignment model (BAM) is proposed for alignment of liquid chromatography-mass spectrometry (LC-MS) data. BAM belongs to the category of profile-based approaches, which are composed of two major components: a prototype function and a set of mapping functions. Appropriate estimation of these functions is crucial for good alignment results. BAM uses Markov chain Monte Carlo (MCMC) methods to draw inference on the model parameters and improves on existing MCMC-based alignment methods through 1) the implementation of an efficient MCMC sampler and 2) an adaptive selection of knots. A block Metropolis-Hastings algorithm that mitigates the problem of the MCMC sampler getting stuck at local modes of the posterior distribution is used for the update of the mapping function coefficients. In addition, a stochastic search variable selection (SSVS) methodology is used to determine the number and positions of knots. We applied BAM to a simulated data set, an LC-MS proteomic data set, and two LC-MS metabolomic data sets, and compared its performance with the Bayesian hierarchical curve registration (BHCR) model, the dynamic time-warping (DTW) model, and the continuous profile model (CPM). The advantage of applying appropriate profile-based retention time correction prior to performing a feature-based approach is also demonstrated through the metabolomic data sets. PMID:23929872
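The core of a Metropolis-Hastings update can be sketched compactly. Here a random-walk sampler targets a standard normal as a stand-in for the posterior over one mapping-function coefficient; a block update, as in BAM, would propose several coefficients jointly, with the same acceptance rule. All targets and settings are illustrative, not the paper’s model.

```python
import math
import random

def metropolis_hastings(log_target, x0, steps, scale, rng):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, scale), accept
    with probability min(1, target(x') / target(x))."""
    x, samples = x0, []
    lp = log_target(x)
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)
        lp_prop = log_target(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept
            x, lp = prop, lp_prop
        samples.append(x)                            # else keep current state
    return samples

rng = random.Random(42)
# Illustrative target: a standard normal log-density (up to a constant).
draws = metropolis_hastings(lambda v: -0.5 * v * v, 0.0, 20000, 1.0, rng)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

The "getting stuck at local modes" problem the abstract mentions arises when the target is multimodal and the proposal scale is small; block proposals and adaptive scaling are two standard mitigations.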
Method for the depth corrected detection of ionizing events from a co-planar grids sensor
De Geronimo, Gianluigi [Syosset, NY; Bolotnikov, Aleksey E [South Setauket, NY; Carini, Gabriella [Port Jefferson, NY
2009-05-12
A method for the detection of ionizing events utilizing a co-planar grids sensor comprising a semiconductor substrate, cathode electrode, collecting grid and non-collecting grid. The semiconductor substrate is sensitive to ionizing radiation. A voltage less than 0 Volts is applied to the cathode electrode, a greater voltage is applied to the non-collecting grid, and a still greater voltage is applied to the collecting grid. The signals from the collecting grid and the non-collecting grid are summed and subtracted, creating a sum and a difference respectively, and the difference is divided by the sum to create a ratio. A gain coefficient is determined for each depth (the distance between the ionizing event and the collecting grid); the energy of each ionizing event is then the difference between the collecting-grid and non-collecting-grid signals multiplied by the corresponding gain coefficient, yielding the depth-corrected energy. The depth of the ionizing event can also be determined from the ratio.
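The sum/difference/ratio arithmetic of the method reduces to a few lines. In the sketch below, the gain map is an illustrative linear function of the ratio; a real detector would use a calibrated depth-to-gain table.

```python
def depth_corrected_energy(collecting, non_collecting, gain_for_ratio):
    """Depth-corrected event energy from co-planar grid signals.
    The difference/sum ratio serves as a depth proxy; gain_for_ratio maps
    that ratio to the gain coefficient for the corresponding depth."""
    diff = collecting - non_collecting
    total = collecting + non_collecting
    ratio = diff / total              # encodes interaction depth
    return gain_for_ratio(ratio) * diff

# Illustrative linear gain map (hypothetical, not a calibrated table):
gain = lambda r: 1.0 + 0.2 * (1.0 - r)
energy = depth_corrected_energy(80.0, 20.0, gain)
```

For the values above the ratio is 0.6, the gain 1.08, and the corrected energy 1.08 × 60 = 64.8 in the same (arbitrary) signal units.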
Laboratory investigation and simulation of breakthrough curves in karst conduits with pools
NASA Astrophysics Data System (ADS)
Zhao, Xiaoer; Chang, Yong; Wu, Jichun; Peng, Fu
2017-12-01
A series of laboratory experiments are performed under various hydrological conditions to analyze the effect of pools in pipes on breakthrough curves (BTCs). The BTCs are generated after instantaneous injections of NaCl tracer solution. In order to test the feasibility of reproducing the BTCs and obtain transport parameters, three modeling approaches have been applied: the equilibrium model, the linear graphical method and the two-region nonequilibrium model. The investigation results show that pools induce tailing of the BTCs, and the shapes of BTCs depend on pool geometries and hydrological conditions. The simulations reveal that the two-region nonequilibrium model yields the best fits to experimental BTCs because the model can describe the transient storage in pools by the partition coefficient and the mass transfer coefficient. The model parameters indicate that pools produce high dispersion. The increased tailing occurs mainly because the partition coefficient decreases, as the number of pools increases. When comparing the tracer BTCs obtained using the two types of pools with the same size, the more appreciable BTC tails that occur for symmetrical pools likely result mainly from the less intense exchange between the water in the pools and the water in the pipe, because the partition coefficients for the two types of pools are virtually identical. Dispersivity values decrease as flow rates increase; however, the trend in dispersion is not clear. The reduced tailing is attributed to a decrease in immobile water with increasing flow rate. It provides evidence for hydrodynamically controlled tailing effects.
Dependence of toxicity of silver nanoparticles on Pseudomonas putida biofilm structure.
Thuptimdang, Pumis; Limpiyakorn, Tawan; Khan, Eakalak
2017-12-01
Susceptibility of biofilms with different physical structures to silver nanoparticles (AgNPs) was studied. Biofilms of Pseudomonas putida KT2440 were formed in batch conditions under different carbon sources (glucose, glutamic acid, and citrate), glucose concentrations (5 and 50 mM), and incubation temperatures (25 and 30 °C). The biofilms were observed using confocal laser scanning microscopy for their physical characteristics (biomass amount, thickness, biomass volume, surface to volume ratio, and roughness coefficient). The biofilms formed under different growth conditions exhibited different physical structures. The biofilm thickness and the roughness coefficient were found to be negatively and positively correlated with the biofilm susceptibility to AgNPs, respectively. The effect of AgNPs on biofilms was low (1-log reduction of cell number) when the biofilms had high biomass amount, high thickness, high biomass volume, low surface to volume ratio, and low roughness coefficient. Furthermore, an extracellular polymeric substance (EPS) stripping process was applied to confirm the dependence of susceptibility to AgNPs on biofilm structure. After the EPS stripping process, the biofilms formed under different conditions showed reductions in thickness and biomass volume, and increases in surface to volume ratio and roughness coefficient, which led to greater biofilm susceptibility to AgNPs. The results of this study suggest that controlling the growth conditions to alter the biofilm physical structure is a possible approach to reduce the impact of AgNPs on biofilms in engineered and natural systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
A new method to estimate average hourly global solar radiation on the horizontal surface
NASA Astrophysics Data System (ADS)
Pandey, Pramod K.; Soupir, Michelle L.
2012-10-01
A new model, Global Solar Radiation on Horizontal Surface (GSRHS), was developed to estimate the average hourly global solar radiation on horizontal surfaces (Gh). The GSRHS model uses the transmission function (Tf,ij), which was developed to control hourly global solar radiation, for predicting solar radiation. The inputs of the model were: hour of day, day (Julian) of year, optimized parameter values, solar constant (H0), and latitude and longitude of the location of interest. The parameter values used in the model were optimized at one location (Albuquerque, NM), and these values were applied in the model for predicting average hourly global solar radiation at four different locations (Austin, TX; El Paso, TX; Desert Rock, NV; Seattle, WA) in the United States. The model performance was assessed using the correlation coefficient (r), Mean Absolute Bias Error (MABE), Root Mean Square Error (RMSE), and coefficient of determination (R2). The sensitivity of the predictions to the parameters was estimated. Results show that the model performed very well. The correlation coefficients (r) range from 0.96 to 0.99, while the coefficients of determination (R2) range from 0.92 to 0.98. For daily and monthly predictions, error percentages (i.e. MABE and RMSE) were less than 20%. The approach proposed here can be potentially useful for predicting average hourly global solar radiation on the horizontal surface at different locations, using readily available data (i.e. latitude and longitude of the location) as inputs.
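The performance metrics used to assess the model (r, MABE, RMSE, R2) can be computed directly from observed and predicted series. The four values below are illustrative, not data from the study.

```python
import math

def fit_metrics(obs, pred):
    """Correlation coefficient r, mean absolute bias error, root mean square
    error, and coefficient of determination R2."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    r = cov / (so * sp)
    mabe = sum(abs(o - p) for o, p in zip(obs, pred)) / n
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / n)
    sse = sum((o - p) ** 2 for o, p in zip(obs, pred))
    sst = sum((o - mo) ** 2 for o in obs)
    r2 = 1.0 - sse / sst
    return r, mabe, rmse, r2

# Illustrative hourly values (arbitrary units, not data from the study):
obs = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 1.9, 3.2, 3.8]
r, mabe, rmse, r2 = fit_metrics(obs, pred)
```

Note that r measures only linear association while R2 penalizes bias as well, which is why the abstract reports both alongside the error percentages.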
New Insights into Signed Path Coefficient Granger Causality Analysis
Zhang, Jian; Li, Chong; Jiang, Tianzi
2016-01-01
Granger causality analysis, a time series analysis technique derived from econometrics, has been applied in an ever-increasing number of publications in the field of neuroscience, including fMRI, EEG/MEG, and fNIRS. The present study focuses on the validity of “signed path coefficient Granger causality,” a Granger-causality-derived analysis method that has been adopted by many fMRI studies in the last few years. This method generally estimates the causality effect among the time series by an order-1 autoregression, and defines a positive or negative coefficient as an “excitatory” or “inhibitory” influence. In the current work we conducted a series of computations from resting-state fMRI data and simulation experiments to illustrate that the signed path coefficient method is flawed and untenable: the autoregressive coefficients are not always consistent with the real causal relationships, which inevitably leads to erroneous conclusions. Overall our findings suggest that the applicability of this kind of causality analysis is rather limited, and researchers should be more cautious in applying signed path coefficient Granger causality to fMRI data to avoid misinterpretation. PMID:27833547
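An order-1 autoregression of the kind behind signed path coefficients can be fit by least squares. The sketch below recovers known coefficients from a synthetic coupled pair of series; it is illustrative only, and the paper’s point is precisely that such a coefficient’s sign need not reflect the true causal influence in real fMRI data.

```python
import random

def fit_order1(y, x):
    """Least-squares estimate of y_t = a*y_{t-1} + b*x_{t-1}
    via the 2x2 normal equations (no intercept, for brevity)."""
    Y, y1, x1 = y[1:], y[:-1], x[:-1]
    a11 = sum(v * v for v in y1)
    a22 = sum(v * v for v in x1)
    a12 = sum(u * v for u, v in zip(y1, x1))
    b1 = sum(u * v for u, v in zip(y1, Y))
    b2 = sum(u * v for u, v in zip(x1, Y))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

# Synthetic coupled series with known coefficients (0.3 and 0.6):
rng = random.Random(0)
x, y = [0.0], [0.0]
for _ in range(5000):
    x.append(0.5 * x[-1] + rng.gauss(0, 1))
    y.append(0.3 * y[-1] + 0.6 * x[-2] + rng.gauss(0, 1))  # x[-2] is x_{t-1}
a_hat, b_hat = fit_order1(y, x)
```

Here the fitted `b_hat` is close to the generative value 0.6; the cautionary message of the study is that this correspondence breaks down under mixing, haemodynamic filtering, and measurement noise.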
Presser, Cary; Nazarian, Ashot; Conny, Joseph M.; Chand, Duli; Sedlacek, Arthur; Hubbe, John M.
2017-01-01
Absorptivity measurements with a laser-heating approach, referred to as the laser-driven thermal reactor (LDTR), were carried out in the infrared and applied at ambient (laboratory) non-reacting conditions to particle-laden filters from a three-wavelength (visible) particle/soot absorption photometer (PSAP). The particles were obtained during the Biomass Burning Observation Project (BBOP) field campaign. The focus of this study was to determine the particle absorption coefficient from field-campaign filter samples using the LDTR approach, and compare results with other commercially available instrumentation (in this case with the PSAP, which has been compared with numerous other optical techniques). Advantages of the LDTR approach include 1) direct estimation of material absorption from temperature measurements (as opposed to resolving the difference between the measured reflection/scattering and transmission), 2) information on the filter optical properties, and 3) identification of the filter material effects on particle absorption (e.g., leading to particle absorption enhancement or shadowing). For measurements carried out under ambient conditions, the particle absorptivity is obtained with a thermocouple placed flush with the filter back surface and the laser probe beam impinging normal to the filter particle-laden surface. Thus, in principle one can employ a simple experimental arrangement to measure simultaneously both the transmissivity and absorptivity (at different discrete wavelengths) and ascertain the particle absorption coefficient. For this investigation, LDTR measurements were carried out with PSAP filters (pairs with both blank and exposed filters) from eight different days during the campaign, having relatively light but different particle loadings. The observed particles coating the filters were found to be carbonaceous (having broadband absorption characteristics). The LDTR absorption coefficient compared well with results from the PSAP. 
The analysis was also expanded to account for the filter fiber scattering on particle absorption in assessing particle absorption enhancement and shadowing effects. The results indicated that absorption enhancement effects were significant, and diminished with increased filter particle loading. PMID:28690360
Linear Estimation of Particle Bulk Parameters from Multi-Wavelength Lidar Measurements
NASA Technical Reports Server (NTRS)
Veselovskii, Igor; Dubovik, Oleg; Kolgotin, A.; Korenskiy, M.; Whiteman, D. N.; Allakhverdiev, K.; Huseyinoglu, F.
2012-01-01
An algorithm for linear estimation of aerosol bulk properties such as particle volume, effective radius and complex refractive index from multiwavelength lidar measurements is presented. The approach uses the fact that the total aerosol concentration can be well approximated as a linear combination of aerosol characteristics measured by multiwavelength lidar. Therefore, the aerosol concentration can be estimated from lidar measurements without the need to derive the size distribution, which entails more sophisticated procedures. The definition of the coefficients required for the linear estimates is based on an expansion of the particle size distribution in terms of the measurement kernels. Once the coefficients are established, the approach permits fast retrieval of aerosol bulk properties when compared with the full regularization technique. In addition, the straightforward estimation of bulk properties stabilizes the inversion, making it more resistant to noise in the optical data. Numerical tests demonstrate that for data sets containing three aerosol backscattering and two extinction coefficients (so-called 3 + 2) the uncertainties in the retrieval of particle volume and surface area are below 45% when input data random uncertainties are below 20%. Moreover, using linear estimates allows reliable retrievals even when the number of input data is reduced. To evaluate the approach, the results obtained using this technique are compared with those based on the previously developed full inversion scheme that relies on the regularization procedure. Both techniques were applied to data measured by the multiwavelength lidar at NASA/GSFC. The results obtained with both methods using the same observations are in good agreement. At the same time, the high speed of the retrieval using linear estimates makes the method preferable for generating aerosol information from extended lidar observations. 
To demonstrate the efficiency of the method, an extended time series of observations acquired in Turkey in May 2010 was processed using the linear estimates technique permitting, for what we believe to be the first time, temporal-height distributions of particle parameters.
Transport coefficients in nonequilibrium gas-mixture flows with electronic excitation.
Kustova, E V; Puzyreva, L A
2009-10-01
In the present paper, a one-temperature model of transport properties in chemically nonequilibrium neutral gas-mixture flows with electronic excitation is developed. The closed set of governing equations for the macroscopic parameters taking into account electronic degrees of freedom of both molecules and atoms is derived using the generalized Chapman-Enskog method. The transport algorithms for the calculation of the thermal-conductivity, diffusion, and viscosity coefficients are proposed. The developed theoretical model is applied for the calculation of the transport coefficients in the electronically excited N/N(2) mixture. The specific heats and transport coefficients are calculated in the temperature range 50-50,000 K. Two sets of data for the collision integrals are applied for the calculations. An important contribution of the excited electronic states to the heat transfer is shown. The Prandtl number of atomic species is found to be substantially nonconstant.
Ponterotto, Joseph G; Ruckdeschel, Daniel E
2007-12-01
The present article addresses issues in reliability assessment that are often neglected in psychological research such as acceptable levels of internal consistency for research purposes, factors affecting the magnitude of coefficient alpha (alpha), and considerations for interpreting alpha within the research context. A new reliability matrix anchored in classical test theory is introduced to help researchers judge adequacy of internal consistency coefficients with research measures. Guidelines and cautions in applying the matrix are provided.
Artificial Bee Colony Optimization for Short-Term Hydrothermal Scheduling
NASA Astrophysics Data System (ADS)
Basu, M.
2014-12-01
Artificial bee colony optimization is applied to determine the optimal hourly schedule of power generation in a hydrothermal system. Artificial bee colony optimization is a swarm-based algorithm inspired by the food-foraging behavior of honey bees. The algorithm is tested on a multi-reservoir cascaded hydroelectric system having prohibited operating zones and thermal units with valve-point loading. The ramp-rate limits of thermal generators are taken into consideration. The transmission losses are also accounted for through the use of loss coefficients. The algorithm is tested on two hydrothermal multi-reservoir cascaded hydroelectric test systems. The results of the proposed approach are compared with those of differential evolution, evolutionary programming and particle swarm optimization. From the numerical results, it is found that the proposed artificial bee colony optimization based approach is able to provide better solutions.
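The employed/onlooker/scout structure of artificial bee colony optimization can be sketched on a toy problem. The sphere function and all parameters below are illustrative stand-ins, not the hydrothermal scheduling problem with its reservoir and ramp-rate constraints.

```python
import random

def abc_minimize(f, dim, bounds, n_food=20, cycles=150, limit=20, seed=1):
    """Minimal artificial bee colony sketch: employed bees refine their food
    sources, onlookers reinforce good sources chosen fitness-proportionally,
    and scouts abandon sources that stop improving."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    costs = [f(s) for s in foods]
    trials = [0] * n_food
    best = min(costs)

    def try_neighbor(i):
        # Perturb one coordinate toward/away from a random other source.
        k = rng.choice([j for j in range(n_food) if j != i])
        d = rng.randrange(dim)
        cand = foods[i][:]
        cand[d] += rng.uniform(-1, 1) * (foods[i][d] - foods[k][d])
        cand[d] = min(hi, max(lo, cand[d]))
        c = f(cand)
        if c < costs[i]:                      # greedy selection
            foods[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_food):               # employed bee phase
            try_neighbor(i)
        total = sum(1.0 / (1.0 + c) for c in costs)
        for _ in range(n_food):               # onlooker bee phase
            r, acc, pick = rng.uniform(0, total), 0.0, n_food - 1
            for j, c in enumerate(costs):
                acc += 1.0 / (1.0 + c)
                if acc >= r:
                    pick = j
                    break
            try_neighbor(pick)
        best = min(best, min(costs))          # remember best-so-far
        for i in range(n_food):               # scout bee phase
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                costs[i], trials[i] = f(foods[i]), 0
    return best

best = abc_minimize(lambda v: sum(xi * xi for xi in v), dim=2, bounds=(-5.0, 5.0))
```

In a scheduling application, `f` would return the fuel cost plus penalties for constraint violations (prohibited zones, ramp rates, water balance), with the same three phases left unchanged.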
An Adaptive Handover Prediction Scheme for Seamless Mobility Based Wireless Networks
Safa Sadiq, Ali; Fisal, Norsheila Binti; Ghafoor, Kayhan Zrar; Lloret, Jaime
2014-01-01
We propose an adaptive handover prediction (AHP) scheme for seamless mobility based wireless networks. The AHP scheme incorporates fuzzy logic into the AP prediction process in order to lend cognitive capability to handover decision making. Selection metrics, including received signal strength, mobile node relative direction towards the access points in the vicinity, and access point load, are collected and considered inputs of the fuzzy decision making system in order to select the most preferable AP among the surrounding WLANs. The handover decision, based on a quality cost calculated by the fuzzy inference system, uses adaptable rather than fixed coefficients. In other words, the mean and the standard deviation of the normalized network prediction metrics of the fuzzy inference system are obtained adaptively from the available WLANs. Accordingly, they are applied as statistical information to adjust or adapt the coefficients of the membership functions. In addition, we propose an adjustable weight vector concept for the input metrics in order to cope with the continuous, unpredictable variation in their membership degrees. Furthermore, handover decisions are performed in each MN independently after knowing the RSS, direction toward APs, and AP load. Finally, performance evaluation of the proposed scheme shows its superiority compared with representatives of the prediction approaches. PMID:25574490
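The adaptive-coefficient idea, normalizing each selection metric with the running mean and standard deviation gathered from the environment before weighting, can be sketched as follows. The metric names, statistics and weights are made up for illustration, and a plain weighted z-score stands in for the full fuzzy inference system.

```python
def quality_cost(metrics, history_mean, history_std, weights):
    """Quality cost for one access point from normalized selection metrics.
    Each metric is z-scored against running statistics gathered from the
    WLAN environment, so the scoring adapts instead of using fixed scales."""
    score = 0.0
    for name, w in weights.items():
        z = (metrics[name] - history_mean[name]) / history_std[name]
        score += w * z
    return score

# Illustrative metrics for one candidate AP (hypothetical values):
m = {"rss": -55.0, "direction": 0.8, "load": 0.3}
mean = {"rss": -70.0, "direction": 0.5, "load": 0.5}
std = {"rss": 10.0, "direction": 0.25, "load": 0.2}
w = {"rss": 0.5, "direction": 0.3, "load": -0.2}  # load penalizes the score
cost = quality_cost(m, mean, std, w)
```

In the AHP scheme the analogous statistics adapt the membership-function coefficients, and the weight vector itself is adjustable; here both are fixed for brevity.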
Du, Qi-Shi; Huang, Ri-Bo; Wei, Yu-Tuo; Pang, Zong-Wen; Du, Li-Qin; Chou, Kuo-Chen
2009-01-30
In cooperation with fragment-based design, a new drug design method, the so-called "fragment-based quantitative structure-activity relationship" (FB-QSAR), is proposed. The essence of the new method is that the molecular framework in a family of drug candidates is divided into several fragments according to the substituents being investigated. The bioactivities of molecules are correlated with the physicochemical properties of the molecular fragments through two sets of coefficients in the linear free energy equations: one set for the physicochemical properties and the other for the weight factors of the molecular fragments. Meanwhile, an iterative double least square (IDLS) technique is developed to solve the two sets of coefficients in a training data set alternately and iteratively. The IDLS technique is a feedback procedure with machine learning ability. The standard two-dimensional quantitative structure-activity relationship (2D-QSAR) is a special case of FB-QSAR in which the whole molecule is treated as a single entity. The FB-QSAR approach can remarkably enhance the predictive power and provide more structural insights into rational drug design. As an example, FB-QSAR is applied to build a predictive model of neuraminidase inhibitors for drug development against the H5N1 influenza virus. (c) 2008 Wiley Periodicals, Inc.
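The alternating structure of the IDLS idea can be sketched with synthetic data: a bilinear model y = sum_f w_f (phi_f · c) is fit by fixing one coefficient set and solving an ordinary least squares problem for the other, back and forth. The descriptor tensor, dimensions, and "true" coefficients below are invented for illustration; only the alternating least-squares scheme reflects the abstract's description.

```python
import numpy as np

# Synthetic bilinear data: n_mol molecules, n_frag fragments,
# n_prop physicochemical properties per fragment (all made up)
rng = np.random.default_rng(0)
n_mol, n_frag, n_prop = 40, 3, 4
X = rng.normal(size=(n_mol, n_frag, n_prop))  # fragment property descriptors
w_true = np.array([1.0, 0.5, 2.0])            # "true" fragment weight factors
c_true = np.array([0.3, -1.2, 0.8, 0.5])      # "true" property coefficients
y = np.einsum('mfp,f,p->m', X, w_true, c_true)

w = np.ones(n_frag)
c = np.ones(n_prop)
for _ in range(200):
    # step 1: fix the weight factors w, solve least squares for c
    A = np.einsum('mfp,f->mp', X, w)
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    # step 2: fix c, solve least squares for the weight factors w
    B = np.einsum('mfp,p->mf', X, c)
    w, *_ = np.linalg.lstsq(B, y, rcond=None)

pred = np.einsum('mfp,f,p->m', X, w, c)       # fitted activities
```

Note the overall scale is shared between w and c (only their product is identified), so the quality of the fit is judged on the predicted activities rather than on the individual coefficient sets.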
Population pharmacokinetics of phenytoin in critically ill children.
Hennig, Stefanie; Norris, Ross; Tu, Quyen; van Breda, Karin; Riney, Kate; Foster, Kelly; Lister, Bruce; Charles, Bruce
2015-03-01
The objective was to study the population pharmacokinetics of bound and unbound phenytoin in critically ill children, including influences on the protein binding profile. A population pharmacokinetic approach was used to analyze paired protein-unbound and total phenytoin plasma concentrations (n = 146 each) from 32 critically ill children (0.08-17 years of age) who were admitted to a pediatric hospital, primarily intensive care unit. The pharmacokinetics of unbound and bound phenytoin and the influence of possible influential covariates were modeled and evaluated using visual predictive checks and bootstrapping. The pharmacokinetics of protein-unbound phenytoin was described satisfactorily by a 1-compartment model with first-order absorption in conjunction with a linear partition coefficient parameter to describe the binding of phenytoin to albumin. The partitioning coefficient describing protein binding and distribution to bound phenytoin was estimated to be 8.22. Nonlinear elimination of unbound phenytoin was not supported in this patient group. Weight, allometrically scaled for clearance and volume of distribution for the unbound and bound compartments, and albumin concentration significantly influenced the partition coefficient for protein binding of phenytoin. The population model can be applied to estimate the fraction of unbound phenytoin in critically ill children given an individual's albumin concentration. © 2014, The American College of Clinical Pharmacology.
Bedra, L; Rutigliano, M; Balat-Pichelin, M; Cacciatore, M
2006-08-15
A joint experimental and theoretical approach has been developed to study oxygen atom recombination on a beta-quartz surface. The experimental MESOX setup has been applied for the direct measurement of the atomic oxygen recombination coefficient gamma at T(S) = 1000 K. The time evolution of the relative atomic oxygen concentration in the cell is described by the diffusion equation because the mean free path of the atoms is less than the characteristic dimension of the reactor. The recombination coefficient gamma is then calculated from the concentration profile obtained by visible spectroscopy. We get an experimental value of gamma = 0.008, which is a factor of about 3 less than the gamma value reported for O recombination over beta-cristobalite. The experimental results are discussed and compared with the semiclassical collision dynamics calculations performed on the same catalytic system aimed at determining the basic features of the surface catalytic activity. Qualitative and quantitative agreement between the experimental and theoretical recombination coefficients has been found, which supports the Eley-Rideal recombination mechanism and gives more evidence of the impact that surface crystallographic variation has on catalytic activity. Also, several interesting aspects concerning the energetics and the mechanism of the surface processes involving the oxygen atoms are pointed out and discussed.
NASA Astrophysics Data System (ADS)
Wang, Jun; Niino, Hiroyuki; Yabe, Akira
1999-02-01
We developed a novel method of obtaining an absorption coefficient which depends on the laser intensity, since a single-photon absorption coefficient of a polymer could not be applied to laser ablation. The relationship between the nonlinear absorption coefficient and the laser intensity was derived from experimental data of transmission and incident laser intensities. Using the nonlinear absorption coefficient of poly(methylmethacrylate) doped with benzil and pyrene, we succeeded in fitting the relationship of etch depth and laser intensity, obtained experimentally, and discussed the energy absorbed by the polymer at the threshold fluence.
Ebel, B.A.; Mirus, B.B.; Heppner, C.S.; VanderKwaak, J.E.; Loague, K.
2009-01-01
Distributed hydrologic models capable of simulating fully-coupled surface water and groundwater flow are increasingly used to examine problems in the hydrologic sciences. Several techniques are currently available to couple the surface and subsurface; the two most frequently employed approaches are first-order exchange coefficients (a.k.a., the surface conductance method) and enforced continuity of pressure and flux at the surface-subsurface boundary condition. The effort reported here examines the parameter sensitivity of simulated hydrologic response for the first-order exchange coefficients at a well-characterized field site using the fully coupled Integrated Hydrology Model (InHM). This investigation demonstrates that the first-order exchange coefficients can be selected such that the simulated hydrologic response is insensitive to the parameter choice, while simulation time is considerably reduced. Alternatively, the ability to choose a first-order exchange coefficient that intentionally decouples the surface and subsurface facilitates concept-development simulations to examine real-world situations where the surface-subsurface exchange is impaired. While the parameters comprising the first-order exchange coefficient cannot be directly estimated or measured, the insensitivity of the simulated flow system to these parameters (when chosen appropriately) combined with the ability to mimic actual physical processes suggests that the first-order exchange coefficient approach can be consistent with a physics-based framework. Copyright © 2009 John Wiley & Sons, Ltd.
Non-Contact Thrust Stand Calibration Method for Repetitively-Pulsed Electric Thrusters
NASA Technical Reports Server (NTRS)
Wong, Andrea R.; Toftul, Alexandra; Polzin, Kurt A.; Pearson, J. Boise
2011-01-01
A thrust stand calibration technique for use in testing repetitively-pulsed electric thrusters for in-space propulsion has been developed and tested using a modified hanging pendulum thrust stand. In the implementation of this technique, current pulses are applied to a solenoidal coil to produce a pulsed magnetic field that acts against the magnetic field produced by a permanent magnet mounted to the thrust stand pendulum arm. The force on the magnet is applied in this non-contact manner, with the entire pulsed force transferred to the pendulum arm through a piezoelectric force transducer to provide a time-accurate force measurement. Modeling of the pendulum arm dynamics reveals that after an initial transient in thrust stand motion the quasisteady average deflection of the thrust stand arm away from the unforced or zero position can be related to the average applied force through a simple linear Hooke's law relationship. Modeling demonstrates that this technique is universally applicable except when the pulsing period is increased to the point where it approaches the period of natural thrust stand motion. Calibration data were obtained using a modified hanging pendulum thrust stand previously used for steady-state thrust measurements. Data were obtained for varying impulse bit at constant pulse frequency and for varying pulse frequency. The two data sets exhibit excellent quantitative agreement with each other as the constant relating average deflection and average thrust match within the errors on the linear regression curve fit of the data. Quantitatively, the error on the calibration coefficient is roughly 1% of the coefficient value.
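The Hooke's-law calibration step reduces to a one-parameter linear fit of average applied force against average arm deflection. The numbers below are made-up illustrative data, not the paper's measurements; only the through-the-origin least-squares fit reflects the described procedure.

```python
import numpy as np

# Hypothetical calibration data (invented for illustration):
# average applied force versus quasisteady average arm deflection
force_mN = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # average pulsed force
defl_mm = np.array([0.21, 0.39, 0.62, 0.80, 1.01])  # average deflection

# Hooke's-law fit through the origin, F_avg = k * x_avg,
# with the least-squares slope k = sum(x F) / sum(x^2)
k = (defl_mm @ force_mN) / (defl_mm @ defl_mm)
resid = force_mN - k * defl_mm
rel_rms = np.sqrt(np.mean(resid ** 2)) / force_mN.mean()
```

The relative residual plays the role of the roughly 1% calibration-coefficient error quoted in the abstract: a small value indicates the linear deflection-thrust relationship holds across the tested range.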
Tacey, Sean A; Xu, Lang; Szilvási, Tibor; Schauer, James J; Mavrikakis, Manos
2018-04-30
Gas-to-particle phase partitioning controls the pathways for oxidized mercury deposition from the atmosphere to the Earth's surface. The propensity of oxidized mercury species to transition between these two phases is described by the partitioning coefficient (Kp). Experimental measurements of Kp values for HgCl2 in the presence of atmospheric aerosols are difficult and time-consuming. Quantum chemical calculations, therefore, offer a promising opportunity to efficiently estimate partitioning coefficients for HgCl2 on relevant aerosols. In this study, density functional theory (DFT) calculations are used to predict Kp values for HgCl2 on relevant iron-oxide surfaces. The model is first verified using a NaCl(100) surface, showing good agreement between the calculated (2.8) and experimental (29-43) dimensionless partitioning coefficients at room temperature. Then, the methodology is applied to six atmospherically relevant terminations of α-Fe2O3(0001): OH-Fe-R, (OH)3-Fe-R, (OH)3-R, O-Fe-R, Fe-O3-R, and O3-R (where R denotes bulk ordering). The OH-Fe-R termination is predicted to be the most stable under typical atmospheric conditions, and on this surface termination, a dimensionless HgCl2 Kp value of 5.2 × 10^3 at 295 K indicates a strong preference for the particle phase. This work demonstrates DFT as a promising approach to obtain partitioning coefficients, which can lead to improved models for the transport of mercury, as well as for other atmospheric pollutant species, through and between the anthroposphere and troposphere. Copyright © 2018 Elsevier B.V. All rights reserved.
Qu, Yanfei; Ma, Yongwen; Wan, Jinquan; Wang, Yan
2018-06-01
The silicone oil-air partition coefficients (K_SiO/A) of hydrophobic compounds are vital parameters for applying silicone oil as a non-aqueous-phase liquid in partitioning bioreactors. Due to the limited number of K_SiO/A values determined by experiment for hydrophobic compounds, there is an urgent need to model the K_SiO/A values for unknown chemicals. In the present study, we developed a universal quantitative structure-activity relationship (QSAR) model using a sequential approach with macro-constitutional and micromolecular descriptors for silicone oil-air partition coefficients (K_SiO/A) of hydrophobic compounds with large structural variance. The geometry optimization and vibrational frequencies of each chemical were calculated using hybrid density functional theory at the B3LYP/6-311G** level. Several quantum chemical parameters that reflect various intermolecular interactions as well as hydrophobicity were selected to develop the QSAR model. The result indicates that a regression model derived from log K_SiO/A, the number of non-hydrogen atoms (#nonHatoms) and the energy gap between E_LUMO and E_HOMO (E_LUMO − E_HOMO) could explain the partitioning mechanism of hydrophobic compounds between silicone oil and air. The correlation coefficient R^2 of the model is 0.922, and the internal and external validation coefficients, Q^2_LOO and Q^2_ext, are 0.91 and 0.89 respectively, implying that the model has satisfactory goodness-of-fit, robustness, and predictive ability and thus provides a robust predictive tool to estimate the log K_SiO/A values for chemicals in the applicability domain. The applicability domain of the model was visualized by the Williams plot.
Use of the Ames Check Standard Model for the Validation of Wall Interference Corrections
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Amaya, M.; Flach, R.
2018-01-01
The new check standard model of the NASA Ames 11-ft Transonic Wind Tunnel was chosen for a future validation of the facility's wall interference correction system. The chosen validation approach takes advantage of the fact that test conditions experienced by a large model in the slotted part of the tunnel's test section will change significantly if a subset of the slots is temporarily sealed. Therefore, the model's aerodynamic coefficients have to be recorded, corrected, and compared for two different test section configurations in order to perform the validation. Test section configurations with highly accurate Mach number and dynamic pressure calibrations were selected for the validation. First, the model is tested with all test section slots in open configuration while keeping the model's center of rotation on the tunnel centerline. In the next step, slots on the test section floor are sealed and the model is moved to a new center of rotation that is 33 inches below the tunnel centerline. Then, the original angle of attack sweeps are repeated. Afterwards, wall interference corrections are applied to both test data sets and response surface models of the resulting aerodynamic coefficients in interference-free flow are generated. Finally, the response surface models are used to predict the aerodynamic coefficients for a family of angles of attack while keeping dynamic pressure, Mach number, and Reynolds number constant. The validation is considered successful if the corrected aerodynamic coefficients obtained from the related response surface model pair show good agreement. Residual differences between the corrected coefficient sets will be analyzed as well because they are an indicator of the overall accuracy of the facility's wall interference correction process.
Experimental determination of the partitioning coefficient of β-pinene oxidation products in SOAs.
Hohaus, Thorsten; Gensch, Iulia; Kimmel, Joel; Worsnop, Douglas R; Kiendler-Scharr, Astrid
2015-06-14
The composition of secondary organic aerosols (SOAs) formed by β-pinene ozonolysis was experimentally investigated in the Juelich aerosol chamber. Partitioning of oxidation products between gas and particles was measured through concurrent concentration measurements in both phases. Partitioning coefficients (Kp) of 2.23 × 10^-5 ± 3.20 × 10^-6 m^3 μg^-1 for nopinone, 4.86 × 10^-4 ± 1.80 × 10^-4 m^3 μg^-1 for apoverbenone, 6.84 × 10^-4 ± 1.52 × 10^-4 m^3 μg^-1 for oxonopinone and 2.00 × 10^-3 ± 1.13 × 10^-3 m^3 μg^-1 for hydroxynopinone were derived, showing higher values for more oxygenated species. The observed Kp values were compared with values predicted using two different semi-empirical approaches. Both methods led to an underestimation of the partitioning coefficients with systematic differences between the methods. Assuming that the deviation between the experiment and the model is due to non-ideality of the mixed solution in particles, activity coefficients of 4.82 × 10^-2 for nopinone, 2.17 × 10^-3 for apoverbenone, 3.09 × 10^-1 for oxonopinone and 7.74 × 10^-1 for hydroxynopinone would result using the vapour pressure estimation technique that leads to higher Kp. We discuss that such large non-ideality for nopinone could arise due to particle phase processes lowering the effective nopinone vapour pressure such as diol- or dimer formation. The observed high partitioning coefficients compared to modelled results imply an underestimation of SOA mass by applying equilibrium conditions.
NMR investigation of water diffusion in different biofilm structures.
Herrling, Maria P; Weisbrodt, Jessica; Kirkland, Catherine M; Williamson, Nathan H; Lackner, Susanne; Codd, Sarah L; Seymour, Joseph D; Guthausen, Gisela; Horn, Harald
2017-12-01
Mass transfer in biofilms is determined by diffusion. Different, mostly invasive, approaches have been used to measure diffusion coefficients in biofilms; however, data on heterogeneous biomass under realistic conditions are still missing. To non-invasively elucidate fluid-structure interactions in complex multispecies biofilms, pulsed field gradient-nuclear magnetic resonance (PFG-NMR) was applied to measure the water diffusion in five different types of biomass aggregates: one type of sludge flocs, two types of biofilm, and two types of granules. Data analysis is an important issue when measuring heterogeneous systems and is shown to significantly influence the interpretation and understanding of water diffusion. With respect to numerical reproducibility and physico-chemical interpretation, different data processing methods were explored: (bi)-exponential data analysis and the Γ distribution model. Furthermore, the diffusion coefficient distribution in relation to relaxation was studied by D-T2 maps obtained by 2D inverse Laplace transform (2D ILT). The results show that the effective diffusion coefficients for all biofilm samples ranged from 0.36 to 0.96 relative to that of water. NMR diffusion was linked to biofilm structure (e.g., biomass density, organic and inorganic matter) as observed by magnetic resonance imaging and to traditional biofilm parameters: diffusion was most restricted in granules with compact structures, and fast diffusion was found in heterotrophic biofilms with fluffy structures. The effective diffusion coefficients in the biomass were found to be broadly distributed because of internal biomass heterogeneities, such as gas bubbles, precipitates, and locally changing biofilm densities. Thus, estimations based on biofilm bulk properties in multispecies systems can be overestimated and mean diffusion coefficients might not be sufficiently informative to describe mass transport in biofilms and the near bulk. © 2017 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Edwards, T. R. (Inventor)
1985-01-01
Apparatus for doubling the data density rate of an analog to digital converter or doubling the data density storage capacity of a memory device is discussed. An interstitial data point midway between adjacent data points in a data stream having an even number of equal interval data points is generated by applying a set of predetermined one-dimensional convolute integer coefficients which can include a set of multiplier coefficients and a normalizer coefficient. Interpolator means apply the coefficients to the data points by weighting equally on each side of the center of the even number of equal interval data points to obtain an interstitial point value at the center of the data points. A one-dimensional output data set, which is twice as dense as a one-dimensional equal interval input data set, can be generated where the output data set includes interstitial points interdigitated between adjacent data points in the input data set. The method for generating the set of interstitial points is a weighted, nearest-neighbor, non-recursive, moving, smoothing averaging technique, equivalent to applying a polynomial regression calculation to the data set.
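The density-doubling scheme can be sketched with one symmetric integer coefficient set. The (-1, 9, 9, -1)/16 stencil used below is the standard 4-point cubic midpoint rule, chosen as an assumed example; the patent's actual coefficient sets are not given in the abstract.

```python
def interstitial(data, mult=(-1, 9, 9, -1), norm=16):
    """Interdigitate midpoints into an equal-interval data stream using
    symmetric convolute integer coefficients (multipliers + normalizer).

    The (-1, 9, 9, -1)/16 set is an assumed example (4-point cubic
    midpoint interpolation), not the patent's specific coefficients.
    """
    half = len(mult) // 2
    out = []
    for i in range(len(data) - 1):
        out.append(data[i])
        lo, hi = i - half + 1, i + half + 1
        if 0 <= lo and hi <= len(data):   # full symmetric stencil available
            mid = sum(m * d for m, d in zip(mult, data[lo:hi])) / norm
        else:                             # fall back to linear at the edges
            mid = (data[i] + data[i + 1]) / 2
        out.append(mid)
    out.append(data[-1])
    return out
```

Because the stencil weights equally on each side of the insertion point, the output is exact for polynomial trends up to cubic order, which is the "equivalent to applying a polynomial regression" property noted above.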
Wang, Yi; Xiang, Ma; Wen, Ya-Dong; Yu, Chun-Xia; Wang, Luo-Ping; Zhao, Long-Lian; Li, Jun-Hui
2012-11-01
In this study, the tobacco quality of the main industrial classifications across different years was analyzed by applying spectrum projection and correlation methods. The data were near-infrared (NIR) spectra from Hongta Tobacco (Group) Co., Ltd. A total of 5730 tobacco leaf industrial classification samples from Yuxi in Yunnan province, collected by near infrared spectroscopy from 2007 to 2010, covered different stalk positions and colors and all belonged to the HONGDA tobacco variety. The results showed that, when the samples of a given year were randomly divided in a 2:1 ratio into analysis and verification sets, the verification set corresponded with the analysis set under spectrum projection, with correlation coefficients above 0.98. The correlation coefficients between different years under spectrum projection were above 0.97; the highest was between 2008 and 2009, and the lowest between 2007 and 2010. The study also presents a method to obtain quantitative similarity values for different industrial classification samples. These similarity and consistency values are instructive for the combination and replacement of tobacco leaves in blending.
Principal shapes and squeezed limits in the effective field theory of large scale structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertolini, Daniele; Solon, Mikhail P., E-mail: dbertolini@lbl.gov, E-mail: mpsolon@lbl.gov
2016-11-01
We apply an orthogonalization procedure on the effective field theory of large scale structure (EFT of LSS) shapes, relevant for the angle-averaged bispectrum and non-Gaussian covariance of the matter power spectrum at one loop. Assuming natural-sized EFT parameters, this identifies a linear combination of EFT shapes—referred to as the principal shape—that gives the dominant contribution for the whole kinematic plane, with subdominant combinations suppressed by a few orders of magnitude. For the covariance, our orthogonal transformation is in excellent agreement with a principal component analysis applied to available data. Additionally we find that, for both observables, the coefficients of the principal shapes are well approximated by the EFT coefficients appearing in the squeezed limit, and are thus measurable from power spectrum response functions. Employing data from N-body simulations for the growth-only response, we measure the single EFT coefficient describing the angle-averaged bispectrum with O(10%) precision. These methods of shape orthogonalization and measurement of coefficients from response functions are valuable tools for developing the EFT of LSS framework, and can be applied to more general observables.
Clustering stock market companies via chaotic map synchronization
NASA Astrophysics Data System (ADS)
Basalto, N.; Bellotti, R.; De Carlo, F.; Facchi, P.; Pascazio, S.
2005-01-01
A pairwise clustering approach is applied to the analysis of the Dow Jones index companies, in order to identify similar temporal behavior of the traded stock prices. To this end, the chaotic map clustering algorithm is used, in which a map is associated with each company and the correlation coefficients of the financial time series are mapped to the coupling strengths between maps. The simulation of the chaotic map dynamics gives rise to a natural partition of the data, as companies belonging to the same industrial branch are often grouped together. The identification of clusters of companies of a given stock market index can be exploited in portfolio optimization strategies.
Regression approach to non-invasive determination of bilirubin in neonatal blood
NASA Astrophysics Data System (ADS)
Lysenko, S. A.; Kugeiko, M. M.
2012-07-01
A statistical ensemble of structural and biophysical parameters of neonatal skin was modeled based on experimental data. Diffuse scattering coefficients of the skin in the visible and infrared regions were calculated by applying a Monte-Carlo method to each realization of the ensemble. The potential accuracy of recovering the bilirubin concentration in dermis (which correlates closely with that in blood) was estimated from spatially resolved spectrometric measurements of diffuse scattering. The possibility of noninvasive determination of the bilirubin concentration was demonstrated by measurements of diffuse scattering at λ = 460, 500, and 660 nm at three source-detector separations under conditions of total variability of the skin biophysical parameters.
Supercritical convection, critical heat flux, and coking characteristics of propane
NASA Technical Reports Server (NTRS)
Rousar, D. C.; Gross, R. S.; Boyd, W. C.
1984-01-01
The heat transfer characteristics of propane at subcritical and supercritical pressure were experimentally evaluated using electrically heated Monel K-500 tubes. A design correlation for supercritical heat transfer coefficient was established using the approach previously applied to supercritical oxygen. Flow oscillations were observed and the onset of these oscillations at supercritical pressures was correlated with wall-to-bulk temperature ratio and velocity. The critical heat flux measured at subcritical pressure was correlated with the product of velocity and subcooling. Long duration tests at fixed heat flux conditions were conducted to evaluate coking on the coolant side tube wall and coking rates comparable to RP-1 were observed.
Variational Solutions and Random Dynamical Systems to SPDEs Perturbed by Fractional Gaussian Noise
Zeng, Caibin; Yang, Qigui; Cao, Junfei
2014-01-01
This paper deals with the following type of stochastic partial differential equations (SPDEs) perturbed by an infinite dimensional fractional Brownian motion with a suitable volatility coefficient Φ: dX(t) = A(X(t))dt + Φ(t)dB^H(t), where A is a nonlinear operator satisfying some monotonicity conditions. Using the variational approach, we prove the existence and uniqueness of variational solutions to such a system. Moreover, we prove that this variational solution generates a random dynamical system. The main results are applied to a general type of nonlinear SPDEs and the stochastic generalized p-Laplacian equation. PMID:24574903
Demodulation of moire fringes in digital holographic interferometry using an extended Kalman filter.
Ramaiah, Jagadesh; Rastogi, Pramod; Rajshekhar, Gannavarpu
2018-03-10
This paper presents a method for extracting multiple phases from a single moire fringe pattern in digital holographic interferometry. The method relies on component separation using singular value decomposition and an extended Kalman filter for demodulating the moire fringes. The Kalman filter is applied by modeling the interference field locally as a multi-component polynomial phase signal and extracting the associated multiple polynomial coefficients using the state space approach. In addition to phase, the corresponding multiple phase derivatives can be simultaneously extracted using the proposed method. The applicability of the proposed method is demonstrated using simulation and experimental results.
Path-integral approach to the Wigner-Kirkwood expansion.
Jizba, Petr; Zatloukal, Václav
2014-01-01
We study the high-temperature behavior of quantum-mechanical path integrals. Starting from the Feynman-Kac formula, we derive a functional representation of the Wigner-Kirkwood perturbation expansion for quantum Boltzmann densities. As shown by its applications to different potentials, the presented expansion turns out to be quite efficient in generating analytic form of the higher-order expansion coefficients. To put some flesh on the bare bones, we apply the expansion to obtain basic thermodynamic functions of the one-dimensional anharmonic oscillator. Further salient issues, such as generalization to the Bloch density matrix and comparison with the more customary world-line formulation, are discussed.
Quantum chemical approach for condensed-phase thermochemistry (IV): Solubility of gaseous molecules
NASA Astrophysics Data System (ADS)
Ishikawa, Atsushi; Kamata, Masahiro; Nakai, Hiromi
2016-07-01
The harmonic solvation model (HSM) was applied to the solvation of gaseous molecules and compared to a procedure based on the ideal gas model (IGM). Examination of 25 molecules showed that (i) the accuracy of ΔGsolv was similar for both methods, but the HSM shows advantages for calculating ΔHsolv and TΔSsolv; (ii) TΔSsolv contributes more than ΔHsolv to ΔGsolv in the HSM, i.e. the solvation of gaseous molecules is entropy-driven, which agrees well with experimental understanding (the IGM does not show this); (iii) the temperature dependence of Henry's law coefficient was correctly reproduced with the HSM.
Disk in a groove with friction: An analysis of static equilibrium and indeterminacy
NASA Astrophysics Data System (ADS)
Donolato, Cesare
2018-05-01
This note studies the statics of a rigid disk placed in a V-shaped groove with frictional walls and subjected to gravity and a torque. The two-dimensional equilibrium problem is formulated in terms of the angles that contact forces form with the normal to the walls. This approach leads to a single trigonometric equation in two variables whose domain is determined by Coulomb's law of friction. The properties of solutions (existence, uniqueness, or indeterminacy) as functions of groove angle, friction coefficient and applied torque are derived by a simple geometric representation. The results modify some of the conclusions by other authors on the same problem.
Using recurrence plot analysis for software execution interpretation and fault detection
NASA Astrophysics Data System (ADS)
Mosdorf, M.
2015-09-01
This paper shows a method targeted at software execution interpretation and fault detection using recurrence plot analysis. In the proposed approach, recurrence plot analysis is applied to a software execution trace that contains executed assembly instructions. Results of this analysis are subject to further processing with the PCA (Principal Component Analysis) method, which reduces the number of coefficients used for software execution classification. This method was used for the analysis of five algorithms: Bubble Sort, Quick Sort, Median Filter, FIR, SHA-1. Results show that some of the collected traces could be easily assigned to particular algorithms (logs from the Bubble Sort and FIR algorithms) while others are more difficult to distinguish.
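The core recurrence-plot construction can be sketched in a few lines. This is a generic sketch on a numeric toy trace, not the paper's pipeline: the paper works on executed assembly instruction traces and feeds a richer set of recurrence coefficients into PCA, whereas the snippet below computes a binary recurrence matrix and one simple scalar feature.

```python
import numpy as np

def recurrence_matrix(trace, eps):
    """Binary recurrence plot: R[i, j] = 1 when samples i and j are
    within eps of each other."""
    x = np.asarray(trace, dtype=float)
    d = np.abs(x[:, None] - x[None, :])   # pairwise distances
    return (d < eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent off-diagonal points, a simple scalar
    coefficient that can feed a classifier such as PCA."""
    n = len(R)
    return (R.sum() - n) / (n * n - n)
```

Periodic execution patterns (loops) show up as diagonal line structures in R, which is what makes recurrence features discriminative between algorithms.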
Grid-free density functional calculations on periodic systems.
Varga, Stefan
2007-09-21
Density fitting scheme is applied to the exchange part of the Kohn-Sham potential matrix in a grid-free local density approximation for infinite systems with translational periodicity. It is shown that within this approach the computational demands for the exchange part scale in the same way as for the Coulomb part. The efficiency of the scheme is demonstrated on a model infinite polymer chain. For simplicity, the implementation with Dirac-Slater Xalpha exchange functional is presented only. Several choices of auxiliary basis set expansion coefficients were tested with both Coulomb and overlap metric. Their effectiveness is discussed also in terms of robustness and norm preservation.
NASA Astrophysics Data System (ADS)
Xu, Yingru; Bernhard, Jonah E.; Bass, Steffen A.; Nahrgang, Marlene; Cao, Shanshan
2018-01-01
By applying a Bayesian model-to-data analysis, we estimate the temperature and momentum dependence of the heavy quark diffusion coefficient in an improved Langevin framework. The posterior range of the diffusion coefficient is obtained by performing a Markov chain Monte Carlo random walk and calibrating on the experimental data of D-meson RAA and v2 in three different collision systems at the Relativistic Heavy-Ion Collider (RHIC) and the Large Hadron Collider (LHC): Au-Au collisions at 200 GeV and Pb-Pb collisions at 2.76 and 5.02 TeV. The spatial diffusion coefficient is found to be consistent with lattice QCD calculations and comparable with other models' estimates. We demonstrate the capability of our improved Langevin model to simultaneously describe the RAA and v2 at both RHIC and LHC energies, as well as higher-order flow coefficients such as the D-meson v3. We show that by applying a Bayesian analysis, we are able to quantitatively and systematically study heavy flavor dynamics in heavy-ion collisions.
On the frequency spectra of the core magnetic field Gauss coefficients
NASA Astrophysics Data System (ADS)
Lesur, Vincent; Wardinski, Ingo; Baerenzung, Julien; Holschneider, Matthias
2018-03-01
From monthly mean observatory data spanning 1957-2014, geomagnetic field secular variation values were calculated by annual differences. Estimates of the spherical harmonic Gauss coefficients of the core field secular variation were then derived by applying correlation-based modelling. Finally, a Fourier transform was applied to the time series of the Gauss coefficients. This process led to reliable temporal spectra of the Gauss coefficients up to spherical harmonic degree 5 or 6, and down to periods as short as 1 or 2 years depending on the coefficient. We observed that a k^-2 slope, where k is the frequency, is an acceptable approximation for these spectra, with a possible exception for the dipole field. The monthly estimates of the core field secular variation at the observatory sites also show that large and rapid variations of the secular variation occur. This indicates that geomagnetic jerks are frequent phenomena and that significant secular variation signals at short time scales (i.e. less than 2 years) could still be extracted from data to reveal an unexplored part of the core dynamics.
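The processing chain described above (annual differences of monthly means, then a Fourier transform and a log-log slope fit of the spectrum) can be sketched as follows. The random-walk toy series and unit sampling interval are assumptions for illustration only:

```python
import numpy as np

def secular_variation(monthly, months_per_year=12):
    """Annual differences of a monthly mean series (differences over 12 samples)."""
    monthly = np.asarray(monthly, dtype=float)
    return monthly[months_per_year:] - monthly[:-months_per_year]

def power_spectrum(series, dt=1.0):
    """One-sided power spectrum of a detrended series with its frequencies."""
    series = np.asarray(series, dtype=float)
    spec = np.abs(np.fft.rfft(series - series.mean())) ** 2
    freq = np.fft.rfftfreq(series.size, d=dt)
    return freq[1:], spec[1:]          # drop the zero-frequency bin

# Fit log P = a log k + b; a slope a near -2 would match the k^-2 behaviour above.
rng = np.random.default_rng(0)
sv = secular_variation(np.cumsum(rng.normal(size=720)))  # random-walk toy series
k, p = power_spectrum(sv)
slope = np.polyfit(np.log(k), np.log(p), 1)[0]
```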
Kim, Kyungmok; Lee, Jaewook
2016-01-01
This paper describes a sliding friction model for an electro-deposited coating. Reciprocating sliding tests using a ball-on-flat-plate test apparatus are performed to determine the evolution of the kinetic friction coefficient. The evolution of the friction coefficient is classified into the initial running-in period, steady-state sliding, and a transition to higher friction. The friction coefficient during the initial running-in period and steady-state sliding is expressed as a simple linear function. The friction coefficient in the transition to higher friction is described with a mathematical model derived from a Kachanov-type damage law. The model parameters are then estimated using the Markov Chain Monte Carlo (MCMC) approach. The friction coefficients estimated by the MCMC approach are found to be in good agreement with the measured ones. PMID:28773359
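A minimal random-walk Metropolis sampler of the kind used for such parameter estimation might look like the following. The linear friction model, noise level, and flat prior below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def metropolis(log_post, theta0, n_steps, step=0.05, seed=0):
    """Random-walk Metropolis sampler over an unnormalized log-posterior."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.normal(scale=step, size=theta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept with posterior ratio
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Toy data: friction coefficient grows linearly with sliding cycles, plus noise.
rng = np.random.default_rng(1)
cycles = np.linspace(0, 1, 50)
mu_obs = 0.10 + 0.05 * cycles + rng.normal(scale=0.005, size=50)

def log_post(theta):
    a, b = theta
    resid = mu_obs - (a + b * cycles)
    return -0.5 * np.sum((resid / 0.005) ** 2)    # Gaussian likelihood, flat prior

chain = metropolis(log_post, [0.0, 0.0], n_steps=5000)
a_hat, b_hat = chain[2500:].mean(axis=0)          # discard burn-in
```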
On the methods for determining the transverse dispersion coefficient in river mixing
NASA Astrophysics Data System (ADS)
Baek, Kyong Oh; Seo, Il Won
2016-04-01
In this study, the strengths and weaknesses of existing methods for determining the dispersion coefficient in the two-dimensional river mixing model were assessed based on hydraulic and tracer data sets acquired from experiments conducted on either laboratory channels or natural rivers. The results show that, when the longitudinal as well as the transverse dispersion coefficient must be determined in the transient concentration situation, the two-dimensional routing procedures, 2D RP and 2D STRP, can be employed among the observation methods to calculate the dispersion coefficients. For the steady concentration situation, the STRP can be applied to calculate the transverse dispersion coefficient. When tracer data are not available, either theoretical or empirical equations of the estimation method can be used to calculate the dispersion coefficient from geometric and hydraulic data sets. Application of the theoretical and empirical equations to the laboratory channel showed that the equations of Baek and Seo [3] predicted reasonable values, while the equations of Fischer [23] and Boxwall and Guymer (2003) overestimated by factors of ten to one hundred. Among existing empirical equations, those of Jeon et al. [28] and Baek and Seo [6] gave agreeable values of the transverse dispersion coefficient for most cases of natural rivers. Further, the theoretical equation of Baek and Seo [5] has the potential to be broadly applied to both laboratory and natural channels.
ON THE APPROACH TO NON-EQUILIBRIUM STATIONARY STATES AND THE THEORY OF TRANSPORT COEFFICIENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balescu, R.
1961-07-01
A general formula for the time-dependent electric current arising from a constant electric field is derived, similarly to Kubo's theory. This formula connects the time dependence of the current to the singularities of the resolvent of the Liouville operator of a classical system. Direct contact is made with the general theory of the approach to equilibrium developed by Prigogine and his coworkers, and the formula constitutes a framework for a diagram expansion of transport coefficients. A proof of the existence of a stationary state and of its stability (to first order in the field) is given. It is rigorously shown that, whereas the approach to the stationary state is in general governed by complicated non-markoffian equations, the stationary state itself (and thus the calculation of transport coefficients) is always determined by an asymptotic cross section. This implies that transport coefficients can always be calculated from a markoffian Boltzmann-like equation, even in situations in which that equation does not describe properly the approach to the stationary state.
NASA Astrophysics Data System (ADS)
Crevillén-García, D.; Power, H.
2017-08-01
In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
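A truncated Karhunen-Loève expansion of a log-Gaussian conductivity field, as used for the random input above, can be sketched via an eigendecomposition of the covariance matrix. The exponential covariance, correlation length, and 1D grid below are illustrative assumptions:

```python
import numpy as np

def kl_log_gaussian_field(x, corr_len, n_terms, xi):
    """Truncated Karhunen-Loeve expansion of a log-Gaussian field on points x.

    Uses the eigendecomposition of an exponential covariance matrix; the
    random coefficients xi are i.i.d. standard normals.
    """
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)  # exponential covariance
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1][:n_terms]                 # keep leading modes
    modes = vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
    return np.exp(modes @ xi)                                # log-Gaussian: exp of Gaussian

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64)
K = kl_log_gaussian_field(x, corr_len=0.2, n_terms=10, xi=rng.normal(size=10))
```

In a multilevel Monte Carlo setting, fields like `K` would be sampled at several grid resolutions, with most samples taken on the cheap coarse levels.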
Amezcua, Carlos A; Szabo, Christina M
2013-06-01
In this work, we applied nuclear magnetic resonance (NMR) spectroscopy to rapidly assess higher order structure (HOS) comparability in protein samples. Using a variation of the NMR fingerprinting approach described by Panjwani et al. [2010. J Pharm Sci 99(8):3334-3342], three nonglycosylated proteins spanning a molecular weight range of 6.5-67 kDa were analyzed. A simple statistical method termed easy comparability of HOS by NMR (ECHOS-NMR) was developed. In this method, HOS similarity between two samples is measured via the correlation coefficient derived from linear regression analysis of binned NMR spectra. Applications of this method include HOS comparability assessment during new product development, manufacturing process changes, supplier changes, next-generation products, and the development of biosimilars to name just a few. We foresee ECHOS-NMR becoming a routine technique applied to comparability exercises used to complement data from other analytical techniques. Copyright © 2013 Wiley Periodicals, Inc.
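The ECHOS-NMR similarity measure reduces to a Pearson correlation of binned spectra. Below is a sketch using synthetic 1D spectra; the number of bins, peak shapes, and noise level are illustrative assumptions:

```python
import numpy as np

def binned_correlation(spectrum_a, spectrum_b, n_bins):
    """Pearson correlation of two spectra after integrating into equal-width bins,
    in the spirit of the ECHOS-NMR comparability metric described above."""
    a = np.array([c.sum() for c in np.array_split(np.asarray(spectrum_a, float), n_bins)])
    b = np.array([c.sum() for c in np.array_split(np.asarray(spectrum_b, float), n_bins)])
    return np.corrcoef(a, b)[0, 1]

# Two simulated 1D spectra: a reference and a noisy copy of the same sample.
ppm = np.linspace(0, 10, 2000)
peak = lambda c: np.exp(-((ppm - c) / 0.05) ** 2)
ref = peak(2.0) + peak(5.0) + peak(7.5)
similar = ref + 0.01 * np.random.default_rng(0).normal(size=ppm.size)
r = binned_correlation(ref, similar, n_bins=100)
```

A correlation near 1 would indicate comparable higher order structure; a sample with shifted or missing peaks would score lower.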
Modeling Surface Roughness to Estimate Surface Moisture Using Radarsat-2 Quad Polarimetric SAR Data
NASA Astrophysics Data System (ADS)
Nurtyawan, R.; Saepuloh, A.; Budiharto, A.; Wikantika, K.
2016-08-01
Microwave backscattering from the earth's surface depends on several parameters such as surface roughness and the dielectric constant of surface materials. These two parameters, related to water content and porosity, are crucial for estimating soil moisture. Soil moisture is an important parameter for ecological study and also a factor in maintaining the energy balance of the land surface and atmosphere. Direct roughness measurements over a large area require extra time and cost, and the heterogeneity of roughness scales in applications such as hydrology, climate, and ecology is a problem that can lead to modeling inaccuracies. In this study, we modeled surface roughness using Radarsat-2 quad Polarimetric Synthetic Aperture Radar (PolSAR) data. Statistical approaches to field roughness measurements were used to generate an appropriate roughness model. This modeling uses a physical SAR approach to predict the radar backscattering coefficient in terms of the radar configuration (wavelength, polarization, and incidence angle) and soil parameters (surface roughness and dielectric constant). The surface roughness value is calculated using a modified version of the 1996 Campbell and Shepard model. The modification was applied by incorporating the backscattering coefficients (σ°) of the quad polarizations HH, HV and VV. To obtain an empirical surface roughness model from SAR backscattering intensity, we used forty-five sample points from field roughness measurements. We selected paddy fields in Indramayu district, West Java, Indonesia as the study area. This area was selected due to the intensive decrease of rice productivity in the Northern Coast region of West Java. A third-degree polynomial is the most suitable fit, with a coefficient of determination (R²) of about 0.82 and an RMSE of about 1.18 cm. Therefore, this model is used as the basis to generate the map of surface roughness.
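The polynomial fit with its R² and RMSE diagnostics can be reproduced schematically as follows. The synthetic backscatter/roughness data below stand in for the 45 field samples, which are not reproduced here:

```python
import numpy as np

def fit_roughness_model(sigma0, roughness, degree=3):
    """Least-squares polynomial of backscatter (dB) vs. measured roughness (cm),
    returning coefficients, R^2 and RMSE, as in the fit reported above."""
    coeffs = np.polyfit(sigma0, roughness, degree)
    pred = np.polyval(coeffs, sigma0)
    resid = roughness - pred
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((roughness - roughness.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean(resid ** 2))
    return coeffs, r2, rmse

# Synthetic stand-in for the field samples (backscatter in dB, roughness in cm).
rng = np.random.default_rng(0)
sigma0 = np.sort(rng.uniform(-20, -5, 45))
rough = 0.01 * (sigma0 + 20) ** 2 + 1.0 + rng.normal(scale=0.3, size=45)
coeffs, r2, rmse = fit_roughness_model(sigma0, rough)
```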
Gravity field error analysis for pendulum formations by a semi-analytical approach
NASA Astrophysics Data System (ADS)
Li, Huishu; Reubelt, Tilo; Antoni, Markus; Sneeuw, Nico
2017-03-01
Many geoscience disciplines push for ever higher requirements on accuracy, homogeneity and time- and space-resolution of the Earth's gravity field. Apart from better instruments or new observables, alternative satellite formations could improve the signal and error structure compared to Grace. One possibility to increase the sensitivity and isotropy by adding cross-track information is a pair of satellites flying in a pendulum formation. This formation contains two satellites which have different ascending nodes and arguments of latitude, but have the same orbital height and inclination. In this study, the semi-analytical approach for efficient pre-mission error assessment is presented, and the transfer coefficients of range, range-rate and range-acceleration gravitational perturbations are derived analytically for the pendulum formation considering a set of opening angles. The new challenge is the time variations of the opening angle and the range, leading to temporally variable transfer coefficients. This is solved by Fourier expansion of the sine/cosine of the opening angle and the central angle. The transfer coefficients are further applied to assess the error patterns which are caused by different orbital parameters. The simulation results indicate that a significant improvement in accuracy and isotropy is obtained for small and medium initial opening angles of single polar pendulums, compared to Grace. The optimal initial opening angles are 45° and 15° for accuracy and isotropy, respectively. For a Bender configuration, which is constituted by a polar Grace and an inclined pendulum in this paper, the behaviour of results is dependent on the inclination (prograde vs. retrograde) and on the relative baseline orientation (left or right leading). The simulation for a sun-synchronous orbit shows better results for the left leading case.
Liu; Wene
2000-09-01
An empirical model describing the relationship between the partition coefficients (K) of perfume materials in the solid-phase microextraction (SPME) fiber stationary phase and the Linearly Temperature Programmed Retention Index (LTPRI) is obtained. This is established using a mixture of eleven selected fragrance materials spiked in mineral oil at different concentration levels to simulate liquid laundry detergent matrices. Headspace concentrations of the materials are measured using both static headspace and SPME-gas chromatography analysis. The empirical model is tested by measuring the K values for fourteen perfume materials experimentally. Three of the calculated K values are within 2-19% of the measured K value, and the other eleven calculated K values are within 22-59%. This range of deviation is understandable because a diverse mixture was used to cover most chemical functionalities in order to make the model generally applicable. Better prediction accuracy is expected when a model is established using a specific category of compounds, such as hydrocarbons or aromatics. The use of this method to estimate distribution constants of fragrance materials in liquid matrices is demonstrated. The headspace SPME using the established relationship between the gas-liquid partition coefficient and the LTPRI is applied to measure the headspace concentration of fragrances. It is demonstrated that this approach can be used to monitor the headspace perfume profiles over consumer laundry and cleaning products. This method can provide high sample throughput, reproducibility, simplicity, and accuracy for many applications for screening major fragrance materials over consumer products. The approach demonstrated here can be used to translate headspace SPME results into true static headspace concentration profiles. 
This translation is critical for obtaining the gas-phase composition by correcting for the inherent differential partitioning of analytes into the fiber stationary phase.
Malo de Molina, Paula; Alvarez, Fernando; Frick, Bernhard; Wildes, Andrew; Arbe, Arantxa; Colmenero, Juan
2017-10-18
We applied quasielastic neutron scattering (QENS) techniques to samples with two different contrasts (deuterated solute/hydrogenated solvent and the opposite label) to selectively study the component dynamics of proline/water solutions. Results on diluted and concentrated solutions (31 and 6 water molecules/proline molecule, respectively) were analyzed in terms of the susceptibility and considering a recently proposed model for water dynamics [Arbe et al., Phys. Rev. Lett., 2016, 117, 185501] which includes vibrations and the convolution of localized motions and diffusion. We found that proline molecules not only reduce the average diffusion coefficient of water but also extend the time/frequency range of the crossover region ('cage') between the vibrations and purely diffusive behavior. For the high proline concentration we also found experimental evidence of water heterogeneous dynamics and a distribution of diffusion coefficients. Complementary molecular dynamics simulations show that water molecules start to perform rotational diffusion when they escape the cage regime but before the purely diffusive behavior is established. The rotational diffusion regime is also retarded by the presence of proline molecules. On the other hand, a strong coupling between proline and water diffusive dynamics which persists with decreasing temperature is directly observed using QENS. Not only are the temperature dependences of the diffusion coefficients of both components the same, but their absolute values also approach each other with increasing proline concentration. We compared our results with those reported using other techniques, in particular using dielectric spectroscopy (DS). A simple approach based on molecular hydrodynamics and a molecular treatment of DS allows rationalizing the a priori puzzling inconsistency between QENS and dielectric results regarding the dynamic coupling of the two components. 
The interpretation proposed is based on general grounds and therefore should be applicable to other biomolecular solutions.
Distinguishing dose, focus, and blur for lithography characterization and control
NASA Astrophysics Data System (ADS)
Ausschnitt, Christopher P.; Brunner, Timothy A.
2007-03-01
We derive a physical model to describe the dependence of pattern dimensions on dose, defocus and blur. The coefficients of our model are constants of a given lithographic process. Model inversion applied to dimensional measurements then determines effective dose, defocus and blur for wafers patterned with the same process. In practice, our approach entails the measurement of proximate grating targets of differing dose and focus sensitivity. In our embodiment, the measured attribute of one target is exclusively sensitive to dose, whereas the measured attributes of a second target are distinctly sensitive to defocus and blur. On step-and-scan exposure tools, z-blur is varied in a controlled manner by adjusting the across slit tilt of the image plane. The effects of z-blur and x,y-blur are shown to be equivalent. Furthermore, the exposure slit width is shown to determine the tilt response of the grating attributes. Thus, the response of the measured attributes can be characterized by a conventional focus-exposure matrix (FEM), over which the exposure tool settings are intentionally changed. The model coefficients are determined by a fit to the measured FEM response. The model then fully defines the response for wafers processed under "fixed" dose, focus and blur conditions. Model inversion applied to measurements from the same targets on all such wafers enables the simultaneous determination of effective dose and focus/tilt (DaFT) at each measurement site.
Effect of air turbulence on gas transport in soil; comparison of approaches
NASA Astrophysics Data System (ADS)
Pourbakhtiar, Alireza; Papadikis, Konstantinos; Poulsen, Tjalfe; Bridge, Jonathan; Wilkinson, Stephen
2017-04-01
Greenhouse gases play a key role in global warming. Soil is a source of greenhouse gases such as methane (CH4), and radon (Rn), a radioactive gas, can emanate from the subsurface into the atmosphere, raising health concerns in urban areas. Temperature, humidity, air pressure and vegetation can affect gas emissions in soil (Oertel et al., 2016). Wind-induced pressure fluctuations have been shown in many cases to be an important factor in gas transport through soil and other porous media, for example in landfill gas emissions (Poulsen et al., 2001). We used an experimental apparatus to measure the effect of controlled air turbulence on gas transport in soil in relation to sample depth. Two approaches for measuring the effect of wind turbulence on gas transport were applied and compared. Experiments were carried out with diffusion of CO2 and air as tracer gases at average vertical wind speeds of 0 to 0.83 m s-1. In approach A, six different sample thicknesses from 5 to 30 cm were used under a total of four different wind conditions with different speeds and fluctuations. In approach B, a sample of constant depth was used, with five oxygen sensors placed inside the sample at different depths. A total of 111 experiments were carried out. Gas transport is described by the advection-dispersion equation and quantified as a dispersion coefficient. Oxygen breakthrough curves as a function of distance to the wind-exposed sample surface were computed numerically with an explicit forward-time, central-space finite-difference model to evaluate gas transport. We showed that wind-induced turbulent fluctuations are an important factor in gas transport and can increase it by an average factor of 45 relative to molecular diffusion under zero-wind conditions.
Comparison of the two experimental strategies indicated that constant-depth samples (approach B) are more reliable for measuring gas transport under the influence of wind turbulence: they are closer to natural conditions, and the lower soil layers affect the diffusion and dispersion coefficients in the upper layers. Power spectral density was calculated for all wind conditions to determine the vibration strength of each wind speed and its relation to gas transport. Differential pressures for the different wind conditions were measured at the two sides of the samples. References Oertel, C., Matschullat, J., Zurba, K., Zimmermann, F. & Erasmi, S. 2016. Greenhouse gas emissions from soils—A review. Chemie der Erde - Geochemistry, 76, 327-352. Poulsen, T.G., Christophersen, M., Moldrup, P. & Kjeldsen, P. 2001. Modeling lateral gas transport in soil adjacent to old landfill. Journal of Environmental Engineering (ASCE), 127, 145-153.
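The explicit forward-time, central-space scheme for the one-dimensional advection-dispersion equation mentioned above can be sketched as follows. The grid spacing, coefficients, and boundary values are illustrative assumptions:

```python
import numpy as np

def advect_disperse(c0, v, D, dx, dt, n_steps):
    """Explicit forward-time, central-space integration of
    dc/dt = -v dc/dx + D d2c/dx2, with fixed concentrations at both ends."""
    c = np.asarray(c0, dtype=float).copy()
    assert D * dt / dx**2 <= 0.5, "FTCS stability limit violated"
    for _ in range(n_steps):
        adv = -v * (c[2:] - c[:-2]) / (2 * dx)                 # central advection term
        disp = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2      # central dispersion term
        c[1:-1] += dt * (adv + disp)
    return c

# Oxygen ingress into an initially gas-free column, surface held at c = 1.
c0 = np.zeros(101)
c0[0] = 1.0
c = advect_disperse(c0, v=0.0, D=1e-5, dx=0.01, dt=1.0, n_steps=2000)
```

Fitting such simulated breakthrough curves to the measured oxygen profiles yields the effective dispersion coefficient D for each wind condition.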
Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application
NASA Astrophysics Data System (ADS)
Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni
2018-06-01
Parameterizing the dissipative effects of small, unresolved coastal features is fundamental to improving the skill of wave models. The established technique for dealing with this problem consists in reducing the amount of energy advected within the propagation scheme, and is currently available only for regular grids. To find a more general approach, Mentaschi et al. (2015b) formulated a technique based on source terms and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and any type of mesh. Here we developed an open-source library for the estimation of the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skill comparable to, and sometimes better than, the established propagation-based technique.
Abrahams, Elihu; Wölfle, Peter
2012-01-01
We use the recently developed critical quasiparticle theory to derive the scaling behavior associated with a quantum critical point in a correlated metal. This is applied to the magnetic-field induced quantum critical point observed in YbRh2Si2, for which we also derive the critical behavior of the specific heat, resistivity, thermopower, magnetization and susceptibility, the Grüneisen coefficient, and the thermal expansion coefficient. The theory accounts very well for the available experimental results. PMID:22331893
NASA Astrophysics Data System (ADS)
Pitoňák, Martin; Šprlák, Michal; Tenzer, Robert
2017-05-01
We investigate the numerical performance of four different schemes applied to a regional recovery of the gravity anomalies from the third-order gravitational tensor components (assumed to be observable in the future) synthesized at the satellite altitude of 200 km above the mean sphere. The first approach is based on applying a regional inversion without modelling the far-zone contribution or long-wavelength support. In the second approach we separate the integral formulas into two parts, that is, the effects of the third-order disturbing tensor data within the near and far zones. Whereas the far-zone contribution is evaluated using an existing global geopotential model (GGM) with spectral weights given by truncation error coefficients, the near-zone contribution is solved by applying a regional inversion. We then extend this approach by a smoothing procedure, in which we remove the gravitational contributions of the topographic-isostatic and atmospheric masses. Finally, we apply the remove-compute-restore (r-c-r) scheme in order to reduce the far-zone contribution by subtracting the reference (long-wavelength) gravity field, which is computed for maximum degree 80. We apply these four numerical schemes to a regional recovery of the gravity anomalies from individual components of the third-order gravitational tensor as well as from their combinations, while applying two different levels of white noise. We validated our results with respect to gravity anomalies evaluated at the mean sphere from EGM2008 up to degree 250. Not surprisingly, a better fit in terms of standard deviation (STD) was attained with the lower noise level.
The worst results were obtained with the classical approach: the STD values of our solution from Tzzz are 1.705 mGal (noise standard deviation 0.01 × 10^-15 m^-1 s^-2) and 2.005 mGal (noise standard deviation 0.05 × 10^-15 m^-1 s^-2). The best results came from the r-c-r scheme up to degree 80, for which the STD fit of the gravity anomalies from Tzzz with respect to the same counterpart from EGM2008 is 0.510 mGal (0.01 × 10^-15 m^-1 s^-2) and 1.190 mGal (0.05 × 10^-15 m^-1 s^-2).
Estimating JPEG2000 compression for image forensics using Benford's Law
NASA Astrophysics Data System (ADS)
Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.
2010-05-01
With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content are becoming increasingly important, and a growing concern to many government and commercial sectors. Image forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with digital watermarking. Benford's Law was first introduced to analyse the probability distribution of the first digits (1-9) of natural data, and has since been applied to accounting forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. [5] proposed a generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG compressed images. In our previous work, we proposed a framework incorporating the generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality compared to JPEG compression. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression on 1338 test images, the initial results indicate that the first-digit probability of the DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rate of an image can also be derived, and verified with the help of a divergence factor, which shows the deviation between the observed probabilities and Benford's Law. Based on the 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, which is lower than that of DCT coefficients at 0.0034. However, the mean divergence for JPEG2000 images at a compression rate of 0.1 is 0.0108, which is much higher than that of uncompressed DWT coefficients. This result clearly indicates the presence of compression in the image. Moreover, we compare the first-digit probabilities and divergences among JPEG2000 compression rates of 0.1, 0.3, 0.5 and 0.9. The initial results show that the differences among them could be used for further analysis to estimate unknown JPEG2000 compression rates.
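The first-digit statistics and divergence factor described above can be sketched as follows. The divergence here is a simple mean squared deviation from the Benford distribution, which may differ from the exact factor used in the paper:

```python
import numpy as np

def first_digit_probs(values):
    """Empirical probability of the leading digit 1-9 for nonzero values."""
    v = np.abs(np.asarray(values, dtype=float))
    v = v[v > 0]
    digits = (v / 10.0 ** np.floor(np.log10(v))).astype(int)  # leading digit 1..9
    return np.bincount(digits, minlength=10)[1:10] / digits.size

def benford_divergence(values):
    """Mean squared deviation from Benford's distribution, a simple divergence factor."""
    benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
    return np.mean((first_digit_probs(values) - benford) ** 2)

# Log-uniform magnitudes follow Benford's Law closely, so the divergence is small;
# quantized (compressed) coefficients would deviate and raise the divergence.
rng = np.random.default_rng(0)
coeffs = 10.0 ** rng.uniform(-3, 3, 100000)
d = benford_divergence(coeffs)
```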
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hub, Martina; Thieke, Christian; Kessler, Marc L.
2012-04-15
Purpose: In fractionated radiation therapy, image guidance with daily tomographic imaging is becoming more and more of a clinical routine. In principle, this allows for daily computation of the delivered dose and for accumulation of these daily dose distributions to determine the total dose actually delivered to the patient. However, uncertainties in the mapping of the images can translate into errors of the accumulated total dose, depending on the dose gradient. In this work, an approach to estimate the uncertainty of mapping between medical images is proposed that identifies areas bearing a significant risk of inaccurate dose accumulation. Methods: This method accounts for the geometric uncertainty of image registration and the heterogeneity of the dose distribution which is to be mapped. Its performance is demonstrated in the context of dose mapping based on b-spline registration. It is based on evaluating the sensitivity of the dose mapping to variations of the b-spline coefficients, combined with evaluating the sensitivity of the registration metric with respect to the variations of the coefficients. It was evaluated on patient data that was deformed based on a breathing model, where the ground truth of the deformation, and hence the actual true dose mapping error, is known. Results: The proposed approach has the potential to distinguish areas of the image where dose mapping is likely to be accurate from other areas of the same image, where a larger uncertainty must be expected. Conclusions: An approach to identify areas where dose mapping is likely to be inaccurate was developed and implemented. This method was tested for dose mapping, but it may be applied in the context of other mapping tasks as well.
Hub, Martina; Thieke, Christian; Kessler, Marc L.; Karger, Christian P.
2012-01-01
Purpose: In fractionated radiation therapy, image guidance with daily tomographic imaging becomes more and more clinical routine. In principle, this allows for daily computation of the delivered dose and for accumulation of these daily dose distributions to determine the actually delivered total dose to the patient. However, uncertainties in the mapping of the images can translate into errors of the accumulated total dose, depending on the dose gradient. In this work, an approach to estimate the uncertainty of mapping between medical images is proposed that identifies areas bearing a significant risk of inaccurate dose accumulation. Methods: This method accounts for the geometric uncertainty of image registration and the heterogeneity of the dose distribution, which is to be mapped. Its performance is demonstrated in context of dose mapping based on b-spline registration. It is based on evaluation of the sensitivity of dose mapping to variations of the b-spline coefficients combined with evaluation of the sensitivity of the registration metric with respect to the variations of the coefficients. It was evaluated based on patient data that was deformed based on a breathing model, where the ground truth of the deformation, and hence the actual true dose mapping error, is known. Results: The proposed approach has the potential to distinguish areas of the image where dose mapping is likely to be accurate from other areas of the same image, where a larger uncertainty must be expected. Conclusions: An approach to identify areas where dose mapping is likely to be inaccurate was developed and implemented. This method was tested for dose mapping, but it may be applied in context of other mapping tasks as well. PMID:22482640
NASA Astrophysics Data System (ADS)
Montejo, Ludguier D.; Jia, Jingfei; Kim, Hyun K.; Hielscher, Andreas H.
2013-03-01
We apply the Fourier Transform to absorption and scattering coefficient images of proximal interphalangeal (PIP) joints and evaluate the performance of these coefficients as classifiers using receiver operator characteristic (ROC) curve analysis. We find 25 features that yield a Youden index over 0.7, 3 features that yield a Youden index over 0.8, and 1 feature that yields a Youden index over 0.9 (90.0% sensitivity and 100% specificity). In general, scattering coefficient images yield better one-dimensional classifiers compared to absorption coefficient images. Using features derived from scattering coefficient images we obtain an average Youden index of 0.58 +/- 0.16, and an average Youden index of 0.45 +/- 0.15 when using features from absorption coefficient images.
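The Youden index used above as the classifier figure of merit is J = sensitivity + specificity - 1, maximized over decision thresholds of a one-dimensional feature. A minimal sketch with hypothetical scores and labels (not the PIP-joint data):

```python
def youden_index(scores, labels):
    """Best Youden index J = sensitivity + specificity - 1 over all thresholds.
    labels: 1 = affected, 0 = healthy; higher score = more likely affected."""
    pos = sum(labels)
    neg = len(labels) - pos
    best = 0.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / pos
        spec = 1 - fp / neg
        best = max(best, sens + spec - 1)
    return best

# Perfectly separable feature -> J = 1.0; overlapping feature -> J = 0.5
print(youden_index([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # → 1.0
print(youden_index([0.9, 0.4, 0.6, 0.1], [1, 1, 0, 0]))  # → 0.5
```

A Youden index of 0.9 as reported above corresponds, for example, to 90% sensitivity at 100% specificity.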
A data-driven analysis of the heavy quark diffusion coefficient
NASA Astrophysics Data System (ADS)
Xu, Yingru; Nahrgang, Marlene; Cao, Shanshan; Bernhard, Jonah E.; Bass, Steffen A.
2018-02-01
We apply a Bayesian model-to-data analysis on an improved Langevin framework to estimate the temperature and momentum dependence of the heavy quark diffusion coefficient in the quark-gluon plasma (QGP). The spatial diffusion coefficient is found to have a minimum around 1-3 near Tc in the zero momentum limit, and has a non-trivial momentum dependence. With the estimated diffusion coefficient, our improved Langevin model is able to simultaneously describe the D-meson RAA and v2 in three different systems at RHIC and the LHC.
Volumetric runoff coefficients for experimental rural catchments in the Iberian Peninsula
NASA Astrophysics Data System (ADS)
Taguas, Encarnación V.; Molina, Cecilio; Nadal-Romero, Estela; Ayuso, José L.; Casalí, Javier; Cid, Patricio; Dafonte, Jorge; Duarte, Antonio C.; Farguell, Joaquim; Giménez, Rafael; Giráldez, Juan V.; Gómez, Helena; Gómez, Jose A.; González-Hidalgo, J. Carlos; Keizer, J. Jacob; Lucía, Ana; Mateos, Luciano; Rodríguez-Blanco, M. Luz; Schnabel, Sussane; Serrano-Muela, M. Pilar
2015-04-01
Analysis of runoff and peaks therein is essential for designing hydraulic infrastructures and for assessing the hydrological implications of likely scenarios of climate and/or land-use change. Different methods are available to calculate runoff coefficients. For instance, the runoff coefficient of a catchment can be described either as the ratio of total depth of runoff to total depth of rainfall or as the ratio of peak flow to rainfall intensity for the time of concentration (Dhakal et al. 2012). If the first definition is considered, runoff coefficients represent the global effect of different features and states of catchments and their determination requires a suitable analysis according to the objectives pursued (Chow et al., 1988). In this work, rainfall-runoff data and physical attributes from small rural catchments located in the Iberian Peninsula (Portugal and Spain) were examined in order to compare the representative values of runoff coefficients using three different approaches: i) statistical analysis of rainfall-runoff data and their quantiles (Dhakal et al., 2012); ii) probabilistic runoff coefficients from the rank-ordered pairs of observed rainfall-runoff data and their relationships with rainfall depths (Schaake et al., 1967); iii) finally, a multiple linear model based on geomorphological attributes. These catchments exhibit great variety with respect to their natural settings, such as climate, topography and lithology. We present a preliminary analysis of the rainfall-runoff relationships as well as their variability in a complex context such as the Iberian Peninsula, where contrasting environmental systems coexist. We also discuss reference parameters representing runoff coefficients commonly included in hydrological models.
This study is conceived as the first step to explore further working protocols and modeling gaps in an area as susceptible to climate change as the Iberian Peninsula, where the analysis of runoff coefficients is crucial for designing appropriate decision-making tools for water management. REFERENCES Chow, V.T., Maidment, D.R. and Mays, L.W. 1988. Applied Hydrology. McGraw-Hill, New York. Dhakal, N., Fang, X., Cleveland, T., Thompson, D., Asquith, W., and Marzen, L. 2012. "Estimation of Volumetric Runoff Coefficients for Texas Watersheds Using Land-Use and Rainfall-Runoff Data." Journal of Irrigation and Drainage Engineering, 1(2012):43-54. Schaake, J.C., Geyer, J.C., Knapp, J.W. 1967. Experimental examination of the rational method. J. Hydr. Div. 93(6), 353-70.
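Under the first definition cited above (total runoff depth over total rainfall depth), the volumetric runoff coefficient can be computed either event by event or for the whole record. A minimal sketch with hypothetical event depths, not data from the Iberian catchments:

```python
def runoff_coefficients(rainfall_mm, runoff_mm):
    """Event-wise volumetric runoff coefficients C = Q/P, plus the
    catchment-scale coefficient as the ratio of total depths."""
    event_c = [q / p for p, q in zip(rainfall_mm, runoff_mm) if p > 0]
    global_c = sum(runoff_mm) / sum(rainfall_mm)
    return event_c, global_c

events_p = [20.0, 50.0, 10.0]   # hypothetical event rainfall depths (mm)
events_q = [2.0, 15.0, 0.5]     # hypothetical event runoff depths (mm)
per_event, overall = runoff_coefficients(events_p, events_q)
print(per_event)           # → [0.1, 0.3, 0.05]
print(overall)             # → 0.21875
```

Note that the global ratio weights large events more heavily than the mean of the event-wise coefficients, which is one reason the statistical and probabilistic approaches compared above can give different representative values.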
Transducer having a coupling coefficient higher than its active material
NASA Technical Reports Server (NTRS)
Lesieutre, George A. (Inventor); Davis, Christopher L. (Inventor)
2001-01-01
A coupling coefficient is a measure of the effectiveness with which a shape-changing material (or a device employing such a material) converts the energy in an imposed signal to useful mechanical energy. Device coupling coefficients are properties of the device and, although related to the material coupling coefficients, are generally different from them. This invention describes a class of devices wherein the apparent coupling coefficient can, in principle, approach 1.0, corresponding to perfect electromechanical energy conversion. The key feature of this class of devices is the use of destabilizing mechanical pre-loads to counter inherent stiffness. The approach is illustrated for piezoelectric and thermoelectrically actuated devices. The invention provides a way to simultaneously increase both displacement and force, distinguishing it from alternatives such as motion amplification, and allows transducer designers to achieve substantial performance gains for actuator and sensor devices.
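One way to illustrate the destabilizing-preload idea numerically: if a preload contributes a negative stiffness K_pre that subtracts from both the short-circuit and open-circuit stiffnesses of the device, the apparent coupling factor rises toward 1 as K_pre approaches the short-circuit stiffness. The stiffness-subtraction model below is an assumption for illustration, not the patent's exact derivation:

```python
def apparent_coupling(k2, K_sc, K_pre):
    """Apparent coupling factor squared, assuming a destabilizing preload
    stiffness K_pre reduces both the short-circuit stiffness K_sc and the
    open-circuit stiffness K_oc of the device (illustrative model)."""
    K_oc = K_sc / (1.0 - k2)  # open-circuit stiffness implied by material k^2
    return 1.0 - (K_sc - K_pre) / (K_oc - K_pre)

k2 = 0.09  # hypothetical material coupling k^2
for frac in (0.0, 0.5, 0.9, 0.99):
    # k_app^2 grows toward 1 as the preload approaches the short-circuit stiffness
    print(round(apparent_coupling(k2, 1.0, frac), 4))
```

With no preload the device recovers the material value (0.09 here); at 99% of the short-circuit stiffness the apparent coupling squared exceeds 0.9, consistent with the claim that it can in principle approach perfect conversion.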
Calculation of thermal expansion coefficient of glasses based on topological constraint theory
NASA Astrophysics Data System (ADS)
Zeng, Huidan; Ye, Feng; Li, Xiang; Wang, Ling; Yang, Bin; Chen, Jianding; Zhang, Xianghua; Sun, Luyi
2016-10-01
In this work, the thermal expansion behavior and the structure configuration evolution of glasses were studied. The degree of freedom, based on the topological constraint theory, is correlated with configuration evolution; considering the chemical composition and the configuration change, the analytical equation for calculating the thermal expansion coefficient of glasses from the degree of freedom was derived. The thermal expansion of typical silicate and chalcogenide glasses was examined by calculating their thermal expansion coefficients (TEC) using the approach stated above. The results showed that this approach was energetically favorable for glass materials and revealed the corresponding underlying essence from the viewpoint of configuration entropy. This work establishes a configuration-based methodology to calculate the thermal expansion coefficient of glasses that lack periodic order.
Automation of Endmember Pixel Selection in SEBAL/METRIC Model
NASA Astrophysics Data System (ADS)
Bhattarai, N.; Quackenbush, L. J.; Im, J.; Shaw, S. B.
2015-12-01
The commonly applied surface energy balance for land (SEBAL) and its variant, mapping evapotranspiration (ET) at high resolution with internalized calibration (METRIC) models require manual selection of endmember (i.e. hot and cold) pixels to calibrate sensible heat flux. Current approaches for automating this process are based on statistical methods and do not appear to be robust under varying climate conditions and seasons. In this paper, we introduce a new approach based on simple machine learning tools and search algorithms that provides an automatic and time efficient way of identifying endmember pixels for use in these models. The fully automated models were applied on over 100 cloud-free Landsat images with each image covering several eddy covariance flux sites in Florida and Oklahoma. Observed land surface temperatures at automatically identified hot and cold pixels were within 0.5% of those from pixels manually identified by an experienced operator (coefficient of determination, R2, ≥ 0.92, Nash-Sutcliffe efficiency, NSE, ≥ 0.92, and root mean squared error, RMSE, ≤ 1.67 K). Daily ET estimates derived from the automated SEBAL and METRIC models were in good agreement with their manual counterparts (e.g., NSE ≥ 0.91 and RMSE ≤ 0.35 mm day-1). Automated and manual pixel selection resulted in similar estimates of observed ET across all sites. The proposed approach should reduce time demands for applying SEBAL/METRIC models and allow for their more widespread and frequent use. This automation can also reduce potential bias that could be introduced by an inexperienced operator and extend the domain of the models to new users.
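The agreement statistics quoted above (NSE and RMSE) can be computed as follows; the daily ET values are hypothetical, not the Landsat results:

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 = perfect agreement,
    0 = no better than predicting the observed mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def rmse(obs, sim):
    """Root mean squared error, in the units of the data."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

obs = [1.0, 2.0, 3.0, 4.0]   # hypothetical manual-calibration ET (mm/day)
sim = [1.1, 1.9, 3.2, 3.8]   # hypothetical automated-calibration ET (mm/day)
print(round(nse(obs, sim), 3), round(rmse(obs, sim), 3))  # → 0.98 0.158
```

Values such as NSE ≥ 0.91 and RMSE ≤ 0.35 mm/day in the abstract indicate the automated and manual calibrations are nearly interchangeable by these criteria.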
ERIC Educational Resources Information Center
Camporesi, Roberto
2011-01-01
We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary: we assume only a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of…
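The central objects of the impulsive response method can be sketched as follows (standard formulas for a monic constant-coefficient operator, stated here for context; the factorization over the roots is what the abstract refers to):

```latex
% Factor the operator over its roots r_1, ..., r_n:
\[
  p\!\left(\tfrac{d}{dt}\right)y
  = \left(\tfrac{d}{dt}-r_1\right)\cdots\left(\tfrac{d}{dt}-r_n\right)y = f(t).
\]
% The impulsive response g solves the homogeneous equation p(d/dt)g = 0 with
% g(0) = \dots = g^{(n-2)}(0) = 0 and g^{(n-1)}(0) = 1;
% a particular solution is then the convolution
\[
  y_p(t) = \int_{t_0}^{t} g(t-s)\, f(s)\, ds.
\]
```

Factoring the operator lets g be built root by root from first-order problems, which is why only calculus and linear algebra are needed.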
Interaction between photons and leaf canopies
NASA Technical Reports Server (NTRS)
Knyazikhin, Yuri V.; Marshak, Alexander L.; Myneni, Ranga B.
1991-01-01
The physics of neutral particle interaction for photons traveling in media consisting of finite-dimensional scattering centers that cross-shade mutually is investigated. A leaf canopy is a typical example of such media. The leaf canopy is idealized as a binary medium consisting of randomly distributed gaps (voids) and regions with phytoelements (turbid phytomedium). In this approach, the leaf canopy is represented by a combination of all possible open oriented spheres. The mathematical approach for characterizing the structure of the host medium is considered. The extinction coefficient at any phase-space location in a leaf canopy is the product of the extinction coefficient in the turbid phytomedium and the probability of the absence of gaps at that location. Using a similar approach, an expression for the differential scattering coefficient is derived.
Culzoni, María J; Aucelio, Ricardo Q; Escandar, Graciela M
2012-08-31
Based on green analytical chemistry principles, an efficient approach was applied for the simultaneous determination of galantamine, a widely used cholinesterase inhibitor for the treatment of Alzheimer's disease, and its major metabolites in serum samples. After a simple serum deproteinization step, second-order data were rapidly obtained (less than 6 min) with a chromatographic system operating in the isocratic regime using ammonium acetate/acetonitrile (94:6) as mobile phase. Detection was made with a fast-scanning spectrofluorimeter, which allowed the efficient collection of data to obtain matrices of fluorescence intensity as a function of retention time and emission wavelength. Successful resolution was achieved in the presence of matrix interferences in serum samples using multivariate curve resolution-alternating least-squares (MCR-ALS). The developed approach allows the quantification of the analytes at levels found in treated patients, without the need of applying either preconcentration or extraction steps. Limits of detection in the range between 8 and 11 ng mL⁻¹, relative prediction errors from 7 to 12% and coefficients of variation from 4 to 7% were achieved. Copyright © 2012 Elsevier B.V. All rights reserved.
Joint multi-object registration and segmentation of left and right cardiac ventricles in 4D cine MRI
NASA Astrophysics Data System (ADS)
Ehrhardt, Jan; Kepp, Timo; Schmidt-Richberg, Alexander; Handels, Heinz
2014-03-01
The diagnosis of cardiac function based on cine MRI requires the segmentation of cardiac structures in the images, but the problem of automatic cardiac segmentation is still open, due to the imaging characteristics of cardiac MR images and the anatomical variability of the heart. In this paper, we present a variational framework for joint segmentation and registration of multiple structures of the heart. To enable the simultaneous segmentation and registration of multiple objects, a shape prior term is introduced into a region competition approach for multi-object level set segmentation. The proposed algorithm is applied for simultaneous segmentation of the myocardium as well as the left and right ventricular blood pool in short axis cine MRI images. Two experiments are performed: first, intra-patient 4D segmentation with a given initial segmentation for one time-point in a 4D sequence, and second, a multi-atlas segmentation strategy is applied to unseen patient data. Evaluation of segmentation accuracy is done by overlap coefficients and surface distances. An evaluation based on clinical 4D cine MRI images of 25 patients shows the benefit of the combined approach compared to sole registration and sole segmentation.
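A common choice of overlap coefficient for evaluating segmentation accuracy is the Dice score, 2|A∩B| / (|A|+|B|); the abstract does not state which overlap coefficient was used, so this is an illustrative stand-in on hypothetical 1-D binary masks:

```python
def dice(mask_a, mask_b):
    """Dice overlap between two binary masks (1 = inside the structure).
    Returns 1.0 for identical masks, 0.0 for disjoint ones."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 and b == 1)
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size

auto   = [0, 1, 1, 1, 0, 0]   # hypothetical automatic myocardium labels
manual = [0, 1, 1, 0, 0, 0]   # hypothetical expert labels
print(round(dice(auto, manual), 2))  # → 0.8
```

For multi-object evaluation as in the paper, the score is simply computed per structure (myocardium, left and right blood pool) and reported per label.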
Optimisation of active suspension control inputs for improved performance of active safety systems
NASA Astrophysics Data System (ADS)
Čorić, Mirko; Deur, Joško; Xu, Li; Tseng, H. Eric; Hrovat, Davor
2018-01-01
A collocation-type control variable optimisation method is used to investigate the extent to which the fully active suspension (FAS) can be applied to improve the vehicle electronic stability control (ESC) performance and reduce the braking distance. First, the optimisation approach is applied to the scenario of vehicle stabilisation during the sine-with-dwell manoeuvre. The results are used to provide insights into different FAS control mechanisms for vehicle performance improvements related to responsiveness and yaw rate error reduction indices. The FAS control performance is compared to performances of the standard ESC system, optimal active brake system and combined FAS and ESC configuration. Second, the optimisation approach is employed for the task of FAS-based braking distance reduction for straight-line vehicle motion. Here, the scenarios of uniform and longitudinally or laterally non-uniform tyre-road friction coefficient are considered. The influences of limited anti-lock braking system (ABS) actuator bandwidth and limit-cycle ABS behaviour are also analysed. The optimisation results indicate that the FAS can provide competitive stabilisation performance and improved agility when compared to the ESC system, and that it can reduce the braking distance by up to 5% for distinctively non-uniform friction conditions.
Feature selection gait-based gender classification under different circumstances
NASA Astrophysics Data System (ADS)
Sabir, Azhin; Al-Jawad, Naseer; Jassim, Sabah
2014-05-01
This paper proposes a gender classification method based on human gait features and investigates two variations in addition to the normal gait sequence: clothing (wearing coats) and carrying a bag. The feature vectors in the proposed system are constructed after applying the wavelet transform. Three different feature sets are proposed in this method. The first, spatio-temporal distance, deals with the distances between different parts of the human body (such as the feet, knees, hands, height and shoulders) during one gait cycle. The second and third feature sets are constructed from the approximation and non-approximation coefficients of the human body, respectively. To extract these two feature sets, we divided the human body into upper and lower parts based on the golden ratio proportion. In this paper, we have adopted a statistical method for constructing the feature vector from the above sets. The dimension of the constructed feature vector is reduced based on the Fisher score as a feature selection method to optimize its discriminating significance. Finally, k-Nearest Neighbor is applied as the classification method. Experimental results demonstrate that our approach provides a more realistic scenario and relatively better performance compared with existing approaches.
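The Fisher score used above for feature selection ranks each feature by between-class scatter over within-class scatter, so well-separated features score high. A minimal sketch with hypothetical feature values (not the gait data):

```python
def fisher_score(feature, labels):
    """Fisher score of a single feature:
    sum_c n_c (mu_c - mu)^2  /  sum_c n_c var_c."""
    overall = sum(feature) / len(feature)
    num = den = 0.0
    for c in set(labels):
        vals = [x for x, y in zip(feature, labels) if y == c]
        mu = sum(vals) / len(vals)
        var = sum((v - mu) ** 2 for v in vals) / len(vals)
        num += len(vals) * (mu - overall) ** 2
        den += len(vals) * var
    return num / den

labels = [0, 0, 0, 1, 1, 1]                  # hypothetical gender labels
good  = [1.0, 1.1, 0.9, 3.0, 3.1, 2.9]       # well-separated between classes
noisy = [1.0, 3.0, 2.0, 1.1, 2.9, 2.1]       # uninformative
print(fisher_score(good, labels) > fisher_score(noisy, labels))  # → True
```

Dimension reduction then amounts to keeping the k highest-scoring features before the k-Nearest Neighbor step.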
Villa-Parra, Ana Cecilia; Bastos-Filho, Teodiano; López-Delis, Alberto; Frizera-Neto, Anselmo; Krishnan, Sridhar
2017-01-01
This work presents a new on-line adaptive filter, based on a similarity analysis between standard electrode locations, designed to reduce artifacts and common interference throughout electroencephalography (EEG) signals while preserving the useful information. Standard deviation and the Concordance Correlation Coefficient (CCC) between target electrodes and their corresponding neighbor electrodes are analyzed on sliding windows to select those neighbors that are highly correlated. Afterwards, a model based on CCC is applied to provide higher values of weight to those correlated electrodes with lower similarity to the target electrode. The approach was applied to brain-computer interfaces (BCIs) based on Canonical Correlation Analysis (CCA) to recognize 40 targets of steady-state visual evoked potential (SSVEP), providing an accuracy (ACC) of 86.44 ± 2.81%. In addition, also using this approach, features of low frequency were selected in the pre-processing stage of another BCI to recognize gait planning. In this case, the recognition was significantly (p<0.01) improved for most of the subjects (ACC≥74.79%), when compared with other BCIs based on Common Spatial Pattern, Filter Bank-Common Spatial Pattern, and Riemannian Geometry. PMID:29186848
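Lin's Concordance Correlation Coefficient used above differs from the Pearson correlation in that it also penalizes mean and scale differences between the two signals. A minimal sketch with hypothetical channel values:

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient:
    2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) / n
    sy = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

a = [1.0, 2.0, 3.0, 4.0]
print(round(ccc(a, a), 3))                     # → 1.0 (identical channels)
print(round(ccc(a, [2.0, 3.0, 4.0, 5.0]), 3))  # → 0.714 (same shape, offset)
```

The offset example shows why CCC suits electrode-similarity weighting: two perfectly Pearson-correlated channels with a DC offset are not treated as fully concordant.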
JPEG2000-coded image error concealment exploiting convex sets projections.
Atzori, Luigi; Ginesu, Giaime; Raccis, Alessio
2005-04-01
Transmission errors in JPEG2000 can be grouped into three main classes, depending on the affected area: LL, high frequencies at the lower decomposition levels, and high frequencies at the higher decomposition levels. The first type of errors is the most annoying but can be concealed by exploiting the spatial correlation of the signal, as in a number of techniques proposed in the past; the second is less annoying but more difficult to address; the latter is often imperceptible. In this paper, we address the problem of concealing the second class of errors when high bit-planes are damaged by proposing a new approach based on the theory of projections onto convex sets. Accordingly, the error effects are masked by iteratively applying two procedures: low-pass (LP) filtering in the spatial domain and restoration of the uncorrupted wavelet coefficients in the transform domain. It has been observed that uniform LP filtering introduced undesired side effects that offset the advantages. This problem was overcome by applying an adaptive solution, which exploits an edge map to choose the optimal filter mask size. Simulation results demonstrated the efficiency of the proposed approach.
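The alternation described above can be sketched in 1-D. Here a one-level Haar transform stands in for the JPEG2000 wavelet and a fixed 3-tap moving average for the (adaptive) low-pass step, so this is an illustrative analogue of the two procedures rather than the paper's algorithm:

```python
def haar_fwd(x):
    """One-level unnormalized Haar transform: pairwise averages and halved differences."""
    a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return a, d

def haar_inv(a, d):
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def smooth(x):
    """3-tap moving average: the spatial-domain low-pass step."""
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, len(x) - 1)]) / 3
            for i in range(len(x))]

truth = [4.0, 4.2, 4.4, 4.6, 4.8, 5.0, 5.2, 5.4]   # hypothetical signal
a_true, d_true = haar_fwd(truth)
d_bad = [0.0] * len(d_true)        # detail coefficients lost in transmission
x = haar_inv(a_true, d_bad)        # initial blocky reconstruction
for _ in range(10):                # alternate the two procedures
    x = smooth(x)                  # spatial-domain smoothing
    a, d = haar_fwd(x)
    x = haar_inv(a_true, d)        # restore the uncorrupted coefficients

err0 = sum((t - v) ** 2 for t, v in zip(truth, haar_inv(a_true, d_bad)))
err1 = sum((t - v) ** 2 for t, v in zip(truth, x))
print(err1 < err0)  # → True: the concealed signal is closer to the original
```

Each pass re-estimates the missing detail coefficients from the smoothed signal while pinning the known coefficients, which is exactly the spirit of the convex-set projections.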
Control analysis for autonomously oscillating biochemical networks.
Reijenga, Karin A; Westerhoff, Hans V; Kholodenko, Boris N; Snoep, Jacky L
2002-01-01
It has hitherto not been possible to analyze the control of oscillatory dynamic cellular processes in other than qualitative ways. The control coefficients, used in metabolic control analyses of steady states, cannot be applied directly to dynamic systems. We here illustrate a way out of this limitation that uses Fourier transforms to convert the time domain into the stationary frequency domain, and then analyses the control of limit cycle oscillations. In addition to the already known summation theorems for frequency and amplitude, we reveal summation theorems that apply to the control of average value, waveform, and phase differences of the oscillations. The approach is made fully operational in an analysis of yeast glycolytic oscillations. It follows an experimental approach, sampling from the model output and using discrete Fourier transforms of this data set. It quantifies the control of various aspects of the oscillations by the external glucose concentration and by various internal molecular processes. We show that the control of various oscillatory properties is distributed over the system enzymes in ways that differ among those properties. The models that are described in this paper can be accessed on http://jjj.biochem.sun.ac.za. PMID:11751299
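The conversion to the stationary frequency domain described above can be illustrated with a discrete Fourier transform of a sampled oscillation; the signal below is synthetic, not the glycolytic model output:

```python
import cmath, math

def dft(samples):
    """Plain O(n^2) discrete Fourier transform."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

# Hypothetical limit-cycle output: 5 cycles over the window, offset 1.0, amplitude 0.5
n = 64
x = [1.0 + 0.5 * math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
X = dft(x)
mean_value = X[0].real / n                            # "average value" of the oscillation
k_peak = max(range(1, n // 2), key=lambda k: abs(X[k]))  # dominant frequency bin
amplitude = 2 * abs(X[k_peak]) / n
print(round(mean_value, 3), k_peak, round(amplitude, 3))  # → 1.0 5 0.5
```

Control coefficients for frequency, amplitude and average value can then be obtained by perturbing a parameter (e.g. external glucose), recomputing these quantities, and taking scaled sensitivities.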
ERIC Educational Resources Information Center
Weber, Deborah A.
Greater understanding and use of confidence intervals is central to changes in statistical practice (G. Cumming and S. Finch, 2001). Reliability coefficients and confidence intervals for reliability coefficients can be computed using a variety of methods. Estimating confidence intervals includes both central and noncentral distribution approaches.…
Strength of Wet and Dry Montmorillonite
NASA Astrophysics Data System (ADS)
Morrow, C. A.; Lockner, D. A.; Moore, D. E.
2015-12-01
Montmorillonite, an expandable smectite clay, is a common mineral in fault zones to a depth of around 3 km. Its low strength relative to other common fault gouge minerals is important in many models of fault rheology. However, the coefficient of friction is not well constrained in the literature due to the difficulty of establishing fully drained or fully dried states in the laboratory. For instance, in some reported studies, samples were either partially saturated or possibly over pressured, leading to wide variability in reported shear strength. In this study, the coefficient of friction, μ, of both saturated and oven-dried (at 150°C) Na-montmorillonite was measured at normal stresses up to 680 MPa at room temperature and shortening rates from 1.0 to 0.01 μm/s. Care was taken to shear saturated samples slowly enough to avoid pore fluid overpressure in the clay layers. Coefficients of friction are reported after 8 mm of axial displacement in a triaxial apparatus on saw-cut samples containing a layer of montmorillonite gouge, with either granite or sandstone driving blocks. For saturated samples, μ increased from around 0.1 at low pressure to 0.25 at the highest test pressures. In contrast, values for oven-dried samples decreased asymptotically from approximately 0.78 at 10 MPa normal stress to around 0.45 at 400-680 MPa. While wet and dry strengths approached each other with increasing effective normal stress, wet strength remained only about half of the dry strength at 600 MPa effective normal stress. The increased coefficient of friction can be correlated with a reduction in the number of loosely bound lubricating surface water layers on the clay platelets due to applied normal stress under saturated conditions. The steady-state rate dependence of friction, a-b, was positive and dependent on normal stress. 
For saturated samples, a-b increased linearly with applied normal stress from ~0 to 0.004, while for dry samples a-b decreased with increasing normal stress from 0.008 to 0.002. All values were either neutral or rate strengthening, indicating a tendency for stable sliding.
The Evaluation on the Cadmium Net Concentration for Soil Ecosystems.
Yao, Yu; Wang, Pei-Fang; Wang, Chao; Hou, Jun; Miao, Ling-Zhan
2017-03-12
Yixing, known as the "City of Ceramics", is facing a new dilemma: a raw material crisis. Cadmium (Cd) exists in extremely high concentrations in soil due to the considerable input of industrial wastewater into the soil ecosystem. The in situ technique of diffusive gradients in thin film (DGT), the ex situ static equilibrium approach (HAc, EDTA and CaCl2), and the dissolved concentration in soil solution, as well as microwave digestion, were applied to predict the Cd bioavailability of soil, aiming to provide a robust and accurate method for Cd bioavailability evaluation in Yixing. Moreover, the typical local cash crops, paddy and zizania aquatica, were selected for Cd accumulation, aiming to select the ideal plants with tolerance to the soil Cd contamination. The results indicated that the biomasses of the two applied plants were sufficiently sensitive to reflect the stark regional differences of different sampling sites. The zizania aquatica could effectively reduce the total Cd concentration, as indicated by the high accumulation coefficients. However, the fact that the zizania aquatica has extremely high transfer coefficients, and its stem, as the edible part, might accumulate large amounts of Cd, led to the conclusion that zizania aquatica was not an ideal cash crop in Yixing. Furthermore, the labile Cd concentrations which were obtained by the DGT technique and dissolved in the soil solution showed a significant correlation with the Cd concentrations of the biota accumulation. However, the ex situ methods and the microwave digestion-obtained Cd concentrations showed a poor correlation with the accumulated Cd concentration in plant tissue. Correspondingly, the multiple linear regression models were built for fundamental analysis of the performance of different methods available for Cd bioavailability evaluation.
The correlation coefficients of DGT obtained by the improved multiple linear regression model did not significantly improve compared to the coefficients obtained by the simple linear regression model. The results revealed that DGT was a robust measurement, which could obtain the labile Cd concentrations independent of the physicochemical features' variation in the soil ecosystem. Consequently, these findings provide stronger evidence that DGT is an effective and ideal tool for labile Cd evaluation in Yixing.
The Evaluation on the Cadmium Net Concentration for Soil Ecosystems
Yao, Yu; Wang, Pei-Fang; Wang, Chao; Hou, Jun; Miao, Ling-Zhan
2017-01-01
Yixing, known as the “City of Ceramics”, is facing a new dilemma: a raw material crisis. Cadmium (Cd) exists in extremely high concentrations in soil due to the considerable input of industrial wastewater into the soil ecosystem. The in situ technique of diffusive gradients in thin film (DGT), the ex situ static equilibrium approach (HAc, EDTA and CaCl2), and the dissolved concentration in soil solution, as well as microwave digestion, were applied to predict the Cd bioavailability of soil, aiming to provide a robust and accurate method for Cd bioavailability evaluation in Yixing. Moreover, the typical local cash crops—paddy and zizania aquatica—were selected for Cd accumulation, aiming to select the ideal plants with tolerance to the soil Cd contamination. The results indicated that the biomasses of the two applied plants were sufficiently sensitive to reflect the stark regional differences of different sampling sites. The zizania aquatica could effectively reduce the total Cd concentration, as indicated by the high accumulation coefficients. However, the fact that the zizania aquatica has extremely high transfer coefficients, and its stem, as the edible part, might accumulate large amounts of Cd, led to the conclusion that zizania aquatica was not an ideal cash crop in Yixing. Furthermore, the labile Cd concentrations which were obtained by the DGT technique and dissolved in the soil solution showed a significant correlation with the Cd concentrations of the biota accumulation. However, the ex situ methods and the microwave digestion-obtained Cd concentrations showed a poor correlation with the accumulated Cd concentration in plant tissue. Correspondingly, the multiple linear regression models were built for fundamental analysis of the performance of different methods available for Cd bioavailability evaluation. 
The correlation coefficients of DGT obtained by the improved multiple linear regression model did not significantly improve compared to the coefficients obtained by the simple linear regression model. The results revealed that DGT was a robust measurement, which could obtain the labile Cd concentrations independent of the physicochemical features' variation in the soil ecosystem. Consequently, these findings provide stronger evidence that DGT is an effective and ideal tool for labile Cd evaluation in Yixing. PMID:28287500
NASA Astrophysics Data System (ADS)
Bordoloi, Ankur D.; Ding, Liuyang; Martinez, Adam A.; Prestridge, Katherine; Adrian, Ronald J.
2018-07-01
We introduce a new method (piecewise integrated dynamics equation fit, PIDEF) that uses the particle dynamics equation to determine unsteady kinematics and drag coefficient (CD) for a particle in subsonic post-shock flow. The uncertainty of this method is assessed based on simulated trajectories for both quasi-steady and unsteady flow conditions. Traditional piecewise polynomial fitting (PPF) shows high sensitivity to measurement error and the function used to describe CD, creating high levels of relative error (>1) when applied to unsteady shock-accelerated flows. The PIDEF method provides reduced uncertainty in calculations of unsteady acceleration and drag coefficient for both quasi-steady and unsteady flows. This makes PIDEF a preferable method over PPF for complex flows where the temporal response of CD is unknown. We apply PIDEF to experimental measurements of particle trajectories from 8-pulse particle tracking and determine the effect of incident Mach number on relaxation kinematics and drag coefficient of micron-sized particles.
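If the unsteady force terms are neglected, a point estimate of the drag coefficient follows directly from the quasi-steady particle dynamics equation, m a = ½ ρ CD A |u − v|(u − v). The sketch below uses hypothetical post-shock values and illustrates only this quasi-steady inversion, not the PIDEF fitting procedure itself:

```python
import math

def drag_coefficient(m, d, rho_gas, u_gas, v_p, a_p):
    """Quasi-steady CD from measured kinematics:
    CD = 2 m a / (rho A |u - v| (u - v)), A = frontal area of a sphere.
    Unsteady forces (added mass, Basset history) are neglected."""
    A = math.pi * d * d / 4.0
    rel = u_gas - v_p
    return 2.0 * m * a_p / (rho_gas * A * rel * abs(rel))

# Hypothetical values for a micron-scale particle behind a shock (SI units)
m = 5.0e-13      # particle mass, kg
d = 5.0e-6       # particle diameter, m
print(round(drag_coefficient(m, d, 1.2, 100.0, 20.0, 3.0e5), 3))  # → 1.989
```

Applying this pointwise to noisy trajectory data is exactly where the sensitivity problems of polynomial differentiation arise, which motivates the integrated-equation fit of the paper.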
Test-Retest Reliability of Graph Metrics in Functional Brain Networks: A Resting-State fNIRS Study
Niu, Haijing; Li, Zhen; Liao, Xuhong; Wang, Jinhui; Zhao, Tengda; Shu, Ni; Zhao, Xiaohu; He, Yong
2013-01-01
Recent research has demonstrated the feasibility of combining functional near-infrared spectroscopy (fNIRS) and graph theory approaches to explore the topological attributes of human brain networks. However, the test-retest (TRT) reliability of the application of graph metrics to these networks remains to be elucidated. Here, we used resting-state fNIRS and a graph-theoretical approach to systematically address TRT reliability as it applies to various features of human brain networks, including functional connectivity, global network metrics and regional nodal centrality metrics. Eighteen subjects participated in two resting-state fNIRS scan sessions held ∼20 min apart. Functional brain networks were constructed for each subject by computing temporal correlations on three types of hemoglobin concentration information (HbO, HbR, and HbT). This was followed by a graph-theoretical analysis, and then an intraclass correlation coefficient (ICC) was further applied to quantify the TRT reliability of each network metric. We observed that a large proportion of resting-state functional connections (∼90%) exhibited good reliability (0.6< ICC <0.74). For global and nodal measures, reliability was generally threshold-sensitive and varied among both network metrics and hemoglobin concentration signals. Specifically, the majority of global metrics exhibited fair to excellent reliability, with notably higher ICC values for the clustering coefficient (HbO: 0.76; HbR: 0.78; HbT: 0.53) and global efficiency (HbO: 0.76; HbR: 0.70; HbT: 0.78). Similarly, both nodal degree and efficiency measures also showed fair to excellent reliability across nodes (degree: 0.52∼0.84; efficiency: 0.50∼0.84); reliability was concordant across HbO, HbR and HbT and was significantly higher than that of nodal betweenness (0.28∼0.68). Together, our results suggest that most graph-theoretical network metrics derived from fNIRS are TRT reliable and can be used effectively for brain network research. 
This study also provides important guidance on the choice of network metrics of interest for future applied research in developmental and clinical neuroscience. PMID:24039763
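The reliability measure at the core of the study above is the intraclass correlation coefficient. As a hedged illustration (the abstract does not state which ICC variant was used; a two-way ICC(2,1) from an n-subjects × k-sessions matrix is assumed here), the computation might be sketched as:

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1) for an n-subjects x k-sessions matrix Y of a network metric."""
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)   # per-subject means
    col_means = Y.mean(axis=0)   # per-session means
    # Two-way ANOVA sums of squares (no replication)
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((Y - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)              # between-subjects mean square
    ms_c = ss_cols / (k - 1)              # between-sessions mean square
    ms_e = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

Perfect agreement between sessions yields ICC = 1; values above ∼0.75 are conventionally read as excellent reliability.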
Haughey, Simon A; Graham, Stewart F; Cancouët, Emmanuelle; Elliott, Christopher T
2013-02-15
Soya bean products are used widely in the animal feed industry as a protein-based feed ingredient and have been found to be adulterated with melamine. This was highlighted in the Chinese scandal of 2008. Dehulled soya (GM and non-GM), soya hulls and toasted soya were contaminated with melamine and spectra were generated using Near Infrared Reflectance Spectroscopy (NIRS). By applying chemometrics to the spectral data, excellent calibration models and prediction statistics were obtained. The coefficients of determination (R(2)) were found to be 0.89-0.99 depending on the mathematical algorithm used, the data pre-processing applied and the sample type used. The corresponding values for the root mean square error of calibration and prediction were found to be 0.081-0.276% and 0.134-0.368%, respectively, again depending on the chemometric treatment applied to the data and sample type. In addition, adopting a qualitative approach with the spectral data and applying PCA, it was possible to discriminate between the four sample types and also, by generation of Cooman's plots, to distinguish between adulterated and non-adulterated samples. Copyright © 2012 Elsevier Ltd. All rights reserved.
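The calibration statistics quoted above (coefficient of determination and root mean square error) follow directly from predicted versus reference melamine concentrations; a minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def calibration_stats(y_ref, y_pred):
    """Return (R^2, RMSE) for predicted vs. reference concentrations."""
    y_ref = np.asarray(y_ref, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = ((y_ref - y_pred) ** 2).sum()
    ss_tot = ((y_ref - y_ref.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot                  # coefficient of determination
    rmse = np.sqrt(((y_ref - y_pred) ** 2).mean())  # root mean square error
    return r2, rmse
```

Applied to calibration samples this yields the RMSEC, and to an independent prediction set the RMSEP.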
Navier-Stokes predictions of pitch damping for axisymmetric shell using steady coning motion
NASA Technical Reports Server (NTRS)
Weinacht, Paul; Sturek, Walter B.; Schiff, Lewis B.
1991-01-01
Previous theoretical investigations have proposed that the side force and moment acting on a body of revolution in steady coning motion could be related to the pitch-damping force and moment. In the current research effort, this approach is applied to produce predictions of the pitch damping for axisymmetric shell. The flow fields about these projectiles undergoing steady coning motion are successfully computed using a parabolized Navier-Stokes computational approach which makes use of a rotating coordinate frame. The governing equations are modified to include the centrifugal and Coriolis force terms due to the rotating coordinate frame. From the computed flow field, the side moments due to coning motion, spinning motion, and combined spinning and coning motion are used to determine the pitch-damping coefficients. Computations are performed for two generic shell configurations, a secant-ogive-cylinder and a secant-ogive-cylinder-boattail.
HOTEX: An Approach for Global Mapping of Human Built-Up and Settlement Extent
NASA Technical Reports Server (NTRS)
Wang, Panshi; Huang, Chengquan; Tilton, James C.; Tan, Bin; Brown De Colstoun, Eric C.
2017-01-01
Understanding the impacts of urbanization requires accurate and updatable urban extent maps. Here we present an algorithm for mapping urban extent at global scale using Landsat data. An innovative hierarchical object-based texture (HOTex) classification approach was designed to overcome spectral confusion between urban and nonurban land cover types. VIIRS nightlights data and MODIS vegetation index datasets are integrated as high-level features under an object-based framework. We applied the HOTex method to the GLS-2010 Landsat images to produce a global map of human built-up and settlement extent. As shown by visual assessments, our method could effectively map urban extent and generate consistent results using images with inconsistent acquisition time and vegetation phenology. Using scene-level cross validation for results in Europe, we assessed the performance of HOTex and achieved a kappa coefficient of 0.91, compared to 0.74 from a baseline per-pixel classification using spectral information.
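The kappa coefficient used in the accuracy assessment above is computed from a confusion matrix of reference versus predicted classes; a small sketch (the matrix values are hypothetical, not from the study):

```python
import numpy as np

def cohens_kappa(cm):
    """Cohen's kappa from a confusion matrix (rows = reference, cols = predicted)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_obs = np.trace(cm) / n                              # observed agreement
    p_exp = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)
```

Kappa discounts the agreement expected by chance, which is why it is preferred over raw accuracy for class-imbalanced maps such as urban/nonurban extent.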
Andrés, Axel; Rosés, Martí; Bosch, Elisabeth
2014-11-28
In previous work, a two-parameter model to predict chromatographic retention of ionizable analytes in gradient mode was proposed. However, the procedure required some previous experimental work to get a suitable description of the pKa change with the mobile phase composition. In the present study this preliminary experimental work has been simplified. The analyte pKa values have been calculated through equations whose coefficients vary depending on their functional group. This new approach also required further simplifications regarding the retention of the totally neutral and totally ionized species. After the simplifications were applied, new prediction values were obtained and compared with the previously acquired experimental data. The simplified model gave good predictions while saving a significant amount of time and resources. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Schmidt, R. F.
1987-01-01
This document discusses the determination of caustic surfaces in terms of rays, reflectors, and wavefronts. Analytical caustics are obtained as a family of lines, a set of points, and several types of equations for geometries encountered in optics and microwave applications. Standard methods of differential geometry are applied under different approaches: directly to reflector surfaces, and alternatively, to wavefronts, to obtain analytical caustics of two sheets or branches. Gauss/Seidel aberrations are introduced into the wavefront approach, forcing the retention of all three coefficients of both the first- and the second-fundamental forms of differential geometry. An existing method for obtaining caustic surfaces through exploitation of the singularities in flux density is examined, and several constant-intensity contour maps are developed using only the intrinsic Gaussian, mean, and normal curvatures of the reflector. Numerous references are provided for extending the material of the present document to the morphologies of caustics and their associated diffraction patterns.
Probability and Cumulative Density Function Methods for the Stochastic Advection-Reaction Equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barajas-Solano, David A.; Tartakovsky, Alexandre M.
We present a cumulative density function (CDF) method for the probabilistic analysis of $d$-dimensional advection-dominated reactive transport in heterogeneous media. We employ a probabilistic approach in which epistemic uncertainty on the spatial heterogeneity of Darcy-scale transport coefficients is modeled in terms of random fields with given correlation structures. Our proposed CDF method employs a modified Large-Eddy-Diffusivity (LED) approach to close and localize the nonlocal equations governing the one-point PDF and CDF of the concentration field, resulting in a $(d + 1)$ dimensional PDE. Compared to the classical LED localization, the proposed modified LED localization explicitly accounts for the mean-field advective dynamics over the phase space of the PDF and CDF. To illustrate the accuracy of the proposed closure, we apply our CDF method to one-dimensional single-species reactive transport with uncertain, heterogeneous advection velocities and reaction rates modeled as random fields.
An analytic approach to optimize tidal turbine fields
NASA Astrophysics Data System (ADS)
Pelz, P.; Metzler, M.
2013-12-01
Motivated by global warming due to CO2 emissions, various technologies for harvesting energy from renewable sources are being developed. Hydrokinetic turbines are applied to surface watercourses or tidal flows to gain electrical energy. Since the available power for hydrokinetic turbines is proportional to the projected cross-section area, fields of turbines are installed to scale shaft power. Each hydrokinetic turbine of a field can be considered as a disk actuator. In [1], the first author derives the optimal operation point for hydropower in an open channel. The present paper concerns a 0-dimensional model of a disk actuator in an open-channel flow with bypass, as a special case of [1]. Based on the energy equation, the continuity equation and the momentum balance, an analytical approach is made to calculate the coefficient of performance for hydrokinetic turbines with bypass flow as a function of the turbine head and the ratio of turbine width to channel width.
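For context, the classical unblocked actuator-disk result (a textbook baseline, not the paper's bypass-channel model) gives the power coefficient C_p = 4a(1−a)² as a function of the axial induction factor a, with the Betz maximum 16/27 at a = 1/3:

```python
def cp_actuator_disk(a):
    """Power coefficient of an ideal unblocked actuator disk, C_p = 4a(1-a)^2."""
    return 4.0 * a * (1.0 - a) ** 2

# The Betz limit: maximum C_p over the physically meaningful range 0 <= a <= 1/2
betz = max(cp_actuator_disk(i / 1000.0) for i in range(501))
```

Channel blockage and bypass flow, as treated in the paper, modify this bound; the free-stream value 16/27 ≈ 0.593 is only the unconfined reference case.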
Ogier, Augustin; Sdika, Michael; Foure, Alexandre; Le Troter, Arnaud; Bendahan, David
2017-07-01
Manual and automated segmentation of individual muscles in magnetic resonance images has been recognized as challenging given the high variability of shapes between muscles and subjects and the discontinuity or lack of visible boundaries between muscles. In the present study, we propose an original algorithm allowing a semi-automatic transversal propagation of manually-drawn masks. Our strategy was based on several ascending and descending non-linear registration approaches, similar to the estimation of a Lagrangian trajectory, applied to the manual masks. Using several manually-segmented slices, we evaluated our algorithm on the four muscles of the quadriceps femoris group. We mainly showed that our 3D propagated segmentation was very accurate, with an averaged Dice similarity coefficient higher than 0.91 for the minimal manual input of only two manually-segmented slices.
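The accuracy figure above is the Dice similarity coefficient between binary segmentation masks; a minimal sketch:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (1 = perfect overlap)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```

Values above 0.9, as reported in the study, are generally considered excellent overlap for muscle segmentation.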
A Galerkin least squares approach to viscoelastic flow.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Rekha R.; Schunk, Peter Randall
2015-10-01
A Galerkin/least-squares stabilization technique is applied to a discrete Elastic Viscous Stress Splitting formulation for viscoelastic flow. From this, a possible viscoelastic stabilization method is proposed. This method is tested with the flow of an Oldroyd-B fluid past a rigid cylinder, where it is found to produce inaccurate drag coefficients. Furthermore, it fails at relatively low Weissenberg numbers, indicating it is not suited for use as a general algorithm. In addition, a decoupled approach is used as a way of separating the constitutive equation from the rest of the system. A Pressure Poisson equation is used when the velocity and pressure are sought to be decoupled, but this fails to produce a solution when inflow/outflow boundaries are considered. However, a coupled pressure-velocity equation with a decoupled constitutive equation is successful for the flow past a rigid cylinder and seems to be suitable as a general-use algorithm.
The IfE Global Gravity Field Model Recovered from GOCE Orbit and Gradiometer Data
NASA Astrophysics Data System (ADS)
Wu, Hu; Müller, Jürgen; Brieden, Phillip
2015-03-01
An independent global gravity field model is computed from the GOCE orbit and gradiometer data using our own IfE software. We analysed the same data period that was considered for the first released GOCE models. The Acceleration Approach is applied to process the orbit data. The gravity gradients are processed in the framework of the remove-restore technique, by which the low-frequency noise of the original gradients is removed. For the combined solution, the normal equations are summed using the Variance Component Estimation Approach. The result in terms of accumulated geoid height error calculated from the coefficient differences w.r.t. EGM2008 is about 11 cm at D/O 200, which corresponds to the accuracy level of the first released TIM and DIR solutions. This indicates that our IfE model has a performance comparable to the other official GOCE models.
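The accumulated geoid height error quoted above is conventionally obtained from fully normalized spherical harmonic coefficient differences as N = R·sqrt(Σ(ΔC̄² + ΔS̄²)) summed over degrees 2 to N_max; a sketch of this standard formula (not code from the paper, and the reference radius value is an assumption):

```python
import numpy as np

R = 6378136.3  # reference radius in metres (assumed value)

def accumulated_geoid_error(dC, dS, nmax):
    """Accumulated geoid height error from fully normalized coefficient
    differences dC, dS (shape (nmax+1, nmax+1)) w.r.t. a reference model."""
    total = 0.0
    for n in range(2, nmax + 1):
        for m in range(n + 1):
            total += dC[n, m] ** 2 + dS[n, m] ** 2
    return R * np.sqrt(total)
```

With coefficient differences taken w.r.t. EGM2008 up to degree/order 200, this is the quantity reported as ∼11 cm in the abstract.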
NASA Astrophysics Data System (ADS)
Dai, C.; Qin, X. S.; Chen, Y.; Guo, H. C.
2018-06-01
A Gini-coefficient based stochastic optimization (GBSO) model was developed by integrating the hydrological model, water balance model, Gini coefficient and chance-constrained programming (CCP) into a general multi-objective optimization modeling framework for supporting water resources allocation at a watershed scale. The framework was advantageous in reflecting the conflicting equity and benefit objectives for water allocation, maintaining the water balance of watershed, and dealing with system uncertainties. GBSO was solved by the non-dominated sorting Genetic Algorithms-II (NSGA-II), after the parameter uncertainties of the hydrological model have been quantified into the probability distribution of runoff as the inputs of CCP model, and the chance constraints were converted to the corresponding deterministic versions. The proposed model was applied to identify the Pareto optimal water allocation schemes in the Lake Dianchi watershed, China. The optimal Pareto-front results reflected the tradeoff between system benefit (αSB) and Gini coefficient (αG) under different significance levels (i.e. q) and different drought scenarios, which reveals the conflicting nature of equity and efficiency in water allocation problems. A lower q generally implies a lower risk of violating the system constraints and a worse drought intensity scenario corresponds to less available water resources, both of which would lead to a decreased system benefit and a less equitable water allocation scheme. Thus, the proposed modeling framework could help obtain the Pareto optimal schemes under complexity and ensure that the proposed water allocation solutions are effective for coping with drought conditions, with a proper tradeoff between system benefit and water allocation equity.
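The equity objective above rests on the Gini coefficient of the water allocations; one common discrete formulation is sketched below (an assumption — the paper's exact weighting, e.g. per-capita or per-demand, is not given in the abstract):

```python
import numpy as np

def gini(allocations):
    """Gini coefficient of non-negative allocations (0 = perfect equity)."""
    x = np.sort(np.asarray(allocations, dtype=float))
    n = x.size
    lorenz = np.cumsum(x) / x.sum()   # cumulative share received by poorest i users
    # Equivalent to 1 - 2 * (area under the Lorenz curve)
    return (n + 1 - 2.0 * lorenz.sum()) / n
```

In a multi-objective framework such as GBSO, this value would be minimized alongside the maximization of system benefit, tracing out the Pareto front described above.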
NASA Astrophysics Data System (ADS)
Zhu, Junjie
2017-02-01
Localized surface plasmon resonances arising from the free carriers in copper-deficient copper chalcogenide nanocrystals (Cu2-xE, E=S,Se) endow them with high extinction coefficients in the near-infrared range, which is advantageous for photothermal applications. Although Cu2-xE nanocrystals with different compositions (0 < x ≤ 1) all possess NIR absorption, their extinction coefficients differ significantly due to their distinct valence band free carrier concentrations. Herein, by optimizing the synthetic conditions, we were able to obtain pure covellite phase CuS nanoparticles with maximized free carrier concentration (x=1), which provides an extremely high mass extinction coefficient (up to 60 Lg-1cm-1 at 980 nm and 32.4 Lg-1cm-1 at 800 nm). To the best of our knowledge, these values are the highest among all inorganic nanomaterials. High quality Cu2-xSe can also be obtained with a similar approach. In order to introduce CuS nanocrystals for biomedical applications, we further transferred these nanocrystals into aqueous solution with an amphiphilic polymer and covalently linked them with beta-cyclodextrin. Using host-guest interaction, adamantane-modified RGD peptide can be further anchored on the nanoparticles for the recognition of integrin-positive cancer cells. Together with the high extinction coefficient and outstanding photothermal conversion efficiency (determined to be higher than 40%), these CuS nanocrystals were applied for photothermal therapy of cancer cells and photoacoustic imaging. In addition, the anticancer drug doxorubicin can also be loaded onto the nanoparticles through either hydrophobic or electrostatic interaction for chemotherapy.
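The mass extinction coefficients reported above follow from the Beer–Lambert law, A = ε·c·l, solved for ε; a trivial sketch with illustrative numbers (the measurement values below are assumptions, not data from the study):

```python
def mass_extinction(absorbance, conc_g_per_l, path_cm):
    """Mass extinction coefficient from the Beer-Lambert law, A = eps * c * l.
    Returns eps in L g^-1 cm^-1."""
    return absorbance / (conc_g_per_l * path_cm)
```

For example, an absorbance of 0.6 measured at 0.01 g/L over a 1 cm path corresponds to 60 L g⁻¹ cm⁻¹, the magnitude the abstract reports at 980 nm.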
Inertial frictional ratchets and their load bearing efficiencies
NASA Astrophysics Data System (ADS)
Kharkongor, D.; Reenbohn, W. L.; Mahato, Mangal C.
2018-03-01
We investigate the performance of an inertial frictional ratchet in a sinusoidal potential driven by a sinusoidal external field. The dependence of the performance on the parameters of the sinusoidally varying friction, such as the mean friction coefficient and its phase difference with the potential, is studied in detail. Interestingly, under certain circumstances, the thermodynamic efficiency of the ratchet against an applied load shows a non-monotonic behaviour as a function of the mean friction coefficient. Also, in the large friction ranges, the efficiency is shown to increase with increasing applied load even though the corresponding ratchet current decreases as the applied load increases. These counterintuitive numerical results are explained in the text.
Three Least-Squares Minimization Approaches to Interpret Gravity Data Due to Dipping Faults
NASA Astrophysics Data System (ADS)
Abdelrahman, E. M.; Essa, K. S.
2015-02-01
We have developed three different least-squares minimization approaches to determine, successively, the depth, dip angle, and amplitude coefficient related to the thickness and density contrast of a buried dipping fault from first moving average residual gravity anomalies. By defining the zero-anomaly distance and the anomaly value at the origin of the moving average residual profile, the problem of depth determination is transformed into a constrained nonlinear gravity inversion. After estimating the depth of the fault, the dip angle is estimated by solving a nonlinear inverse problem. Finally, after estimating the depth and dip angle, the amplitude coefficient is determined using a linear equation. This method can be applied to residuals as well as to measured gravity data because it uses the moving average residual gravity anomalies to estimate the model parameters of the faulted structure. The proposed method was tested on noise-corrupted synthetic and real gravity data. In the case of the synthetic data, good results are obtained when errors are given in the zero-anomaly distance and the anomaly value at the origin, and even when the origin is determined approximately. In the case of practical data (Bouguer anomaly over Gazal fault, south Aswan, Egypt), the fault parameters obtained are in good agreement with the actual ones and with those given in the published literature.
Construction and comparison of gene co-expression networks shows complex plant immune responses
López, Camilo; López-Kleine, Liliana
2014-01-01
Gene co-expression networks (GCNs) are graphic representations that depict the coordinated transcription of genes in response to certain stimuli. GCNs provide functional annotations of genes whose function is unknown and are further used in studies of translational functional genomics among species. In this work, a methodology for the reconstruction and comparison of GCNs is presented. This approach was applied using gene expression data that were obtained from immunity experiments in Arabidopsis thaliana, rice, soybean, tomato and cassava. After the evaluation of diverse similarity metrics for the GCN reconstruction, we recommended the mutual information coefficient measurement and a clustering coefficient-based method for similarity threshold selection. To compare GCNs, we proposed a multivariate approach based on Principal Component Analysis (PCA). Branches of plant immunity that were exemplified by each experiment were analyzed in conjunction with the PCA results, suggesting both the robustness and the dynamic nature of the cellular responses. The dynamics of molecular plant responses produced networks with different characteristics that are differentiable using our methodology. The comparison of GCNs from plant pathosystems showed that in response to similar pathogens plants could activate conserved signaling pathways. The results confirmed that the closeness of GCNs projected on the principal component space is indicative of similarity among GCNs. This can also be used to understand global patterns of events triggered during plant immune responses. PMID:25320678
Combined analysis of magnetic and gravity anomalies using normalized source strength (NSS)
NASA Astrophysics Data System (ADS)
Li, L.; Wu, Y.
2017-12-01
Gravity and magnetic fields are potential fields, which leads to inherent non-uniqueness in their interpretation. Combined analysis of magnetic and gravity anomalies based on Poisson's relation is used to determine homologous gravity and magnetic anomalies and decrease the ambiguity. The traditional combined analysis uses the linear regression of the reduction to pole (RTP) magnetic anomaly against the first order vertical derivative of the gravity anomaly, and provides a quantitative or semi-quantitative interpretation by calculating the correlation coefficient, slope and intercept. In the calculation process, due to the effect of remanent magnetization, the RTP anomaly still contains the effect of oblique magnetization. In this case homologous gravity and magnetic anomalies display uncorrelated results in the linear regression calculation. The normalized source strength (NSS) can be transformed from the magnetic tensor matrix, and is insensitive to remanence. Here we present a new combined analysis using the NSS. Based on Poisson's relation, the gravity tensor matrix can be transformed into the pseudomagnetic tensor matrix for the direction of geomagnetic-field magnetization under the homologous condition. The NSS of the pseudomagnetic tensor matrix and of the original magnetic tensor matrix are calculated and a linear regression analysis is carried out. The calculated correlation coefficient, slope and intercept indicate the homology level, the Poisson's ratio and the distribution of remanent magnetization, respectively. We test the approach using a synthetic model under complex magnetization; the results show that it can still distinguish the same source under the condition of strong remanence, and establish the Poisson's ratio. Finally, this approach is applied to field data from China. The results demonstrate that our approach is feasible.
NASA Astrophysics Data System (ADS)
Bucha, Blažej; Janák, Juraj
2013-07-01
We present a novel graphical user interface program GrafLab (GRAvity Field LABoratory) for spherical harmonic synthesis (SHS) created in MATLAB®. This program allows the user to comfortably compute 38 various functionals of the geopotential up to ultra-high degrees and orders of spherical harmonic expansion. For the most difficult part of the SHS, namely the evaluation of the fully normalized associated Legendre functions (fnALFs), we used three different approaches according to the required maximum degree: (i) the standard forward column method (up to maximum degree 1800, in some cases up to degree 2190); (ii) the modified forward column method combined with Horner's scheme (up to maximum degree 2700); (iii) the extended-range arithmetic (up to an arbitrary maximum degree). For the maximum degree 2190, the SHS with fnALFs evaluated using the extended-range arithmetic approach takes only approximately 2-3 times longer than its standard arithmetic counterpart, i.e. the standard forward column method. In GrafLab, the functionals of the geopotential can be evaluated on a regular grid or point-wise, while the input coordinates can either be read from a data file or entered manually. For the computation on a regular grid we decided to apply the lumped coefficients approach due to the significant time-efficiency of this method. Furthermore, if a full variance-covariance matrix of the spherical harmonic coefficients is available, it is possible to compute the commission errors of the functionals. When computing on a regular grid, the output functionals or their commission errors may be depicted on a map using an automatically selected cartographic projection.
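The standard forward column recursion mentioned above can be sketched as follows (a textbook recursion for fully normalized associated Legendre functions, not GrafLab code; it underflows at high degrees, which is why the paper turns to Horner's scheme and extended-range arithmetic beyond degree ∼1800):

```python
import numpy as np

def fnalf(nmax, theta):
    """Fully normalized associated Legendre functions P[n, m] at cos(theta),
    computed by the standard forward column recursion."""
    t, u = np.cos(theta), np.sin(theta)
    P = np.zeros((nmax + 1, nmax + 1))
    P[0, 0] = 1.0
    if nmax >= 1:
        P[1, 0] = np.sqrt(3.0) * t
        P[1, 1] = np.sqrt(3.0) * u
    for m in range(2, nmax + 1):          # sectorial terms P[m, m]
        P[m, m] = u * np.sqrt((2.0 * m + 1.0) / (2.0 * m)) * P[m - 1, m - 1]
    for m in range(nmax):                 # forward column recursion in degree n
        for n in range(max(m + 1, 2), nmax + 1):
            a = np.sqrt((2.0 * n - 1.0) * (2.0 * n + 1.0) / ((n - m) * (n + m)))
            b = np.sqrt((2.0 * n + 1.0) * (n + m - 1.0) * (n - m - 1.0)
                        / ((n - m) * (n + m) * (2.0 * n - 3.0)))
            P[n, m] = a * t * P[n - 1, m] - b * P[n - 2, m]
    return P
```

A convenient sanity check is the addition-theorem identity for this normalization: for every degree n, the squares over all orders sum to 2n + 1.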
Fahrenfeld, Nicole; Knowlton, Katharine; Krometis, Leigh Anne; Hession, W Cully; Xia, Kang; Lipscomb, Emily; Libuit, Kevin; Green, Breanna Lee; Pruden, Amy
2014-01-01
The development of models for understanding antibiotic resistance gene (ARG) persistence and transport is a critical next step toward informing mitigation strategies to prevent the spread of antibiotic resistance in the environment. A field study was performed that used a mass balance approach to gain insight into the transport and dissipation of ARGs following land application of manure. Soil from a small drainage plot including a manure application site, an unmanured control site, and an adjacent stream and buffer zone were sampled for ARGs and metals before and after application of dairy manure slurry and a dry stack mixture of equine, bovine, and ovine manure. Results of mass balance suggest growth of bacterial hosts containing ARGs and/or horizontal gene transfer immediately following slurry application with respect to ermF, sul1, and sul2 and following a lag (13 days) for dry-stack-amended soils. Generally no effects on tet(G), tet(O), or tet(W) soil concentrations were observed despite the presence of these genes in applied manure. Dissipation rates were fastest for ermF in slurry-treated soils (logarithmic decay coefficient of -3.5) and for sul1 and sul2 in dry-stack-amended soils (logarithmic decay coefficients of -0.54 and -0.48, respectively), and evidence for surface and subsurface transport was not observed. Results provide a mass balance approach for tracking ARG fate and insights to inform modeling and limiting the transport of manure-borne ARGs to neighboring surface water.
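The dissipation rates above are decay coefficients obtained from regressions of gene concentration over time; a sketch assuming a first-order (log-linear) decay model, since the abstract does not specify the exact regression form (function and variable names are illustrative):

```python
import numpy as np

def decay_coefficient(t_days, conc):
    """Fit ln(C) = ln(C0) + k * t; k < 0 indicates dissipation.
    Returns (k, C0)."""
    k, ln_c0 = np.polyfit(np.asarray(t_days, float),
                          np.log(np.asarray(conc, float)), 1)
    return k, np.exp(ln_c0)
```

Fitted to time series of ARG copy numbers in amended soils, k plays the role of the logarithmic decay coefficients (e.g. −0.54 for sul1) reported above.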
A multivariate extension of mutual information for growing neural networks.
Ball, Kenneth R; Grant, Christopher; Mundy, William R; Shafer, Timothy J
2017-11-01
Recordings of neural network activity in vitro are increasingly being used to assess the development of neural network activity and the effects of drugs, chemicals and disease states on neural network function. The high-content nature of the data derived from such recordings can be used to infer effects of compounds or disease states on a variety of important neural functions, including network synchrony. Historically, synchrony of networks in vitro has been assessed by determination of correlation coefficients (e.g. Pearson's correlation), by statistics estimated from cross-correlation histograms between pairs of active electrodes, and/or by pairwise mutual information and related measures. The present study examines the application of Normalized Multiinformation (NMI) as a scalar measure of shared information content in a multivariate network that is robust with respect to changes in network size. Theoretical simulations are designed to investigate NMI as a measure of complexity and synchrony in a developing network relative to several alternative approaches. The NMI approach is applied to these simulations and also to data collected during exposure of in vitro neural networks to neuroactive compounds during the first 12 days in vitro, and compared to other common measures, including correlation coefficients and mean firing rates of neurons. NMI is shown to be more sensitive to developmental effects than first order synchronous and nonsynchronous measures of network complexity. Finally, NMI is a scalar measure of global (rather than pairwise) mutual information in a multivariate network, and hence relies on fewer assumptions for cross-network comparisons than historical approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ji, Hong; Petro, Nathan M; Chen, Badong; Yuan, Zejian; Wang, Jianji; Zheng, Nanning; Keil, Andreas
2018-02-06
Over the past decade, the simultaneous recording of electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) data has garnered growing interest because it may provide an avenue towards combining the strengths of both imaging modalities. Given their pronounced differences in temporal and spatial statistics, the combination of EEG and fMRI data is however methodologically challenging. Here, we propose a novel screening approach that relies on a Cross Multivariate Correlation Coefficient (xMCC) framework. This approach accomplishes three tasks: (1) It provides a measure for testing multivariate correlation and multivariate uncorrelation of the two modalities; (2) it provides criterion for the selection of EEG features; (3) it performs a screening of relevant EEG information by grouping the EEG channels into clusters to improve efficiency and to reduce computational load when searching for the best predictors of the BOLD signal. The present report applies this approach to a data set with concurrent recordings of steady-state-visual evoked potentials (ssVEPs) and fMRI, recorded while observers viewed phase-reversing Gabor patches. We test the hypothesis that fluctuations in visuo-cortical mass potentials systematically covary with BOLD fluctuations not only in visual cortical, but also in anterior temporal and prefrontal areas. Results supported the hypothesis and showed that the xMCC-based analysis provides straightforward identification of neurophysiological plausible brain regions with EEG-fMRI covariance. Furthermore xMCC converged with other extant methods for EEG-fMRI analysis. © 2018 The Authors Journal of Neuroscience Research Published by Wiley Periodicals, Inc.
Bounds on OPE coefficients from interference effects in the conformal collider
NASA Astrophysics Data System (ADS)
Córdova, Clay; Maldacena, Juan; Turiaci, Gustavo J.
2017-11-01
We apply the average null energy condition to obtain upper bounds on the three-point function coefficients of stress tensors and a scalar operator, ⟨TTO_i⟩, in general CFTs. We also constrain the gravitational anomaly of U(1) currents in four-dimensional CFTs, which are encoded in three-point functions of the form ⟨TTJ⟩. In theories with a large N AdS dual we translate these bounds into constraints on the coefficient of a higher derivative bulk term of the form ∫ φW². We speculate that these bounds also apply in de Sitter. In this case our results constrain inflationary observables, such as the amplitude for chiral gravity waves that originate from higher derivative terms in the Lagrangian of the form φWW*.
Eck, Simon; Wörz, Stefan; Müller-Ott, Katharina; Hahn, Matthias; Biesdorf, Andreas; Schotta, Gunnar; Rippe, Karsten; Rohr, Karl
2016-08-01
The genome is partitioned into regions of euchromatin and heterochromatin. The organization of heterochromatin is important for the regulation of cellular processes such as chromosome segregation and gene silencing, and their misregulation is linked to cancer and other diseases. We present a model-based approach for automatic 3D segmentation and 3D shape analysis of heterochromatin foci from 3D confocal light microscopy images. Our approach employs a novel 3D intensity model based on spherical harmonics, which analytically describes the shape and intensities of the foci. The model parameters are determined by fitting the model to the image intensities using least-squares minimization. To characterize the 3D shape of the foci, we exploit the computed spherical harmonics coefficients and determine a shape descriptor. We applied our approach to 3D synthetic image data as well as real 3D static and real 3D time-lapse microscopy images, and compared the performance with that of previous approaches. It turned out that our approach yields accurate 3D segmentation results and performs better than previous approaches. We also show that our approach can be used for quantifying 3D shape differences of heterochromatin foci. Copyright © 2016 Elsevier B.V. All rights reserved.
Convergence behavior of delayed discrete cellular neural network without periodic coefficients.
Wang, Jinling; Jiang, Haijun; Hu, Cheng; Ma, Tianlong
2014-05-01
In this paper, we study convergence behaviors of delayed discrete cellular neural networks without periodic coefficients. Some sufficient conditions are derived to ensure all solutions of delayed discrete cellular neural network without periodic coefficients converge to a periodic function, by applying mathematical analysis techniques and the properties of inequalities. Finally, some examples showing the effectiveness of the provided criterion are given. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Mota, Bernardo; Wooster, Martin J.
2016-04-01
The approach to estimating landscape fire fuel consumption based on the remotely sensed fire radiative power (FRP) thermal energy release rate, as opposed to burned area, is now relatively widely used in studies of fire emissions, including operationally within the Copernicus Atmosphere Monitoring Service (CAMS). Nevertheless, there are still limitations to the approach, including uncertainties associated with using only the few daily overpasses typically provided by polar orbiting satellite systems, the conversion between FRP and smoke emissions, and the increased likelihood that the more frequent data from geostationary systems fail to detect the (probably highly numerous) smaller (i.e. low FRP) component of a region's fire regime. In this study, we address these limitations to directly estimate fire emissions of Particulate Matter (PM; or smoke aerosols) by presenting an approach combining the "bottom-up" FRP observations available every 15 minutes across Africa from the Meteosat Spinning Enhanced Visible and Infrared Imager (SEVIRI) Fire Radiative Power (FRP) product processed at the EUMETSAT LSA SAF, and the "top-down" aerosol optical thickness (AOT) measures of the fire plumes themselves as measured by the Moderate Resolution Imaging Spectroradiometer (MODIS) sensors aboard the Terra (MOD04_L2) and Aqua (MYD04_L2) satellites. We determine PM emission coefficients that relate directly to FRP measures by combining these two datasets, and the use of the almost continuous geostationary FRP observations allows us to do this without recourse to (uncertain) data on wind speed at the (unknown) height of the matching plume. We also develop compensation factors to address the detection limitations of small/low intensity (low FRP) fires, and remove the need to estimate fuel consumption by going directly from FRP to PM emissions.
We derive the smoke PM emission coefficients per land cover class by comparing the total fire radiative energy (FRE) released from individual fires and the MODIS AOD seen in the corresponding plume. Analysis was performed for plumes extracted from 31 study sites covering 10,000 km² each, over 10 consecutive days, during the 2011 southern Africa fire season. Compensation factors for undetected low-FRP fires were based on the extraction and application of frequency-density function shape parameters, characterized by analyzing 4 years (2009-2013) of MSG-SEVIRI FRP data in 0.5° grid cells. Using the derived emission coefficients and compensation factors we estimate Total Particulate Matter (TPM) emissions for 2011 on a daily basis and at 0.25° spatial resolution across southern Africa. Preliminary results show agreement between our derived emission coefficients and those of past studies following similar methods but using MODIS FRP data, and our annual TPM estimate is in reasonable agreement with those of other emission inventories based on burned-area approaches. The proposed approach shows strong potential to be applied to other regions, and also to other geostationary satellite FRP products. Once the smoke emission coefficients have been derived via comparison to the AOD data, the method requires only the FRP data, which is available at very high temporal frequency from geostationary orbit. Our approach can therefore provide near-real-time smoke emission estimates, which are essential for operational activities such as NRT smoke dispersion modeling and air quality forecasting.
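The core of the emissions chain described above — integrating FRP over time to obtain FRE, converting with a per-class emission coefficient, and scaling by a compensation factor for undetected small fires — can be sketched as follows. This is a minimal illustration only; the function names, units, and values are hypothetical, not taken from the SEVIRI/MODIS processing chain itself:

```python
def emission_coefficient(total_pm_kg, fre_mj):
    """Smoke PM emission coefficient Ce (kg/MJ): ratio of the plume PM mass
    (here assumed to have been derived from MODIS AOD) to the fire radiative
    energy (MJ) released by the matching fire."""
    return total_pm_kg / fre_mj

def estimate_pm(frp_mw_series, step_s, ce_kg_per_mj, compensation=1.0):
    """Estimate PM emissions (kg) from a series of FRP samples (MW) taken at
    a fixed interval (e.g. 900 s for 15-min SEVIRI data). FRE is approximated
    by trapezoidal integration of FRP over time (MW * s = MJ); the
    compensation factor scales up for low-FRP fires missed by the sensor."""
    fre_mj = 0.0
    for a, b in zip(frp_mw_series, frp_mw_series[1:]):
        fre_mj += 0.5 * (a + b) * step_s
    return compensation * ce_kg_per_mj * fre_mj
```

For example, a fire sampled at 10 MW over three consecutive 15-minute slots releases 18,000 MJ, which at a hypothetical coefficient of 0.02 kg/MJ corresponds to 360 kg of PM.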
Switching theory-based steganographic system for JPEG images
NASA Astrophysics Data System (ADS)
Cherukuri, Ravindranath C.; Agaian, Sos S.
2007-04-01
Cellular communications constitute a significant portion of the global telecommunications market, and the need for secure communication over mobile platforms has therefore increased sharply. Steganography, the art of hiding critical data within an innocuous signal, answers this need. JPEG is one of the most commonly used formats for storing and transmitting images on the web, and pictures captured with mobile cameras are mostly in JPEG format. In this article, we introduce a switching-theory-based steganographic system for JPEG images that is applicable to both mobile and computer platforms. The proposed algorithm uses the fact that the energy distribution among the quantized AC coefficients varies from block to block and coefficient to coefficient. Existing approaches are effective with a subset of these coefficients, but show their ineffectiveness when employed over all of the coefficients. We therefore propose an approach that treats each set of AC coefficients within a different framework, enhancing the overall performance. The proposed system offers high capacity and embedding efficiency simultaneously, while withstanding simple statistical attacks. In addition, the embedded information can be retrieved without prior knowledge of the cover image. Based on simulation results, the proposed method demonstrates an improved embedding capacity over existing algorithms while maintaining a high embedding efficiency and preserving the statistics of the JPEG image after hiding information.
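As general background to coefficient-domain embedding, the toy sketch below hides bits in the least significant bits of nonzero quantized AC coefficients. This is not the switching-theory scheme of the article, just a minimal illustration of the underlying idea that information can be written into, and read back from, the AC coefficients; all names are hypothetical:

```python
def embed(ac_coeffs, bits):
    """Embed bits into the LSBs of nonzero quantized AC coefficients.
    Zero coefficients are skipped, since altering them would noticeably
    change the JPEG statistics. Note: coefficients of magnitude 1 can
    shrink to 0 (breaking extraction); practical schemes handle that
    case, which this toy version ignores."""
    out, it = list(ac_coeffs), iter(bits)
    for i, c in enumerate(out):
        if c == 0:
            continue
        try:
            b = next(it)
        except StopIteration:
            break
        mag = (abs(c) & ~1) | b  # set LSB of the magnitude, preserving sign
        out[i] = mag if c > 0 else -mag
    return out

def extract(ac_coeffs, n_bits):
    """Read back the LSBs of the first n_bits nonzero coefficients."""
    return [abs(c) & 1 for c in ac_coeffs if c != 0][:n_bits]
```

For instance, embedding the bits `[1, 0, 1, 1]` into `[5, 0, -3, 8, 2]` yields `[5, 0, -2, 9, 3]`, from which the same four bits are recoverable.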
General Series Solutions for Stresses and Displacements in an Inner-fixed Ring
NASA Astrophysics Data System (ADS)
Jiao, Yongshu; Liu, Shuo; Qi, Dexuan
2018-03-01
A general series solution approach is presented for obtaining the stress and displacement fields in an inner-fixed ring. After choosing an Airy stress function in series form, the stresses are expressed in terms of an infinite set of coefficients. Displacements are obtained by integrating the geometric equations. For an inner-fixed ring, the arbitrary loads acting on the outer edge are expanded into two sets of Fourier series, and the zero-displacement boundary conditions on the inner surface are utilized. The stress (and displacement) coefficients are then expressed in terms of the loading coefficients. A numerical example shows the validity of this approach.
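The expansion of an arbitrary outer-edge load into Fourier series can be approximated numerically; the sketch below estimates the cosine and sine coefficients of a 2π-periodic load by the rectangle rule (a generic illustration, not the paper's derivation):

```python
import math

def fourier_load_coeffs(p, n_terms, samples=4096):
    """Expand a 2*pi-periodic edge load p(theta) as
       p(theta) ~ a[0]/2 + sum_n (a[n] cos(n theta) + b[n] sin(n theta)),
    estimating a[n], b[n] by the rectangle rule on a uniform grid (which
    is spectrally accurate for smooth periodic integrands)."""
    dt = 2 * math.pi / samples
    ts = [i * dt for i in range(samples)]
    a, b = [], []
    for n in range(n_terms + 1):
        a.append(sum(p(t) * math.cos(n * t) for t in ts) * dt / math.pi)
        b.append(sum(p(t) * math.sin(n * t) for t in ts) * dt / math.pi)
    return a, b  # a[0] corresponds to the constant term a0 (use a0/2)
```

As a sanity check, the load p(θ) = cos 2θ should return a₂ ≈ 1 with all other coefficients near zero.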
Semi-automatic aircraft control system
NASA Technical Reports Server (NTRS)
Gilson, Richard D. (Inventor)
1978-01-01
A flight-control-type system that provides a tactile readout to the pilot's hand for directing elevator control during both approach-to-flare-out and departure maneuvers. For altitudes above flare-out, the system sums the instantaneous coefficient-of-lift signal from a lift transducer with a generated signal representing the ideal coefficient of lift for approach to flare-out, i.e., a value about 30% below stall. The error signal resulting from this summation is read out by the noted tactile device. Below flare altitude, an altitude-responsive variation is summed with the signal representing the ideal coefficient of lift to provide the error-signal readout.
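The summation logic described above might be sketched as follows. This is a hypothetical illustration only; the gain, sign convention, and form of the altitude-responsive term are assumptions, not taken from the patent:

```python
def elevator_error(cl_measured, cl_stall, altitude, flare_altitude, k=0.01):
    """Error signal driving the tactile readout. Above flare altitude the
    reference is a fixed ideal lift coefficient about 30% below stall;
    below it, a hypothetical altitude-responsive bias (gain k) is summed
    with the ideal-lift reference."""
    cl_ideal = 0.7 * cl_stall  # ~30% below stall
    if altitude < flare_altitude:
        cl_ideal += k * (flare_altitude - altitude)
    return cl_measured - cl_ideal
```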
40 CFR 799.6755 - TSCA partition coefficient (n-octanol/water), shake flask method.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) Qualifying statements. This method applies only to pure, water soluble substances which do not dissociate or... applied. The values presented in table 1 of this section are not necessarily representative of the results... Law applies only at constant temperature, pressure, and pH for dilute solutions. It strictly applies...
Wang, Xia; Zhang, Luyan; Chen, Gang
2011-11-01
As a self-regulating heating device, a positive temperature coefficient (PTC) ceramic heater was employed for the hot embossing and thermal bonding of poly(methyl methacrylate) (PMMA) microfluidic chips, because it supplies constant-temperature heating without electrical control circuits. To emboss a channel plate, a PMMA plate was sandwiched between a template and a microscope glass slide on a PTC ceramic heater. The assembled components were pressed between the two elastic press heads of a spring-driven press while a voltage was applied to the heater for 10 min. Subsequently, the embossed PMMA plate bearing the negative relief of the channel network was bonded with a PMMA cover sheet to obtain a complete microchip, again using a PTC ceramic heater and a spring-driven press. High-quality microfluidic chips fabricated with this novel embossing/bonding device were successfully applied to the electrophoretic separation of three cations. The PTC ceramic heater shows great promise for the low-cost production of PMMA microchips and should find wide application in the fabrication of other thermoplastic polymer microfluidic devices.
Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images
NASA Astrophysics Data System (ADS)
Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.
2017-10-01
Supervised classification handles a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels in the image has proven beneficial for interpreting the image content, thus increasing classification accuracy, and denoising in the spatial domain of the image has been shown to enhance the structures in the image. This paper proposes a multi-component denoising approach to increase the accuracy of a subsequently applied classification method, computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced using a threshold. Finally, inverse 2D DWT filters are applied to reconstruct the noise-free original component. The computational cost of the classifiers, as well as of the whole classification chain, is high, but real-time behavior is achieved for some applications through computation on NVIDIA multi-GPU platforms.
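The thresholding step at the heart of the denoising can be illustrated with a one-level Haar transform on a 1D signal. This is a minimal stand-in for the separable 2D DWT used in the paper, written as our own sketch:

```python
def haar_denoise(signal, threshold):
    """One-level Haar DWT hard-threshold denoising. The signal is split
    into pairwise averages (approximation) and differences (detail);
    detail coefficients with magnitude below the threshold are zeroed,
    then the signal is reconstructed from the kept coefficients."""
    assert len(signal) % 2 == 0
    approx = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [d if abs(d) >= threshold else 0.0 for d in detail]
    out = []
    for m, d in zip(approx, detail):
        out += [m + d, m - d]  # inverse Haar step
    return out
```

With threshold 0 the reconstruction is exact; a larger threshold suppresses small fluctuations (treated as noise) while keeping the local means.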
Modal Substructuring of Geometrically Nonlinear Finite-Element Models
Kuether, Robert J.; Allen, Matthew S.; Hollkamp, Joseph J.
2015-12-21
The efficiency of a modal substructuring method depends on the component modes used to reduce each subcomponent model. Methods such as Craig–Bampton have been used extensively to reduce linear finite-element models with thousands or even millions of degrees of freedom down orders of magnitude while maintaining acceptable accuracy. A novel reduction method is proposed here for geometrically nonlinear finite-element models using the fixed-interface and constraint modes of the linearized system to reduce each subcomponent model. The geometric nonlinearity requires an additional cubic and quadratic polynomial function in the modal equations, and the nonlinear stiffness coefficients are determined by applying a series of static loads and using the finite-element code to compute the response. The geometrically nonlinear, reduced modal equations for each subcomponent are then coupled by satisfying compatibility and force equilibrium. This modal substructuring approach is an extension of the Craig–Bampton method and is readily applied to geometrically nonlinear models built directly within commercial finite-element packages. The efficiency of this new approach is demonstrated on two example problems: one that couples two geometrically nonlinear beams at a shared rotational degree of freedom, and another that couples an axial spring element to the axial degree of freedom of a geometrically nonlinear beam. The nonlinear normal modes of the assembled models are compared with those of a truth model to assess the accuracy of the novel modal substructuring approach.
ERIC Educational Resources Information Center
Camporesi, Roberto
2016-01-01
We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order, based on the factorization of the differential operator. The approach is elementary: we assume only a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as…
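As a small worked example in the spirit of the factorization approach (our own illustration, not taken from the article): for y'' + 3y' + 2y = f(t), the operator factors as (D+1)(D+2), and cascading the two first-order solves gives the impulsive response

```latex
h(t) = \int_0^t e^{-(t-s)}\, e^{-2s}\, ds
     = e^{-t}\left(1 - e^{-t}\right)
     = e^{-t} - e^{-2t},
\qquad h(0) = 0,\quad h'(0) = 1,
```

so a particular solution with zero initial conditions is y_p(t) = ∫₀ᵗ h(t−s) f(s) ds.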
ERIC Educational Resources Information Center
Bloom, Howard S.; Raudenbush, Stephen W.; Weiss, Michael J.; Porter, Kristin
2017-01-01
The present article considers a fundamental question in evaluation research: "By how much do program effects vary across sites?" The article first presents a theoretical model of cross-site impact variation and a related estimation model with a random treatment coefficient and fixed site-specific intercepts. This approach eliminates…
Higher-Order Fermi-Liquid Corrections for an Anderson Impurity Away from Half Filling
NASA Astrophysics Data System (ADS)
Oguri, Akira; Hewson, A. C.
2018-03-01
We study the higher-order Fermi-liquid relations of Kondo systems for arbitrary impurity-electron fillings, extending the many-body quantum theoretical approach of Yamada and Yosida. It includes, partly, a microscopic clarification of the related achievements based on Nozières' phenomenological description: Filippone, Moca, von Delft, and Mora [Phys. Rev. B 95, 165404 (2017), 10.1103/PhysRevB.95.165404]. In our formulation, the Fermi-liquid parameters such as the quasiparticle energy, damping, and transport coefficients are related to each other through the total vertex Γ_{σσ';σ'σ}(ω,ω';ω',ω), which may be regarded as a generalized Landau quasiparticle interaction. We obtain exactly this function up to linear order with respect to the frequencies ω and ω' using the antisymmetry and analytic properties. The coefficients acquire additional contributions of three-body fluctuations away from half filling through the nonlinear susceptibilities. We also apply the formulation to nonequilibrium transport through a quantum dot, and clarify how the zero-bias peak evolves in a magnetic field.
Xiao, Yanwen; Xu, Wei; Wang, Liang
2016-03-01
This paper studies the stochastic Van der Pol vibro-impact system with fractional derivative damping under Gaussian white noise excitation. The equations of the original system are simplified by a non-smooth transformation, and the stochastic averaging approach is then applied to the simplified equation. The fractional derivative damping term is approximated by a numerical scheme, and the fourth-order Runge-Kutta method is used to obtain numerical results; the numerical simulations fit the analytical solutions, confirming the feasibility of the proposed analytical approach. In this context, the effects of noise excitation, the restitution condition, and the fractional derivative damping on the stationary probability density functions (PDFs) of the response are considered; stochastic P-bifurcation is also explored by varying the coefficient of the fractional derivative damping and the restitution coefficient. These system parameters not only influence the response PDFs of the system but can also cause stochastic P-bifurcation.
Comba, Peter; Martin, Bodo; Sanyal, Avik; Stephan, Holger
2013-08-21
A QSPR scheme for the computation of lipophilicities of ⁶⁴Cu complexes was developed with a training set of 24 tetraazamacrocyclic and bispidine-based Cu(II) compounds and their experimentally available 1-octanol-water distribution coefficients. A minimum number of physically meaningful parameters were used in the scheme, and these are primarily based on data available from molecular mechanics calculations, using an established force field for Cu(II) complexes and a recently developed scheme for the calculation of fluctuating atomic charges. The developed model was also applied to an independent validation set and was found to accurately predict distribution coefficients of potential ⁶⁴Cu positron emission tomography (PET) systems. A possible next step would be the development of a QSAR-based biodistribution model to track the uptake of imaging agents in different organs and tissues of the body. It is expected that such simple, empirical models of lipophilicity and biodistribution will be very useful in the design and virtual screening of PET imaging agents.
NASA Astrophysics Data System (ADS)
Zhang, Qian-Ming; Shang, Ming-Sheng; Zeng, Wei; Chen, Yong; Lü, Linyuan
2010-08-01
Collaborative filtering is one of the most successful recommendation techniques; it can effectively predict users' possible future likes based on their past preferences. The key problem of this method is how to define the similarity between users. A standard approach is to use the correlation between the ratings that two users give to a set of objects, such as the Cosine index or the Pearson correlation coefficient. However, the cost of computing such indices is relatively high, making them impractical for very large systems. To solve this problem, in this paper we introduce six local-structure-based similarity indices and compare their performance with the above two benchmark indices. Experimental results on two data sets demonstrate that the structure-based similarity indices overall outperform the Pearson correlation coefficient. When the data is dense, the structure-based indices perform competitively with the Cosine index, at lower computational complexity; when the data is sparse, they give even better results than the Cosine index.
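The two benchmark indices can be written compactly; the sketch below computes the Cosine index and the Pearson correlation coefficient for two rating vectors (the six local-structure-based indices are specific to the paper and are not reproduced here):

```python
import math

def cosine(u, v):
    """Cosine index: inner product normalized by the vector norms."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def pearson(u, v):
    """Pearson correlation: cosine similarity of the mean-centered vectors."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return cosine([a - mu for a in u], [b - mv for b in v])
```

Proportional rating vectors give a Cosine index of 1, and linearly related ratings give a Pearson coefficient of 1.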
NASA Astrophysics Data System (ADS)
Zheng, Chang-Jun; Chen, Hai-Bo; Chen, Lei-Lei
2013-04-01
This paper presents a novel wideband fast multipole boundary element approach to 3D half-space/plane-symmetric acoustic wave problems. The half-space fundamental solution is employed in the boundary integral equations so that the tree structure required in the fast multipole algorithm is constructed for the boundary elements in the real domain only. Moreover, a set of symmetric relations between the multipole expansion coefficients of the real and image domains are derived, and the half-space fundamental solution is modified for the purpose of applying such relations to avoid calculating, translating and saving the multipole/local expansion coefficients of the image domain. The wideband adaptive multilevel fast multipole algorithm associated with the iterative solver GMRES is employed so that the present method is accurate and efficient for both low- and high-frequency acoustic wave problems. As for exterior acoustic problems, the Burton-Miller method is adopted to tackle the fictitious eigenfrequency problem involved in the conventional boundary integral equation method. Details on the implementation of the present method are described, and numerical examples are given to demonstrate its accuracy and efficiency.
An innovative approach to compensator design
NASA Technical Reports Server (NTRS)
Mitchell, J. R.; Mcdaniel, W. L., Jr.
1973-01-01
The computer-aided design of a compensator for a control system is considered from a frequency-domain point of view. The design technique developed is based on describing the open-loop frequency response by n discrete frequency points, which result in n functions of the compensator coefficients. Several of these functions are chosen so that the system specifications are properly portrayed; mathematical programming is then used to improve all of the functions whose values fall below minimum standards. To this end, several definitions for measuring the performance of a system in the frequency domain are given, e.g., relative stability, relative attenuation, and proper phasing. Next, theorems governing the number of compensator coefficients necessary to make improvements in a given number of functions are proved. A mathematical programming tool, called the constraint improvement algorithm, is then developed to aid in the solution of the problem, and generalized gradients for the constraints are derived for applying it. Finally, the necessary theory is incorporated in a computer program called CIP (Compensator Improvement Program). The practical usefulness of CIP is demonstrated on two large-system examples.
Satellite-based monitoring of cotton evapotranspiration
NASA Astrophysics Data System (ADS)
Dalezios, Nicolas; Dercas, Nicholas; Tarquis, Ana Maria
2016-04-01
Water for agricultural use represents the largest share among all water uses. Vulnerability in agriculture is influenced, among other factors, by extended periods of water shortage in regions exposed to droughts. Advanced technological approaches and methodologies, including remote sensing, are increasingly incorporated into the assessment of irrigation water requirements. In this paper, remote sensing techniques are integrated for the estimation and monitoring of crop evapotranspiration ETc. The study area is Thessaly, central Greece, a drought-prone agricultural region, and cotton fields in a small agricultural sub-catchment in Thessaly are used as an experimental site. Daily meteorological data and weekly field data were recorded throughout seven (2004-2010) growing seasons for the computation of reference evapotranspiration ETo, the crop coefficient Kc, and cotton crop ETc based on conventional data. Satellite data (Landsat TM) for the corresponding period are processed to estimate the cotton crop coefficient Kc and cotton crop ETc and to delineate their spatiotemporal variability. The methodology is applied to monitoring Kc and ETc during the growing season in the selected sub-catchment. Several error statistics are used, showing very good agreement with ground-truth observations.
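The conventional computation referred to above follows the standard single-crop-coefficient form ETc = Kc × ETo (FAO-56 style). A minimal sketch, with hypothetical values:

```python
def crop_et(eto_mm_day, kc):
    """Single-crop-coefficient estimate of daily crop evapotranspiration
    (mm/day): ETc = Kc * ETo."""
    return kc * eto_mm_day

def crop_et_series(eto_series, kc_series):
    """Daily ETc over a growing season, with Kc varying through the season
    (e.g. as estimated here from Landsat TM imagery)."""
    return [crop_et(eto, kc) for eto, kc in zip(eto_series, kc_series)]
```

For example, an ETo of 5.0 mm/day with a mid-season cotton Kc of about 1.1 gives an ETc of 5.5 mm/day.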