Is site-specific APEX calibration necessary for field scale BMP assessment?
USDA-ARS's Scientific Manuscript database
The possibility of extending parameter sets obtained at one site to sites with similar characteristics is appealing. This study was undertaken to test model performance and compare the effectiveness of best management practices (BMPs) using three parameter sets obtained from three watersheds when a...
Vařeková, Radka Svobodová; Jiroušková, Zuzana; Vaněk, Jakub; Suchomel, Šimon; Koča, Jaroslav
2007-01-01
The Electronegativity Equalization Method (EEM) is a fast approach for charge calculation. A challenging part of the EEM is the parameterization, which is performed using ab initio charges obtained for a set of molecules. The goal of our work was to perform the EEM parameterization for selected sets of organic, organohalogen and organometal molecules. We have performed the most robust parameterization published so far. The EEM parameterization was based on 12 training sets selected from a database of predicted 3D structures (NCI DIS) and from a database of crystallographic structures (CSD). Each set contained from 2000 to 6000 molecules. We have shown that the number of molecules in the training set is very important for the quality of the parameters. We have improved the EEM parameters (STO-3G MPA charges) for elements that were already parameterized, specifically C, O, N, H, S, F and Cl. The new parameters provide more accurate charges than those published previously. We have also developed new parameters for elements that had not yet been parameterized, specifically Br, I, Fe and Zn. Finally, we performed crossover validation of all obtained parameters using all training sets that included the relevant elements and confirmed that the calculated parameters provide accurate charges.
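Once parameterized, an EEM charge calculation reduces to solving one small linear system per molecule: every atom's electronegativity is equalized to a common value subject to total-charge conservation. A minimal sketch (the A, B and κ values used below are placeholders for illustration, not the fitted parameters from this work):

```python
import numpy as np

def eem_charges(A, B, R, kappa=0.5, total_charge=0.0):
    """Solve the EEM linear system for atomic charges.

    A, B : per-atom EEM parameters (length n)
    R    : n x n interatomic distance matrix (diagonal ignored)
    Returns (charges, equalized electronegativity chi_bar).
    """
    A, B, R = np.asarray(A, float), np.asarray(B, float), np.asarray(R, float)
    n = len(A)
    M = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    for i in range(n):
        M[i, i] = B[i]
        for j in range(n):
            if j != i:
                M[i, j] = kappa / R[i, j]   # screened interaction with atom j
        M[i, n] = -1.0                      # unknown common electronegativity
        rhs[i] = -A[i]
    M[n, :n] = 1.0                          # charge conservation: sum(q) = Q
    rhs[n] = total_charge
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n]
```

For a neutral homonuclear diatomic the solution is zero charge on both atoms, which makes a quick sanity check for any candidate parameter set.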
Automatic tissue characterization from ultrasound imagery
NASA Astrophysics Data System (ADS)
Kadah, Yasser M.; Farag, Aly A.; Youssef, Abou-Bakr M.; Badawi, Ahmed M.
1993-08-01
In this work, feature extraction algorithms are proposed to extract the tissue characterization parameters from liver images. Then the resulting parameter set is further processed to obtain the minimum number of parameters representing the most discriminating pattern space for classification. This preprocessing step was applied to over 120 pathology-investigated cases to obtain the learning data for designing the classifier. The extracted features are divided into independent training and test sets and are used to construct both statistical and neural classifiers. The optimal criteria for these classifiers are set to have minimum error, ease of implementation and learning, and the flexibility for future modifications. Various algorithms for implementing various classification techniques are presented and tested on the data. The best performance was obtained using a single layer tensor model functional link network. Also, the voting k-nearest neighbor classifier provided comparably good diagnostic rates.
Equal Area Logistic Estimation for Item Response Theory
NASA Astrophysics Data System (ADS)
Lo, Shih-Ching; Wang, Kuo-Chang; Chang, Hsin-Li
2009-08-01
Item response theory (IRT) models use logistic functions exclusively as item response functions (IRFs). Applications of IRT models require obtaining the set of values for the logistic function parameters that best fits an empirical data set. However, success in obtaining such a set of values does not guarantee that the constructs they represent actually exist, for the adequacy of a model is not sustained by the mere possibility of estimating parameters. In this study, an equal-area-based two-parameter logistic model estimation algorithm is proposed. Two theorems are given to prove that the results of the algorithm are equivalent to the results of fitting the data with the logistic model. Numerical results are presented to show the stability and accuracy of the algorithm.
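A hedged sketch of the equal-area idea (not the authors' full algorithm: here only the difficulty parameter b is recovered by matching the area under the two-parameter logistic IRF, with the discrimination a held fixed; the integration interval is an assumption):

```python
import math

def irf_2pl(theta, a, b):
    """Two-parameter logistic item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def irf_area(a, b, lo=-4.0, hi=4.0, n=400):
    """Trapezoidal area under the IRF over [lo, hi]."""
    h = (hi - lo) / n
    s = 0.5 * (irf_2pl(lo, a, b) + irf_2pl(hi, a, b))
    s += sum(irf_2pl(lo + k * h, a, b) for k in range(1, n))
    return s * h

def match_b_by_area(target_area, a, lo=-4.0, hi=4.0):
    """Recover b by bisection: the area is monotonically decreasing in b."""
    b_lo, b_hi = lo, hi
    for _ in range(60):
        mid = 0.5 * (b_lo + b_hi)
        if irf_area(a, mid) > target_area:
            b_lo = mid          # curve sits too far left; raise difficulty
        else:
            b_hi = mid
    return 0.5 * (b_lo + b_hi)
```

Because shifting the curve right strictly lowers its area over a fixed interval, the area pins down b uniquely for a given a, which is the monotonicity the equal-area construction exploits.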
Classification of materials using nuclear magnetic resonance dispersion and/or x-ray absorption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Espy, Michelle A.; Matlashov, Andrei N.; Schultz, Larry J.
Methods for determining the identity of a substance are provided. A classification parameter set is defined to allow identification of substances that previously could not be identified, or to allow identification of substances with a higher degree of confidence. The classification parameter set may include at least one of relative nuclear susceptibility (RNS) or an x-ray linear attenuation coefficient (LAC). RNS represents the density of hydrogen nuclei present in a substance relative to the density of hydrogen nuclei present in water. The extended classification parameter set may include T1, T2, and/or T1ρ, as well as at least one additional classification parameter comprising one of RNS or LAC. Values obtained for the additional classification parameters, as well as values obtained for T1, T2, and T1ρ, can be compared to known classification parameter values to determine whether a particular substance is a known material.
Performance Analysis of Hybrid Electric Vehicle over Different Driving Cycles
NASA Astrophysics Data System (ADS)
Panday, Aishwarya; Bansal, Hari Om
2017-02-01
This article aims to determine the nature and response of a hybrid vehicle over various standard driving cycles. Road profile parameters play an important role in determining fuel efficiency. The typical parameters of a road profile can be reduced to a useful smaller set using principal component analysis and independent component analysis. The data set obtained after this size reduction may yield a more appropriate and informative parameter cluster. With the reduced parameter set, fuel economies over various driving cycles are ranked using the TOPSIS and VIKOR multi-criteria decision making methods. The ranking trend is then compared with the fuel economies achieved after driving the vehicle over the respective roads. The control strategy responsible for the power split is optimized using a genetic algorithm. A 1RC battery model and a modified SOC estimation method are used in the simulation, and improved results compared with the defaults are obtained.
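The TOPSIS ranking step can be sketched generically as follows (the criteria matrix, weights and benefit/cost split below are illustrative, not the paper's actual road-profile criteria):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns).

    benefit[j] is True if higher values are better for criterion j.
    Returns closeness scores in [0, 1]; higher means closer to the ideal.
    """
    M = np.asarray(matrix, float)
    M = M / np.sqrt((M ** 2).sum(axis=0))          # vector-normalize columns
    M = M * np.asarray(weights, float)             # apply criterion weights
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, M.max(axis=0), M.min(axis=0))
    anti  = np.where(benefit, M.min(axis=0), M.max(axis=0))
    d_pos = np.sqrt(((M - ideal) ** 2).sum(axis=1))  # distance to ideal
    d_neg = np.sqrt(((M - anti) ** 2).sum(axis=1))   # distance to anti-ideal
    return d_neg / (d_pos + d_neg)
```

An alternative that dominates on every benefit criterion gets a score of 1 and a fully dominated one gets 0, which makes the ranking direction easy to verify.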
Jirousková, Zuzana; Vareková, Radka Svobodová; Vanek, Jakub; Koca, Jaroslav
2009-05-01
The electronegativity equalization method (EEM) was developed by Mortier et al. as a semiempirical method based on density-functional theory. After parameterization, in which the EEM parameters A(i) and B(i) and the adjusting factor kappa are obtained, this approach can be used to calculate the average electronegativity and the charge distribution in a molecule. The aim of this work is to perform the EEM parameterization using the Merz-Kollman-Singh (MK) charge distribution scheme obtained from B3LYP/6-31G* and HF/6-31G* calculations. To achieve this goal, we selected a set of 380 organic molecules from the Cambridge Structural Database (CSD) and used the methodology that was recently applied successfully to EEM parameterization for calculating HF/STO-3G Mulliken charges on large sets of molecules. In the case of B3LYP/6-31G* MK charges, we have improved the EEM parameters for already parameterized elements, specifically C, H, N, O, and F. Moreover, we have also developed EEM parameters for S, Br, Cl, and Zn, which had not yet been parameterized for this level of theory and basis set. In the case of HF/6-31G* MK charges, we have developed EEM parameters for C, H, N, O, S, Br, Cl, F, and Zn, none of which had been parameterized for this level of theory and basis set so far. The obtained EEM parameters were verified by a previously developed validation procedure and used for charge calculation on a different set of 116 organic molecules from the CSD. The calculated EEM charges are in very good agreement with the quantum mechanically obtained ab initio charges. 2008 Wiley Periodicals, Inc.
Estimation of parameters of dose volume models and their confidence limits
NASA Astrophysics Data System (ADS)
van Luijk, P.; Delvigne, T. C.; Schilstra, C.; Schippers, J. M.
2003-07-01
Predictions of the normal-tissue complication probability (NTCP) for the ranking of treatment plans are based on fits of dose-volume models to clinical and/or experimental data. In the literature several different fit methods are used. In this work, frequently used methods and techniques for fitting NTCP models to dose-response data to establish dose-volume effects are discussed. The techniques are tested for their usability with dose-volume data and NTCP models. Different methods of estimating the confidence intervals of the model parameters are part of this study. A primary dataset, describable by the NTCP model and serving as the reference for this study, was generated from a critical-volume (CV) model with biologically realistic parameters. The CV model was fitted to this dataset. From the resulting parameters and the CV model, 1000 secondary datasets were generated by Monte Carlo simulation. All secondary datasets were fitted to obtain 1000 parameter sets of the CV model. Thus the 'real' spread in fit results due to statistical spread in the data is obtained, and it has been compared with estimates of the confidence intervals obtained by different methods applied to the primary dataset. The confidence limits of the parameters of one dataset were estimated using the covariance matrix, the jackknife method, and the likelihood landscape directly. These results were compared with the spread of the parameters obtained from the secondary parameter sets. For the estimation of confidence intervals on NTCP predictions, three methods were tested. Firstly, propagation of errors using the covariance matrix was used. Secondly, the meaning of the width of a bundle of curves resulting from parameters within the one-standard-deviation region of the likelihood space was investigated. Thirdly, many parameter sets and their likelihoods were used to create a likelihood-weighted probability distribution of the NTCP.
It is concluded that for the type of dose-response data used here, only a full likelihood analysis will produce reliable results. The often-used approximations, such as the usage of the covariance matrix, produce inconsistent confidence limits on both the parameter sets and the resulting NTCP values.
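The Monte Carlo procedure for obtaining the 'real' parameter spread amounts to a parametric bootstrap. A toy sketch (a simple sigmoid dose-response curve stands in for the CV-based NTCP model, and the grid ranges and parameter values are illustrative assumptions):

```python
import math
import random

def ntcp(dose, d50, gamma):
    # toy sigmoid dose-response curve standing in for the CV-based NTCP model
    return 1.0 / (1.0 + math.exp(-gamma * (dose - d50)))

def fit_mle(doses, responses):
    # crude grid-search maximum likelihood over (d50, gamma)
    best, best_ll = None, -float("inf")
    for d50 in range(40, 81):
        for g10 in range(1, 21):
            gamma = 0.05 * g10
            ll = 0.0
            for d, r in zip(doses, responses):
                p = min(max(ntcp(d, d50, gamma), 1e-9), 1 - 1e-9)
                ll += math.log(p if r else 1 - p)
            if ll > best_ll:
                best, best_ll = (d50, gamma), ll
    return best

def parameter_spread(doses, d50, gamma, n_sets=50, seed=1):
    # generate secondary datasets from the fitted model and refit each one;
    # the spread of the refitted parameters estimates the confidence region
    rng = random.Random(seed)
    return [fit_mle(doses, [rng.random() < ntcp(d, d50, gamma) for d in doses])
            for _ in range(n_sets)]
```

The empirical distribution of the refitted (d50, gamma) pairs is exactly the 'real' spread the paper compares against covariance-matrix and jackknife estimates.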
NASA Astrophysics Data System (ADS)
Jagadeesha, C. B.
2017-12-01
Although friction stir welding (FSW) was invented by TWI, England, as long ago as 1991, no method, procedure, or approach has so far been developed that quickly yields the optimum or exact parameters for a good, sound weld. In this work, an approach is developed in which an equation is derived for obtaining an approximate rpm; by setting a range of ±100 or ±50 rpm around this approximate value, and setting the welding speed to 60 mm/min or 50 mm/min, one can conduct FSW experiments that converge quickly on the optimum parameters, i.e. the rpm and welding speed that yield a sound weld. This approach can be used effectively to obtain sound welds for all similar and dissimilar combinations of materials such as steel, Al, Mg, and Ti.
Analysis of the shrinkage at the thick plate part using response surface methodology
NASA Astrophysics Data System (ADS)
Hatta, N. M.; Azlan, M. Z.; Shayfull, Z.; Roselina, S.; Nasir, S. M.
2017-09-01
Injection moulding is a well-known manufacturing process, especially for producing plastic products. The final product quality depends on many precautions, such as the parameter settings chosen at the initial stage of the process. If these parameters are set up wrongly, defects may occur; one of the best-known defects in the injection moulding process is shrinkage. To overcome this problem, an optimal adjustment of the parameter settings must be made at the precaution stage, and this paper focuses on analysing shrinkage at a thick plate part by optimising the parameters with the help of Response Surface Methodology (RSM) and ANOVA analysis. In previous studies, the parameter found by optimisation to be most influential in minimising shrinkage of the moulded part was packing pressure. Therefore, with reference to the previous literature, packing pressure was selected as a parameter setting for this study, together with three other parameters: melt temperature, cooling time and mould temperature. The process was analysed through simulation in the Autodesk Moldflow Insight (AMI) software, with Acrylonitrile Butadiene Styrene (ABS) as the moulded-part material. The analysis shows that shrinkage can be minimised, and the significant parameters were found to be packing pressure, mould temperature and melt temperature.
Optimization of seismic isolation systems via harmony search
NASA Astrophysics Data System (ADS)
Melih Nigdeli, Sinan; Bekdaş, Gebrail; Alhan, Cenk
2014-11-01
In this article, the optimization of isolation system parameters via the harmony search (HS) optimization method is proposed for seismically isolated buildings subjected to both near-fault and far-fault earthquakes. To obtain optimum values of isolation system parameters, an optimization program was developed in Matlab/Simulink employing the HS algorithm. The objective was to obtain a set of isolation system parameters within a defined range that minimizes the acceleration response of a seismically isolated structure subjected to various earthquakes without exceeding a peak isolation system displacement limit. Several cases were investigated for different isolation system damping ratios and peak displacement limitations of seismic isolation devices. Time history analyses were repeated for the neighbouring parameters of optimum values and the results proved that the parameters determined via HS were true optima. The performance of the optimum isolation system was tested under a second set of earthquakes that was different from the first set used in the optimization process. The proposed optimization approach is applicable to linear isolation systems. Isolation systems composed of isolation elements that are inherently nonlinear are the subject of a future study. Investigation of the optimum isolation system parameters has been considered in parametric studies. However, obtaining the best performance of a seismic isolation system requires a true optimization by taking the possibility of both near-fault and far-fault earthquakes into account. HS optimization is proposed here as a viable solution to this problem.
Cervantes-Sanchez, Fernando; Hernandez-Aguirre, Arturo; Solorio-Meza, Sergio; Ornelas-Rodriguez, Manuel; Torres-Cisneros, Miguel
2016-01-01
This paper presents a novel method for improving the training step of the single-scale Gabor filters by using the Boltzmann univariate marginal distribution algorithm (BUMDA) in X-ray angiograms. Since the single-scale Gabor filters (SSG) are governed by three parameters, the optimal selection of the SSG parameters is highly desirable in order to maximize the detection performance of coronary arteries while reducing the computational time. To obtain the best set of parameters for the SSG, the area (A z) under the receiver operating characteristic curve is used as fitness function. Moreover, to classify vessel and nonvessel pixels from the Gabor filter response, the interclass variance thresholding method has been adopted. The experimental results using the proposed method obtained the highest detection rate with A z = 0.9502 over a training set of 40 images and A z = 0.9583 with a test set of 40 images. In addition, the experimental results of vessel segmentation provided an accuracy of 0.944 with the test set of angiograms. PMID:27738422
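An oriented Gabor kernel of the kind being tuned can be sketched as follows (a generic real-valued formulation; the parameter names, the modulation period tied to sigma, and the kernel size are assumptions, not the exact SSG formulation of the paper):

```python
import numpy as np

def gabor_kernel(sigma, tau, theta, size=15):
    """Real-valued oriented Gabor kernel: a Gaussian envelope elongated by
    tau along orientation theta, modulated by a cosine across it."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # across the vessel
    yr = -x * np.sin(theta) + y * np.cos(theta)   # along the vessel
    envelope = np.exp(-(xr ** 2 + (yr / tau) ** 2) / (2.0 * sigma ** 2))
    # modulation period tied to sigma here as a simplifying assumption
    return envelope * np.cos(np.pi * xr / sigma)

def gabor_bank(sigma, tau, n_orientations, size=15):
    """Bank of kernels over evenly spaced orientations; vessel detectors
    typically keep the maximum response over the bank at each pixel."""
    thetas = [k * np.pi / n_orientations for k in range(n_orientations)]
    return [gabor_kernel(sigma, tau, t, size) for t in thetas]
```

The three quantities being optimized in such a scheme correspond to the envelope width, the elongation, and the number of orientations in the bank.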
A Regionalization Approach to select the final watershed parameter set among the Pareto solutions
NASA Astrophysics Data System (ADS)
Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.
2017-12-01
The calibration of hydrological models often results in model parameters that are inconsistent with those of neighboring basins. Given the physical similarity among neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrologic Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions, and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity of a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter set that minimizes the closeness measure for the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
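The regionalized selection step can be sketched as follows (a minimal sketch under the assumption that the closeness measure is a similarity-weighted distance to the neighboring basin's parameters; the weights and values shown are illustrative):

```python
import numpy as np

def closeness(candidate, neighbor, weights):
    """Weighted distance between a candidate Pareto parameter set and the
    parameter set of a neighboring basin; a higher weight marks a parameter
    assumed to be more similar across basins."""
    c, nb, w = (np.asarray(v, float) for v in (candidate, neighbor, weights))
    return float(np.sqrt((w * (c - nb) ** 2).sum()))

def pick_regionalized(pareto_sets, neighbor, weights):
    """Choose the Pareto solution minimizing the closeness measure."""
    return min(pareto_sets, key=lambda p: closeness(p, neighbor, weights))
```

Raising the weight on a physically related parameter steers the selection toward Pareto solutions that agree with the neighbor on that parameter, which is the behavior the regionalization step relies on.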
NASA Astrophysics Data System (ADS)
Mizukami, N.; Clark, M. P.; Newman, A. J.; Wood, A.; Gutmann, E. D.
2017-12-01
Estimating spatially distributed model parameters is a grand challenge for large-domain hydrologic modeling, especially in the context of hydrologic model applications such as streamflow forecasting. Multi-scale Parameter Regionalization (MPR) is a promising technique that accounts for the effects of fine-scale geophysical attributes (e.g., soil texture, land cover, topography, climate) on model parameters, as well as nonlinear scaling effects on model parameters. MPR computes model parameters with transfer functions (TFs) that relate geophysical attributes to model parameters at the native input data resolution, and then scales them with scaling functions to the spatial resolution of the model implementation. One of the biggest challenges in the use of MPR is the identification of TFs for each model parameter: both their functional forms and their geophysical predictors. TFs used to estimate the parameters of hydrologic models typically rely on previous studies or were derived in an ad hoc, heuristic manner, potentially not utilizing the maximum information content contained in the geophysical attributes for optimal parameter identification. Thus, it is necessary to first uncover the relationships among geophysical attributes, model parameters, and hydrologic processes (i.e., hydrologic signatures) to obtain insight into which geophysical attributes are related to model parameters, and to what extent. We perform multivariate statistical analysis on a large-sample catchment data set including various geophysical attributes as well as constrained VIC model parameters at 671 unimpaired basins over the CONUS. We first calibrate the VIC model at each catchment to obtain constrained parameter sets. Additionally, parameter sets sampled during the calibration process are used for sensitivity analysis with various hydrologic signatures as objectives, to understand the relationships among geophysical attributes, parameters, and hydrologic processes.
NASA Astrophysics Data System (ADS)
Chrobak, Ł.; Maliński, M.
2018-06-01
This paper presents a comparison of three nondestructive, contactless techniques for determining the recombination parameters of silicon samples: the photoacoustic method, the modulated free carrier absorption method, and the photothermal radiometry method. The experimental set-ups used to measure the recombination parameters with these methods, as well as the theoretical models used to interpret the experimental data, are presented and described. The experimental results and their respective fits obtained with these nondestructive techniques are shown and discussed. The values of the recombination parameters obtained with these methods are also presented and compared, and the main advantages and disadvantages of the methods are discussed.
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
Kuu, Wei Y; Nail, Steven L; Sacha, Gregory
2009-03-01
The purpose of this study was to perform a rapid determination of vial heat transfer parameters, that is, the contact parameter K(cs) and the separation distance l(v), using sublimation rate profiles measured by tunable diode laser absorption spectroscopy (TDLAS). In this study, each size of vial was filled with pure water and subjected to a freeze-drying cycle in a LyoStar II dryer (FTS Systems) with step-changes of the chamber pressure set-point to 25, 50, 100, 200, 300, and 400 mTorr. K(cs) was independently determined by nonlinear parameter estimation using the sublimation rates measured at the pressure set-point of 25 mTorr. After obtaining K(cs), the l(v) value for each vial size was determined by nonlinear parameter estimation using the pooled sublimation rate profiles obtained at 25 to 400 mTorr. The vial heat transfer coefficient K(v), as a function of the chamber pressure, was then readily calculated using the obtained K(cs) and l(v) values. It is interesting to note the significant difference in K(v) between two similar types of 10 mL Schott tubing vials, primarily due to the geometry of the vial bottom, as demonstrated by images of the contact areas of the vial bottom. (c) 2008 Wiley-Liss, Inc. and the American Pharmacists Association
Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods
NASA Astrophysics Data System (ADS)
Perumal, Muthiah; Tayfur, Gokmen; Rao, C. Madhusudana; Gurarslan, Gurhan
2017-03-01
Two variants of the Muskingum flood routing method, formulated to account for the nonlinearity of the channel routing process, are investigated in this study. These variants are: (1) the three-parameter conceptual Nonlinear Muskingum (NLM) method advocated by Gill in 1978, and (2) the Variable Parameter McCarthy-Muskingum (VPMM) method recently proposed by Perumal and Price in 2013. The VPMM method does not require the rigorous calibration and validation procedures required by the NLM method, owing to established relationships of its parameters with flow and channel characteristics based on hydrodynamic principles. The parameters of the conceptual nonlinear storage equation used in the NLM method were calibrated using Artificial Intelligence Application (AIA) techniques, such as the Genetic Algorithm (GA), Differential Evolution (DE), Particle Swarm Optimization (PSO) and Harmony Search (HS). The calibration was carried out on a given set of hypothetical flood events obtained by routing a given inflow hydrograph through a set of 40 km long prismatic channel reaches using the Saint-Venant (SV) equations. The validation of the calibrated NLM method was investigated using a different set of hypothetical flood hydrographs obtained in the same set of channel reaches used for the calibration studies. Both sets of solutions obtained in the calibration and validation cases using the NLM method were compared with the corresponding solutions of the VPMM method based on pertinent evaluation measures. The results of the study reveal that the physically based VPMM method accounts for the nonlinear characteristics of flood wave movement better than the conceptually based NLM method, which requires tedious calibration and validation procedures.
NASA Astrophysics Data System (ADS)
Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.
2017-12-01
The Hoek-Brown (H-B) empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimates of the intact rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimates. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. As particular combinations of H-B parameters can informally be considered to represent specific rock types, we discuss how the minimum number of samples required depends on rock type and should correspond to some acceptable level of uncertainty in the estimates. Also, a comparison of the results from our analysis with actual rock strength data shows that the probability of obtaining reliable strength parameter estimates using small samples may be very low. We further discuss the impact of this on the ongoing implementation of reliability-based design protocols and conclude with suggestions for improvements in this respect.
Choi, Yun Jeong; Jeoung, Jin Wook; Park, Ki Ho; Kim, Dong Myung
2016-03-01
To determine and validate the diagnostic ability of a linear discriminant function (LDF) based on retinal nerve fiber layer (RNFL) and ganglion cell-inner plexiform layer (GCIPL) thickness obtained using high-definition optical coherence tomography (Cirrus HD-OCT) for discriminating between healthy controls and early glaucoma subjects. We prospectively selected 214 healthy controls and 152 glaucoma subjects (teaching set) and another independent sample of 86 healthy controls and 71 glaucoma subjects (validating set). Two scans, including 1 macular and 1 peripapillary RNFL scan, were obtained. After calculating the LDF in the teaching set using the binary logistic regression analysis, receiver operating characteristic curves were plotted and compared between the OCT-provided parameters and LDF in the validating set. The proposed LDF was 16.529-(0.132×superior RNFL)-(0.064×inferior RNFL)+(0.039×12 o'clock RNFL)+(0.038×1 o'clock RNFL)+(0.084×superior GCIPL)-(0.144×minimum GCIPL). The highest area under the receiver operating characteristic (AUROC) curve was obtained for LDF in both sets (AUROC=0.95 and 0.96). In the validating set, the LDF showed significantly higher AUROC than the best RNFL (inferior RNFL=0.91) and GCIPL parameter (minimum GCIPL=0.88). The LDF yielded a sensitivity of 93.0% at a fixed specificity of 85.0%. The LDF showed better diagnostic ability for differentiating between healthy and early glaucoma subjects than individual OCT parameters. A classification algorithm based on the LDF can be used in the OCT analysis for glaucoma diagnosis.
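The reported discriminant function can be written directly in code (coefficients copied from the abstract, with thicknesses in micrometers; the abstract does not state the classification cut-off, so none is assumed here):

```python
def glaucoma_ldf(sup_rnfl, inf_rnfl, rnfl_12, rnfl_1, sup_gcipl, min_gcipl):
    """Linear discriminant function from the abstract, combining
    peripapillary RNFL and macular GCIPL thicknesses (micrometers)."""
    return (16.529
            - 0.132 * sup_rnfl    # superior RNFL
            - 0.064 * inf_rnfl    # inferior RNFL
            + 0.039 * rnfl_12     # 12 o'clock RNFL
            + 0.038 * rnfl_1      # 1 o'clock RNFL
            + 0.084 * sup_gcipl   # superior GCIPL
            - 0.144 * min_gcipl)  # minimum GCIPL
```

Note the negative coefficients on the inferior RNFL and minimum GCIPL terms, the two individually best-performing parameters in the abstract: thinner layers push the score upward.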
NASA Astrophysics Data System (ADS)
da Costa, Diogo Ricardo; Hansen, Matheus; Guarise, Gustavo; Medrano-T, Rene O.; Leonel, Edson D.
2016-04-01
We show that extreme orbits, trajectories that connect local maximum and minimum values of one-dimensional maps, play a major role in the parameter space of dissipative systems, dictating the organization of the windows of periodicity and hence producing sets of shrimp-like structures. Here we solve three fundamental problems regarding the distribution of these sets and give: (i) their precise localization in the parameter space, even for sets of very high periods; (ii) their local and global distributions along cascades; and (iii) the association of these cascades to complicated sets of periodicity. The extreme orbits are proved to be a powerful indicator for investigating the organization of windows of periodicity in parameter planes. As applications of the theory, we obtain some results for the circle map and the perturbed logistic map. The formalism presented here can be extended to many other nonlinear and dissipative systems.
Moore, G.K.; Baten, L.G.; Allord, G.J.; Robinove, C.J.
1983-01-01
The Fox-Wolf River basin in east-central Wisconsin was selected to test concepts for a water-resources information system using digital mapping technology. This basin of 16,800 sq km is typical of many areas in the country. Fifty digital data sets were included in the Fox-Wolf information system. Many data sets were digitized from 1:500,000 scale maps and overlays. Some thematic data were acquired from WATSTORE and other digital data files. All data were geometrically transformed into a Lambert Conformal Conic map projection and converted to a raster format with a 1-km resolution. The result of this preliminary processing was a group of spatially registered, digital data sets in map form. Parameter evaluation, areal stratification, data merging, and data integration were used to achieve the processing objectives and to obtain analysis results for the Fox-Wolf basin. Parameter evaluation includes the visual interpretation of single data sets and digital processing to obtain new derived data sets. In the areal stratification stage, masks were used to extract from one data set all features that are within a selected area on another data set. Most processing results were obtained by data merging. Merging is the combination of two or more data sets into a composite product, in which the contribution of each original data set is apparent and can be extracted from the composite. One processing result was also obtained by data integration. Integration is the combination of two or more data sets into a single new product, from which the original data cannot be separated or calculated. (USGS)
Design Optimization of a Hybrid Electric Vehicle Powertrain
NASA Astrophysics Data System (ADS)
Mangun, Firdause; Idres, Moumen; Abdullah, Kassim
2017-03-01
This paper presents an optimization of a hybrid electric vehicle (HEV) powertrain using the Genetic Algorithm (GA) method. It focuses on optimizing the parameters of the powertrain components, including supercapacitors, to obtain maximum fuel economy. Vehicle modelling is based on the Quasi-Static-Simulation (QSS) backward-facing approach. A combined city (FTP-75)-highway (HWFET) drive cycle is utilized for the design process. Seeking a global optimum solution, the GA was executed with different initial settings to obtain sets of optimal parameters. Starting from a benchmark HEV, optimization results in a smaller engine (2 l instead of 3 l) and a larger battery (15.66 kWh instead of 2.01 kWh). This leads to a reduction of 38.3% in fuel consumption and 30.5% in equivalent fuel consumption. The optimized parameters are also compared with actual values for HEVs in the market.
Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki
2015-01-01
Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
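Minimizing the SSE with a gradient approach, as described above, can be illustrated on a toy model fit. The linear model, synthetic data, and learning rate are assumptions for the sketch, not drawn from the article:

```python
def model(x, a, b):
    # Toy quantitative model: a straight line standing in for a biological model
    return a * x + b

def sse(params, data):
    a, b = params
    return sum((y - model(x, a, b)) ** 2 for x, y in data)

def sse_gradient(params, data):
    # Analytic gradient of the SSE with respect to (a, b)
    a, b = params
    ga = sum(-2.0 * x * (y - model(x, a, b)) for x, y in data)
    gb = sum(-2.0 * (y - model(x, a, b)) for x, y in data)
    return ga, gb

# Synthetic "experimental" data generated with a = 3, b = 1
data = [(float(x), 3.0 * x + 1.0) for x in range(10)]

params = [0.0, 0.0]   # initial guess
rate = 0.002          # learning rate: must be small enough for stability
for _ in range(5000):
    ga, gb = sse_gradient(params, data)
    params[0] -= rate * ga
    params[1] -= rate * gb
```

For a convex SSE like this one, gradient descent finds the global minimum; the stochastic perturbations and sampling approaches discussed in the article become necessary when the SSE surface has multiple local minima.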
Chakraborty, Mousumi; Ridgway, Cathy; Bawuah, Prince; Markl, Daniel; Gane, Patrick A C; Ketolainen, Jarkko; Zeitler, J Axel; Peiponen, Kai-Erik
2017-06-15
The objective of this study is to propose a novel optical compressibility parameter for porous pharmaceutical tablets. This parameter is defined with the aid of the effective refractive index of a tablet that is obtained from non-destructive and contactless terahertz (THz) time-delay transmission measurement. The optical compressibility parameter of two training sets of pharmaceutical tablets with a priori known porosity and mass fraction of a drug was investigated. Both pharmaceutical sets were compressed with one of the most commonly used excipients, namely microcrystalline cellulose (MCC) and drug Indomethacin. The optical compressibility clearly correlates with the skeletal bulk modulus determined by mercury porosimetry and the recently proposed terahertz lumped structural parameter calculated from terahertz measurements. This lumped structural parameter can be used to analyse the pattern of arrangement of excipient and drug particles in porous pharmaceutical tablets. Therefore, we propose that the optical compressibility can serve as a quality parameter of a pharmaceutical tablet corresponding with the skeletal bulk modulus of the porous tablet, which is related to structural arrangement of the powder particles in the tablet. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chan, C. H.; Brown, G.; Rikvold, P. A.
2017-05-01
A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
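The essence of a Wang-Landau random walk — accepting moves with probability min(1, g(old)/g(new)) and refining g until the visit histogram is flat — can be sketched on a deliberately simple system whose density of states is known exactly: N non-interacting spins, where the number of up-spins k is the macroscopic order parameter and g(k) is the binomial coefficient C(N, k). This is a minimal one-dimensional illustration, not the macroscopically constrained scheme of the paper:

```python
import math, random

def wang_landau(N=10, ln_f_final=1e-6, flatness=0.8, seed=2):
    # Estimate ln g(k) for k = number of up-spins among N non-interacting spins;
    # the exact density of states is the binomial coefficient C(N, k).
    rng = random.Random(seed)
    ln_g = [0.0] * (N + 1)
    hist = [0] * (N + 1)
    spins = [rng.randint(0, 1) for _ in range(N)]
    k = sum(spins)
    ln_f = 1.0
    while ln_f > ln_f_final:
        for _ in range(20000):
            i = rng.randrange(N)
            k_new = k + (1 - 2 * spins[i])          # effect of flipping spin i
            # Accept with probability min(1, g(k) / g(k_new))
            if math.log(rng.random() + 1e-300) < ln_g[k] - ln_g[k_new]:
                spins[i] ^= 1
                k = k_new
            ln_g[k] += ln_f                          # refine g at the current state
            hist[k] += 1
        if min(hist) > flatness * sum(hist) / len(hist):  # flat-histogram check
            hist = [0] * (N + 1)
            ln_f /= 2.0                              # tighten the modification factor
    return ln_g

ln_g = wang_landau()
```

Since only ratios of g matter, the result is defined up to an additive constant in ln g; normalizing by ln g(0) allows direct comparison with ln C(N, k).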
Padró, Juan M; Ponzinibbio, Agustín; Mesa, Leidy B Agudelo; Reta, Mario
2011-03-01
The partition coefficients, P(IL/w), for different probe molecules as well as for compounds of biological interest between the room-temperature ionic liquids (RTILs) 1-butyl-3-methylimidazolium hexafluorophosphate, [BMIM][PF(6)], 1-hexyl-3-methylimidazolium hexafluorophosphate, [HMIM][PF(6)], 1-octyl-3-methylimidazolium tetrafluoroborate, [OMIM][BF(4)], and water were accurately measured. [BMIM][PF(6)] and [OMIM][BF(4)] were synthesized by adapting a procedure from the literature to a simpler, faster, single-vessel methodology with much lower consumption of organic solvent. We employed the solvation-parameter model to elucidate the general chemical interactions involved in RTIL/water partitioning. For this purpose, we selected different solute descriptor parameters that measure polarity, polarizability, hydrogen-bond-donor and hydrogen-bond-acceptor interactions, and cavity formation for a set of specifically selected probe molecules (the training set). The obtained multiparametric equations were used to predict the partition coefficients for compounds not present in the training set (the test set), most of biological interest. Partial solubility of the ionic liquid in water (and of water in the ionic liquid) was taken into account to explain the results, a factor that has received little attention to date. Solute descriptors were obtained from the literature, when available, or else calculated through commercial software. An excellent agreement between calculated and experimental log P(IL/w) values was obtained, which demonstrated that the resulting multiparametric equations are robust and allow partitioning to be predicted for any organic molecule in the biphasic systems studied.
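The solvation-parameter model is, operationally, a multiple linear regression of log P against solute descriptors. A minimal sketch with made-up descriptor values and coefficients (loosely labeled; real Abraham-type models use the descriptors E, S, A, B, V fitted to measured partition data):

```python
def lstsq(X, y):
    # Ordinary least squares via the normal equations (X^T X) b = X^T y,
    # solved by Gaussian elimination (X^T X is positive definite here).
    m, n = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(m)) for j in range(n)] for i in range(n)]
    b = [sum(X[r][i] * y[r] for r in range(m)) for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            for k in range(i, n):
                A[j][k] -= f * A[i][k]
            b[j] -= f * b[i]
    coef = [0.0] * n
    for i in range(n - 1, -1, -1):
        coef[i] = (b[i] - sum(A[i][k] * coef[k] for k in range(i + 1, n))) / A[i][i]
    return coef

# Hypothetical solute descriptors for a 6-molecule training set
# (columns loosely: polarity E, H-bonding S, cavity/volume V)
descriptors = [(1.0, 0.5, 0.7), (0.2, 1.1, 0.9), (0.9, 0.3, 1.4),
               (0.4, 0.8, 0.5), (1.3, 0.2, 1.0), (0.6, 0.9, 1.2)]
true = [0.5, 1.2, -0.8, 2.0]                  # c, e, s, v (made-up coefficients)
X = [[1.0, e, s, v] for e, s, v in descriptors]
logP = [sum(c * x for c, x in zip(true, row)) for row in X]

coef = lstsq(X, logP)   # fitted multiparametric equation
```

Once the coefficients are fitted on the training set, predicting log P for a test compound is a single dot product of its descriptors with `coef`.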
Extracting the QCD ΛMS¯ parameter in Drell-Yan process using Collins-Soper-Sterman approach
NASA Astrophysics Data System (ADS)
Taghavi, R.; Mirjalili, A.
2017-03-01
In this work, we directly fit the QCD dimensional transmutation parameter, ΛMS¯, to experimental data for Drell-Yan (DY) observables. For this purpose, we first obtain the evolution of transverse momentum dependent parton distribution functions (TMDPDFs) up to the next-to-next-to-leading logarithm (NNLL) approximation based on the Collins-Soper-Sterman (CSS) formalism. As expected, the TMDPDFs extend to larger values of transverse momentum as the energy scale and the order of approximation increase. We then calculate the cross-section related to the TMDPDFs in the DY process. From a global fit to five sets of experimental data at different low center-of-mass energies and one set at high center-of-mass energy, using CETQ06 parametrizations as our boundary condition, we obtain ΛMS¯ = 221 ± 7(stat) ± 54(theory) MeV, corresponding to the renormalized coupling constant αs(Mz2) = 0.117 ± 0.001(stat) ± 0.004(theory), which is within the acceptable range for this quantity. A goodness of fit of χ2/d.o.f = 1.34 shows that the results for the DY cross-section are in good agreement with the different experimental sets, comprising E288, E605 and R209 at low center-of-mass energies and D0 and CDF data at high center-of-mass energy. Repeating the calculations with HERAPDF parametrizations yields fitted parameter values very close to those obtained with the CETQ06 PDF set, indicating that the results are stable under variations of the boundary condition.
NASA Astrophysics Data System (ADS)
Bars, Itzhak; Chen, Shih-Hung; Steinhardt, Paul J.; Turok, Neil
2012-10-01
We study a model of a scalar field minimally coupled to gravity, with a specific potential energy for the scalar field, and include curvature and radiation as two additional parameters. Our goal is to obtain analytically the complete set of configurations of a homogeneous and isotropic universe as a function of time. This leads to a geodesically complete description of the Universe, including the passage through the cosmological singularities, at the classical level. We give all the solutions analytically without any restrictions on the parameter space of the model or initial values of the fields. We find that for generic solutions the Universe goes through a singular (zero-size) bounce by entering a period of antigravity at each big crunch and exiting from it at the following big bang. This happens cyclically again and again without violating the null-energy condition. There is a special subset of geodesically complete nongeneric solutions which perform zero-size bounces without ever entering the antigravity regime in all cycles. For these, initial values of the fields are synchronized and quantized but the parameters of the model are not restricted. There is also a subset of spatial curvature-induced solutions that have finite-size bounces in the gravity regime and never enter the antigravity phase. These exist only within a small continuous domain of parameter space without fine-tuning the initial conditions. To obtain these results, we identified 25 regions of a 6-parameter space in which the complete set of analytic solutions are explicitly obtained.
Torres, Edmanuel; DiLabio, Gino A
2013-08-13
Large clusters of noncovalently bonded molecules can only be efficiently modeled by classical mechanics simulations. One prominent challenge associated with this approach is obtaining force-field parameters that accurately describe noncovalent interactions. High-level correlated wave function methods, such as CCSD(T), are capable of correctly predicting noncovalent interactions, and are widely used to produce reference data. However, high-level correlated methods are generally too computationally costly to generate the critical reference data required for good force-field parameter development. In this work we present an approach to generate Lennard-Jones force-field parameters to accurately account for noncovalent interactions. We propose the use of a computational step that is intermediate to CCSD(T) and classical molecular mechanics, that can bridge the accuracy and computational efficiency gap between them, and demonstrate the efficacy of our approach with methane clusters. On the basis of CCSD(T)-level binding energy data for a small set of methane clusters, we develop methane-specific, atom-centered, dispersion-correcting potentials (DCPs) for use with the PBE0 density-functional and 6-31+G(d,p) basis sets. We then use the PBE0-DCP approach to compute a detailed map of the interaction forces associated with the removal of a single methane molecule from a cluster of eight methane molecules and use this map to optimize the Lennard-Jones parameters for methane. The quality of the binding energies obtained by the Lennard-Jones parameters we obtained is assessed on a set of methane clusters containing from 2 to 40 molecules. Our Lennard-Jones parameters, used in combination with the intramolecular parameters of the CHARMM force field, are found to closely reproduce the results of our dispersion-corrected density-functional calculations. The approach outlined can be used to develop Lennard-Jones parameters for any kind of molecular system.
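Fitting Lennard-Jones parameters to reference binding energies reduces, in its simplest form, to minimizing a least-squares cost over (ε, σ). In this sketch the "reference" curve is itself generated from a known LJ potential, standing in for the dispersion-corrected DFT data, so the fit should recover it exactly; all numbers are illustrative, not the paper's methane parameters:

```python
def lj(r, eps, sigma):
    # 12-6 Lennard-Jones pair energy
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# Hypothetical reference binding-energy curve at a range of separations,
# standing in for PBE0-DCP interaction data.
ref_eps, ref_sigma = 0.3, 3.7
rs = [3.4 + 0.1 * i for i in range(20)]
ref = [lj(r, ref_eps, ref_sigma) for r in rs]

def cost(eps, sigma):
    # Least-squares mismatch between a candidate LJ curve and the reference
    return sum((lj(r, eps, sigma) - e) ** 2 for r, e in zip(rs, ref))

# Brute-force grid search over (eps, sigma); the grid contains the true pair.
best_cost, best_eps, best_sigma = min(
    (cost(e / 100.0, s / 100.0), e / 100.0, s / 100.0)
    for e in range(10, 61) for s in range(330, 411))
```

In practice the reference data are noisy and multi-body, so a gradient-based or global optimizer replaces the grid, but the objective has the same structure.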
OPC modeling by genetic algorithm
NASA Astrophysics Data System (ADS)
Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Tsay, C. S.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.; Lin, B. J.
2005-05-01
Optical proximity correction (OPC) is usually used to pre-distort mask layouts to make the printed patterns as close to the desired shapes as possible. Model-based OPC requires a lithographic model to predict critical dimensions after lithographic processing. The model is usually obtained via a regression of parameters based on experimental data containing optical proximity effects. When the parameters involve a mix of continuous (optical and resist models) and discrete (kernel numbers) sets, traditional numerical optimization methods may have difficulty with the model fitting. In this study, an artificial-intelligence optimization method was used to regress the parameters of the lithographic models for OPC. The implemented phenomenological models were constant-threshold models that combine diffused aerial image models with loading effects. Optical kernels decomposed from Hopkins' equation were used to calculate aerial images on the wafer. The numbers of optical kernels were likewise treated as regression parameters. In this way, good regression results were obtained with different sets of optical proximity effect data.
NASA Astrophysics Data System (ADS)
Alet, Analía. I.; Basso, Sabrina; Delannoy, Marcela; Alet, Nicolás. A.; D'Arrigo, Mabel; Castellini, Horacio V.; Riquelme, Bibiana D.
2015-06-01
Drugs used during anesthesia can enhance microvascular flow disturbance, not only through their systemic cardiovascular actions but also by a direct effect on the microcirculation and in particular on hemorheology. This is particularly important in high-risk surgical patients such as those with vascular disease (diabetes, hypertension, etc.). Therefore, in this work we propose a set of innovative parameters, obtained by digital analysis of microscopic images, to study the in vitro hemorheological effects of propofol and vecuronium on red blood cells from type 2 diabetic patients compared to healthy donors. The obtained parameters quantify alterations in erythrocyte aggregation that can increase the in vivo risk of microcapillary obstruction.
NASA Astrophysics Data System (ADS)
Beck, Hylke; de Roo, Ad; van Dijk, Albert; McVicar, Tim; Miralles, Diego; Schellekens, Jaap; Bruijnzeel, Sampurno; de Jeu, Richard
2015-04-01
Motivated by the lack of large-scale model parameter regionalization studies, a large set of 3328 small catchments (< 10000 km2) around the globe was used to set up and evaluate five model parameterization schemes at global scale. The HBV-light model was chosen because of its parsimony and flexibility to test the schemes. The catchments were calibrated against observed streamflow (Q) using an objective function incorporating both behavioral and goodness-of-fit measures, after which the catchment set was split into subsets of 1215 donor and 2113 evaluation catchments based on the calibration performance. The donor catchments were subsequently used to derive parameter sets that were transferred to similar grid cells based on a similarity measure incorporating climatic and physiographic characteristics, thereby producing parameter maps with global coverage. Overall, there was a lack of suitable donor catchments for mountainous and tropical environments. The schemes with spatially-uniform parameter sets (EXP2 and EXP3) achieved the worst Q estimation performance in the evaluation catchments, emphasizing the importance of parameter regionalization. The direct transfer of calibrated parameter sets from donor catchments to similar grid cells (scheme EXP1) performed best, although there was still a large performance gap between EXP1 and HBV-light calibrated against observed Q. The schemes with parameter sets obtained by simultaneously calibrating clusters of similar donor catchments (NC10 and NC58) performed worse than EXP1. The relatively poor Q estimation performance achieved by two (uncalibrated) macro-scale hydrological models suggests there is considerable merit in regionalizing the parameters of such models. The global HBV-light parameter maps and ancillary data are freely available via http://water.jrc.ec.europa.eu.
Assessment of central haemodynamics from a brachial cuff in a community setting
2012-01-01
Background Large artery stiffening and wave reflections are independent predictors of adverse events. To date, their assessment has been limited to specialised techniques and settings. A new, more practical method allowing assessment of central blood pressure from waveforms recorded using a conventional automated oscillometric monitor has recently been validated in laboratory settings. However, the feasibility of this method in a community based setting has not been assessed. Methods One-off peripheral and central haemodynamic (systolic and diastolic blood pressure (BP) and pulse pressure) and wave reflection parameters (augmentation pressure (AP) and index, AIx) were obtained from 1,903 volunteers in an Austrian community setting using a transfer-function like method (ARCSolver algorithm) and from waveforms recorded with a regular oscillometric cuff. We assessed these parameters for known differences and associations according to gender and age deciles from <30 years to >80 years in the whole population and a subset with a systolic BP < 140 mmHg. Results We obtained 1,793 measures of peripheral and central BP, PP and augmentation parameters. Age and gender associations with central haemodynamic and augmentation parameters reflected those previously established from reference standard non-invasive techniques under specialised settings. Findings were the same for patients with a systolic BP below 140 mmHg (i.e. normotensive). Lower values for AIx in the current study are possibly due to differences in sampling rates, detection frequency and/or averaging procedures and to lower numbers of volunteers in younger age groups. Conclusion A novel transfer-function like algorithm, using brachial cuff-based waveform recordings, provides robust and feasible estimates of central systolic pressure and augmentation in community-based settings. PMID:22734820
NASA Technical Reports Server (NTRS)
Mather, R. S.; Lerch, F. J.; Rizos, C.; Masters, E. G.; Hirsch, B.
1978-01-01
The 1977 altimetry data bank is analyzed for the geometrical shape of the sea surface expressed as surface spherical harmonics after referral to the higher reference model defined by GEM 9. The resulting determination is expressed as quasi-stationary dynamic SST. Solutions are obtained from different sets of long arcs in the GEOS-3 altimeter data bank as well as from sub-sets related to the September 1975 and March 1976 equinoxes assembled with a view to minimizing seasonal effects. The results are compared with equivalent parameters obtained from the hydrostatic analysis of sporadic temperature, pressure and salinity measurements of the oceans and the known major steady state current systems with comparable wavelengths. The most clearly defined parameter (the zonal harmonic of degree 2) is obtained with an uncertainty of + or - 6 cm. The preferred numerical value is smaller than the oceanographic value due to the effect of the correction for the permanent earth tide. Similar precision is achieved for the zonal harmonic of degree 3. The precision obtained for the fourth degree zonal harmonic reflects more closely the accuracy expected from the level of noise in the orbital solutions.
Cellular Therapy to Obtain Rapid Endochondral Bone Formation
2008-02-01
…efficiency of the delivery cells for optimal BMP2 production is the key parameter in determining the extent of bone formation (Olmsted et al., 2001)… …quantitative bone analysis software provided with the MicroCT system. For this analysis, any tissue with a hydroxyapatite density greater than 0.26… …duced cells do not interfere with the osteoinductive nature of BMP2. Using set parameters to obtain equivalent functional BMP2…
Jing, Nan; Li, Chuang; Chong, Yaqin
2017-01-20
An estimation method for indirectly observable parameters of a typical low dynamic vehicle (LDV) is presented. The method uses apparent magnitude, azimuth angle, and elevation angle to estimate the position and velocity of a typical LDV, such as a high altitude balloon (HAB). To validate the accuracy of the parameters estimated with an unscented Kalman filter, two sets of experiments were carried out to obtain nonresolved photometric and astrometric data. In the experiments, a HAB launch was planned; models of the HAB dynamics and kinematics and observation models were built for use as the time update and measurement update functions, respectively. When the HAB was launched, a ground-based optoelectronic detector captured images of the object, which were processed using aperture photometry to obtain the time-varying apparent magnitude of the HAB. Two sets of actual and estimated parameters are given to indicate the parameter differences clearly, together with two sets of errors between the actual and estimated parameters showing how the estimated position and velocity differ over the observation time. The similar distribution curves obtained in the two scenarios, which agree within 3σ, verify that nonresolved photometric and astrometric data can be used to estimate the indirectly observable state parameters (position and velocity) of a typical LDV. This technique can be applied to small and dim space objects in the future.
Chaos control of Hastings-Powell model by combining chaotic motions.
Danca, Marius-F; Chattopadhyay, Joydev
2016-04-01
In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings-Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the averaged value of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equals the average of the switching values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied for the first time to the HP system, by showing that a losing strategy can win: "losing + losing = winning." If "losing" is replaced with "chaos" and "winning" with "order" (as the opposite of "chaos"), then by switching the parameter value in the HP system between two values that generate chaotic motions, the PS algorithm can approximate a stable cycle, so that symbolically one can write "chaos + chaos = regular." Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.
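The core of the PS idea — switching a parameter among several values during numerical integration and recovering the behavior of the averaged parameter — can be illustrated on a toy ODE that is linear in its parameter (not the three-species HP food-chain model itself; all values below are arbitrary):

```python
import math

def simulate(p_of_step, x0=0.0, h=0.001, steps=20000):
    # Forward-Euler integration of dx/dt = -p(t)*x + cos(t), where the
    # parameter value for each step is supplied by p_of_step.
    x, t = x0, 0.0
    for n in range(steps):
        x += h * (-p_of_step(n) * x + math.cos(t))
        t += h
    return x

p1, p2 = 0.5, 1.5
# PS run: switch the parameter between p1 and p2 on every integration step
x_switched = simulate(lambda n: p1 if n % 2 == 0 else p2)
# Reference run: integrate with the averaged parameter value
x_averaged = simulate(lambda n: 0.5 * (p1 + p2))
```

Because the right-hand side is linear in the parameter and the switching is fast relative to the dynamics, the switched trajectory shadows the averaged-parameter trajectory to within the integration step size — the same mechanism the PS algorithm exploits in the HP system.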
Paliwal, Himanshu; Shirts, Michael R
2013-11-12
Multistate reweighting methods such as the multistate Bennett acceptance ratio (MBAR) can predict free energies and expectation values of thermodynamic observables at poorly sampled or unsampled thermodynamic states using simulations performed at only a few sampled states combined with single point energy reevaluations of these samples at the unsampled states. In this study, we demonstrate the power of this general reweighting formalism by exploring the effect of simulation parameters controlling Coulomb and Lennard-Jones cutoffs on free energy calculations and other observables. Using multistate reweighting, we can quickly identify, with very high sensitivity, the computationally least expensive nonbonded parameters required to obtain a specified accuracy in observables compared to the answer obtained using an expensive "gold standard" set of parameters. We specifically examine free energy estimates of three molecular transformations in a benchmark molecular set as well as the enthalpy of vaporization of TIP3P. The results demonstrate the power of this multistate reweighting approach for measuring changes in free energy differences or other estimators with respect to simulation or model parameters with very high precision and/or very low computational effort. The results also help to identify which simulation parameters affect free energy calculations and provide guidance to determine which simulation parameters are both appropriate and computationally efficient in general.
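The simplest member of this reweighting family is Zwanzig's exponential averaging, which predicts the free energy difference to an unsampled state from samples of a single sampled state. A sketch on 1-D harmonic wells at β = 1, where the exact answer is ΔF = ½ ln k (the potentials are toy stand-ins, not the molecular systems of the study):

```python
import math, random

rng = random.Random(3)

def u0(x):                        # potential of the sampled state
    return 0.5 * x * x

def u1(x):                        # potential of the unsampled target state (k = 2)
    return 0.5 * 2.0 * x * x

# Samples from the Boltzmann distribution of u0 at beta = 1: a standard normal
samples = [rng.gauss(0.0, 1.0) for _ in range(200000)]

# Zwanzig exponential reweighting: dF = -ln < exp(-(u1 - u0)) >_0
# (single point energy reevaluation of each sample at the unsampled state)
mean_w = sum(math.exp(-(u1(x) - u0(x))) for x in samples) / len(samples)
dF_estimate = -math.log(mean_w)

dF_exact = 0.5 * math.log(2.0)    # analytic free energy difference
```

MBAR generalizes this by combining samples from several states with statistically optimal weights, which greatly reduces the variance when the sampled and target distributions overlap poorly.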
Automated Structural Optimization System (ASTROS). Volume 1. Theoretical Manual
1988-12-01
corresponding frequency list are given by Equation C-9. The second set of parameters is the frequency list used in solving Equation C-3 to obtain the response vector u(ω). This frequency list is: ω = 2πf₀, 2πf₁, 2πf₂, …, 2πfₙ (C-20). The frequency lists ω̂ and ω are not necessarily equal. While setting… alternative methods are used to input the frequency list ω. For the first method, the frequency list ω is input via two parameters: Δf (C-21
An extended harmonic balance method based on incremental nonlinear control parameters
NASA Astrophysics Data System (ADS)
Khodaparast, Hamed Haddad; Madinei, Hadi; Friswell, Michael I.; Adhikari, Sondipon; Coggon, Simon; Cooper, Jonathan E.
2017-02-01
A new formulation for calculating the steady-state responses of multiple-degree-of-freedom (MDOF) non-linear dynamic systems due to harmonic excitation is developed. This is aimed at solving multi-dimensional nonlinear systems using linear equations. Nonlinearity is parameterised by a set of 'non-linear control parameters' such that the dynamic system is effectively linear for zero values of these parameters and nonlinearity increases with increasing values of these parameters. Two sets of linear equations which are formed from a first-order truncated Taylor series expansion are developed. The first set of linear equations provides the summation of sensitivities of linear system responses with respect to non-linear control parameters and the second set are recursive equations that use the previous responses to update the sensitivities. The obtained sensitivities of steady-state responses are then used to calculate the steady state responses of non-linear dynamic systems in an iterative process. The application and verification of the method are illustrated using a non-linear Micro-Electro-Mechanical System (MEMS) subject to a base harmonic excitation. The non-linear control parameters in these examples are the DC voltages that are applied to the electrodes of the MEMS devices.
Strategies for Efficient Computation of the Expected Value of Partial Perfect Information
Madan, Jason; Ades, Anthony E.; Price, Malcolm; Maitland, Kathryn; Jemutai, Julie; Revill, Paul; Welton, Nicky J.
2014-01-01
Expected value of information methods evaluate the potential health benefits that can be obtained from conducting new research to reduce uncertainty in the parameters of a cost-effectiveness analysis model, hence reducing decision uncertainty. Expected value of partial perfect information (EVPPI) provides an upper limit to the health gains that can be obtained from conducting a new study on a subset of parameters in the cost-effectiveness analysis and can therefore be used as a sensitivity analysis to identify parameters that most contribute to decision uncertainty and to help guide decisions around which types of study are of most value to prioritize for funding. A common general approach is to use nested Monte Carlo simulation to obtain an estimate of EVPPI. This approach is computationally intensive, can lead to significant sampling bias if an inadequate number of inner samples are obtained, and incorrect results can be obtained if correlations between parameters are not dealt with appropriately. In this article, we set out a range of methods for estimating EVPPI that avoid the need for nested simulation: reparameterization of the net benefit function, Taylor series approximations, and restricted cubic spline estimation of conditional expectations. For each method, we set out the generalized functional form that net benefit must take for the method to be valid. By specifying this functional form, our methods are able to focus on components of the model in which approximation is required, avoiding the complexities involved in developing statistical approximations for the model as a whole. Our methods also allow for any correlations that might exist between model parameters. We illustrate the methods using an example of fluid resuscitation in African children with severe malaria. PMID:24449434
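The nested Monte Carlo estimator of EVPPI that the authors seek to avoid can be sketched on a toy two-decision model with two independent standard-normal parameters, where the exact EVPPI for θ₁ is 1/√(2π) ≈ 0.399 (the model and sample sizes are illustrative, not from the article):

```python
import math, random

rng = random.Random(4)

def net_benefit(d, theta1, theta2):
    # Toy model: the new treatment (d = 1) has uncertain incremental
    # net benefit theta1 + theta2; the comparator (d = 0) is the baseline.
    return theta1 + theta2 if d == 1 else 0.0

def evppi_theta1(n_outer=2000, n_inner=200):
    # Value of the best decision under current (full) uncertainty
    mean_nb = [0.0, 0.0]
    for _ in range(n_outer):
        t1, t2 = rng.gauss(0, 1), rng.gauss(0, 1)
        for d in (0, 1):
            mean_nb[d] += net_benefit(d, t1, t2) / n_outer
    current_value = max(mean_nb)
    # Nested loop: resolve theta1 (outer draw), keep theta2 uncertain (inner draws)
    total = 0.0
    for _ in range(n_outer):
        t1 = rng.gauss(0, 1)
        inner = [0.0, 0.0]
        for _ in range(n_inner):
            t2 = rng.gauss(0, 1)
            for d in (0, 1):
                inner[d] += net_benefit(d, t1, t2) / n_inner
        total += max(inner)
    return total / n_outer - current_value

evppi = evppi_theta1()
```

The inner expectation must be re-estimated for every outer draw, which is exactly the cost (and the source of the inner-sample bias) that the reparameterization and spline-based methods of the article are designed to remove.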
Coordinate transformation by minimizing correlations between parameters
NASA Technical Reports Server (NTRS)
Kumar, M.
1972-01-01
The aim of this investigation was to determine the transformation parameters (three rotations, three translations, and a scale factor) between two Cartesian coordinate systems from sets of coordinates given in both systems. The objective was the determination of well-separated transformation parameters with reduced correlations between each other, a problem especially relevant when the sets of coordinates are not well distributed. This objective is achieved by preliminarily determining the three rotational parameters and the scale factor from the respective direction cosines and chord distances (these being independent of the translation parameters) between the common points, and then computing all seven parameters in a solution in which the rotations and the scale factor enter as weighted constraints according to their variances and covariances obtained in the preliminary solutions. Numerical tests involving two geodetic reference systems were performed to evaluate the effectiveness of this approach.
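The two-stage strategy — rotation and scale first, from translation-invariant quantities (chords), then translations — can be sketched in a 2-D analogue of the 7-parameter problem with noise-free synthetic points (all coordinates and parameter values are made up):

```python
import math

# Hypothetical common points in system A, and the same points in system B,
# where B = scale * R(angle) * A + t  (2-D analogue of the 7-parameter case)
pts_a = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (7.0, 4.0)]
true_scale, true_angle, t = 1.0005, math.radians(0.2), (12.0, -5.0)
ca, sa = math.cos(true_angle), math.sin(true_angle)
pts_b = [(true_scale * (ca * x - sa * y) + t[0],
          true_scale * (sa * x + ca * y) + t[1]) for x, y in pts_a]

# Step 1: scale and rotation from a chord between two common points --
# chord length and direction are independent of the unknown translation.
(ax0, ay0), (ax1, ay1) = pts_a[0], pts_a[1]
(bx0, by0), (bx1, by1) = pts_b[0], pts_b[1]
dax, day = ax1 - ax0, ay1 - ay0
dbx, dby = bx1 - bx0, by1 - by0
scale = math.hypot(dbx, dby) / math.hypot(dax, day)
angle = math.atan2(dby, dbx) - math.atan2(day, dax)

# Step 2: translation from any common point once scale and rotation are fixed
c, s = math.cos(angle), math.sin(angle)
tx = bx0 - scale * (c * ax0 - s * ay0)
ty = by0 - scale * (s * ax0 + c * ay0)
```

With noisy coordinates, each chord gives one estimate and the preliminary values (with their variances) then enter a combined least-squares solution as weighted constraints, as in the abstract.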
Recalibrating disease parameters for increasing realism in modeling epidemics in closed settings.
Bioglio, Livio; Génois, Mathieu; Vestergaard, Christian L; Poletto, Chiara; Barrat, Alain; Colizza, Vittoria
2016-11-14
The homogeneous mixing assumption is widely adopted in epidemic modelling for its parsimony and represents the building block of more complex approaches, including very detailed agent-based models. The latter assume homogeneous mixing within schools, workplaces and households, mostly for the lack of detailed information on human contact behaviour within these settings. The recent data availability on high-resolution face-to-face interactions makes it now possible to assess the goodness of this simplified scheme in reproducing relevant aspects of the infection dynamics. We consider empirical contact networks gathered in different contexts, as well as synthetic data obtained through realistic models of contacts in structured populations. We perform stochastic spreading simulations on these contact networks and in populations of the same size under a homogeneous mixing hypothesis. We adjust the epidemiological parameters of the latter in order to fit the prevalence curve of the contact epidemic model. We quantify the agreement by comparing epidemic peak times, peak values, and epidemic sizes. Good approximations of the peak times and peak values are obtained with the homogeneous mixing approach, with a median relative difference smaller than 20 % in all cases investigated. Accuracy in reproducing the peak time depends on the setting under study, while for the peak value it is independent of the setting. Recalibration is found to be linear in the epidemic parameters used in the contact data simulations, showing changes across empirical settings but robustness across groups and population sizes. An adequate rescaling of the epidemiological parameters can yield a good agreement between the epidemic curves obtained with a real contact network and a homogeneous mixing approach in a population of the same size. 
The use of such recalibrated homogeneous mixing approximations would enhance the accuracy and realism of agent-based simulations and limit the intrinsic biases of the homogeneous mixing.
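Under strong simplifying assumptions, the recalibration step can be sketched as a one-parameter fit: run a deterministic homogeneous-mixing SIR over a grid of transmission rates and keep the rate whose prevalence curve best matches a reference curve. Here the reference is itself a stand-in for the contact-network simulation; all rates and population sizes are invented.

```python
import numpy as np

def sir_prevalence(beta, gamma=0.2, n=1000, i0=10, steps=200, dt=0.5):
    """Deterministic homogeneous-mixing SIR; returns the prevalence curve I(t)."""
    s, i = float(n - i0), float(i0)
    out = []
    for _ in range(steps):
        new_inf = dt * beta * s * i / n   # new infections this step
        new_rec = dt * gamma * i          # new recoveries this step
        s, i = s - new_inf, i + new_inf - new_rec
        out.append(i)
    return np.array(out)

reference = sir_prevalence(0.5)           # stand-in for the contact-network curve
betas = np.linspace(0.1, 1.0, 91)
errors = [float(np.sum((sir_prevalence(b) - reference) ** 2)) for b in betas]
beta_hat = float(betas[int(np.argmin(errors))])
```

The same grid could score only peak time, peak value, or epidemic size instead of the full curve, as in the study's comparison metrics.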
NASA Astrophysics Data System (ADS)
Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten
2017-07-01
Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral
sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral
parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
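The hierarchical constraint idea can be sketched as successive filtering of candidate parameter sets. The metric names and thresholds below are purely illustrative, not those used with DHSVM.

```python
import numpy as np

def filter_behavioral(metrics, constraints, n_sets):
    """Return indices of parameter sets surviving every constraint stage, applied in order."""
    keep = np.arange(n_sets)
    for name, test in constraints:
        keep = keep[test(metrics[name][keep])]  # each stage shrinks the behavioral set
    return keep

rng = np.random.default_rng(0)
metrics = {"nse": rng.random(10), "swe_err": rng.random(10)}   # toy scores for 10 sets
constraints = [
    ("nse", lambda v: v > 0.5),       # hydrograph fit, illustrative threshold
    ("swe_err", lambda v: v < 0.4),   # snow-water-equivalent error, illustrative
]
surviving = filter_behavioral(metrics, constraints, 10)
```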
Parameter Set Cloning Based on Catchment Similarity for Large-scale Hydrologic Modeling
NASA Astrophysics Data System (ADS)
Liu, Z.; Kaheil, Y.; McCollum, J.
2016-12-01
Parameter calibration is a crucial step to ensure the accuracy of hydrological models. However, streamflow gauges are not available everywhere for calibrating a large-scale hydrologic model globally. Thus, assigning parameters appropriately for regions where the calibration cannot be performed directly has been a challenge for large-scale hydrologic modeling. Here we propose a method to estimate the model parameters in ungauged regions based on the values obtained through calibration in areas where gauge observations are available. This parameter set cloning is performed according to a catchment similarity index, a weighted sum index based on four catchment characteristic attributes. These attributes are IPCC Climate Zone, Soil Texture, Land Cover, and Topographic Index. The catchments with calibrated parameter values are donors, while the uncalibrated catchments are candidates. Catchment characteristic analyses are first conducted for both donors and candidates. For each attribute, we compute a characteristic distance between donors and candidates. Next, for each candidate, weights are assigned to the four attributes such that higher weights are given to properties that are more directly linked to the hydrologic dominant processes. This will ensure that the parameter set cloning emphasizes the dominant hydrologic process in the region where the candidate is located. The catchment similarity index for each donor - candidate couple is then created as the sum of the weighted distance of the four properties. Finally, parameters are assigned to each candidate from the donor that is "most similar" (i.e. with the shortest weighted distance sum). For validation, we applied the proposed method to catchments where gauge observations are available, and compared simulated streamflows using the parameters cloned by other catchments to the results obtained by calibrating the hydrologic model directly using gauge data. 
The comparison shows good agreement between the two approaches for different river basins. This method has been applied globally to the Hillslope River Routing (HRR) model using gauge observations obtained from the Global Runoff Data Center (GRDC). As a next step, more catchment properties can be taken into account to further improve the representation of catchment similarity.
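A minimal sketch of the donor selection step, with made-up attribute values and weights; the paper's actual attributes are categorical or map-derived, handled here as plain numbers for illustration.

```python
def pick_donor(candidate, donors, weights):
    """Pick the donor with the smallest weighted attribute distance (similarity index)."""
    best_name, best_score = None, float("inf")
    for name, attrs in donors:
        score = sum(w * abs(candidate[a] - attrs[a]) for a, w in weights.items())
        if score < best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Hypothetical donor catchments described by four normalized attributes.
donors = [("basin_A", {"climate": 1.0, "soil": 0.2, "land": 0.5, "topo": 0.7}),
          ("basin_B", {"climate": 0.1, "soil": 0.9, "land": 0.4, "topo": 0.3})]
# Weights favor the attribute tied to the dominant hydrologic process (illustrative).
weights = {"climate": 0.4, "soil": 0.3, "land": 0.2, "topo": 0.1}
candidate = {"climate": 0.15, "soil": 0.8, "land": 0.45, "topo": 0.35}
donor, score = pick_donor(candidate, donors, weights)
```

The candidate would then inherit the calibrated parameter set of the selected donor.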
Fuzzy logic controller optimization
Sepe, Jr., Raymond B; Miller, John Michael
2004-03-23
A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
Kapiriri, Lydia
2017-06-19
While there have been efforts to develop frameworks to guide healthcare priority setting, there has been limited focus on evaluation frameworks. Moreover, while the few existing frameworks identify quality indicators for successful priority setting, they do not provide users with strategies to verify these indicators. Kapiriri and Martin (Health Care Anal 18:129-147, 2010) developed a framework for evaluating priority setting in low and middle income countries. This framework provides both parameters for successful priority setting and proposed means of their verification. Before its use in real life contexts, this paper presents results from a validation process of the framework. The framework validation involved 53 policy makers and priority setting researchers at the global, national and sub-national levels (in Uganda). They were requested to indicate the relative importance of the proposed parameters as well as the feasibility of obtaining the related information. We also pilot tested the proposed means of verification. Almost all the respondents evaluated all the parameters, including the contextual factors, as 'very important'. However, some respondents at the global level thought 'presence of incentives to comply', 'reduced disagreements', 'increased public understanding', 'improved institutional accountability' and 'meeting the ministry of health objectives' were less important, which could be a reflection of their levels of decision making. All the proposed means of verification were assessed as feasible, with the exception of meeting observations, which would require an insider. These findings were consistent with those obtained from the pilot testing. They are relevant to policy makers and researchers involved in priority setting in low and middle income countries. To the best of our knowledge, this is one of the few initiatives that have involved potential users of a framework (at the global level and in a low income country) in its validation. 
The favorable validation of all the parameters at the national and sub-national levels implies that the framework has potential usefulness at those levels, as is. The parameters that were disputed at the global level necessitate further discussion when using the framework at that level. The next step is to use the validated framework in evaluating actual priority setting at the different levels.
Simulation of car collision with an impact block
NASA Astrophysics Data System (ADS)
Kostek, R.; Aleksandrowicz, P.
2017-10-01
This article presents the experimental results of a crash test of a Fiat Cinquecento performed by the Allgemeiner Deutscher Automobil-Club (ADAC) and the simulation results obtained with the program V-SIM using its default settings. At the next stage, a wheel was blocked and the parameters of contact between the vehicle and the barrier were changed to obtain a better match with the experiment. The following contact parameters were identified: stiffness in the compression phase, stiffness in the restitution phase, and the coefficients of restitution and friction. The changes led to various post-impact positions, which shows the sensitivity of the results to the contact parameters. V-SIM is commonly used by expert witnesses, who tend to rely on default settings; therefore, companies offering simulation programs should identify these parameters with due diligence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbiener, W.A.; Cudnik, R.A.; Dykhuizen, R.C.
Experimental studies were conducted in a 2/15-scale model of a four-loop pressurized water reactor at pressures to 75 psia to extend the understanding of steam-water interaction phenomena and processes associated with a loss-of-coolant accident. Plenum filling studies were conducted with hydraulic communication between the cold leg and core steam supplies and hot walls, with both fixed and ramped steam flows. Comparisons of correlational fits have been made for penetration data obtained with hydraulic communication, fixed cold leg steam, and no cold leg steam. Statistical tests applied to these correlational fits have indicated that the hydraulic communication and fixed cold leg steam data can be considered to be a common data set. Comparing either of these data sets to the no cold leg steam data using the statistical test indicated that it was unlikely that these sets could be considered to be a common data set. The introduction of cold leg steam results in a slight decrease in penetration relative to that obtained without cold leg steam at the same value of subcooling of water entering the downcomer. A dimensionless parameter which is a weighted mean of a modified Froude number and the Weber number has been proposed as a scaling parameter for penetration data. This parameter contains an additional degree of freedom which allows data from different scales to collapse more closely to a single curve than current scaling parameters permit.
Monitoring wilderness stream ecosystems
Jeffrey C. Davis; G. Wayne Minshall; Christopher T. Robinson; Peter Landres
2001-01-01
A protocol and methods for monitoring the major physical, chemical, and biological components of stream ecosystems are presented. The monitoring protocol is organized into four stages. At stage 1 information is obtained on a basic set of parameters that describe stream ecosystems. Each following stage builds upon stage 1 by increasing the number of parameters and the...
Period Estimation for Sparsely-sampled Quasi-periodic Light Curves Applied to Miras
NASA Astrophysics Data System (ADS)
He, Shiyuan; Yuan, Wenlong; Huang, Jianhua Z.; Long, James; Macri, Lucas M.
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal in period, we implement a hybrid method that applies a quasi-Newton algorithm for the Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and quality of the resulting period-luminosity relations.
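The grid-search half of the hybrid scheme can be illustrated without the Gaussian-process component: fit a sinusoid by linear least squares at each trial period and keep the best-fitting one. The data below are synthetic, not the M33 simulation set.

```python
import numpy as np

def grid_period(t, y, periods):
    """Fit a + b*sin(wt) + c*cos(wt) at each trial period; return the best period."""
    best_p, best_rss = None, np.inf
    for p in periods:
        w = 2 * np.pi / p
        X = np.column_stack([np.ones_like(t), np.sin(w * t), np.cos(w * t)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        rss = float(r @ r)
        if rss < best_rss:
            best_p, best_rss = float(p), rss
    return best_p

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 1000, 60))             # sparse, irregular sampling
y = 2.0 * np.sin(2 * np.pi * t / 250.0) + 0.1 * rng.normal(size=60)
est = grid_period(t, y, np.linspace(100, 400, 601))
```

In the full method each grid evaluation would also maximize over the Gaussian-process hyperparameters with a quasi-Newton step.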
Vendruscolo, M; Najmanovich, R; Domany, E
2000-02-01
We present a method to derive contact energy parameters from large sets of proteins. The basic requirement on which our method is based is that for each protein in the database the native contact map has lower energy than all its decoy conformations that are obtained by threading. Only when this condition is satisfied can one use the proposed energy function for fold identification. Such a set of parameters can be found (by perceptron learning) if Mp, the number of proteins in the database, is not too large. Other aspects that influence the existence of such a solution are the exact definition of contact and the value of the critical distance Rc, below which two residues are considered to be in contact. Another important novel feature of our approach is its ability to determine whether an energy function of some proposed form can or cannot be parameterized in a way that satisfies our basic requirement. As a demonstration of this, we determine the region in the (Rc, Mp) plane in which the problem is solvable, i.e., we can find a set of contact parameters that simultaneously stabilize all the native conformations. We show that for large enough databases the contact approximation to the energy cannot stabilize all the native folds, even against the decoys obtained by gapless threading.
NASA Astrophysics Data System (ADS)
Khan, M. M. A.; Romoli, L.; Fiaschi, M.; Dini, G.; Sarri, F.
2011-02-01
This paper presents an experimental design approach to process parameter optimization for the laser welding of martensitic AISI 416 and AISI 440FSe stainless steels in a constrained overlap configuration in which the outer shell was 0.55 mm thick. To determine the optimal laser-welding parameters, a set of mathematical models was developed relating the welding parameters to each of the weld characteristics. These were validated both statistically and experimentally. The quality criteria set for the weld to determine the optimal parameters were the minimization of weld width and the maximization of weld penetration depth, resistance length and shearing force. Laser power and welding speed in the ranges 855-930 W and 4.50-4.65 m/min, respectively, with a fiber diameter of 300 μm, were identified as the optimal set of process parameters. However, the laser power and welding speed can be reduced to 800-840 W and increased to 4.75-5.37 m/min, respectively, to obtain stronger and better welds.
Elastic Scattering of 65 MeV Protons from Several Nuclei between 16O and 209Bi
NASA Astrophysics Data System (ADS)
Ahmed, Syed; Akther, Parvin; Ferdous, Nasima; Begum, Amena; Gupta, Hiranmay
1997-10-01
Elastic scattering of 65 MeV polarized protons from twenty-five nuclei ranging from 16O to 209Bi has been analysed within the framework of a nine-parameter optical model. A set of optical model parameters has been obtained which shows the systematic behaviour of the target-mass dependence of the real potential, volume integral and r.m.s. radius. The isotopic spin dependence of the real potential has also been studied. Parameters obtained by fitting the elastic scattering data have been able to reproduce the pickup and stripping reaction cross sections studied in a few cases.
Modal Parameter Identification of a Flexible Arm System
NASA Technical Reports Server (NTRS)
Barrington, Jason; Lew, Jiann-Shiun; Korbieh, Edward; Wade, Montanez; Tantaris, Richard
1998-01-01
In this paper an experiment is designed for the modal parameter identification of a flexible arm system. This experiment uses a function generator to provide input signal and an oscilloscope to save input and output response data. For each vibrational mode, many sets of sine-wave inputs with frequencies close to the natural frequency of the arm system are used to excite the vibration of this mode. Then a least-squares technique is used to analyze the experimental input/output data to obtain the identified parameters for this mode. The identified results are compared with the analytical model obtained by applying finite element analysis.
Tuning Parameters in Heuristics by Using Design of Experiments Methods
NASA Technical Reports Server (NTRS)
Arin, Arif; Rabadi, Ghaith; Unal, Resit
2010-01-01
With the growing complexity of today's large-scale problems, it has become more difficult to find optimal solutions by exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, most heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then turns into finding the best parameter setting for the heuristics to solve the problems efficiently and in a timely manner. The One-Factor-At-a-Time (OFAT) approach to parameter tuning neglects the interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine-tune the GA parameters in the most efficient way, we compare multiple DOE models including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design and signal-to-noise (S/N) ratios. In each DOE method, a mathematical model is created using regression analysis and solved to obtain the best parameter setting. After verification runs using the tuned parameter setting, the preliminary results for optimal solutions of multiple instances were found efficiently.
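The 2-level full factorial idea can be sketched on a made-up response standing in for the GA's tardiness: the design matrix includes main effects and two-factor interactions, a regression model is fitted, and the model is minimized over the design corners. All coefficients and factor names are invented.

```python
import itertools
import numpy as np

def full_factorial(k):
    """All corners of a 2-level design in coded units (-1, +1)."""
    return np.array(list(itertools.product([-1.0, 1.0], repeat=k)))

def model_matrix(X):
    """Intercept, main effects, and all two-factor interactions."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    for i, j in itertools.combinations(range(X.shape[1]), 2):
        cols.append(X[:, i] * X[:, j])
    return np.column_stack(cols)

# Stand-in response: pretend the GA's tardiness is this deterministic function
# of three coded parameters (e.g. population size, crossover rate, mutation rate).
def response(x):
    return 10 - 2 * x[0] - 1.5 * x[1] + 0.5 * x[0] * x[1] + 0.2 * x[2]

X = full_factorial(3)                               # 2^3 = 8 runs
y = np.array([response(x) for x in X])
beta, *_ = np.linalg.lstsq(model_matrix(X), y, rcond=None)
best = X[np.argmin(model_matrix(X) @ beta)]         # predicted best corner
```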
Nowakowska, Marzena
2017-04-01
The development of the Bayesian logistic regression model classifying the road accident severity is discussed. The already exploited informative priors (method of moments, maximum likelihood estimation, and two-stage Bayesian updating), along with the original idea of a Boot prior proposal, are investigated when no expert opinion has been available. In addition, two possible approaches to updating the priors, in the form of unbalanced and balanced training data sets, are presented. The obtained logistic Bayesian models are assessed on the basis of a deviance information criterion (DIC), highest probability density (HPD) intervals, and coefficients of variation estimated for the model parameters. The verification of the model accuracy has been based on sensitivity, specificity and the harmonic mean of sensitivity and specificity, all calculated from a test data set. The models obtained from the balanced training data set have a better classification quality than the ones obtained from the unbalanced training data set. The two-stage Bayesian updating prior model and the Boot prior model, both identified with the use of the balanced training data set, outperform the non-informative, method of moments, and maximum likelihood estimation prior models. It is important to note that one should be careful when interpreting the parameters since different priors can lead to different models. Copyright © 2017 Elsevier Ltd. All rights reserved.
An adaptive data-driven method for accurate prediction of remaining useful life of rolling bearings
NASA Astrophysics Data System (ADS)
Peng, Yanfeng; Cheng, Junsheng; Liu, Yanfei; Li, Xuejun; Peng, Zhihua
2018-06-01
A novel data-driven method based on Gaussian mixture model (GMM) and distance evaluation technique (DET) is proposed to predict the remaining useful life (RUL) of rolling bearings. The data sets are clustered by GMM to divide all data sets into several health states adaptively and reasonably. The number of clusters is determined by the minimum description length principle. Thus, either the health state of the data sets or the number of the states is obtained automatically. Meanwhile, the abnormal data sets can be recognized during the clustering process and removed from the training data sets. After obtaining the health states, appropriate features are selected by DET for increasing the classification and prediction accuracy. In the prediction process, each vibration signal is decomposed into several components by empirical mode decomposition. Some common statistical parameters of the components are calculated first and then the features are clustered using GMM to divide the data sets into several health states and remove the abnormal data sets. Thereafter, appropriate statistical parameters of the generated components are selected using DET. Finally, least squares support vector machine is utilized to predict the RUL of rolling bearings. Experimental results indicate that the proposed method reliably predicts the RUL of rolling bearings.
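A numpy-only sketch of the clustering step, assuming 1-D features and an MDL score of the simple two-part form (negative log-likelihood plus half the parameter count times log n); the health-state data are synthetic stand-ins, not bearing vibration features.

```python
import numpy as np

def em_gmm_1d(x, k, iters=300):
    """Fit a k-component 1-D Gaussian mixture by EM; return the log-likelihood."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread-out initial means
    var = np.full(k, np.var(x))
    pi = np.full(k, 1.0 / k)
    ll = -np.inf
    for _ in range(iters):
        d = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * d                                  # E step: joint densities
        ll = float(np.log(r.sum(axis=1) + 1e-300).sum())
        r = r / (r.sum(axis=1, keepdims=True) + 1e-300)
        n = r.sum(axis=0) + 1e-12                   # M step: weighted updates
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-9
        pi = n / len(x)
    return ll

def mdl_states(x, max_k=4):
    """Pick the state count minimising an MDL-style two-part score."""
    scores = {}
    for k in range(1, max_k + 1):
        n_params = 3 * k - 1                        # means, variances, free weights
        scores[k] = -em_gmm_1d(x, k) + 0.5 * n_params * np.log(len(x))
    return min(scores, key=scores.get)

rng = np.random.default_rng(2)
feature = np.concatenate([rng.normal(0.0, 0.3, 200),    # "healthy" state
                          rng.normal(5.0, 0.3, 200)])   # "degraded" state
k_hat = mdl_states(feature)
```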
NASA Astrophysics Data System (ADS)
Zheng, Ling; Duan, Xuwei; Deng, Zhaoxue; Li, Yinong
2014-03-01
A novel flow-mode magneto-rheological (MR) engine mount integrating a diaphragm de-coupler and a spoiler plate is designed and developed to isolate the engine and transmission from the chassis over a wide frequency range and to overcome stiffening at high frequency. A lumped parameter model of the MR engine mount in a single degree of freedom system is further developed based on the bond graph method to predict the performance of the MR engine mount accurately. An optimization mathematical model is established to minimize the total force transmissibility over the several frequency ranges addressed. In this mathematical model, the lumped parameters are considered as design variables. The maximum force transmissibility and the corresponding frequency in the low frequency range, as well as the individual lumped parameters, are limited as constraints. A multiple interval sensitivity analysis method is developed to select the optimization variables and improve the efficiency of the optimization process. An improved non-dominated sorting genetic algorithm (NSGA-II) is used to solve the multi-objective optimization problem. A synthesized distance between individuals in the Pareto set and individuals in the engineering-feasible set is defined and calculated. A set of real design parameters is thus obtained through the relationship between the optimal lumped parameters and the practical design parameters of the MR engine mount. The program flowchart for the improved NSGA-II is given. The obtained results demonstrate the effectiveness of the proposed optimization approach in minimizing the total force transmissibility over the several frequency ranges addressed.
Zhou, Jingyu; Tian, Shulin; Yang, Chenglin
2014-01-01
Few studies have paid attention to prognostics for analog circuits. The existing methods lack a link to circuit analysis when extracting and calculating features, so FI (fault indicator) calculation often lacks rationality, degrading prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Motivated by the fact that single-component faults are the most numerous in analog circuits, the method starts with the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, with an established parameter scanning model related to the complex field, it analyzes the relationship between parameter variation and degeneration of single components in the model in order to obtain a more reasonable FI feature set via calculation. Based on the obtained FI feature set, it establishes a novel model of the degeneration trend of single components in analog circuits. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of single components in analog circuits. Since the calculation of the FI feature set is more reasonable, the accuracy of prediction is improved to some extent. The foregoing conclusions are verified by experiments.
Pozzobon, Victor; Perre, Patrick
2018-01-21
This work provides a model and the associated set of parameters allowing microalgae population growth to be computed under intermittent lighting. Han's model is coupled with a simple microalgae growth model to yield a relationship between illumination and population growth. The model parameters were obtained by fitting a dataset available in the literature using the Particle Swarm Optimization method. In that work, the authors grew microalgae in excess of nutrients under flashing conditions. The light/dark cycles used in these experiments are quite close to those found in photobioreactors, i.e. ranging from several seconds to one minute. In this work, in addition to producing the set of parameters, the robustness of Particle Swarm Optimization was assessed. To do so, two different swarm initialization techniques were used, i.e. uniform and random distribution throughout the search space. Both yielded the same results. In addition, swarm distribution analysis reveals that the swarm converges to a unique minimum. Thus, the produced set of parameters can be trustfully used to link light intensity to population growth rate. Furthermore, the set is capable of describing the effects of photodamage on population growth, hence accounting for the effect of light overexposure on algal growth. Copyright © 2017 Elsevier Ltd. All rights reserved.
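A minimal particle-swarm sketch of the fitting step; the growth curve, parameter values and bounds are invented stand-ins, not Han's model or the paper's dataset.

```python
import numpy as np

def pso(loss, lo, hi, n=30, iters=200, seed=3):
    """Minimal global-best PSO minimising `loss` over the box [lo, hi]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, len(lo)))           # uniform swarm initialisation
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([loss(p) for p in x])
    g = pbest[np.argmin(pval)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([loss(p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pval)].copy()
    return g

I = np.linspace(10.0, 500.0, 25)                    # light intensities (made up)
mu_true, K_true = 1.2, 80.0                         # invented "true" parameters
growth = mu_true * I / (K_true + I)                 # saturating growth vs light

def loss(p):
    return float(np.sum((p[0] * I / (p[1] + I) - growth) ** 2))

fit = pso(loss, np.array([0.1, 1.0]), np.array([5.0, 500.0]))
```

Repeating the run with a different swarm initialisation and checking that the same minimum is reached mirrors the robustness test described in the abstract.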
Chaos control of Hastings–Powell model by combining chaotic motions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Danca, Marius-F., E-mail: danca@rist.ro; Chattopadhyay, Joydev, E-mail: joydev@isical.ac.in
2016-04-15
In this paper, we propose a Parameter Switching (PS) algorithm as a new chaos control method for the Hastings–Powell (HP) system. The PS algorithm is a convergent scheme that switches the control parameter within a set of values while the controlled system is numerically integrated. The attractor obtained with the PS algorithm matches the attractor obtained by integrating the system with the parameter replaced by the averaged value of the switched parameter values. The switching rule can be applied periodically or randomly over a set of given values. In this way, every stable cycle of the HP system can be approximated if its underlying parameter value equalizes the average value of the switching values. Moreover, the PS algorithm can be viewed as a generalization of Parrondo's game, which is applied for the first time to the HP system, by showing that a losing strategy can win: “losing + losing = winning.” If “losing” is replaced with “chaos” and “winning” with “order” (as the opposite of “chaos”), then by switching the parameter value in the HP system within two values which generate chaotic motions, the PS algorithm can approximate a stable cycle, so that symbolically one can write “chaos + chaos = regular.” Also, by considering a different parameter control, new complex dynamics of the HP model are revealed.
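The parameter switching idea can be illustrated on a scalar ODE dx/dt = p - x rather than the Hastings–Powell system: switching p periodically between two values during Euler integration drives the state to the equilibrium of the averaged parameter. This is only a toy analogue of the PS scheme.

```python
def integrate_switched(p_values, x0=0.0, dt=1e-3, steps=200000):
    """Euler-integrate dx/dt = p - x while switching p periodically through p_values."""
    x = x0
    for i in range(steps):
        p = p_values[i % len(p_values)]   # periodic switching rule
        x += dt * (p - x)
    return x

x_switched = integrate_switched([1.0, 3.0])   # switch between two parameter values
x_average = integrate_switched([2.0])         # single run with the averaged parameter
```

The switched trajectory settles at the equilibrium x = 2 of the averaged parameter, up to a ripple of order dt.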
Estimating Soil Hydraulic Parameters using Gradient Based Approach
NASA Astrophysics Data System (ADS)
Rai, P. K.; Tripathi, S.
2017-12-01
The conventional way of estimating parameters of a differential equation is to minimize the error between the observations and their estimates. The estimates are produced from the forward solution (numerical or analytical) of the differential equation assuming a set of parameters. Parameter estimation using the conventional approach requires high computational cost, setting up of initial and boundary conditions, and formation of difference equations when the forward solution is obtained numerically. Gaussian process based approaches like Gaussian Process Ordinary Differential Equation (GPODE) and Adaptive Gradient Matching (AGM) have been developed to estimate the parameters of Ordinary Differential Equations without explicitly solving them. Claims have been made that these approaches can straightforwardly be extended to Partial Differential Equations; however, this has never been demonstrated. This study extends the AGM approach to PDEs and applies it to estimating the parameters of the Richards equation. Unlike the conventional approach, the AGM approach does not require explicitly setting up initial and boundary conditions, which is often difficult in real-world applications of the Richards equation. The developed methodology was applied to synthetic soil moisture data. It was seen that the proposed methodology can estimate the soil hydraulic parameters correctly and can be a potential alternative to the conventional method.
Time Domain Estimation of Arterial Parameters using the Windkessel Model and the Monte Carlo Method
NASA Astrophysics Data System (ADS)
Gostuski, Vladimir; Pastore, Ignacio; Rodriguez Palacios, Gaspar; Vaca Diez, Gustavo; Moscoso-Vasquez, H. Marcela; Risk, Marcelo
2016-04-01
Numerous parameter estimation techniques exist for characterizing the arterial system using electrical circuit analogs. However, they are often limited by their requirements and usually high computational burden. Therefore, a new method for estimating arterial parameters based on Monte Carlo simulation is proposed. A three-element Windkessel model was used to represent the arterial system. The approach was to reduce the error between the calculated and physiological aortic pressure by randomly generating arterial parameter values while keeping the arterial resistance constant. This last value was obtained for each subject using the arterial flow, and was a necessary consideration in order to obtain a unique set of values for the arterial compliance and peripheral resistance. The estimation technique was applied to in vivo data containing steady beats in mongrel dogs, and it reliably estimated the Windkessel arterial parameters. Further, this method appears to be computationally efficient for on-line time-domain estimation of these parameters.
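A hedged sketch of the Monte Carlo estimation step: candidate peripheral-resistance/compliance pairs are drawn at random with the characteristic resistance held fixed, the three-element Windkessel pressure is integrated for each pair, and the pair minimizing the pressure error is kept. The flow waveform and all parameter values are illustrative, not physiological data.

```python
import numpy as np

def wk3_pressure(q, dt, rc, rp, c, p0=80.0):
    """Explicit-Euler integration of the three-element Windkessel ODE:
    dP/dt = -P/(Rp*C) + Q*(1 + Rc/Rp)/C + Rc*dQ/dt."""
    dq = np.gradient(q, dt)
    p = np.empty_like(q)
    p[0] = p0
    for i in range(len(q) - 1):
        dp = -p[i] / (rp * c) + q[i] * (1 + rc / rp) / c + rc * dq[i]
        p[i + 1] = p[i] + dt * dp
    return p

dt = 1e-3
t = np.arange(0.0, 0.8, dt)
q = np.where(t < 0.3, 400.0 * np.sin(np.pi * t / 0.3), 0.0)  # one ejection phase
rc = 0.05                                    # characteristic resistance, held fixed
p_meas = wk3_pressure(q, dt, rc, rp=1.0, c=1.5)   # synthetic "measured" pressure

rng = np.random.default_rng(4)
best, best_err = None, np.inf
for _ in range(2000):                        # random search over (Rp, C)
    rp, c = rng.uniform(0.2, 3.0), rng.uniform(0.2, 3.0)
    err = float(np.mean((wk3_pressure(q, dt, rc, rp, c) - p_meas) ** 2))
    if err < best_err:
        best, best_err = (rp, c), err
```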
Study on loading path optimization of internal high pressure forming process
NASA Astrophysics Data System (ADS)
Jiang, Shufeng; Zhu, Hengda; Gao, Fusheng
2017-09-01
In the internal high pressure forming process, there is no formula linking the process parameters to the forming results. This article uses numerical simulation to obtain several input parameters and the corresponding output results, uses a BP neural network to capture the mapping relationship between them, and combines the evaluation parameters by a weighted summing method into a formula that scores forming quality. The trained BP neural network is then embedded in a particle swarm optimization, with the quality formula serving as the fitness function, and the optimization is carried out over the range of each parameter. The results show that the parameters obtained by the BP neural network and particle swarm optimization algorithms meet practical requirements. The method can solve the optimization of process parameters in the internal high pressure forming process.
Yao, Xiaojun; Zhang, Xiaoyun; Zhang, Ruisheng; Liu, Mancang; Hu, Zhide; Fan, Botao
2002-05-16
A new method for predicting retention indices of a diverse set of compounds from their physicochemical parameters has been proposed. The two input parameters used to represent molecular properties are boiling point and molar volume. Models relating the physicochemical parameters to the retention indices of the compounds are constructed by means of radial basis function neural networks (RBFNNs). To get the best prediction results, strategies are also employed to optimize the topology and learning parameters of the RBFNNs. For the test set, a predictive correlation coefficient R = 0.9910 and a root mean squared error of 14.1 are obtained. The results show that radial basis function networks can give satisfactory prediction ability, and their optimization is less time-consuming and easy to implement.
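A schematic RBF network with the two inputs named in the abstract: the centres, width, and "retention index" data below are invented stand-ins, and only the linear output weights are solved for, with no topology optimization.

```python
import numpy as np

def rbf_design(X, centres, sigma):
    """Gaussian RBF design matrix with a constant bias column."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.column_stack([np.ones(len(X)), np.exp(-d2 / (2 * sigma ** 2))])

rng = np.random.default_rng(5)
X = rng.uniform([50.0, 80.0], [300.0, 400.0], (120, 2))  # (boiling pt, molar volume)
y = 2.5 * X[:, 0] + 1.2 * X[:, 1] + X[:, 0] * X[:, 1] / 500.0  # stand-in retention index
Xtr, ytr, Xte, yte = X[:90], y[:90], X[90:], y[90:]      # independent train/test split

centres = Xtr[::6]              # 15 centres chosen by a simple thinning rule
sigma = 120.0                   # width treated as a hand-set tuning parameter
w, *_ = np.linalg.lstsq(rbf_design(Xtr, centres, sigma), ytr, rcond=None)
pred = rbf_design(Xte, centres, sigma) @ w
r = float(np.corrcoef(pred, yte)[0, 1])                  # test-set correlation
```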
The Inverse Optimal Control Problem for a Three-Loop Missile Autopilot
NASA Astrophysics Data System (ADS)
Hwang, Donghyeok; Tahk, Min-Jea
2018-04-01
The performance characteristics of the autopilot must include a fast response, to intercept a maneuvering target, and reasonable robustness, for system stability under the effects of unmodeled dynamics and noise. In the conventional approach, the three-loop autopilot design is specified by the time constant, damping factor and open-loop crossover frequency to achieve the desired performance requirements. Note that general optimal control theory can also be used to obtain the same gains as the conventional approach. The key idea of using optimal control techniques for feedback gain design revolves around the appropriate selection and interpretation of the performance index for which the control is optimal. This paper derives an explicit expression that relates the weight parameters appearing in the quadratic performance index to design parameters such as the open-loop crossover frequency, phase margin, damping factor and time constant. Since not every selection of design parameters guarantees the existence of an optimal control law, explicit inequalities, named the optimality criteria for the three-loop autopilot (OC3L), are derived to determine the set of design parameters for which the control law is optimal. Finally, based on OC3L, an efficient gain selection procedure is developed, in which the time constant is set as the design objective and the open-loop crossover frequency and phase margin are treated as design constraints. The effectiveness of the proposed technique is illustrated through numerical simulations.
NASA Astrophysics Data System (ADS)
Nasonova, O. N.; Gusev, Ye. M.; Kovalev, Ye. E.
2009-04-01
Global estimates of the components of the terrestrial water balance depend on the estimation technique and on the global observational data sets used for this purpose. Land surface modelling is an up-to-date and powerful tool for such estimates. However, the results of modelling are affected by the quality of both the model and the input information (including meteorological forcing data and model parameters). The latter is based on available global data sets containing meteorological data, land-use information, and soil and vegetation characteristics. Many global data sets are now available; they differ in spatial and temporal resolution, as well as in accuracy and reliability. Evidently, uncertainties in global data sets will influence the results of model simulations, but to what extent? The present work is an attempt to investigate this issue. The work is based on the land surface model SWAP (Soil Water - Atmosphere - Plants) and global 1-degree data sets of meteorological forcing data and land surface parameters provided within the framework of the Second Global Soil Wetness Project (GSWP-2). The 3-hourly near-surface meteorological data (for the period from 1 July 1982 to 31 December 1995) are based on reanalyses and gridded observational data used in the International Satellite Land-Surface Climatology Project (ISLSCP) Initiative II. Following the GSWP-2 strategy, we used a number of alternative global forcing data sets to perform different sensitivity experiments (with six alternative versions of precipitation, four versions of radiation, two pure reanalysis products and two fully hybridized products of meteorological data). To reveal the influence of model parameters on the simulations, in addition to the GSWP-2 parameter data sets, we produced two alternative global data sets with soil parameters derived from their relationships with the clay and sand content of the soil.
After this the sensitivity experiments with three different sets of parameters were performed. As a result, 16 variants of global annual estimates of water balance components were obtained. Application of alternative data sets on radiation, precipitation, and soil parameters allowed us to reveal the influence of uncertainties in input data on global estimates of water balance components.
Situational reaction and planning
NASA Technical Reports Server (NTRS)
Yen, John; Pfluger, Nathan
1994-01-01
One problem faced in designing an autonomous mobile robot system is that there are many parameters of the system to define and optimize. While these parameters can be obtained for any given situation, determining what they should be in all situations is difficult. The usual solution is to give the system general parameters that work in all situations, but this does not help the robot perform its best in a dynamic environment. Our approach is to develop a higher-level situation analysis module that adjusts the parameters by analyzing the goals and the history of sensor readings. By allowing the robot to change the system parameters based on its judgment of the situation, the robot is able to adapt to a wider set of possible situations. We use fuzzy logic in our implementation to reduce the number of basic situations the controller has to recognize. For example, a situation may be 60 percent open and 40 percent corridor, causing the optimal parameters to lie somewhere between the optimal settings for the two extreme situations.
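The fuzzy blending of parameter sets in the closing example can be sketched as a membership-weighted average; the situation names and parameter values below are hypothetical.

```python
def blend_parameters(memberships, parameter_table):
    """Weighted average of per-situation parameter sets, with fuzzy
    membership degrees as weights (normalized so they sum to one)."""
    total = sum(memberships.values())
    keys = next(iter(parameter_table.values())).keys()
    return {k: sum(m * parameter_table[s][k] for s, m in memberships.items()) / total
            for k in keys}

# Hypothetical optimal settings for the two extreme situations:
table = {"open":     {"max_speed": 1.0, "obstacle_gain": 0.2},
         "corridor": {"max_speed": 0.4, "obstacle_gain": 0.9}}

# A situation judged 60% open and 40% corridor:
params = blend_parameters({"open": 0.6, "corridor": 0.4}, table)
```

The blended `max_speed` is 0.6·1.0 + 0.4·0.4 = 0.76, between the two extremes, as the abstract describes.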
Ensemble of hybrid genetic algorithm for two-dimensional phase unwrapping
NASA Astrophysics Data System (ADS)
Balakrishnan, D.; Quan, C.; Tay, C. J.
2013-06-01
Phase unwrapping is the final and trickiest step in any phase retrieval technique. Phase unwrapping by artificial-intelligence methods (optimization algorithms) such as the hybrid genetic algorithm, reverse simulated annealing, particle swarm optimization and minimum cost matching has shown better results than conventional phase unwrapping methods. In this paper, an ensemble of hybrid genetic algorithms with parallel populations is proposed to solve the branch-cut phase unwrapping problem. In a single-population hybrid genetic algorithm, the selection, crossover and mutation operators are applied to obtain a new population in every generation. The parameters and the choice of operators affect the performance of the hybrid genetic algorithm. The ensemble of hybrid genetic algorithms makes it possible to use different parameter sets and different choices of operators simultaneously: each population uses its own set of parameters, and the offspring of each population compete against the offspring of all other populations, which use different parameter sets. The effectiveness of the proposed algorithm is demonstrated with phase unwrapping examples, and the advantages of the proposed method are discussed.
Method and system for diagnostics of apparatus
NASA Technical Reports Server (NTRS)
Gorinevsky, Dimitry (Inventor)
2012-01-01
Proposed is a method, implemented in software, for estimating fault state of an apparatus outfitted with sensors. At each execution period the method processes sensor data from the apparatus to obtain a set of parity parameters, which are further used for estimating fault state. The estimation method formulates a convex optimization problem for each fault hypothesis and employs a convex solver to compute fault parameter estimates and fault likelihoods for each fault hypothesis. The highest likelihoods and corresponding parameter estimates are transmitted to a display device or an automated decision and control system. The obtained accurate estimate of fault state can be used to improve safety, performance, or maintenance processes for the apparatus.
Crustal dynamics project data analysis, 1988: VLBI geodetic results, 1979 - 1987
NASA Technical Reports Server (NTRS)
Ma, C.; Ryan, J. W.; Caprette, D.
1989-01-01
The results obtained by the Goddard VLBI (very long baseline interferometry) Data Analysis Team from the analysis of 712 Mark 3 VLBI geodetic data sets acquired from fixed and mobile observing sites through the end of 1987 are reported. A large solution, GLB401, was used to obtain Earth rotation parameters and site velocities. A second large solution, GLB405, was used to obtain baseline evolutions. Radio source positions were estimated globally, while nutation offsets were estimated from each data set. Site positions are tabulated on a yearly basis from 1979 through 1988. The results include 55 sites and 270 baselines.
NASA Astrophysics Data System (ADS)
Cristescu, Constantin P.; Stan, Cristina; Scarlat, Eugen I.; Minea, Teofil; Cristescu, Cristina M.
2012-04-01
We present a novel method for the parameter oriented analysis of mutual correlation between independent time series or between equivalent structures such as ordered data sets. The proposed method is based on the sliding window technique, defines a new type of correlation measure and can be applied to time series from all domains of science and technology, experimental or simulated. A specific parameter that can characterize the time series is computed for each window and a cross correlation analysis is carried out on the set of values obtained for the time series under investigation. We apply this method to the study of some currency daily exchange rates from the point of view of the Hurst exponent and the intermittency parameter. Interesting correlation relationships are revealed and a tentative crisis prediction is presented.
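A minimal sketch of the sliding-window idea: compute a characterizing parameter per window for each series, then cross-correlate the two parameter sequences. The sample standard deviation is used here as a simple stand-in for the Hurst exponent or intermittency parameter computed in the paper.

```python
import math

def window_parameter(series, width, step, stat):
    """Apply a characterizing statistic to each sliding window of the series."""
    return [stat(series[i:i + width])
            for i in range(0, len(series) - width + 1, step)]

def std(xs):
    """Population standard deviation (stand-in window parameter)."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def pearson(a, b):
    """Pearson cross-correlation of two equal-length parameter sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a)
                           * sum((y - mb) ** 2 for y in b))
```

For two exchange-rate series one would correlate `window_parameter(s1, ...)` against `window_parameter(s2, ...)`; the choice of window width and step mirrors the paper's sliding-window technique.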
Geerse, Daphne J; Coolen, Bert H; Roerdink, Melvyn
2015-01-01
Walking ability is frequently assessed with the 10-meter walking test (10MWT), which may be instrumented with multiple Kinect v2 sensors to complement the typical stopwatch-based time to walk 10 meters with quantitative gait information derived from Kinect's 3D body point's time series. The current study aimed to evaluate a multi-Kinect v2 set-up for quantitative gait assessments during the 10MWT against a gold-standard motion-registration system by determining between-systems agreement for body point's time series, spatiotemporal gait parameters and the time to walk 10 meters. To this end, the 10MWT was conducted at comfortable and maximum walking speed, while 3D full-body kinematics was concurrently recorded with the multi-Kinect v2 set-up and the Optotrak motion-registration system (i.e., the gold standard). Between-systems agreement for body point's time series was assessed with the intraclass correlation coefficient (ICC). Between-systems agreement was similarly determined for the gait parameters' walking speed, cadence, step length, stride length, step width, step time, stride time (all obtained for the intermediate 6 meters) and the time to walk 10 meters, complemented by Bland-Altman's bias and limits of agreement. Body point's time series agreed well between the motion-registration systems, particularly so for body points in motion. For both comfortable and maximum walking speeds, the between-systems agreement for the time to walk 10 meters and all gait parameters except step width was high (ICC ≥ 0.888), with negligible biases and narrow limits of agreement. Hence, body point's time series and gait parameters obtained with a multi-Kinect v2 set-up match well with those derived with a gold standard in 3D measurement accuracy. 
Future studies are recommended to test the clinical utility of the multi-Kinect v2 set-up to automate 10MWT assessments, thereby complementing the time to walk 10 meters with reliable spatiotemporal gait parameters obtained objectively in a quick, unobtrusive and patient-friendly manner.
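The Bland-Altman bias and 95% limits of agreement reported above can be computed as follows; this is the generic formulation, not the authors' code.

```python
import math

def bland_altman(system_a, system_b):
    """Bias (mean difference) and 95% limits of agreement between paired
    measurements from two systems (e.g. Kinect vs. Optotrak)."""
    diffs = [a - b for a, b in zip(system_a, system_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A negligible bias with narrow limits, as found for most gait parameters here, indicates the two systems can be used interchangeably.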
He, Yi; Xiao, Yi; Liwo, Adam; Scheraga, Harold A
2009-10-01
We explored the energy-parameter space of our coarse-grained UNRES force field for large-scale ab initio simulations of protein folding, to obtain good initial approximations for hierarchical optimization of the force field with new virtual-bond-angle bending and side-chain-rotamer potentials which we recently introduced to replace the statistical potentials. 100 sets of energy-term weights were generated randomly, and good sets were selected by carrying out replica-exchange molecular dynamics simulations of two peptides with a minimal alpha-helical and a minimal beta-hairpin fold, respectively: the tryptophan cage (PDB code: 1L2Y) and tryptophan zipper (PDB code: 1LE1). Eight sets of parameters produced native-like structures of these two peptides. These eight sets were tested on two larger proteins: the engrailed homeodomain (PDB code: 1ENH) and FBP WW domain (PDB code: 1E0L); two sets were found to produce native-like conformations of these proteins. These two sets were tested further on a larger set of nine proteins with alpha or alpha + beta structure and found to locate native-like structures of most of them. These results demonstrate that, in addition to finding reasonable initial starting points for optimization, an extensive search of parameter space is a powerful method to produce a transferable force field. Copyright 2009 Wiley Periodicals, Inc.
DD3MAT - a code for yield criteria anisotropy parameters identification.
NASA Astrophysics Data System (ADS)
Barros, P. D.; Carvalho, P. D.; Alves, J. L.; Oliveira, M. C.; Menezes, L. F.
2016-08-01
This work presents the main strategies and algorithms adopted in the DD3MAT in-house code, specifically developed for identifying anisotropy parameters. The algorithm is based on the minimization of an error function using a downhill simplex method. The set of experimental values can include yield stresses and r-values obtained from in-plane tension at different angles to the rolling direction (RD), the yield stress and r-value obtained for the biaxial stress state, and yield stresses from shear tests, also performed at different angles to the RD. All these values can be defined for a specific value of plastic work. Moreover, the set can also include yield stresses obtained from in-plane compression tests. The anisotropy parameters are identified for an AA2090-T3 aluminium alloy, highlighting the importance of user intervention in improving the numerical fit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jennings, Elise; Wolf, Rachel; Sako, Masao
2016-11-09
Cosmological parameter estimation techniques that robustly account for systematic measurement uncertainties will be crucial for the next generation of cosmological surveys. We present a new analysis method, superABC, for obtaining cosmological constraints from Type Ia supernova (SN Ia) light curves using Approximate Bayesian Computation (ABC) without any likelihood assumptions. The ABC method works by using a forward model simulation of the data where systematic uncertainties can be simulated and marginalized over. A key feature of the method presented here is the use of two distinct metrics, the 'Tripp' and 'Light Curve' metrics, which allow us to compare the simulated data to the observed data set. The Tripp metric takes as input the parameters of models fit to each light curve with the SALT-II method, whereas the Light Curve metric uses the measured fluxes directly without model fitting. We apply the superABC sampler to a simulated data set of ~1000 SNe corresponding to the first season of the Dark Energy Survey Supernova Program. Varying Ωm, w0, α, β and a magnitude offset parameter, with no systematics we obtain Δ(w0) = w0(true) − w0(best fit) = −0.036 ± 0.109 (a ~11% 1σ uncertainty) using the Tripp metric and Δ(w0) = −0.055 ± 0.068 (a ~7% 1σ uncertainty) using the Light Curve metric. Including 1% calibration uncertainties in four passbands, adding 4 more parameters, we obtain Δ(w0) = −0.062 ± 0.132 (a ~14% 1σ uncertainty) using the Tripp metric. Overall we find a 17% increase in the uncertainty on w0 with systematics compared to without. We contrast this with an MCMC approach where systematic effects are approximately included. We find that the MCMC method slightly underestimates the impact of calibration uncertainties for this simulated data set.
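A toy illustration of the ABC rejection idea (not the superABC sampler itself, which uses full forward simulations of SN light curves and the Tripp/Light Curve metrics): draw parameters from the prior, simulate data, and keep draws whose summary statistic falls within a tolerance of the observation.

```python
import random

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_draws, seed=0):
    """Likelihood-free rejection sampler: keep prior draws whose simulated
    summary statistic lies within eps of the observed one."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) < eps:
            accepted.append(theta)
    return accepted

# Toy forward model (an assumption, standing in for a light-curve simulation):
# the summary statistic is the mean of 50 Gaussian draws around the parameter mu.
def simulate(mu, rng):
    return sum(rng.gauss(mu, 1.0) for _ in range(50)) / 50.0

observed_summary = 0.3
post = abc_rejection(observed_summary, simulate,
                     prior_sample=lambda r: r.uniform(-2.0, 2.0),
                     distance=lambda s, o: abs(s - o),
                     eps=0.1, n_draws=2000)
# The accepted draws approximate the posterior over mu near 0.3.
```

Systematics are handled in ABC by simulating them inside `simulate`, so they are marginalized over automatically, which is the key advantage the abstract describes.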
Optimization of the reconstruction parameters in [123I]FP-CIT SPECT
NASA Astrophysics Data System (ADS)
Niñerola-Baizán, Aida; Gallego, Judith; Cot, Albert; Aguiar, Pablo; Lomeña, Francisco; Pavía, Javier; Ros, Domènec
2018-04-01
The aim of this work was to obtain a set of parameters to be applied in [123I]FP-CIT SPECT reconstruction in order to minimize the error between standardized and true values of the specific uptake ratio (SUR) in dopaminergic neurotransmission SPECT studies. To this end, Monte Carlo simulation was used to generate a database of 1380 projection data sets from 23 subjects, including normal cases and a variety of pathologies. Studies were reconstructed using filtered back projection (FBP) with attenuation correction and ordered subset expectation maximization (OSEM) with correction for different degradations (attenuation, scatter and PSF). The reconstruction parameters to be optimized were the cut-off frequency of a 2D Butterworth pre-filter in FBP, and the number of iterations and the full width at half maximum of a 3D Gaussian post-filter in OSEM. Reconstructed images were quantified using regions of interest (ROIs) derived from magnetic resonance scans and from the Automated Anatomical Labeling map. Results were standardized by applying a simple linear regression line obtained from the entire patient dataset. Our findings show that we can obtain a set of optimal parameters for each reconstruction strategy. The accuracy of the standardized SUR increases when the reconstruction method includes more corrections. The use of generic ROIs instead of subject-specific ROIs adds significant inaccuracies. Thus, after reconstruction with OSEM and correction for all degradations, subject-specific ROIs led to errors between standardized and true SUR values in the range [−0.5, +0.5] in 87% and 92% of the cases for caudate and putamen, respectively. These percentages dropped to 75% and 88% when the generic ROIs were used.
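A sketch of the quantification and standardization steps: the SUR is computed from regional means, and a linear regression fitted across the cohort maps raw values back to the true scale. The SUR formula is the generic one; whether the regression is of measured on true values or vice versa is not stated in the abstract, so this sketch assumes measured = slope·true + intercept and inverts it.

```python
def specific_uptake_ratio(region_mean, reference_mean):
    """Generic SUR definition: (specific - non-specific) / non-specific binding."""
    return (region_mean - reference_mean) / reference_mean

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def standardize(raw_values, slope, intercept):
    """Invert the cohort regression line to map raw SURs onto the true scale."""
    return [(v - intercept) / slope for v in raw_values]
```

With the simulated database, `fit_line` would be trained on (true, reconstructed) SUR pairs and `standardize` applied to new measurements.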
NASA Technical Reports Server (NTRS)
Stephenson, J. D.
1983-01-01
Flight experiments with an augmented jet flap STOL aircraft provided data from which the lateral directional stability and control derivatives were calculated by applying a linear regression parameter estimation procedure. The tests, which were conducted with the jet flaps set at a 65 deg deflection, covered a large range of angles of attack and engine power settings. The effect of changing the angle of the jet thrust vector was also investigated. Test results are compared with stability derivatives that had been predicted. The roll damping derived from the tests was significantly larger than had been predicted, whereas the other derivatives were generally in agreement with the predictions. Results obtained using a maximum likelihood estimation procedure are compared with those from the linear regression solutions.
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1988-01-01
Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require second-order information that is difficult to obtain or do not return reliable estimates of the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point; if the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. It is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method in which the Hessian of the Lagrangian is estimated using the BFGS update employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.
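The differencing idea can be illustrated with a central-difference estimate of the sensitivity of an optimal solution to a fixed parameter. The toy problem below, whose optimum is known in closed form, is an assumption for illustration; the paper's RQP-based method avoids the full re-solves that naive differencing requires.

```python
def parameter_sensitivity(solve, p, h=1.0e-4):
    """Central-difference estimate of the derivative of the solution vector
    with respect to a fixed parameter p of the formulation."""
    x_plus, x_minus = solve(p + h), solve(p - h)
    return [(a - b) / (2.0 * h) for a, b in zip(x_plus, x_minus)]

# Toy problem (an assumption): minimizing (x - p)^2 + (y - 2p)^2 has the
# closed-form optimum x* = p, y* = 2p, so the true sensitivities are (1, 2).
sens = parameter_sensitivity(lambda p: [p, 2.0 * p], 1.0)
```

Note that such extrapolations are only valid while the active constraint set is unchanged near `p`, which is exactly the limitation the paper addresses.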
Assessing Interval Estimation Methods for Hill Model ...
The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model parameters).
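A minimal sketch of one of the four approaches, the case-resampling bootstrap percentile interval, applied to a toy Hill model fit. To keep it short, only the EC50 (potency) is estimated, over a candidate grid with top and slope held fixed; the data and grid are illustrative assumptions, not ToxCast values.

```python
import random

def hill(conc, top, ec50, n):
    """Hill concentration-response curve."""
    return top * conc ** n / (ec50 ** n + conc ** n)

def fit_ec50(concs, resps, top, n, grid):
    """Least-squares fit of EC50 over a candidate grid (top, slope fixed)."""
    return min(grid, key=lambda e: sum((r - hill(c, top, e, n)) ** 2
                                       for c, r in zip(concs, resps)))

def bootstrap_ci(concs, resps, top, n, grid, n_boot=200, seed=0):
    """Case-resampling bootstrap percentile interval for the EC50 estimate."""
    rng = random.Random(seed)
    pairs = list(zip(concs, resps))
    ests = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]
        cs, rs = zip(*sample)
        ests.append(fit_ec50(cs, rs, top, n, grid))
    ests.sort()
    return ests[int(0.025 * n_boot)], ests[int(0.975 * n_boot)]

# Illustrative data from a known curve (top=100, EC50=10, slope=1) plus
# small fixed perturbations, so the interval should bracket 10:
concs = [1.0, 3.0, 10.0, 30.0, 100.0]
noise = [0.3, -0.4, 0.5, -0.2, 0.1]
resps = [hill(c, 100.0, 10.0, 1.0) + e for c, e in zip(concs, noise)]
grid = [5.0 + 0.5 * i for i in range(21)]      # candidate EC50 values 5 .. 15
lo, hi = bootstrap_ci(concs, resps, 100.0, 1.0, grid)
```

The simulation-study logic in the abstract amounts to repeating this on data generated from known parameters and checking how often the interval covers the truth.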
Analysis of energy-based algorithms for RNA secondary structure prediction
2012-01-01
Background RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. Results We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. 
Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Conclusions Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets. PMID:22296803
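The benchmark measures named above (sensitivity, positive predictive value and their harmonic mean, the F-measure) can be computed from predicted and reference base-pair sets as follows; this is the standard definition, not the authors' benchmarking code.

```python
def benchmark(predicted_pairs, reference_pairs):
    """Sensitivity, PPV and their harmonic mean (F-measure) for a predicted
    RNA secondary structure, with base pairs given as sets of (i, j) tuples."""
    tp = len(predicted_pairs & reference_pairs)   # correctly predicted pairs
    sensitivity = tp / len(reference_pairs)
    ppv = tp / len(predicted_pairs)
    f_measure = 2 * sensitivity * ppv / (sensitivity + ppv) if tp else 0.0
    return sensitivity, ppv, f_measure
```

Averaging the F-measure over a dataset of reference structures gives the accuracy figures (e.g. 0.686, 0.680, 0.711) compared in the study.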
Analysis of energy-based algorithms for RNA secondary structure prediction.
Hajiaghayi, Monir; Condon, Anne; Hoos, Holger H
2012-02-01
RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. 
Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets.
The Effects of Test Length and Sample Size on Item Parameters in Item Response Theory
ERIC Educational Resources Information Center
Sahin, Alper; Anil, Duygu
2017-01-01
This study investigates the effects of sample size and test length on item-parameter estimation in test development utilizing three unidimensional dichotomous models of item response theory (IRT). For this purpose, a real language test comprised of 50 items was administered to 6,288 students. Data from this test was used to obtain data sets of…
Process Control Strategies for Dual-Phase Steel Manufacturing Using ANN and ANFIS
NASA Astrophysics Data System (ADS)
Vafaeenezhad, H.; Ghanei, S.; Seyedein, S. H.; Beygi, H.; Mazinani, M.
2014-11-01
In this research, a comprehensive soft-computing approach is presented for analyzing the parameters that influence the manufacturing of dual-phase steels. A set of experimental data was gathered to build the initial database used for training and testing both artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). The input parameters were the intercritical annealing temperature, carbon content, and holding time, with martensite percentage as the output. A fraction of the data set was chosen to train both the ANN and the ANFIS, and the rest was used to validate the performance of the trained networks on unseen data. To compare the results, the coefficient of determination and the root mean squared error were chosen as indexes. With artificial intelligence methods, it is not necessary to establish a preliminary mathematical model or to formulate the effects of the influencing parameters explicitly. In conclusion, the martensite percentage corresponding to a given set of manufacturing parameters can be determined prior to production using these controlling algorithms. Although the results obtained from both the ANN and the ANFIS are very encouraging, the proposed ANFIS outperforms the ANN and offers greater cost-reduction benefits.
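The two comparison indexes can be computed as below; these are the standard definitions of RMSE and the coefficient of determination, not code from the study.

```python
def rmse(actual, predicted):
    """Root mean squared error between measured and model outputs."""
    n = len(actual)
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot
```

Applied to the held-out test fraction, a higher R² and lower RMSE for the ANFIS predictions of martensite percentage would support the paper's conclusion.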
PERIOD ESTIMATION FOR SPARSELY SAMPLED QUASI-PERIODIC LIGHT CURVES APPLIED TO MIRAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Shiyuan; Huang, Jianhua Z.; Long, James
2016-12-01
We develop a nonlinear semi-parametric Gaussian process model to estimate periods of Miras with sparsely sampled light curves. The model uses a sinusoidal basis for the periodic variation and a Gaussian process for the stochastic changes. We use maximum likelihood to estimate the period and the parameters of the Gaussian process, while integrating out the effects of other nuisance parameters in the model with respect to a suitable prior distribution obtained from earlier studies. Since the likelihood is highly multimodal for period, we implement a hybrid method that applies the quasi-Newton algorithm for the Gaussian process parameters and searches the period/frequency parameter space over a dense grid. A large-scale, high-fidelity simulation is conducted to mimic the sampling quality of Mira light curves obtained by the M33 Synoptic Stellar Survey. The simulated data set is publicly available and can serve as a testbed for future evaluation of different period estimation methods. The semi-parametric model outperforms an existing algorithm on this simulated test data set as measured by period recovery rate and the quality of the resulting period-luminosity relations.
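The dense-grid period search can be illustrated with a much simpler scoring rule than the paper's semi-parametric likelihood: a string-length (phase dispersion) statistic that is minimized when the trial period folds the light curve into a smooth curve. The toy light curve below is an assumption.

```python
import math

def dispersion(times, mags, period):
    """String-length statistic: sum of squared magnitude jumps between
    consecutive points after sorting by phase at the trial period."""
    order = sorted(range(len(times)), key=lambda i: (times[i] % period) / period)
    return sum((mags[order[k]] - mags[order[k - 1]]) ** 2
               for k in range(1, len(order)))

def grid_period_search(times, mags, periods):
    """Return the trial period on the grid that minimizes the dispersion."""
    return min(periods, key=lambda p: dispersion(times, mags, p))

# Toy sinusoidal light curve with true period 2.5, sampled at 40 epochs:
times = [0.37 * k for k in range(40)]
mags = [math.sin(2.0 * math.pi * t / 2.5) for t in times]
trial = [1.5 + 0.1 * i for i in range(25)]     # dense grid, 1.5 .. 3.9
best_period = grid_period_search(times, mags, trial)
```

The multimodality the abstract mentions shows up here as local minima of the dispersion at alias periods, which is why a dense grid rather than a single local optimization is needed.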
Observational information for f(T) theories and dark torsion
NASA Astrophysics Data System (ADS)
Bengochea, Gabriel R.
2011-01-01
In the present work we analyze and compare the information coming from different observational data sets in the context of a class of f(T) theories. We perform a joint analysis with measurements of the most recent type Ia supernovae (SNe Ia), Baryon Acoustic Oscillations (BAO), Cosmic Microwave Background radiation (CMB), Gamma-Ray Burst data (GRBs) and Hubble parameter observations (OHD) to constrain the only new parameter these theories have. It is shown that when the new combined BAO/CMB parameter is used to set constraints, the result differs from previous works. We also show that when Observational Hubble Data (OHD) are included, the simpler ΛCDM model is excluded at the one-sigma level, driving the effective equation of state of these theories to the phantom type. Also, by analyzing a tension criterion for SNe Ia and the other observational sets, we obtain more consistent data sets that are better suited to work with these theories.
Parameter identification for structural dynamics based on interval analysis algorithm
NASA Astrophysics Data System (ADS)
Yang, Chen; Lu, Zixing; Yang, Zhenyu; Liang, Ke
2018-04-01
A parameter identification method for structural dynamics using an interval analysis algorithm is presented in this paper. The proposed uncertain identification method is investigated using the central difference method and an ARMA system. With the help of the fixed memory least squares method and the matrix inversion lemma, a set-membership identification technique is applied to obtain the best estimate of the identified parameters in a tight and accurate region. To cope with the lack of sufficient statistical descriptions of the uncertain parameters, this paper treats uncertainties as non-probabilistic intervals. As long as the bounds of the uncertainties are known, the algorithm can obtain not only the center estimates of the parameters but also the bounds of the errors. To improve the efficiency of the proposed method, a time-saving recursive algorithm is presented. Finally, to verify the accuracy of the proposed method, two numerical examples are presented and evaluated against three identification criteria.
Bouc-Wen hysteresis model identification using Modified Firefly Algorithm
NASA Astrophysics Data System (ADS)
Zaman, Mohammad Asif; Sikder, Urmita
2015-12-01
The parameters of the Bouc-Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc-Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, the Genetic Algorithm and the Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate and a high degree of accuracy in identifying Bouc-Wen model parameters. Finally, the proposed method is used to find the Bouc-Wen model parameters from experimental data. The obtained model is found to be in good agreement with the measured data.
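The identification scheme in the abstract above, a firefly-style optimizer minimizing the error between data points and model output, can be sketched as follows. A toy cubic model stands in for the Bouc-Wen differential equation, which is too long to reproduce here, and the decaying randomization weight is one simple reading of "dynamic process control parameters"; all data and tuning constants are invented.

```python
import math
import random

# Synthetic "measurements" from a toy model y = a*x + b*x^3 with known
# parameters; the optimizer must recover (a, b) = (1.5, -0.7).
random.seed(7)
XS = [-1.0 + 0.1 * i for i in range(21)]
TRUE = (1.5, -0.7)
DATA = [TRUE[0] * x + TRUE[1] * x ** 3 for x in XS]

def cost(p):
    """Sum of squared errors between data and model at parameters p."""
    a, b = p
    return sum((y - (a * x + b * x ** 3)) ** 2 for x, y in zip(XS, DATA))

n, iters, beta0, gamma = 25, 80, 1.0, 1.0
alpha = 0.3  # randomization weight, decayed each iteration ("dynamic" control)
pop = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(n)]
for _ in range(iters):
    costs = [cost(p) for p in pop]
    for i in range(n):
        for j in range(n):
            if costs[j] < costs[i]:  # firefly j is brighter: move i toward j
                r2 = sum((pi - pj) ** 2 for pi, pj in zip(pop[i], pop[j]))
                beta = beta0 * math.exp(-gamma * r2)
                pop[i] = [pi + beta * (pj - pi) + alpha * (random.random() - 0.5)
                          for pi, pj in zip(pop[i], pop[j])]
        costs[i] = cost(pop[i])
    alpha *= 0.95  # shrink the random walk as the swarm converges
best = min(pop, key=cost)
```

Because the brightest firefly is never displaced, the best cost found is retained across iterations, and the shrinking `alpha` trades exploration for refinement over time.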
A comparison of Stokes parameters for sky and a soybean canopy
NASA Technical Reports Server (NTRS)
Schutt, John B.; Holben, Brent N.; Mcmurtrey, James E., III
1991-01-01
An evaluation of the polarization signatures obtained from the four Stokes parameters is reported for the atmosphere and a soybean canopy. The polarimeter design and operation are set forth, and the relationships among the Stokes parameters are discussed. The canopy polarization differed from that of the sky at azimuths of 90 and 270 degrees, demonstrating the response that would be produced by reflecting the sky polarization signatures across a plane parallel to the polarization axis and passing through a phase angle of about 90 degrees. Classical behavior in terms of electromagnetic theory was found in the fourth Stokes parameter of the canopy, which was obtained in the principal plane. Only the third Stokes parameter is demonstrated to be unambiguously affected in a comparison of sky polarization signatures and aerosol optical densities. The similarity between the sky at azimuth 180 degrees and the soybean canopy data at the principal plane is interesting considering the disparity of the subjects.
A unified framework for approximation in inverse problems for distributed parameter systems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.
1988-01-01
A theoretical framework is presented that can be used to treat approximation techniques for very general classes of parameter estimation problems involving distributed systems that are either first or second order in time. Using the approach developed, one can obtain both convergence and stability (continuous dependence of parameter estimates with respect to the observations) under very weak regularity and compactness assumptions on the set of admissible parameters. This unified theory can be used for many problems found in the recent literature and in many cases offers significant improvements to existing results.
Multi-Criteria Optimization of Regulation in Metabolic Networks
Higuera, Clara; Villaverde, Alejandro F.; Banga, Julio R.; Ross, John; Morán, Federico
2012-01-01
Determining the regulation of metabolic networks at genome scale is a hard task. It has been hypothesized that biochemical pathways and metabolic networks might have undergone an evolutionary process of optimization with respect to several criteria over time. In this contribution, a multi-criteria approach has been used to optimize parameters for the allosteric regulation of enzymes in a model of a metabolic substrate-cycle. This has been carried out by calculating the Pareto set of optimal solutions according to two objectives: the proper direction of flux in a metabolic cycle and the energetic cost of applying the set of parameters. Different Pareto fronts have been calculated for eight different “environments” (specific time courses of end product concentrations). For each resulting front the so-called knee point is identified, which can be considered a preferred trade-off solution. Interestingly, the optimal control parameters corresponding to each of these points also lead to optimal behaviour in all the other environments. By averaging the parameter sets of the most frequently found knee solutions, a final, optimal consensus set of parameters can be obtained, which is an indication of the existence of a universal regulation mechanism for this system. The implications of such a universal regulatory switch are discussed in the framework of large metabolic networks. PMID:22848435
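The two ingredients named in the abstract above, extracting the Pareto set of solutions scored on two objectives (both minimized here) and picking a knee point, can be sketched as follows. The knee definition used is the common maximum-distance-to-extreme-line heuristic, which is only one of several choices, and the candidate scores are invented for illustration.

```python
def dominates(p, q):
    """p dominates q if it is no worse in both objectives and is a distinct point."""
    return p[0] <= q[0] and p[1] <= q[1] and p != q

def pareto_front(points):
    """Non-dominated subset, sorted along the first objective."""
    return sorted(p for p in points
                  if not any(dominates(q, p) for q in points))

def knee_point(front):
    """Interior point of the front farthest from the line joining its extremes."""
    (x1, y1), (x2, y2) = front[0], front[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5
    return max(front[1:-1],
               key=lambda p: abs((p[0] - x1) * dy - (p[1] - y1) * dx) / norm)

# Invented candidate parameter sets scored on (flux objective, energy cost).
candidates = [(1.0, 5.0), (2.0, 3.0), (2.5, 4.0), (3.0, 2.5), (4.0, 1.0), (5.0, 0.9)]
front = pareto_front(candidates)  # (2.5, 4.0) is dominated by (2.0, 3.0)
knee = knee_point(front)
```

The knee point is "preferred" in the sense that moving away from it along the front trades a large loss in one objective for a small gain in the other.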
Characterization of electrical appliances in transient state
NASA Astrophysics Data System (ADS)
Wójcik, Augustyn; Winiecki, Wiesław
2017-08-01
This article presents a study of electrical appliance characterization based on power grid signals. Devices are represented by parameters of the current and voltage signals recorded during transient states. Only transients occurring as a result of switching devices on are considered here. Data acquisition, performed with a specialized measurement setup developed for electricity load monitoring, is described. The paper presents a method for transient detection and a method for calculating appliance parameters. Using the acquired measurement data and appropriate software, sets of parameters were computed for several household appliances operating under different conditions. The usefulness of the proposed method for appliance characterization in a Non-Intrusive Appliance Load Monitoring System (NIALMS) is discussed in light of the obtained results.
A Novel Prediction Method about Single Components of Analog Circuits Based on Complex Field Modeling
Tian, Shulin; Yang, Chenglin
2014-01-01
Little research has addressed prediction for analog circuits. The few existing methods do not connect feature extraction and calculation with circuit analysis, so the fault indicator (FI) calculation often lacks a rational basis, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex-field modeling. Since faults of single components are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and implements complex-field modeling. Then, using an established parameter scanning model in the complex field, it analyzes the relationship between parameter variation and the degeneration of single components in order to obtain a more reasonable FI feature set. From the obtained FI feature set, it establishes a novel model of the degeneration trend of single components in analog circuits. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of single components in analog circuits. Because the FI feature set is calculated more rationally, prediction accuracy is improved. The foregoing conclusions are verified by experiments. PMID:25147853
NASA Astrophysics Data System (ADS)
Dafonte, C.; Fustes, D.; Manteiga, M.; Garabato, D.; Álvarez, M. A.; Ulla, A.; Allende Prieto, C.
2016-10-01
Aims: We present an innovative artificial neural network (ANN) architecture, called the Generative ANN (GANN), that computes the forward model; that is, it learns the function that relates the unknown outputs (stellar atmospheric parameters, in this case) to the given inputs (spectra). Such a model can be integrated in a Bayesian framework to estimate the posterior distribution of the outputs. Methods: The architecture of the GANN follows the same scheme as a normal ANN, but with the inputs and outputs inverted. We train the network with the set of atmospheric parameters (Teff, log g, [Fe/H] and [α/Fe]), obtaining the stellar spectra for such inputs. The residuals between the spectra in the grid and the estimated spectra are minimized using a validation dataset to keep solutions as general as possible. Results: The performance of both conventional ANNs and GANNs in estimating the stellar parameters as a function of star brightness is presented and compared for different Galactic populations. GANNs provide significantly improved parameterizations for early and intermediate spectral types with rich and intermediate metallicities. The behaviour of both algorithms is very similar for our sample of late-type stars, with residuals in the derivation of [Fe/H] and [α/Fe] below 0.1 dex for stars with Gaia magnitude Grvs < 12, which accounts for on the order of four million stars to be observed by the Radial Velocity Spectrograph of the Gaia satellite. Conclusions: Uncertainty estimation of computed astrophysical parameters is crucial for the validation of the parameterization itself and for the subsequent exploitation by the astronomical community. GANNs produce not only the parameters for a given spectrum, but also a goodness-of-fit between the observed spectrum and the one predicted for a given set of parameters. Moreover, they allow us to obtain the full posterior distribution over the astrophysical parameter space once a noise model is assumed. This can be used for novelty detection and quality assessment.
Risch, Martin; Nydegger, Urs; Risch, Lorenz
2017-01-01
In clinical practice, laboratory results are often important for making diagnostic, therapeutic, and prognostic decisions. Interpreting individual results relies on accurate reference intervals and decision limits. Despite the considerable amount of resources in clinical medicine spent on elderly patients, accurate reference intervals for the elderly are rarely available. The SENIORLAB study set out to determine reference intervals in the elderly by investigating a large variety of laboratory parameters in clinical chemistry, hematology, and immunology. The SENIORLAB study is an observational, prospective cohort study. Subjectively healthy residents of Switzerland aged 60 years and older were included for baseline examination (n = 1467), where anthropometric measurements were taken, medical history was reviewed, and a fasting blood sample was drawn under optimal preanalytical conditions. More than 110 laboratory parameters were measured, and a biobank was set up. The study participants are followed up every 3 to 5 years for quality of life, morbidity, and mortality. The primary aim is to establish age-related reference intervals for the different laboratory parameters. The secondary aims of this study include the following: identify associations between different parameters, identify diagnostic characteristics for different conditions, identify the prevalence of occult disease in subjectively healthy individuals, and identify prognostic factors for the investigated outcomes, including mortality. To provide better grounds for clinical decisions, specific reference intervals for laboratory parameters of the elderly are needed. Reference intervals are obtained from healthy individuals. A major obstacle when obtaining reference intervals in the elderly is the definition of health in seniors, because individuals without any medical condition or medication are rare in older adulthood. Reference intervals obtained from such individuals cannot be considered representative for seniors in a state of age-specific normal health. In addition to the established methods for determining reference intervals, this longitudinal study uses a unique approach in that survival and long-term well-being are taken as indicators of health in seniors. This approach is expected to provide robust and representative reference intervals obtained from an adequate reference population rather than from a collective of highly selected individuals. The study was registered under the International Standard Randomized Controlled Trial Number registry: ISRCTN53778569.
NASA Astrophysics Data System (ADS)
Wells, J. R.; Kim, J. B.
2011-12-01
Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not fully explore the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from the published literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary trial-and-error methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and the system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from the published literature and, where those were not available, by using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration. We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further, we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts of NEP, C storage, ET, and runoff, thereby identifying a highly important source of DGVM uncertainty.
Optical phantoms with adjustable subdiffusive scattering parameters
NASA Astrophysics Data System (ADS)
Krauter, Philipp; Nothelfer, Steffen; Bodenschatz, Nico; Simon, Emanuel; Stocker, Sabrina; Foschum, Florian; Kienle, Alwin
2015-10-01
A new epoxy-resin-based optical phantom system with adjustable subdiffusive scattering parameters is presented, along with measurements of the intrinsic absorption, scattering, fluorescence, and refractive index of the matrix material. Both an aluminium oxide powder and a titanium dioxide dispersion were used as scattering agents, and we present measurements of their scattering and reduced scattering coefficients. A method is theoretically described for mixing both scattering agents to obtain continuously adjustable anisotropy values g between 0.65 and 0.9 and values of the phase function parameter γ in the range of 1.4 to 2.2. Furthermore, we show absorption spectra for a set of pigments that can be added to achieve particular absorption characteristics. Together with an additional analysis of aging, a fully characterized phantom system is obtained, with the novelty of adjustable g and γ parameters.
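The mixing idea in the abstract above can be illustrated with the standard relation that the anisotropy of a two-agent mixture is the scattering-coefficient-weighted mean of the individual anisotropies; solving that relation for the mixing ratio gives the fraction needed to hit a target g. The anisotropy values below are only the endpoints quoted in the abstract, not measured properties of the Al2O3 or TiO2 agents, and the simple weighted-mean model is an assumption of this sketch.

```python
def mixed_g(mu_s1, g1, mu_s2, g2):
    """Anisotropy of a two-agent mixture, weighted by scattering coefficients."""
    return (mu_s1 * g1 + mu_s2 * g2) / (mu_s1 + mu_s2)

def scattering_fraction_for_target(g1, g2, g_target):
    """Fraction f of total scattering from agent 1 so that f*g1 + (1-f)*g2 = g_target."""
    return (g_target - g2) / (g1 - g2)

g1, g2 = 0.65, 0.90  # illustrative anisotropies of the two agents
f = scattering_fraction_for_target(g1, g2, 0.80)
# Interpreting f as agent 1's share of the total scattering coefficient:
check = mixed_g(f, g1, 1.0 - f, g2)
```

Any target g between the two endpoint values is reachable with 0 < f < 1, which is what makes the anisotropy continuously adjustable.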
Propagation characteristics of two-color laser pulses in homogeneous plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemlata,; Saroch, Akanksha; Jha, Pallavi
2015-11-15
An analytical and numerical study of the evolution of two-color, sinusoidal laser pulses in cold, underdense, and homogeneous plasma has been presented. The wave equations for the radiation fields driven by linear as well as nonlinear contributions due to the two-color laser pulses have been set up. A variational technique is used to obtain the simultaneous equations describing the evolution of the laser spot size, pulse length, and chirp parameter. Numerical methods are used to graphically analyze the simultaneous evolution of these parameters due to the combined effect of the two-color laser pulses. Further, the pulse parameters are compared with those obtained for a single laser pulse. Significant focusing, compression, and enhanced positive chirp are obtained due to the combined effect of simultaneously propagating two-color pulses as compared to a single pulse propagating in plasma.
PAR -- Interface to the ADAM Parameter System
NASA Astrophysics Data System (ADS)
Currie, Malcolm J.; Chipperfield, Alan J.
PAR is a library of Fortran subroutines that provides convenient mechanisms for applications to exchange information with the outside world through input-output channels called parameters. Parameters enable a user to control an application's behaviour. PAR supports numeric, character, and logical parameters, and is currently implemented only on top of the ADAM parameter system. The PAR library permits parameter values to be obtained with or without a variety of constraints. Results may be put into parameters to be passed on to other applications. Other facilities include setting prompt strings and suggested defaults. This document also introduces a preliminary C interface for the PAR library; this may be subject to change in the light of experience.
An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Allison, E-mail: lewis.allison10@gmail.com; Smith, Ralph; Williams, Brian
For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
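The sequential design idea in the abstract above can be sketched on the smallest possible stand-in: a Bayesian linear model y = θx + noise, where each candidate design point is scored by the information it contributes to θ (the entropy reduction of the Gaussian posterior). The model, prior, noise level, and candidate set are all invented; the real framework applies this scoring to high-fidelity code evaluations rather than to a toy regression.

```python
import math

SIGMA2 = 1.0       # observation noise variance (assumed)
PRIOR_VAR = 100.0  # prior variance of theta (assumed)

def posterior_var(design_points):
    """Posterior variance of theta after observing at the given design points."""
    precision = 1.0 / PRIOR_VAR + sum(x * x for x in design_points) / SIGMA2
    return 1.0 / precision

def info_gain(chosen, x):
    """Entropy reduction (nats) of theta from adding design point x."""
    return 0.5 * math.log(posterior_var(chosen) / posterior_var(chosen + [x]))

# Greedily pick the three most informative design conditions from a candidate
# pool, without replacement, exactly as a sequential design loop would.
candidates = list(range(1, 11))  # candidate design points x = 1 .. 10
chosen = []
for _ in range(3):
    best = max(candidates, key=lambda x: info_gain(chosen, x))
    candidates.remove(best)
    chosen.append(best)
```

For this model the gain is monotone in x squared, so the greedy loop picks the extreme design points first, a familiar property of optimal designs for slope estimation.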
Chemical trend of acceptor levels of Be, Mg, Zn, and Cd in GaAs, GaP, InP and GaN
NASA Astrophysics Data System (ADS)
Wang, Hao; Chen, An-Ban
2000-03-01
We are investigating the “shallow” acceptor levels in the III-nitride semiconductors theoretically. The k·p Hamiltonians and a model central-cell impurity potential have been used to evaluate the ordering of the ionization energies of the impurities Be, Mg, Zn, and Cd in GaN. The impurity potential parameters were obtained by studying the same set of impurities in GaAs. These parameters were then transferred to the calculations for the other hosts, leaving only one adjustable screening parameter for each host. This procedure was tested in GaP and InP, and remarkably good results were obtained. When applied to GaN, the procedure produced a consistent set of acceptor levels with different k·p Hamiltonians. The calculated ionization energies for the Be, Mg, Zn and Cd acceptors in GaN are, respectively, 145, 156, 192, and 312 meV for the zincblende structure, and 229, 250, 320, and 510 meV for the wurtzite structure. These and other results will be discussed.
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, Francis J.
1989-01-01
A new technique was developed for weighting data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on the 17 satellites in Goddard Earth Model-T1 (GEM-T1) were employed to apply this technique to gravity field parameters. GEM-T2 (31 satellites), recently computed as a direct application of the method, is also summarized. The method adjusts the weights so that subset solutions of the data agree with the complete solution to within their error estimates. With the adjusted weights, the process provides an automatic calibration of the error estimates for the solution parameters. The derived data weights are generally much smaller than the corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.
Analysis of Design Parameters Effects on Vibration Characteristics of Fluidlastic Isolators
NASA Astrophysics Data System (ADS)
Deng, Jing-hui; Cheng, Qi-you
2017-07-01
The control of vibration in helicopters, which consists of reducing vibration levels below an acceptable limit, is a key problem. Fluidlastic isolators are increasingly widely used because their fluids are non-toxic, non-corrosive, nonflammable, and compatible with most elastomers and adhesives. In fluidlastic isolator design, the selection of design parameters is very important for obtaining efficient vibration suppression. To determine the effect of the design parameters on isolator properties, a dynamic equation is set up based on the theory of dynamics, and a dynamic analysis is carried out. The influences of the design parameters on the properties of the fluidlastic isolator are calculated. The dynamic analysis results show that a fluidlastic isolator can reduce vibration effectively. The results also show that design parameters such as fluid density, viscosity coefficient, stiffness (K1 and K2) and loss coefficient have an obvious influence on isolator performance. Efficient vibration suppression can be obtained by optimizing these design parameters.
Optimization of Gas Metal Arc Welding Process Parameters
NASA Astrophysics Data System (ADS)
Kumar, Amit; Khurana, M. K.; Yadav, Pradeep K.
2016-09-01
This study presents the application of the Taguchi method combined with grey relational analysis to optimize the process parameters of gas metal arc welding (GMAW) of AISI 1020 carbon steel for multiple quality characteristics (bead width, bead height, weld penetration and heat affected zone). An L9 orthogonal array was used to fabricate the joints. The experiments were conducted according to combinations of voltage (V), current (A) and welding speed (Ws). The results revealed that welding speed is the most significant process parameter. By analyzing the grey relational grades, optimal parameters are obtained, and significant factors are identified using ANOVA. The welding speed, current and voltage have been optimized for AISI 1020 using the GMAW process. To fortify the robustness of the experimental design, a confirmation test was performed at the selected optimal process parameter settings. Observations from this method may help automotive sub-assembly, shipbuilding and vessel fabricators and operators obtain optimal welding conditions.
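The grey relational grade computation named in the abstract above can be sketched on an invented three-run, two-response example (penetration treated as larger-is-better, heat-affected zone as smaller-is-better). The distinguishing coefficient of 0.5 is the customary choice; the response values are illustrative, not the study's data.

```python
ZETA = 0.5                     # distinguishing coefficient (customary value)
penetration = [4.0, 5.0, 4.8]  # larger is better (invented responses)
haz = [2.0, 3.0, 2.2]          # smaller is better (invented responses)

def normalize(values, larger_is_better):
    """Map responses to [0, 1] so that 1 is always the ideal value."""
    lo, hi = min(values), max(values)
    if larger_is_better:
        return [(v - lo) / (hi - lo) for v in values]
    return [(hi - v) / (hi - lo) for v in values]

norm = [normalize(penetration, True), normalize(haz, False)]
deviations = [[1.0 - v for v in row] for row in norm]
d_min = min(min(row) for row in deviations)
d_max = max(max(row) for row in deviations)

# Grey relational coefficient per response, then the grade as the mean
# over responses for each experimental run.
coeff = [[(d_min + ZETA * d_max) / (d + ZETA * d_max) for d in row]
         for row in deviations]
grades = [sum(col) / len(col) for col in zip(*coeff)]
best_run = grades.index(max(grades))  # the run that best balances both responses
```

Runs 1 and 2 each win on one response and lose badly on the other, while run 3 is strong on both, so it gets the highest grade; this is exactly the multi-response trade-off the grade is designed to capture.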
Radiation effects on type I fiber Bragg gratings: influence of recoating
NASA Astrophysics Data System (ADS)
Blanchet, T.; Laffont, G.; Cotillard, R.; Marin, E.; Morana, A.; Boukenter, A.; Ouerdane, Y.; Girard, S.
2017-04-01
We investigated the Bragg Wavelength Shift (BWS) induced by X-rays in a large set of conventional FBGs up to a dose of 100 kGy. The results give some insight into the influence of irradiation parameters such as dose and dose rate, as well as the impact of writing process parameters such as thermal treatment or acrylate recoating on FBG radiation tolerance.
A theoretical study of potentially observable chirality-sensitive NMR effects in molecules.
Garbacz, Piotr; Cukras, Janusz; Jaszuński, Michał
2015-09-21
Two recently predicted nuclear magnetic resonance effects, the chirality-induced rotating electric polarization and the oscillating magnetization, are examined for several experimentally available chiral molecules. We discuss in detail the requirements for experimental detection of chirality-sensitive NMR effects in the studied molecules. These requirements are related to two parameters: the shielding polarizability and the antisymmetric part of the nuclear magnetic shielding tensor. The dominant second contribution has been computed for small molecules at the coupled cluster and density functional theory levels. It was found that DFT calculations using the KT2 functional and the aug-cc-pCVTZ basis set adequately reproduce the CCSD(T) values obtained with the same basis set. The largest parameter values, and thus the most promising from the experimental point of view, were obtained for the fluorine nuclei in 1,3-difluorocyclopropene and 1,3-diphenyl-2-fluoro-3-trifluoromethylcyclopropene.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.
2008-05-15
We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC based stochastic method with an iterative Gauss-Newton based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values, and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values, and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
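The MCMC behaviour described in the abstract above, posterior means and uncertainty bounds that do not depend on the starting point, can be sketched on the smallest possible stand-in: random-walk Metropolis sampling of a single location parameter with a flat prior and Gaussian noise, instead of the full four-parameter Cole-Cole model. The data, noise level, proposal width, and chain length are all invented.

```python
import math
import random

# Synthetic data: 50 noisy observations of an unknown location parameter.
random.seed(3)
TRUE_THETA, NOISE = 2.0, 0.5
data = [TRUE_THETA + random.gauss(0.0, NOISE) for _ in range(50)]

def log_like(theta):
    """Gaussian log-likelihood up to an additive constant."""
    return -sum((y - theta) ** 2 for y in data) / (2.0 * NOISE ** 2)

theta = -5.0  # deliberately poor starting value, far from the truth
samples = []
for _ in range(6000):
    proposal = theta + random.gauss(0.0, 0.3)
    # Metropolis accept/reject on the likelihood ratio (flat prior).
    if math.log(random.random()) < log_like(proposal) - log_like(theta):
        theta = proposal
    samples.append(theta)

burned = samples[1000:]  # discard burn-in
post_mean = sum(burned) / len(burned)
post_var = sum((s - post_mean) ** 2 for s in burned) / len(burned)
```

Despite the arbitrary starting value, the chain drifts into the high-likelihood region and the retained samples give both a point estimate (the posterior mean) and an uncertainty bound (the posterior variance), which is the complementarity with deterministic optimization that the abstract points out.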
A multi-objective approach to improve SWAT model calibration in alpine catchments
NASA Astrophysics Data System (ADS)
Tuo, Ye; Marcolini, Giorgia; Disse, Markus; Chiogna, Gabriele
2018-04-01
Multi-objective hydrological model calibration can represent a valuable solution to reduce model equifinality and parameter uncertainty. The Soil and Water Assessment Tool (SWAT) model is widely applied to investigate water quality and water management issues in alpine catchments. However, the model calibration is generally based on discharge records only, and most of the previous studies have defined a unique set of snow parameters for an entire basin. Only a few studies have considered snow observations to validate model results or have taken into account the possible variability of snow parameters for different subbasins. This work presents and compares three possible calibration approaches. The first two procedures are single-objective calibration procedures, for which all parameters of the SWAT model were calibrated according to river discharge alone. Procedures I and II differ from each other by the assumption used to define snow parameters: The first approach assigned a unique set of snow parameters to the entire basin, whereas the second approach assigned different subbasin-specific sets of snow parameters to each subbasin. The third procedure is a multi-objective calibration, in which we considered snow water equivalent (SWE) information at two different spatial scales (i.e. subbasin and elevation band), in addition to discharge measurements. We tested these approaches in the Upper Adige river basin where a dense network of snow depth measurement stations is available. Only the set of parameters obtained with this multi-objective procedure provided an acceptable prediction of both river discharge and SWE. These findings offer the large community of SWAT users a strategy to improve SWAT modeling in alpine catchments.
Coronary artery segmentation in X-ray angiograms using gabor filters and differential evolution.
Cervantes-Sanchez, Fernando; Cruz-Aceves, Ivan; Hernandez-Aguirre, Arturo; Solorio-Meza, Sergio; Cordova-Fraga, Teodoro; Aviña-Cervantes, Juan Gabriel
2018-08-01
Segmentation of coronary arteries in X-ray angiograms is an essential task for computer-aided diagnosis, since it can help cardiologists diagnose and monitor vascular abnormalities. Because the main disadvantages of X-ray angiograms are nonuniform illumination and weak contrast between blood vessels and the image background, different vessel enhancement methods have been introduced. In this paper, a novel method for blood vessel enhancement based on Gabor filters tuned using the optimization strategy of differential evolution (DE) is proposed. Because the Gabor filters are governed by three different parameters, the optimal selection of those parameters is highly desirable in order to maximize the vessel detection rate while reducing the computational cost of the training stage. To obtain the optimal set of parameters for the Gabor filters, the area (Az) under the receiver operating characteristic curve is used as the objective function. In the experimental results, the proposed method achieves Az = 0.9388 on a training set of 40 images, and on a test set of 40 images it obtains the highest performance, Az = 0.9538, compared with six state-of-the-art vessel detection methods. Finally, the proposed method achieves an accuracy of 0.9423 for vessel segmentation on the test set. The experimental results also show that the proposed method can be highly suitable for clinical decision support in terms of computational time and vessel segmentation performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
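The optimizer named in the abstract above, classic DE/rand/1/bin, can be sketched as follows. Evaluating a real Gabor-filter Az objective requires image data, so a simple two-parameter quadratic stands in for 1 - Az; the population size, mutation factor F, crossover rate CR, and bounds are invented, not the paper's settings.

```python
import random

random.seed(11)

def objective(p):
    """Stand-in for 1 - Az: minimized at (3, -1)."""
    x, y = p
    return (x - 3.0) ** 2 + (y + 1.0) ** 2

NP, F, CR, GENS = 20, 0.8, 0.9, 60
pop = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(NP)]
for _ in range(GENS):
    for i in range(NP):
        # DE/rand/1: mutant from three distinct population members (not i).
        a, b, c = random.sample([p for k, p in enumerate(pop) if k != i], 3)
        mutant = [a[d] + F * (b[d] - c[d]) for d in range(2)]
        # Binomial crossover, forcing at least one coordinate from the mutant.
        j_rand = random.randrange(2)
        trial = [mutant[d] if (random.random() < CR or d == j_rand) else pop[i][d]
                 for d in range(2)]
        # Greedy selection: keep the trial only if it is no worse.
        if objective(trial) <= objective(pop[i]):
            pop[i] = trial
best = min(pop, key=objective)
```

Swapping `objective` for a function that filters a training image and returns 1 - Az would reproduce the paper's tuning loop in structure, at vastly higher cost per evaluation.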
Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.
Shireman, Emilie; Steinley, Douglas; Brusco, Michael J
2017-02-01
Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
Giri, Maria Grazia; Cavedon, Carlo; Mazzarotto, Renzo; Ferdeghini, Marco
2016-05-01
The aim of this study was to implement a Dirichlet process mixture (DPM) model for automatic tumor edge identification on (18)F-fluorodeoxyglucose positron emission tomography ((18)F-FDG PET) images by optimizing the parameters on which the algorithm depends, to validate it experimentally, and to test its robustness. The DPM model belongs to the class of the Bayesian nonparametric models and uses the Dirichlet process prior for flexible nonparametric mixture modeling, without any preliminary choice of the number of mixture components. The DPM algorithm implemented in the statistical software package R was used in this work. The contouring accuracy was evaluated on several image data sets: on an IEC phantom (spherical inserts with diameter in the range 10-37 mm) acquired by a Philips Gemini Big Bore PET-CT scanner, using 9 different target-to-background ratios (TBRs) from 2.5 to 70; on a digital phantom simulating spherical/uniform lesions and tumors, irregular in shape and activity; and on 20 clinical cases (10 lung and 10 esophageal cancer patients). The influence of the DPM parameters on contour generation was studied in two steps. In the first one, only the IEC spheres having diameters of 22 and 37 mm and a sphere of the digital phantom (41.6 mm diameter) were studied by varying the main parameters until the diameter of the spheres was obtained within 0.2% of the true value. In the second step, the results obtained for this training set were applied to the entire data set to determine DPM based volumes of all available lesions. These volumes were compared to those obtained by applying already known algorithms (Gaussian mixture model and gradient-based) and to true values, when available. Only one parameter was found able to significantly influence segmentation accuracy (ANOVA test). This parameter was linearly connected to the uptake variance of the tested region of interest (ROI). 
In the first step of the study, a calibration curve was determined to automatically generate the optimal parameter from the variance of the ROI. This "calibration curve" was then applied to contour the whole data set. The accuracy (mean discrepancy between DPM model-based contours and reference contours) of volume estimation was below (1 ± 7)% on the whole data set (1 SD). The overlap between true and automatically segmented contours, measured by the Dice similarity coefficient, was 0.93 with a SD of 0.03. The proposed DPM model was able to accurately reproduce known volumes of FDG concentration, with high overlap between segmented and true volumes. For all the analyzed inserts of the IEC phantom, the algorithm proved to be robust to variations in radius and in TBR. The main advantage of this algorithm was that no setting of DPM parameters was required in advance, since the proper setting of the only parameter that could significantly influence the segmentation results was automatically related to the uptake variance of the chosen ROI. Furthermore, the algorithm did not need any preliminary choice of the optimum number of classes to describe the ROIs within PET images and no assumption about the shape of the lesion and the uptake heterogeneity of the tracer was required.
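As a rough analogue of the DPM approach (the paper used an R implementation), scikit-learn's `BayesianGaussianMixture` offers a truncated Dirichlet-process prior that prunes unused mixture components automatically, so no preliminary choice of the number of classes is needed; the 1-D intensities and the concentration value below are illustrative:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Toy 1-D "uptake" values: background plus a hot lesion.
intensities = np.concatenate([rng.normal(1.0, 0.2, 500),
                              rng.normal(5.0, 0.5, 100)]).reshape(-1, 1)

# Truncated Dirichlet-process mixture: n_components is only an upper
# bound; the DP prior drives the weights of unused components to zero.
dpm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.1,  # illustrative concentration value
    max_iter=500, random_state=0).fit(intensities)

active = int((dpm.weights_ > 0.01).sum())
print("mixture components retained:", active)
```

The concentration prior plays a role loosely comparable to the single variance-linked parameter the study calibrated: it controls how readily new components are spawned.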
Reconstruction of interaction rate in holographic dark energy
NASA Astrophysics Data System (ADS)
Mukherjee, Ankan
2016-11-01
The present work is based on the holographic dark energy model with the Hubble horizon as the infrared cut-off. The interaction rate between dark energy and dark matter has been reconstructed for three different parameterizations of the deceleration parameter. Observational constraints on the model parameters have been obtained by maximum likelihood analysis using the observational Hubble parameter data (OHD), type Ia supernova data (SNe), baryon acoustic oscillation data (BAO) and the distance prior of the cosmic microwave background (CMB), namely the CMB shift parameter data (CMBShift). The interaction rate obtained in the present work remains always positive and increases with expansion. It is very similar to the result obtained by Sen and Pavon [1], where the interaction rate was reconstructed for a parametrization of the dark energy equation of state. Tighter constraints on the interaction rate have been obtained in the present work, as it is based on larger data sets. The nature of the dark energy equation of state parameter has also been studied for the present models. Though the reconstruction is done from different parametrizations, the overall nature of the interaction rate is very similar in all the cases. Different information criteria and the Bayesian evidence, which have been invoked in the context of model selection, show that these models are in close proximity to each other.
Oliveira, Augusto F; Philipsen, Pier; Heine, Thomas
2015-11-10
In the first part of this series, we presented a parametrization strategy to obtain high-quality electronic band structures on the basis of density-functional-based tight-binding (DFTB) calculations and published a parameter set called QUASINANO2013.1. Here, we extend our parametrization effort to include the remaining terms that are needed to compute the total energy and its gradient, commonly referred to as repulsive potential. Instead of parametrizing these terms as a two-body potential, we calculate them explicitly from the DFTB analogues of the Kohn-Sham total energy expression. This strategy requires only two further numerical parameters per element. Thus, the atomic configuration and four real numbers per element are sufficient to define the DFTB model at this level of parametrization. The QUASINANO2015 parameter set allows the calculation of energy, structure, and electronic structure of all systems composed of elements ranging from H to Ca. Extensive benchmarks show that the overall accuracy of QUASINANO2015 is comparable to that of well-established methods, including PM7 and hand-tuned DFTB parameter sets, while coverage of a much larger range of chemical systems is available.
Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions
NASA Astrophysics Data System (ADS)
Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.
2017-01-01
Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.
NASA Astrophysics Data System (ADS)
Sadashiva, M.; Shivanand, H. K.; Vidyasagar, H. N.
2018-04-01
The current work investigates the effect of process parameters in friction stir welding of Aluminium 2024 base alloy and Aluminium 2024 matrix alloy reinforced with E-glass and silicon carbide reinforcements. The process involved a set of synthesis techniques incorporating stir casting methodology, resulting in fabrication of the composite material. The synthesized composite material was then machined to obtain a plate of dimensions 100 mm × 50 mm × 6 mm. The plate was then friction stir welded at different sets of parameters, viz. spindle speeds of 600 rpm, 900 rpm and 1200 rpm and feed rates of 40 mm/min, 80 mm/min and 120 mm/min, for analyzing the process capability. The study of the given set of parameters is important to understand the physics of the process, which may lead to better properties of the joint; this matters in view of its use in advanced engineering applications, especially in the aerospace domain, which uses Aluminium 2024 alloy for wing and fuselage structures under tension.
NASA Astrophysics Data System (ADS)
Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie
2018-01-01
The wearing degree of the wheel set tread is one of the main factors that influence the safety and stability of a running train. The geometrical parameters mainly include flange thickness and flange height. Line-structure laser light was projected on the wheel tread surface, and the geometrical parameters can be deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit. Image acquisition was fulfilled in hardware interrupt mode. A high-efficiency parallel segmentation algorithm based on CUDA is proposed. The algorithm first divides the image into smaller squares, and extracts the squares of the target by a fusion of the k_means and STING clustering image segmentation algorithms. Segmentation time is less than 0.97 ms. A considerable acceleration ratio compared with serial CPU calculation was obtained, which greatly improved the real-time image processing capacity. When a wheel set is running at a limited speed, the system, placed along the railway line, can measure the geometrical parameters automatically. The maximum measuring speed is 120 km/h.
Pilot climate data system user's guide
NASA Technical Reports Server (NTRS)
Reph, M. G.; Treinish, L. A.; Bloch, L.
1984-01-01
Instructions for using the Pilot Climate Data System (PCDS), an interactive, scientific data management system for locating, obtaining, manipulating, and displaying climate-research data, are presented. The PCDS currently provides this support for approximately twenty data sets. Figures that illustrate the terminal displays which a user sees when running the PCDS and some examples of the output from this system are included. The capabilities which are described in detail allow a user to perform the following: (1) obtain comprehensive descriptions of a number of climate parameter data sets and the associated sensor measurements from which they were derived; (2) obtain detailed information about the temporal coverage and data volume of data sets which are readily accessible via the PCDS; (3) extract portions of a data set using criteria such as time range and geographic location, and output the data to tape, user terminal, system printer, or online disk files in a special data-set-independent format; (4) access and manipulate the data in these data-set-independent files, performing such functions as combining the data, subsetting the data, and averaging the data; and (5) create various graphical representations of the data stored in the data-set-independent files.
Optimisation of dispersion parameters of Gaussian plume model for CO₂ dispersion.
Liu, Xiong; Godbole, Ajit; Lu, Cheng; Michal, Guillaume; Venton, Philip
2015-11-01
The carbon capture and storage (CCS) and enhanced oil recovery (EOR) projects entail the possibility of accidental release of carbon dioxide (CO2) into the atmosphere. To quantify the spread of CO2 following such release, the 'Gaussian' dispersion model is often used to estimate the resulting CO2 concentration levels in the surroundings. The Gaussian model enables quick estimates of the concentration levels. However, the traditionally recommended values of the 'dispersion parameters' in the Gaussian model may not be directly applicable to CO2 dispersion. This paper presents an optimisation technique to obtain the dispersion parameters in order to achieve a quick estimation of CO2 concentration levels in the atmosphere following CO2 blowouts. The optimised dispersion parameters enable the Gaussian model to produce quick estimates of CO2 concentration levels, precluding the necessity to set up and run much more complicated models. Computational fluid dynamics (CFD) models were employed to produce reference CO2 dispersion profiles in various atmospheric stability classes (ASC), different 'source strengths' and degrees of ground roughness. The performance of the CFD models was validated against the 'Kit Fox' field measurements, involving dispersion over a flat horizontal terrain, both with low and high roughness regions. An optimisation model employing a genetic algorithm (GA) to determine the best dispersion parameters in the Gaussian plume model was set up. Optimum values of the dispersion parameters for different ASCs that can be used in the Gaussian plume model for predicting CO2 dispersion were obtained.
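A toy version of this optimisation can be sketched as follows, assuming power-law dispersion parameters sigma = a·x^b (a common but here hypothetical functional form), a synthetic "reference" field standing in for the CFD profiles, and SciPy's differential evolution standing in for the paper's genetic algorithm:

```python
import numpy as np
from scipy.optimize import differential_evolution

def plume(x, y, z, Q, u, sigma_y, sigma_z, H=0.0):
    """Gaussian plume with ground reflection (point source, wind u along x)."""
    return (Q / (2.0 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2.0 * sigma_y**2))
            * (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
               + np.exp(-(z + H)**2 / (2.0 * sigma_z**2))))

# Ground-level grid; dispersion parameters follow assumed power laws
# sigma_y = a_y * x^b_y and sigma_z = a_z * x^b_z.
xg, yg = np.meshgrid(np.linspace(50.0, 500.0, 10), [0.0, 15.0, 30.0])

def field(params):
    a_y, b_y, a_z, b_z = params
    return plume(xg, yg, 0.0, Q=1.0, u=5.0,
                 sigma_y=a_y * xg**b_y, sigma_z=a_z * xg**b_z)

true = (0.22, 0.85, 0.12, 0.75)   # stand-in "CFD reference" parameters
c_ref = field(true)

# Differential evolution stands in for the paper's genetic algorithm.
res = differential_evolution(lambda p: np.sum((field(p) - c_ref) ** 2 / c_ref ** 2),
                             bounds=[(0.05, 0.5), (0.5, 1.0)] * 2, seed=0)
print("fitted dispersion parameters:", res.x.round(3))
```

Including off-centerline points (y > 0) is what makes the lateral and vertical dispersion parameters separately identifiable: the crosswind decay pins sigma_y, and the amplitude then pins sigma_z.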
NASA Technical Reports Server (NTRS)
Hartfield, Roy J., Jr.; Hollo, Steven D.; Mcdaniel, James C.
1990-01-01
A nonintrusive optical technique, laser-induced iodine fluorescence, has been used to obtain planar measurements of flow field parameters in the supersonic mixing flow field of a nonreacting supersonic combustor. The combustor design used in this work was configured with staged transverse sonic injection behind a rearward-facing step into a Mach 2.07 free stream. A set of spatially resolved measurements of temperature and injectant mole fraction has been generated. These measurements provide an extensive and accurate experimental data set required for the validation of computational fluid dynamic codes developed for the calculation of highly three-dimensional combustor flow fields.
Analysis of material parameter effects on fluidlastic isolators performance
NASA Astrophysics Data System (ADS)
Cheng, Q. Y.; Deng, J. H.; Feng, Z. Z.; Qian, F.
2018-01-01
Control of vibration in helicopters has always been a complex and challenging task. Fluidlastic isolators are more and more widely used because the fluids are non-toxic, non-corrosive, nonflammable, and compatible with most elastomers and adhesives. In fluidlastic isolator design, the selection of the design parameters of the fluid and rubber is very important to obtain efficient vibration suppression. To relate the properties of the fluidlastic isolator to its material design parameters, a dynamic equation is set up based on dynamic theory, and a dynamic analysis is carried out. The influences of the design parameters on the properties of the fluidlastic isolator are calculated. The material parameters examined are the properties of the fluid and rubber. The analysis results show that design parameters such as the density of the fluid, the viscosity coefficient of the fluid, the stiffness of the rubber (K1) and the loss coefficient of the rubber have an obvious influence on the performance of the isolator. Based on the results of the study, it is concluded that efficient vibration suppression can be obtained by proper selection of the design parameters.
Agrawal, Vijay K; Gupta, Madhu; Singh, Jyoti; Khadikar, Padmakar V
2005-03-15
An attempt is made to propose yet another method of estimating the lipophilicity of a heterogeneous set of 223 compounds. The method is based on the use of equalized electronegativity along with topological indices. It was observed that excellent results are obtained in multiparametric regression upon introduction of indicator parameters. The results are discussed critically on the basis of various statistical parameters.
Constraints on cosmological parameters in power-law cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rani, Sarita; Singh, J.K.; Altaibayeva, A.
In this paper, we examine observational constraints on the power-law cosmology, essentially dependent on two parameters: H₀ (Hubble constant) and q (deceleration parameter). We investigate the constraints on these parameters using the latest 28 points of H(z) data and 580 points of Union2.1 compilation data, and compare the results with those of ΛCDM. We also forecast constraints using a simulated data set for the future JDEM supernovae survey. Our studies give better insight into power-law cosmology than the earlier analysis by Kumar [arXiv:1109.6924], indicating that it agrees well with the Union2.1 compilation data but not with the H(z) data. However, the constraints obtained on the averaged H₀ and q using the simulated data set for the future JDEM supernovae survey are found to be inconsistent with the values obtained from the H(z) and Union2.1 compilation data. We also perform the statefinder analysis and find that the power-law cosmological models approach the standard ΛCDM model as q → −1. Finally, we observe that although the power-law cosmology explains several prominent features of the evolution of the Universe, it fails in details.
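For a constant deceleration parameter q, power-law cosmology gives H(z) = H₀(1+z)^(1+q), so the two parameters can be constrained by a simple chi-square fit to H(z) data; the mock data below are illustrative, not the 28-point set used in the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Mock H(z) data drawn from H(z) = H0 * (1+z)^(1+q), the exact relation
# for constant q; H0_true and q_true are illustrative values.
rng = np.random.default_rng(0)
z = np.linspace(0.05, 1.5, 20)
H0_true, q_true = 70.0, -0.4
sigma = np.full(z.size, 2.0)   # assumed measurement uncertainties, km/s/Mpc
H_obs = H0_true * (1 + z) ** (1 + q_true) + rng.normal(0.0, 2.0, z.size)

def chi2(params):
    H0, q = params
    return np.sum(((H_obs - H0 * (1 + z) ** (1 + q)) / sigma) ** 2)

fit = minimize(chi2, x0=[65.0, 0.0], method="Nelder-Mead")
print("best-fit H0, q:", fit.x.round(2))
```

A full likelihood analysis would add the supernova and JDEM-forecast terms to the same chi-square, but the fitting pattern is identical.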
An, Yan; Zou, Zhihong; Li, Ranran
2014-01-01
A large number of parameters are acquired during practical water quality monitoring. If all the parameters are used in water quality assessment, the computational complexity will definitely increase. In order to reduce the input space dimensions, a fuzzy rough set was introduced to perform attribute reduction. Then, an attribute recognition theoretical model and entropy method were combined to assess water quality in the Harbin reach of the Songhuajiang River in China. A dataset consisting of ten parameters was collected from January to October in 2012. Fuzzy rough set was applied to reduce the ten parameters to four parameters: BOD5, NH3-N, TP, and F. coli (Reduct A). Considering that DO is a usual parameter in water quality assessment, another reduct, including DO, BOD5, NH3-N, TP, TN, F, and F. coli (Reduct B), was obtained. The assessment results of Reduct B show a good consistency with those of Reduct A, and this means that DO is not always necessary to assess water quality. The results with attribute reduction are not exactly the same as those without attribute reduction, which can be attributed to the α value decided by subjective experience. The assessment results gained by the fuzzy rough set obviously reduce computational complexity, and are acceptable and reliable. The model proposed in this paper enhances the water quality assessment system. PMID:24675643
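The entropy method used for parameter weighting can be sketched as follows; the monitoring values are hypothetical stand-ins for the reduced parameter set:

```python
import numpy as np

# Toy monitoring matrix: 6 samples x 4 retained parameters
# (hypothetical values standing in for BOD5, NH3-N, TP and F. coli).
X = np.array([[2.1, 0.4, 0.10,  300.0],
              [3.5, 0.9, 0.22,  900.0],
              [1.8, 0.3, 0.08,  150.0],
              [4.2, 1.5, 0.35, 2400.0],
              [2.9, 0.7, 0.15,  600.0],
              [3.1, 1.1, 0.18, 1100.0]])

# Entropy weight method: normalize each column to proportions, compute
# the Shannon entropy per parameter, and weight each parameter by its
# degree of divergence 1 - e_j (lower entropy -> more informative).
P = X / X.sum(axis=0)
e = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
w = (1 - e) / (1 - e).sum()
print("entropy weights:", w.round(3))
```

These objective weights then feed the attribute recognition model, avoiding purely subjective weighting of the water quality parameters.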
Mühlfeld, Christian; Ochs, Matthias
2013-08-01
Design-based stereology provides efficient methods to obtain valuable quantitative information of the respiratory tract in various diseases. However, the choice of the most relevant parameters in a specific disease setting has to be deduced from the present pathobiological knowledge. Often it is difficult to express the pathological alterations by interpretable parameters in terms of volume, surface area, length, or number. In the second part of this companion review article, we analyze the present pathophysiological knowledge about acute lung injury, diffuse parenchymal lung diseases, emphysema, pulmonary hypertension, and asthma to come up with recommendations for the disease-specific application of stereological principles for obtaining relevant parameters. Worked examples with illustrative images are used to demonstrate the work flow, estimation procedure, and calculation and to facilitate the practical performance of equivalent analyses.
NASA Astrophysics Data System (ADS)
Visser, Eric P.; Disselhorst, Jonathan A.; van Lier, Monique G. J. T. B.; Laverman, Peter; de Jong, Gabie M.; Oyen, Wim J. G.; Boerman, Otto C.
2011-02-01
The image reconstruction algorithms provided with the Siemens Inveon small-animal PET scanner are filtered backprojection (FBP), 3-dimensional reprojection (3DRP), ordered-subset expectation maximization in 2 or 3 dimensions (OSEM2D/3D) and maximum a posteriori (MAP) reconstruction. This study aimed at optimizing the reconstruction parameter settings with regard to image quality (IQ) as defined by the NEMA NU 4-2008 standards. The NEMA NU 4-2008 image quality phantom was used to determine image noise, expressed as the percentage standard deviation in the uniform phantom region (%STD_unif), activity recovery coefficients for the FDG-filled rods (RC_rod), and spill-over ratios for the non-radioactive water- and air-filled phantom compartments (SOR_wat and SOR_air). Although not required by NEMA NU 4, we also determined a contrast-to-noise ratio for each rod (CNR_rod), expressing the trade-off between activity recovery and image noise. For FBP and 3DRP the cut-off frequency of the applied filters was varied, and for OSEM2D and OSEM3D the number of iterations was varied. For MAP, the "smoothing parameter" β and the type of uniformity constraint (variance or resolution) were varied. Results of these analyses were demonstrated in images of an FDG-injected rat showing tumours in the liver, and of a mouse injected with an 18F-labeled peptide, showing a small subcutaneous tumour and the cortex structure of the kidneys. Optimum IQ in terms of CNR_rod for the small-diameter rods was obtained using MAP with uniform variance and β = 0.4. This setting led to RC_rod,1mm = 0.21, RC_rod,2mm = 0.57, %STD_unif = 1.38, SOR_wat = 0.0011, and SOR_air = 0.00086. However, the highest activity recovery for the smallest rods with still very small %STD_unif was obtained using β = 0.075, for which these IQ parameters were 0.31, 0.74, 2.67, 0.0041, and 0.0030, respectively. 
The different settings of reconstruction parameters were clearly reflected in the rat and mouse images as the trade-off between the recovery of small structures (blood vessels, small tumours, kidney cortex structure) and image noise in homogeneous body parts (healthy liver background). Highest IQ for the Inveon PET scanner was obtained using MAP reconstruction with uniform variance. The setting of β depended on the specific imaging goals.
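The NEMA-style figures of merit mentioned above reduce to simple ratios over regions of interest; a sketch with synthetic voxel values (the contrast-to-noise figure here is an assumed recovery-over-noise ratio, not necessarily the study's exact formula):

```python
import numpy as np

rng = np.random.default_rng(0)
uniform = rng.normal(100.0, 1.5, 5000)  # voxel values, uniform phantom region
rod = rng.normal(60.0, 5.0, 200)        # partially recovered rod, true activity 100

rc = rod.mean() / 100.0                                   # activity recovery coefficient
pct_std = 100.0 * uniform.std(ddof=1) / uniform.mean()    # %STD, image noise
cnr = rc / (pct_std / 100.0)   # assumed recovery-over-noise trade-off figure
print(f"RC = {rc:.2f}, %STD = {pct_std:.2f}, CNR = {cnr:.1f}")
```

Sweeping a reconstruction parameter (such as β) and recording these quantities reproduces the trade-off curves the study used to pick its settings.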
Uloza, Virgilijus; Padervinskis, Evaldas; Uloziene, Ingrida; Saferis, Viktoras; Verikas, Antanas
2015-09-01
The aim of the present study was to evaluate the reliability of measurements of acoustic voice parameters obtained simultaneously using oral and contact (throat) microphones, and to investigate the utility of the combined use of these microphones for voice categorization. Voice samples of the sustained vowel /a/ obtained from 157 subjects (105 healthy and 52 pathological voices) were recorded in a soundproof booth simultaneously through two microphones: an oral AKG Perception 220 microphone (AKG Acoustics, Vienna, Austria) and a contact (throat) Triumph PC microphone (Clearer Communications, Inc, Burnaby, Canada) placed on the lamina of the thyroid cartilage. Acoustic voice signal data were measured for fundamental frequency, percent jitter and shimmer, normalized noise energy, signal-to-noise ratio, and harmonic-to-noise ratio using Dr. Speech software (Tiger Electronics, Seattle, WA). The correlations of acoustic voice parameters were statistically significant and strong (r = 0.71-1.0) for all measurements obtained with the two microphones. When classifying into healthy-pathological voice classes, the oral-microphone shimmer revealed a correct classification rate (CCR) of 75.2% and the throat-microphone jitter revealed a CCR of 70.7%. However, the combination of both throat and oral microphones allowed identifying a set of three voice parameters: throat signal-to-noise ratio, oral shimmer, and oral normalized noise energy, which provided a CCR of 80.3%. The measurements of acoustic voice parameters using a combination of oral and throat microphones proved reliable in clinical settings and demonstrated high CCRs when distinguishing healthy and pathological voice patient groups. Our study validates the suitability of the throat microphone signal for the task of automatic voice analysis for the purpose of voice screening. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Classification of adulterated honeys by multivariate analysis.
Amiry, Saber; Esmaiili, Mohsen; Alizadeh, Mohammad
2017-06-01
In this research, honey samples were adulterated with date syrup (DS) and invert sugar syrup (IS) at three concentrations (7%, 15% and 30%). 102 adulterated samples were prepared in six batches with 17 replications for each batch. For each sample, 32 parameters including color indices and rheological, physical, and chemical parameters were determined. To classify the samples based on the type and concentration of adulterant, a multivariate analysis was applied using principal component analysis (PCA) followed by a linear discriminant analysis (LDA). Then, 21 principal components (PCs) were selected in five sets. Approximately two-thirds of the samples were identified correctly using color indices (62.75%) or rheological properties (67.65%). Powerful discrimination was obtained using physical properties (97.06%), and the best separations were achieved using two sets of chemical properties (set 1: lactone, diastase activity, sucrose - 100%; set 2: free acidity, HMF, ash - 95%). Copyright © 2016 Elsevier Ltd. All rights reserved.
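The PCA-followed-by-LDA pipeline is straightforward to reproduce with scikit-learn; the data below are random stand-ins for the 102 x 32 honey measurement matrix, and the three classes loosely mimic pure versus DS- and IS-adulterated samples:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Random stand-in for the data: 102 samples x 32 parameters, 3 classes
# with shifted means so they are separable.
y = np.repeat([0, 1, 2], 34)
X = rng.normal(0.0, 1.0, (102, 32)) + y[:, None] * 1.0

# Reduce to 21 principal components, then discriminate linearly.
clf = make_pipeline(PCA(n_components=21), LinearDiscriminantAnalysis())
acc = cross_val_score(clf, X, y, cv=5).mean()
print("cross-validated classification rate:", round(acc, 3))
```

Fitting PCA inside the cross-validation pipeline, rather than once on the whole data set, avoids leaking test information into the component selection.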
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2003-01-01
A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
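The described model can be sketched with a logistic regression in which the dependent variable is set to 1 when regeneration density meets the specified level; the predictors, coefficients, and simulated plot data below are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
# Hypothetical stand predictors (names and coefficients are invented).
basal_area = rng.uniform(5.0, 40.0, n)     # residual basal area, m^2/ha
site_index = rng.uniform(15.0, 30.0, n)    # site index, m

# Dependent variable set to 1 when regeneration density meets the
# specified level, 0 otherwise (simulated from an assumed logistic model).
logit = 3.0 - 0.15 * basal_area + 0.05 * site_index
adequate = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([basal_area, site_index])
model = LogisticRegression(max_iter=1000).fit(X, adequate)
print("P(adequate regeneration | BA=10, SI=25) =",
      round(model.predict_proba([[10.0, 25.0]])[0, 1], 2))
```

Changing the density threshold used to binarize the dependent variable gives probability estimates for any specified regeneration density, as the abstract describes.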
Geerse, Daphne J.; Coolen, Bert H.; Roerdink, Melvyn
2015-01-01
Walking ability is frequently assessed with the 10-meter walking test (10MWT), which may be instrumented with multiple Kinect v2 sensors to complement the typical stopwatch-based time to walk 10 meters with quantitative gait information derived from Kinect’s 3D body point’s time series. The current study aimed to evaluate a multi-Kinect v2 set-up for quantitative gait assessments during the 10MWT against a gold-standard motion-registration system by determining between-systems agreement for body point’s time series, spatiotemporal gait parameters and the time to walk 10 meters. To this end, the 10MWT was conducted at comfortable and maximum walking speed, while 3D full-body kinematics was concurrently recorded with the multi-Kinect v2 set-up and the Optotrak motion-registration system (i.e., the gold standard). Between-systems agreement for body point’s time series was assessed with the intraclass correlation coefficient (ICC). Between-systems agreement was similarly determined for the gait parameters’ walking speed, cadence, step length, stride length, step width, step time, stride time (all obtained for the intermediate 6 meters) and the time to walk 10 meters, complemented by Bland-Altman’s bias and limits of agreement. Body point’s time series agreed well between the motion-registration systems, particularly so for body points in motion. For both comfortable and maximum walking speeds, the between-systems agreement for the time to walk 10 meters and all gait parameters except step width was high (ICC ≥ 0.888), with negligible biases and narrow limits of agreement. Hence, body point’s time series and gait parameters obtained with a multi-Kinect v2 set-up match well with those derived with a gold standard in 3D measurement accuracy. 
Future studies are recommended to test the clinical utility of the multi-Kinect v2 set-up to automate 10MWT assessments, thereby complementing the time to walk 10 meters with reliable spatiotemporal gait parameters obtained objectively in a quick, unobtrusive and patient-friendly manner. PMID:26461498
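The Bland-Altman agreement statistics used above (bias and 95% limits of agreement) can be sketched as follows; the paired walking-speed values are hypothetical, not the study's data.

```python
import statistics

def bland_altman(system_a, system_b):
    """Bias and 95% limits of agreement between two measurement systems
    (e.g., Kinect v2 vs. Optotrak) for a paired gait parameter."""
    diffs = [a - b for a, b in zip(system_a, system_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# illustrative paired walking-speed measurements (m/s)
kinect = [1.21, 1.35, 1.10, 1.42, 1.28]
optotrak = [1.20, 1.33, 1.12, 1.40, 1.27]
bias, lo, hi = bland_altman(kinect, optotrak)
```

A negligible bias with narrow limits, as reported above, indicates that the two systems can be used interchangeably for that parameter.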
Loss-resistant unambiguous phase measurement
NASA Astrophysics Data System (ADS)
Dinani, Hossein T.; Berry, Dominic W.
2014-08-01
Entangled multiphoton states have the potential to provide improved measurement accuracy, but are sensitive to photon loss. It is possible to calculate ideal loss-resistant states that maximize the Fisher information, but it is unclear how these could be experimentally generated. Here we propose a set of states that can be obtained by processing the output from parametric down-conversion. Although these states are not optimal, they provide performance very close to that of optimal states for a range of parameters. Moreover, we show how to use sequences of such states in order to obtain an unambiguous phase measurement that beats the standard quantum limit. We consider the optimization of parameters in order to minimize the final phase variance, and find that the optimum parameters are different from those that maximize the Fisher information.
NASA Technical Reports Server (NTRS)
Galindo-Israel, V.; Imbriale, W.; Shogen, K.; Mittra, R.
1990-01-01
In obtaining solutions to the first-order nonlinear partial differential equations (PDEs) for synthesizing offset dual-shaped reflectors, it is found that previously observed computational problems can be avoided if the integration of the PDEs is started from an inner projected perimeter and integrated outward rather than starting from an outer projected perimeter and integrating inward. This procedure, however, introduces a new parameter, the main reflector inner perimeter radius ρ₀, when given a subreflector inner angle θ₀. Furthermore, a desired outer projected perimeter (e.g., a circle) is no longer guaranteed. Stability of the integration is maintained if some of the initial parameters are determined first from an approximate solution to the PDEs. A one-, two-, or three-parameter optimization algorithm can then be used to obtain a best set of parameters yielding a close fit to the desired projected outer rim. Good low cross-polarization mapping functions are also obtained. These methods are illustrated by the synthesis of a high-gain offset-shaped Cassegrainian antenna and a low-noise offset-shaped Gregorian antenna.
NASA Astrophysics Data System (ADS)
Chen, Zhen; Wei, Zhengying; Wei, Pei; Chen, Shenggui; Lu, Bingheng; Du, Jun; Li, Junfeng; Zhang, Shuzhe
2017-12-01
In this work, a set of experiments was designed to investigate the effect of process parameters on the relative density of the AlSi10Mg parts manufactured by SLM. The influence of laser scan speed v, laser power P and hatch space H, which were considered as the dominant parameters, on the powder melting and densification behavior was also studied experimentally. In addition, the laser energy density was introduced to evaluate the combined effect of the above dominant parameters, so as to control the SLM process integrally. As a result, a high relative density (> 97%) was obtained by SLM at an optimized laser energy density of 3.5-5.5 J/mm². Moreover, a parameter-densification map was established to visually select the optimum process parameters for the SLM-processed AlSi10Mg parts with elevated density and required mechanical properties. The results provide important experimental guidance for obtaining AlSi10Mg components with full density and gradient functional porosity by SLM.
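The combined measure above is consistent with the areal energy density definition E = P / (v · H) in J/mm²; this sketch assumes that definition, and the parameter values are hypothetical, not the paper's optimized set.

```python
def areal_energy_density(power_w, scan_speed_mm_s, hatch_mm):
    """Areal laser energy density E = P / (v * H) in J/mm^2, the combined
    measure used to rank SLM parameter sets (assumed definition)."""
    return power_w / (scan_speed_mm_s * hatch_mm)

# hypothetical SLM parameter set
E = areal_energy_density(power_w=350.0, scan_speed_mm_s=1000.0, hatch_mm=0.08)
dense_window = 3.5 <= E <= 5.5  # the reported window for >97% relative density
```

Collapsing three process parameters into one scalar is what makes a parameter-densification map possible, though it hides trade-offs between the individual parameters.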
On the use of Bayesian Monte-Carlo in evaluation of nuclear data
NASA Astrophysics Data System (ADS)
De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles
2017-09-01
As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) with Bayesian statistical inference by comparing theory to experiment. The formal rule underlying this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can be seen as an estimation of the posterior probability density of a set of parameters (referred to as x⃗) given prior information on these parameters and a likelihood which gives the probability density of observing a data set knowing x⃗. To solve this problem, two major paths can be taken: add approximations and hypotheses and obtain an equation to be solved numerically (minimum of a cost function, or the Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte-Carlo methods are a natural solution for Bayesian inference problems. They avoid approximations (present in the traditional adjustment procedure based on chi-square minimization) and offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) in the whole energy range, from the thermal and resonance ranges to the continuum, for all nuclear reaction models at these energies. Algorithms based on Monte-Carlo sampling and Markov chains will be presented.
The objectives of BMC are to provide a reference calculation for validating the GLS calculations and approximations, to test the effects of the probability density distributions, and to provide a framework for finding the global minimum when several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluation, as well as multigroup cross section data assimilation, will be presented.
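The rule pdf(posterior) ∝ pdf(prior) × likelihood can be illustrated with a minimal prior-sampling sketch (an importance-sampling flavor of Bayesian Monte Carlo); the one-parameter model y = θx and the data points are hypothetical, not a nuclear reaction model.

```python
import math
import random

random.seed(0)

def likelihood(theta, data, sigma=0.5):
    """Gaussian likelihood of observing `data` given the toy model y = theta * x."""
    l = 1.0
    for x, y in data:
        l *= math.exp(-0.5 * ((y - theta * x) / sigma) ** 2)
    return l

# hypothetical measurements generated around theta = 2
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

# sample the prior (uniform on [0, 5]) and weight each sample by its likelihood;
# the weighted samples approximate the posterior distribution
samples = [random.uniform(0.0, 5.0) for _ in range(20000)]
weights = [likelihood(t, data) for t in samples]
posterior_mean = sum(t * w for t, w in zip(samples, weights)) / sum(weights)
```

Unlike a GLS fit, no Gaussian approximation of the posterior is needed: any prior or likelihood shape can be plugged in, at the cost of many model evaluations.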
Wavelets solution of MHD 3-D fluid flow in the presence of slip and thermal radiation effects
NASA Astrophysics Data System (ADS)
Usman, M.; Zubair, T.; Hamid, M.; Haq, Rizwan Ul; Wang, Wei
2018-02-01
This article analyzes the effects of magnetic field, slip, and thermal radiation on generalized three-dimensional flow, heat, and mass transfer in a channel with a lower stretching wall. We assume two different lateral stretching rates for the lower surface of the channel, while the upper wall is subjected to constant injection. Moreover, the influence of thermal slip on the temperature profile, besides viscous dissipation and Joule heating, is also taken into account. The governing set of partial differential equations for the flow and heat transfer is transformed into a nonlinear set of ordinary differential equations (ODEs) using compatible similarity transformations. The resulting nonlinear ODE set is tackled by means of a new wavelet algorithm. The outcomes obtained via the modified Chebyshev wavelet method are compared with fourth-order Runge-Kutta; the comparison, error, and convergence analyses show excellent agreement. Additionally, graphical representations for various physical parameters, including the skin friction coefficient, velocity, temperature gradient, and temperature profiles, are plotted and discussed. It is observed that, for a fixed value of the velocity slip parameter, a suitable selection of the stretching ratio parameter can help hasten the heat transfer rate and reduce the viscous drag over the stretching sheet. Finally, the convergence analysis confirms that the proposed method is efficient.
Electro-optical parameters of bond polarizability model for aluminosilicates.
Smirnov, Konstantin S; Bougeard, Daniel; Tandon, Poonam
2006-04-06
Electro-optical parameters (EOPs) of bond polarizability model (BPM) for aluminosilicate structures were derived from quantum-chemical DFT calculations of molecular models. The tensor of molecular polarizability and the derivatives of the tensor with respect to the bond length are well reproduced with the BPM, and the EOPs obtained are in a fair agreement with available experimental data. The parameters derived were found to be transferable to larger molecules. This finding suggests that the procedure used can be applied to systems with partially ionic chemical bonds. The transferability of the parameters to periodic systems was tested in molecular dynamics simulation of the polarized Raman spectra of alpha-quartz. It appeared that the molecular Si-O bond EOPs failed to reproduce the intensity of peaks in the spectra. This limitation is due to large values of the longitudinal components of the bond polarizability and its derivative found in the molecular calculations as compared to those obtained from periodic DFT calculations of crystalline silica polymorphs by Umari et al. (Phys. Rev. B 2001, 63, 094305). It is supposed that the electric field of the solid is responsible for the difference of the parameters. Nevertheless, the EOPs obtained can be used as an initial set of parameters for calculations of polarizability related characteristics of relevant systems in the framework of BPM.
Oceanic Geoid and Tides Obtained from GEOS-3 Satellite Data in the Northwestern Atlantic Ocean
NASA Technical Reports Server (NTRS)
Won, I. J.; Miller, L. S.
1978-01-01
Two sets of GEOS-3 altimeter data which fall within about a 2.5 degree width are analyzed for ocean geoid and tides. One set covers a linear path from Newfoundland to Cuba and the other from Puerto Rico to the North Carolina coast. Forty different analyses using various parameters are performed in order to investigate convergence. Profiles of the geoid and four tides, M₂, O₁, S₂, and K₁, are obtained along the two strips. The results demonstrate convergent solutions for all forty cases and show, within expectation, fair agreement with those obtained from the MODE deep-sea tide gauge. It is also shown that the oceanic geoid obtained through this analysis can potentially improve the short wavelength structure over existing geoid models.
Shen, Qijun; Shan, Yanna; Hu, Zhengyu; Chen, Wenhui; Yang, Bing; Han, Jing; Huang, Yanfang; Xu, Wen; Feng, Zhan
2018-04-30
To objectively quantify intracranial hematoma (ICH) enlargement by analysing the image texture of head CT scans and to provide objective and quantitative imaging parameters for predicting early hematoma enlargement. We retrospectively studied 108 ICH patients with baseline non-contrast computed tomography (NCCT) and 24-h follow-up CT available. Image data were assessed by a chief radiologist and a resident radiologist. Consistency analysis between observers was tested. The patients were divided into a training set (75%) and a validation set (25%) by stratified sampling. Patients in the training set were dichotomized according to 24-h hematoma expansion ≥ 33%. Using the Laplacian of Gaussian bandpass filter, we chose different anatomical spatial domains ranging from fine to coarse texture to obtain a series of derived parameters (mean grayscale intensity, variance, uniformity) in order to quantify and evaluate all data. The parameters were externally validated on the validation set. Significant differences were found between the two groups of patients within variance at V₁.₀ and in uniformity at U₁.₀, U₁.₈ and U₂.₅. The intraclass correlation coefficients for the texture parameters were between 0.67 and 0.99. The area under the ROC curve between the two groups of ICH cases was between 0.77 and 0.92. The accuracy on the validation set by CTTA was 0.59-0.85. NCCT texture analysis can objectively quantify the heterogeneity of ICH and independently predict early hematoma enlargement. • Heterogeneity is helpful in predicting ICH enlargement. • CTTA could play an important role in predicting early ICH enlargement. • After filtering, fine texture had the best diagnostic performance. • The histogram-based uniformity parameters can independently predict ICH enlargement. • CTTA is more objective, more comprehensive, and more independently operable than previous methods.
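The histogram-based uniformity parameter above can be illustrated with a minimal sketch: U = Σ pᵢ² over histogram bin probabilities, so a homogeneous region scores 1 and a heterogeneous region scores lower. The bin count, grey-level range, and pixel values here are assumptions for illustration (the Laplacian of Gaussian filtering step is omitted).

```python
def histogram_uniformity(pixels, bins=16, lo=0.0, hi=256.0):
    """Histogram-based uniformity U = sum(p_i^2); higher values indicate a
    more homogeneous region (illustrative version of a CTTA parameter)."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in pixels:
        idx = min(int((v - lo) / width), bins - 1)
        counts[idx] += 1
    n = len(pixels)
    return sum((c / n) ** 2 for c in counts)

uniform_region = [40] * 100            # homogeneous region: one grey level
mixed_region = list(range(0, 200, 2))  # heterogeneous region: spread of levels
u_uniform = histogram_uniformity(uniform_region)
u_mixed = histogram_uniformity(mixed_region)
```

In the study's terms, lower uniformity at the relevant filter scales signals the heterogeneity associated with early hematoma enlargement.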
NASA Astrophysics Data System (ADS)
Neri, Mattia; Toth, Elena
2017-04-01
The study presents the implementation of different regionalisation approaches for the transfer of model parameters from similar and/or neighbouring gauged basins to an ungauged catchment; in particular, it uses a semi-distributed, continuous-simulation conceptual rainfall-runoff model for simulating daily streamflows. The case study refers to a set of Apennine catchments (in the Emilia-Romagna region, Italy) that, given their spatial proximity, are assumed to belong to the same hydrologically homogeneous region and are used, alternately, as donor and regionalised basins. The model is a semi-distributed version of the HBV model (TUWien model) in which the catchment is divided into zones of different altitude that contribute separately to the total outlet flow. The model includes a snow module, whose application in the Apennine area has so far been very limited, even though snow accumulation and melting phenomena do play an important role in the study basins. Two methods, both widely applied in the recent literature, are used for regionalising the model: i) "parameters averaging", where each parameter is obtained as a weighted mean of the parameters obtained, through calibration, on the donor catchments; ii) "output averaging", where the model is run over the ungauged basin using the entire parameter set of each donor basin and the simulated outputs are then averaged. In the first approach the parameters are regionalised independently from each other; in the second, instead, the correlation among the parameters is maintained. Since the model is semi-distributed, with each elevation zone contributing separately, the study also tests a modified version of the second approach ("output averaging"), where each zone is considered an autonomous entity whose parameters are transposed to the corresponding elevation zone of the ungauged basin.
The study also explores the choice of the weights to be used for averaging the parameters (in the "parameters averaging" approach) or the simulated streamflows (in the "output averaging" approach): in particular, weights are estimated as a function of the similarity/distance of the ungauged basin/zone to the donors, on the basis of a set of geo-morphological catchment descriptors. The predictive accuracy of the different regionalisation methods is finally assessed by jack-knife cross-validation against the observed daily runoff for all the study catchments.
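The two regionalisation approaches can be contrasted with a toy sketch; the linear "model", donor parameter sets, and weights below are hypothetical stand-ins, not the HBV/TUWien model.

```python
def parameters_averaging(donor_params, weights):
    """Regionalise each model parameter independently as a weighted mean
    over donor catchments."""
    total = sum(weights)
    keys = donor_params[0].keys()
    return {k: sum(w * p[k] for w, p in zip(weights, donor_params)) / total
            for k in keys}

def output_averaging(run_model, donor_params, weights, forcing):
    """Run the model on the ungauged basin with each donor's full parameter
    set, then take the weighted mean of the simulated streamflows."""
    total = sum(weights)
    sims = [run_model(p, forcing) for p in donor_params]
    return [sum(w * s[i] for w, s in zip(weights, sims)) / total
            for i in range(len(forcing))]

# toy linear 'model' and two hypothetical donor parameter sets
toy_model = lambda p, rain: [p["a"] * r + p["b"] for r in rain]
donors = [{"a": 0.5, "b": 1.0}, {"a": 0.7, "b": 3.0}]
w = [0.6, 0.4]
rain = [10.0, 0.0, 4.0]
q_param = toy_model(parameters_averaging(donors, w), rain)
q_output = output_averaging(toy_model, donors, w, rain)
```

For this linear toy model the two approaches coincide; for a nonlinear model they generally differ, which is why output averaging, by keeping each donor's calibrated parameter set intact, preserves the correlation among parameters.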
Multiprocessor sparse L/U decomposition with controlled fill-in
NASA Technical Reports Server (NTRS)
Alaghband, G.; Jordan, H. F.
1985-01-01
Generation of the maximal compatibles of pivot elements for a class of small sparse matrices is studied. The algorithm involves a binary tree search and has a complexity exponential in the order of the matrix. Different strategies for selection of a set of compatible pivots based on the Markowitz criterion are investigated. The competing issues of parallelism and fill-in generation are studied and results are provided. A technique for obtaining an ordered compatible set directly from the ordered incompatible table is given. This technique generates a set of compatible pivots with the property of generating few fills. A new heuristic algorithm is then proposed that combines the idea of an ordered compatible set with a limited binary tree search to generate several sets of compatible pivots in linear time. Finally, an elimination set to reduce the matrix is selected. Parameters are suggested to obtain a balance between parallelism and fill-ins. Results of applying the proposed algorithms on several large application matrices are presented and analyzed.
Krill herd and piecewise-linear initialization algorithms for designing Takagi-Sugeno systems
NASA Astrophysics Data System (ADS)
Hodashinsky, I. A.; Filimonenko, I. V.; Sarin, K. S.
2017-07-01
A method for designing Takagi-Sugeno fuzzy systems is proposed which uses a piecewise-linear initialization algorithm for structure generation and a metaheuristic krill herd algorithm for parameter optimization. The obtained systems are tested against real data sets. The influence of some parameters of the algorithm on the approximation accuracy is analyzed. Estimates of the approximation accuracy and the number of fuzzy rules are compared with those of four known design methods.
Comparison of Spatial Correlation Parameters between Full and Model Scale Launch Vehicles
NASA Technical Reports Server (NTRS)
Kenny, Jeremy; Giacomoni, Clothilde
2016-01-01
The current vibro-acoustic analysis tools require specific spatial correlation parameters as input to define the liftoff acoustic environment experienced by the launch vehicle. Until recently these parameters have not been very well defined. A comprehensive set of spatial correlation data were obtained during a scale model acoustic test conducted in 2014. From these spatial correlation data, several parameters were calculated: the decay coefficient, the diffuse to propagating ratio, and the angle of incidence. Spatial correlation data were also collected on the EFT-1 flight of the Delta IV vehicle which launched on December 5th, 2014. A comparison of the spatial correlation parameters from full scale and model scale data will be presented.
Inverse sequential detection of parameter changes in developing time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1992-01-01
Progressive values of two probabilities are obtained for parameter estimates derived from an existing set of values and from the same set enlarged by one or more new values, respectively. One probability is that of erroneously preferring the second of these estimates for the existing data ('type 1 error'), while the second probability is that of erroneously accepting the estimate for the enlarged set ('type 2 error'). A more stable combined 'no change' probability, which always falls between 0.5 and 0, is derived from the (logarithmic) width of the uncertainty region of an equivalent 'inverted' sequential probability ratio test (SPRT, Wald 1945) in which the error probabilities are calculated rather than prescribed. A parameter change is indicated when the compound probability undergoes a progressive decrease. The test is explicitly formulated and exemplified for Gaussian samples.
Heuristics for multiobjective multiple sequence alignment.
Abbasi, Maryam; Paquete, Luís; Pereira, Francisco B
2016-07-15
Aligning multiple sequences arises in many tasks in Bioinformatics. However, the alignments produced by the current software packages are highly dependent on the parameter settings, such as the relative importance of opening gaps with respect to the increase of similarity. Choosing only one parameter setting may introduce an undesirable bias into further steps of the analysis and yield overly simplistic interpretations. In this work, we reformulate multiple sequence alignment from a multiobjective point of view. The goal is to generate several sequence alignments that represent a trade-off between maximizing the substitution score and minimizing the number of indels/gaps in the sum-of-pairs score function. This trade-off gives the practitioner further information about the similarity of the sequences, from which she could analyse and choose the most plausible alignment. We introduce several heuristic approaches, based on local search procedures, that compute a set of sequence alignments which are representative of the trade-off between the two objectives (substitution score and indels). Several algorithm design options are discussed and analysed, with particular emphasis on the influence of the starting alignment and neighborhood search definitions on the overall performance. A perturbation technique is proposed to improve the local search, which provides a wide range of high-quality alignments. The proposed approach is tested experimentally on a wide range of instances. We performed several experiments with sequences obtained from the benchmark database BAliBASE 3.0. To evaluate the quality of the results, we calculate the hypervolume indicator of the set of score vectors returned by the algorithms. The results obtained allow us to identify reasonably good choices of parameters for our approach. Further, we compared our method in terms of correctly aligned pairs ratio and columns correctly aligned ratio with respect to reference alignments.
Experimental results show that our approaches can obtain better results than TCoffee and Clustal Omega in terms of the first ratio.
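The hypervolume indicator used above to score a set of trade-off solutions can be computed for two objectives with a simple sweep. This sketch assumes both scores are maximized and that the reference point is dominated by every solution; the score vectors are made-up examples.

```python
def hypervolume_2d(points, ref):
    """Hypervolume indicator for a set of 2-D score vectors under
    maximization, measured against reference point `ref`."""
    # sort by decreasing first objective; dominated points add no area
    pts = sorted(set(points), key=lambda p: (-p[0], -p[1]))
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y > prev_y:
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

# three trade-off alignments plus one dominated point (adds nothing)
pts = [(3.0, 1.0), (2.0, 2.0), (1.0, 3.0), (1.0, 1.0)]
hv = hypervolume_2d(pts, ref=(0.0, 0.0))
```

A larger hypervolume means the returned set of alignments covers more of the substitution-score/indel trade-off, which is why it serves as a set-level quality measure.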
Reconstruction of interaction rate in holographic dark energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukherjee, Ankan, E-mail: ankan_ju@iiserkol.ac.in
2016-11-01
The present work is based on the holographic dark energy model with the Hubble horizon as the infrared cut-off. The interaction rate between dark energy and dark matter has been reconstructed for three different parameterizations of the deceleration parameter. Observational constraints on the model parameters have been obtained by maximum likelihood analysis using the observational Hubble parameter data (OHD), type Ia supernova data (SNe), baryon acoustic oscillation data (BAO) and the distance prior of the cosmic microwave background (CMB), namely the CMB shift parameter data (CMBShift). The interaction rate obtained in the present work remains always positive and increases with expansion. It is very similar to the result obtained by Sen and Pavon [1], where the interaction rate has been reconstructed for a parametrization of the dark energy equation of state. Tighter constraints on the interaction rate have been obtained in the present work as it is based on larger data sets. The nature of the dark energy equation of state parameter has also been studied for the present models. Though the reconstruction is done from different parametrizations, the overall nature of the interaction rate is very similar in all the cases. Different information criteria and the Bayesian evidence, which have been invoked in the context of model selection, show that these models are in close proximity to each other.
Moench, A.F.; Garabedian, Stephen P.; LeBlanc, Denis R.
2000-01-01
An aquifer test conducted in a sand and gravel, glacial outwash deposit on Cape Cod, Massachusetts was analyzed by means of a model for flow to a partially penetrating well in a homogeneous, anisotropic unconfined aquifer. The model is designed to account for all significant mechanisms expected to influence drawdown in observation piezometers and in the pumped well. In addition to the usual fluid-flow and storage processes, additional processes include effects of storage in the pumped well, storage in observation piezometers, effects of skin at the pumped-well screen, and effects of drainage from the zone above the water table. The aquifer was pumped at a rate of 320 gallons per minute for 72-hours and drawdown measurements were made in the pumped well and in 20 piezometers located at various distances from the pumped well and depths below the land surface. To facilitate the analysis, an automatic parameter estimation algorithm was used to obtain relevant unconfined aquifer parameters, including the saturated thickness and a set of empirical parameters that relate to gradual drainage from the unsaturated zone. Drainage from the unsaturated zone is treated in this paper as a finite series of exponential terms, each of which contains one empirical parameter that is to be determined. It was necessary to account for effects of gradual drainage from the unsaturated zone to obtain satisfactory agreement between measured and simulated drawdown, particularly in piezometers located near the water table. The commonly used assumption of instantaneous drainage from the unsaturated zone gives rise to large discrepancies between measured and predicted drawdown in the intermediate-time range and can result in inaccurate estimates of aquifer parameters when automatic parameter estimation procedures are used. The values of the estimated hydraulic parameters are consistent with estimates from prior studies and from what is known about the aquifer at the site. 
Effects of heterogeneity at the site were small as measured drawdowns in all piezometers and wells were very close to the simulated values for a homogeneous porous medium. The estimated values are: specific yield, 0.26; saturated thickness, 170 feet; horizontal hydraulic conductivity, 0.23 feet per minute; vertical hydraulic conductivity, 0.14 feet per minute; and specific storage, 1.3×10⁻⁵ per foot. It was found that drawdown in only a few piezometers strategically located at depth near the pumped well yielded parameter estimates close to the estimates obtained for the entire data set analyzed simultaneously. If the influence of gradual drainage from the unsaturated zone is not taken into account, specific yield is significantly underestimated even in these deep-seated piezometers. This helps to explain the low values of specific yield often reported for granular aquifers in the literature. If either the entire data set or only the drawdown in selected deep-seated piezometers was used, it was found unnecessary to conduct the test for the full 72 hours to obtain accurate estimates of the hydraulic parameters. For some piezometer groups, practically identical results would be obtained for an aquifer test conducted for only 8 hours. Drawdowns measured in the pumped well and piezometers at distant locations were diagnostic only of aquifer transmissivity.
Kalman filter estimation of human pilot-model parameters
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Roland, V. R.
1975-01-01
The parameters of a human pilot-model transfer function are estimated by applying the extended Kalman filter to the corresponding retarded differential-difference equations in the time domain. Use of computer-generated data indicates that most of the parameters, including the implicit time delay, may be reasonably estimated in this way. When applied to two sets of experimental data obtained from a closed-loop tracking task performed by a human, the Kalman filter generated diverging residuals for one of the measurement types, apparently because of model assumption errors. Application of a modified adaptive technique was found to overcome the divergence and to produce reasonable estimates of most of the parameters.
Determination of the technical constants of laminates in oblique directions
NASA Technical Reports Server (NTRS)
Vidouse, F.
1979-01-01
An off-axis tensile test theory based on Hooke's Law is applied to glass fiber reinforced laminates. A corrective parameter dependent on the characteristics of the strain gauge used is introduced by testing machines set up for isotropic materials. Theoretical results for a variety of strain gauges are compared with those obtained by a finite element method and with experimental results obtained on laminates reinforced with glass.
One-Dimensional Simulations for Spall in Metals with Intra- and Inter-grain failure models
NASA Astrophysics Data System (ADS)
Ferri, Brian; Dwivedi, Sunil; McDowell, David
2017-06-01
The objective of the present work is to model spall failure in metals with the coupled effect of intra-grain and inter-grain failure mechanisms. The two mechanisms are modeled by a void nucleation, growth, and coalescence (VNGC) model and a contact-cohesive model, respectively. Both models were implemented in a 1-D code to simulate spall in 6061-T6 aluminum at two impact velocities. The parameters of the VNGC model without inter-grain failure and the parameters of the cohesive model without intra-grain failure were first determined to obtain pull-back velocity profiles in agreement with experimental data. With the same impact velocities, the same sets of parameters did not predict the velocity profiles when both mechanisms were simultaneously activated. A sensitivity study was performed to predict spall under combined mechanisms by varying the critical stress in the VNGC model and the maximum traction in the cohesive model. The study provided possible sets of the two parameters leading to spall. Results will be presented comparing the predicted velocity profile with experimental data using one such set of parameters for the combined intra-grain and inter-grain failures during spall. Work supported by HDTRA1-12-1-0004 grant and by the School of Mechanical Engineering GTA.
Volumetric flow rate in simulations of microfluidic devices
NASA Astrophysics Data System (ADS)
Kovalčíková, Kristína; Slavík, Martin; Bachratá, Katarína; Bachratý, Hynek; Bohiniková, Alžbeta
2018-06-01
In this work, we examine the volumetric flow rate of microfluidic devices. The volumetric flow rate is a parameter which is necessary to correctly set up a simulation of a real device and to check the conformity of a simulation with laboratory experiments [1]. Instead of defining the volumetric flow rate at the beginning as a simulation parameter, an external force parameter is set. The proposed hypothesis is that, for a fixed set of other parameters (topology, viscosity of the liquid, …), the volumetric flow rate is linearly dependent on the external force in the typical ranges of fluid velocity used in our simulations. To confirm this linearity hypothesis and to find the numerical limits of this approach, we test several values of the external force parameter. The tests are designed for three different simulation box topologies and for various haematocrits. The topologies of the microfluidic devices are inspired by existing laboratory experiments [3-6]. The linear relationship between the external force and the volumetric flow rate is verified in orders of magnitude similar to the values obtained from laboratory experiments. Supported by the Slovak Research and Development Agency under contract No. APVV-15-0751 and by the Ministry of Education, Science, Research and Sport of the Slovak Republic under contract No. VEGA 1/0643/17.
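The linearity hypothesis above can be checked by fitting a straight line to (force, flow-rate) pairs and inspecting the residuals; the calibration values below are hypothetical, not the simulation results.

```python
def fit_line(x, y):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# hypothetical calibration runs: external force vs. measured volumetric flow rate
force = [0.5, 1.0, 1.5, 2.0]
flow = [1.02, 1.98, 3.01, 4.00]  # nearly linear response
a, b = fit_line(force, flow)
residual = max(abs(yi - (a * xi + b)) for xi, yi in zip(force, flow))
```

Once the slope is calibrated for a given topology and haematocrit, the external force needed to reproduce a target volumetric flow rate can be read off the fitted line.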
NASA Technical Reports Server (NTRS)
Suit, W. T.
1977-01-01
Flight test data are used to extract the lateral aerodynamic parameters of the F-8C airplane at moderate to high angles of attack. The data were obtained during perturbations of the airplane from steady turns with trim normal accelerations from 1.5g to 3.0g. The angle-of-attack variation from trim was negligible. The aerodynamic coefficients extracted from flight data were compared with several other sets of coefficients, and the extracted coefficients resulted in characteristics for the Dutch roll mode (at the highest angles of attack) similar to those of a set of coefficients that have been the basis of several simulations of the F-8C.
Blocky inversion of multichannel elastic impedance for elastic parameters
NASA Astrophysics Data System (ADS)
Mozayan, Davoud Karami; Gholami, Ali; Siahkoohi, Hamid Reza
2018-04-01
Petrophysical description of reservoirs requires proper knowledge of elastic parameters like P- and S-wave velocities (Vp and Vs) and density (ρ), which can be retrieved from pre-stack seismic data using the concept of elastic impedance (EI). We propose an inversion algorithm which recovers elastic parameters from pre-stack seismic data in two sequential steps. In the first step, using the multichannel blind seismic inversion method (exploited recently for recovering acoustic impedance from post-stack seismic data), high-resolution blocky EI models are obtained directly from partial angle-stacks. Using an efficient total-variation (TV) regularization, each angle-stack is inverted independently in a multichannel form without prior knowledge of the corresponding wavelet. The second step involves inversion of the resulting EI models for elastic parameters. Mathematically, under some assumptions, the EI's are linearly described by the elastic parameters in the logarithm domain. Thus a linear weighted least squares inversion is employed to perform this step. Accuracy of the concept of elastic impedance in predicting reflection coefficients at low and high angles of incidence is compared with that of exact Zoeppritz elastic impedance and the role of low frequency content in the problem is discussed. The performance of the proposed inversion method is tested using synthetic 2D data sets obtained from the Marmousi model and also 2D field data sets. The results confirm the efficiency and accuracy of the proposed method for inversion of pre-stack seismic data.
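The second step above, a linear weighted least squares inversion of log-EI for the elastic parameters, can be sketched as follows. The 3×3 coefficient matrix relating [ln Vp, ln Vs, ln ρ] to log-EI at three angles is illustrative only, not the actual angle-dependent EI coefficients.

```python
import numpy as np

def weighted_least_squares(G, d, w):
    """Solve min || W^(1/2) (G m - d) ||_2 for the model vector m via the
    normal equations: m = (G^T W G)^{-1} G^T W d."""
    W = np.diag(w)
    return np.linalg.solve(G.T @ W @ G, G.T @ W @ d)

# hypothetical linear system: log-EI at three angles as combinations of
# [ln Vp, ln Vs, ln rho] (coefficients are made up for illustration)
G = np.array([[1.0,  0.0, 1.0],
              [1.1, -0.4, 0.9],
              [1.3, -1.0, 0.7]])
m_true = np.array([8.0, 7.2, 7.8])   # true log elastic parameters
d = G @ m_true                       # noise-free log-EI "data"
m = weighted_least_squares(G, d, w=np.ones(3))
```

With noisy angle-stacks, the weights would downweight the less reliable stacks; here equal weights recover the true model exactly because the data are noise-free.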
A new monitor set for the determination of neutron flux parameters in short-time k0-NAA
NASA Astrophysics Data System (ADS)
Kubešová, Marie; Kučera, Jan; Fikrle, Marek
2011-11-01
Multipurpose research reactors such as LVR-15 in Řež require monitoring of the neutron flux parameters (f, α) in each batch of samples analyzed when k0 standardization in NAA is to be used. The above parameters may change quite unpredictably, because experiments in channels adjacent to those used for NAA require an adjustment of the reactor operation parameters and/or active core configuration. For frequent monitoring of the neutron flux parameters the bare multi-monitor method is very convenient. The well-known Au-Zr tri-isotopic monitor set that provides a good tool for determining f and α after long-time irradiation is not optimal in case of short-time irradiation, because only a low activity of the 95Zr radionuclide is formed. Therefore, several elements forming radionuclides with suitable half-lives and Q0 and Ēr parameters in a wide range of values were tested, namely 198Au, 56Mn, 88Rb, 128I, 139Ba, and 239U. As a result, an optimal mixture was selected consisting of Au, Mn, and Rb to form a well-suited monitor set for irradiation at a thermal neutron fluence rate of 3×10^17 m^-2 s^-1. The procedure of short-time INAA with the new monitor set for k0 standardization was successfully validated using the synthetic reference material SMELS 1 and several matrix reference materials (RMs) representing matrices of sample types frequently analyzed in our laboratory. The results were obtained using the Kayzero for Windows program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
El-Farhan, Y.H.; Scow, K.M.; Fan, S.
Trichloroethylene (TCE) biodegradation in soil under aerobic conditions requires the presence of another compound, such as toluene, to support growth of microbial populations and enzyme induction. The biodegradation kinetics of TCE and toluene were examined by conducting three groups of experiments in soil: toluene only, toluene combined with low TCE concentrations, and toluene with TCE concentrations similar to or higher than toluene. The biodegradation of TCE and toluene and their interrelationships were modeled using a combination of several biodegradation functions. In the model, the pollutants were described as existing in the solid, liquid, and gas phases of soil, with biodegradation occurring only in the liquid phase. The distribution of the chemicals between the solid and liquid phase was described by a linear sorption isotherm, whereas liquid-vapor partitioning was described by Henry's law. Results from 12 experiments with toluene only could be described by a single set of kinetic parameters. The same set of parameters could describe toluene degradation in 10 experiments where low TCE concentrations were present. From these 10 experiments a set of parameters describing TCE cometabolism induced by toluene also was obtained. The complete set of parameters was used to describe the biodegradation of both compounds in 15 additional experiments, where significant TCE toxicity and inhibition effects were expected. Toluene parameters were similar to values reported for pure culture systems. Parameters describing the interaction of TCE with toluene and biomass were different from reported values for pure cultures, suggesting that the presence of soil may have affected the cometabolic ability of the indigenous soil microbial populations.
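The kind of coupled substrate/cometabolite kinetics described above can be sketched with a forward-Euler integration of Monod-type rates with competitive inhibition between the two compounds. All rate constants and units below are illustrative placeholders, not the fitted parameter set from the study:

```python
def simulate_cometabolism(tol0, tce0, x0, hours, dt=0.01,
                          mu_max=0.3, ks_tol=2.0, ks_tce=1.0,
                          y=0.5, k_tce=0.05):
    """Forward-Euler sketch of toluene-grown biomass cometabolising TCE.

    Competitive inhibition: each substrate's uptake is slowed by the
    presence of the other. Concentrations in arbitrary mg/L, time in
    hours; all parameter values are hypothetical.
    """
    tol, tce, x = tol0, tce0, x0
    for _ in range(int(hours / dt)):
        r_tol = mu_max * x * tol / (ks_tol * (1 + tce / ks_tce) + tol)
        r_tce = k_tce * x * tce / (ks_tce * (1 + tol / ks_tol) + tce)
        tol = max(tol - r_tol * dt, 0.0)
        tce = max(tce - r_tce * dt, 0.0)
        x += y * r_tol * dt          # biomass grows on toluene only
    return tol, tce, x
```

The structure mirrors the abstract's premise: TCE disappearance is driven by toluene-grown biomass, so TCE removal stalls once toluene is exhausted.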
Estimation of proportions in mixed pixels through their region characterization
NASA Technical Reports Server (NTRS)
Chittineni, C. B. (Principal Investigator)
1981-01-01
A region of mixed pixels can be characterized through the probability density function of proportions of classes in the pixels. Using information from the spectral vectors of a given set of pixels from the mixed pixel region, expressions are developed for obtaining the maximum likelihood estimates of the parameters of probability density functions of proportions. The proportions of classes in the mixed pixels can then be estimated. If the mixed pixels contain objects of two classes, the computation can be reduced by transforming the spectral vectors using a transformation matrix that simultaneously diagonalizes the covariance matrices of the two classes. If the proportions of the classes of a set of mixed pixels from the region are given, then expressions are developed for obtaining the estimates of the parameters of the probability density function of the proportions of mixed pixels. Development of these expressions is based on the criterion of the minimum sum of squares of errors. Experimental results from the processing of remotely sensed agricultural multispectral imagery data are presented.
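The transformation mentioned for the two-class case, a matrix that simultaneously diagonalizes both class covariance matrices, can be built by whitening with one covariance and rotating with the eigenvectors of the whitened other. A minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def simultaneous_diagonalizer(cov1, cov2):
    """Return T with T @ cov2 @ T.T = I and T @ cov1 @ T.T diagonal.

    Standard whitening construction: whiten with cov2 (via its
    symmetric inverse square root), then rotate with the eigenvectors
    of the whitened cov1.
    """
    vals, vecs = np.linalg.eigh(cov2)
    w = vecs @ np.diag(vals ** -0.5) @ vecs.T   # cov2^(-1/2)
    m = w @ cov1 @ w
    _, u = np.linalg.eigh(m)
    return u.T @ w
```

Applying T to the spectral vectors decorrelates both classes at once, which is what reduces the computation in the two-class case.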
Boatwright, John
1994-01-01
The vertical components of the S wave trains recorded on the Eastern Canadian Telemetered Network (ECTN) from 1980 through 1990 have been spectrally analyzed for source, site, and propagation characteristics. The data set comprises some 1033 recordings of 97 earthquakes whose magnitudes range from M ≈ 3 to 6. The epicentral distances range from 15 to 1000 km, with most of the data set recorded at distances from 200 to 800 km. The recorded S wave trains contain the phases S, SmS, Sn, and Lg and are sampled using windows that increase with distance; the acceleration spectra were analyzed from 1.0 to 10 Hz. To separate the source, site, and propagation characteristics, an inversion for the earthquake corner frequencies, low-frequency levels, and average attenuation parameters is alternated with a regression of residuals onto the set of stations and a grid of 14 distances ranging from 25 to 1000 km. The iteration between these two parts of the inversion converges in about 60 steps. The average attenuation parameters obtained from the inversion were Q = 1997 ± 10 and γ = 0.998 ± 0.003. The most pronounced variation from this average attenuation is a marked deamplification of more than a factor of 2 at 63 km and 2 Hz, which shallows with increasing frequency and increasing distance out to 200 km. The site-response spectra obtained for the ECTN stations are generally flat. The source spectral shape assumed in this inversion provides an adequate spectral model for the smaller events (Mo < 3 × 10^21 dyne-cm) in the data set, whose Brune stress drops range from 5 to 150 bars. For the five events in the data set with Mo ≧ 10^23 dyne-cm, however, the source spectra obtained by regressing the residuals suggest that an ω^2 spectrum is an inadequate model for the spectral shape. In particular, the corner frequencies for most of these large events appear to be split, so that the spectra exhibit an intermediate behavior (where |ü(ω)| is roughly proportional to ω).
NASA Astrophysics Data System (ADS)
Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen
2014-08-01
Interference such as baseline drift and light scattering can degrade the model predictability in multivariate analysis of near-infrared (NIR) spectra. Usually such interference can be represented by an additive and a multiplicative factor, and correction parameters must be estimated from the spectra in order to eliminate it. However, the spectra are often a mixture of physical light-scattering effects and chemical light-absorbance effects, making parameter estimation difficult. Herein, a novel algorithm was proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are both low, that is, an interference-dominant region (IDR). Based on the definition of the IDR, a two-step method was proposed to find the optimal IDR and the corresponding correction parameters estimated from it. Finally, the correction was performed over the full spectral range using the previously obtained parameters, for the calibration set and test set, respectively. The method can be applied to multi-target systems with one IDR suitable for all targeted analytes. Tested on two benchmark data sets of near-infrared spectra, the proposed method provided considerable improvement compared with full-spectrum estimation methods and was comparable with other state-of-the-art methods.
[Evaluation of Dose Reduction of the Active Collimator in Multi Detector Row CT].
Ueno, Hiroyuki; Matsubara, Kosuke
The purpose of this study was to evaluate the performance of the active collimator by changing acquisition parameters and obtaining dose profiles in the z-axis direction. Dose profiles along the z-axis were obtained using XRQA2 Gafchromic film. As a result, the active collimator reduced overranging by about 55% compared to that without the active collimator. In addition, by changing the combination of X-ray beam width (32 mm, 40 mm), pitch factor (1.4, 0.6), and X-ray tube rotation time (0.5 s/rot, 1.0 s/rot), the overranging changed from 19.4 to 34.9 mm. Although the active collimator is effective for reducing overranging, it is necessary to adjust the acquisition parameters, especially the beam width setting, by taking the properties of the active collimator into consideration.
Elmiger, Marco P; Poetzsch, Michael; Steuer, Andrea E; Kraemer, Thomas
2018-03-06
High resolution mass spectrometry and modern data independent acquisition (DIA) methods enable the creation of general unknown screening (GUS) procedures. However, even when DIA is used, its potential is far from being exploited, because the untargeted acquisition is often followed by a merely targeted search. Applying an actual GUS (including untargeted screening) produces an immense amount of data that must be dealt with. An optimization of the parameters regulating the feature detection and hit generation algorithms of the data processing software could significantly reduce the amount of unnecessary data and thereby the workload. Design of experiment (DoE) approaches allow a simultaneous optimization of multiple parameters. In a first step, parameters are evaluated (crucial or noncrucial); in a second step, crucial parameters are optimized. The aim in this study was to reduce the number of hits without missing analytes. The parameter settings obtained from the optimization were compared to the standard settings by analyzing a test set of blood samples spiked with 22 relevant analytes as well as 62 authentic forensic cases. The optimization led to a marked reduction of workload (from 12.3 to 1.1% and from 3.8 to 1.1% hits for the test set and the authentic cases, respectively) while simultaneously increasing the identification rate (from 68.2 to 86.4% and from 68.8 to 88.1%, respectively). This proof-of-concept study emphasizes the great potential of DoE approaches to master the data overload resulting from modern data independent acquisition methods used for general unknown screening procedures by optimizing software parameters.
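The first DoE step, sorting parameters into crucial and noncrucial, is conventionally done by estimating main effects from a two-level factorial design. A minimal, generic illustration (independent of the screening-software parameters named in the study):

```python
from itertools import product

def main_effects(response, factors):
    """Two-level full-factorial screening.

    Runs `response` at every combination of coded -1/+1 levels and
    estimates each factor's main effect as
    mean(response at +1) - mean(response at -1).
    `response` maps a tuple of levels to a measured value.
    """
    runs = list(product([-1, 1], repeat=factors))
    ys = {r: response(r) for r in runs}
    effects = []
    for f in range(factors):
        hi = [y for r, y in ys.items() if r[f] == 1]
        lo = [y for r, y in ys.items() if r[f] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects
```

Factors with large absolute effects would be flagged as crucial and carried into the second, optimization step; in practice a fractional design is used when the number of parameters is large.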
Inference of reaction rate parameters based on summary statistics from experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khalil, Mohammad; Chowdhary, Kamaljit Singh; Safta, Cosmin
2016-10-15
Here, we present the results of an application of Bayesian inference and maximum entropy methods for the estimation of the joint probability density for the Arrhenius rate parameters of the rate coefficient of the H2/O2-mechanism chain branching reaction H + O2 → OH + O. Available published data are in the form of summary statistics, in terms of nominal values and error bars, of the rate coefficient of this reaction at a number of temperature values obtained from shock-tube experiments. Our approach relies on generating data, in this case OH concentration profiles, consistent with the given summary statistics, using Approximate Bayesian Computation methods and a Markov Chain Monte Carlo procedure. The approach permits the forward propagation of parametric uncertainty through the computational model in a manner that is consistent with the published statistics. A consensus joint posterior on the parameters is obtained by pooling the posterior parameter densities given each consistent data set. To expedite this process, we construct efficient surrogates for the OH concentration using a combination of Padé and polynomial approximants. These surrogate models adequately represent forward model observables and their dependence on input parameters and are computationally efficient enough to allow their use in the Bayesian inference procedure. We also utilize Gauss-Hermite quadrature with Gaussian proposal probability density functions for moment computation, resulting in orders-of-magnitude speedup in data likelihood evaluation. Despite the strong non-linearity in the model, the consistent data sets all result in nearly Gaussian conditional parameter probability density functions. The technique also accounts for nuisance parameters in the form of Arrhenius parameters of other rate coefficients with prescribed uncertainty.
The resulting pooled parameter probability density function is propagated through stoichiometric hydrogen-air auto-ignition computations to illustrate the need to account for correlation among the Arrhenius rate parameters of one reaction and across rate parameters of different reactions.
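The moment-computation trick mentioned above, Gauss-Hermite quadrature against a Gaussian proposal density, can be illustrated in a few lines (a generic sketch, not the authors' code):

```python
import numpy as np

def gaussian_expectation(f, mu, sigma, order=20):
    """E[f(X)] for X ~ N(mu, sigma^2) via Gauss-Hermite quadrature.

    Nodes/weights are for the physicists' weight exp(-t^2); the change
    of variables x = mu + sqrt(2)*sigma*t turns that weight into the
    Gaussian density, up to the 1/sqrt(pi) normalisation applied below.
    """
    t, w = np.polynomial.hermite.hermgauss(order)
    return np.sum(w * f(mu + np.sqrt(2.0) * sigma * t)) / np.sqrt(np.pi)
```

Because an order-n rule is exact for polynomials up to degree 2n-1, smooth likelihood moments converge with very few model (or surrogate) evaluations, which is the source of the speedup claimed in the abstract.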
A Survey of Uncontrolled Satellite reentry and Impact Prediction
1993-09-23
NORAD produces "element sets", which are mean values of the orbital elements that have been obtained by removing the periodic orbital variations in a... Final Element Set -- a listing of the final orbit parameters. The eccentricity and mean motion data from the listing were used in the investigation... yielded altitude and orbital elements as a function of time. Computer run results for these simulations were extremely long and therefore the decision was
NASA Technical Reports Server (NTRS)
Frenklach, Michael; Wang, Hai; Rabinowitz, Martin J.
1992-01-01
A method of systematic optimization, solution mapping, as applied to a large-scale dynamic model is presented. The basis of the technique is parameterization of model responses in terms of model parameters by simple algebraic expressions. These expressions are obtained by computer experiments arranged in a factorial design. The developed parameterized responses are then used in a joint multiparameter multidata-set optimization. A brief review of the mathematical background of the technique is given. The concept of active parameters is discussed. The technique is applied to determine an optimum set of parameters for a methane combustion mechanism. Five independent responses - comprising ignition delay times, pre-ignition methyl radical concentration profiles, and laminar premixed flame velocities - were optimized with respect to thirteen reaction rate parameters. The numerical predictions of the optimized model are compared to those computed with several recent literature mechanisms. The utility of the solution mapping technique in situations where the optimum is not unique is also demonstrated.
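The core of solution mapping, parameterizing model responses as simple algebraic expressions in the model parameters using runs arranged in a factorial design, can be sketched as an ordinary least-squares fit of a linear-plus-interaction surrogate. The design and function name below are illustrative, not the paper's thirteen-parameter methane study:

```python
import numpy as np

def fit_response_surface(X, y):
    """Fit y ~ b0 + sum_i b_i x_i + sum_{i<j} b_ij x_i x_j by least squares.

    X is an (n_runs, n_params) design matrix, e.g. a factorial design in
    coded -1/+1 units; the returned coefficients define an algebraic
    surrogate of the model response, in the spirit of solution mapping.
    """
    n, p = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(p)]
    cols += [X[:, i] * X[:, j] for i in range(p) for j in range(i + 1, p)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef
```

Once fitted, the cheap surrogate replaces the expensive kinetics model inside the joint multiparameter, multi-data-set optimization.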
Self-adaptive multi-objective harmony search for optimal design of water distribution networks
NASA Astrophysics Data System (ADS)
Choi, Young Hwan; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon
2017-11-01
In multi-objective optimization computing, it is important to assign suitable parameters to each optimization problem to obtain better solutions. In this study, a self-adaptive multi-objective harmony search (SaMOHS) algorithm is developed to apply the parameter-setting-free technique, which is an example of a self-adaptive methodology. The SaMOHS algorithm attempts to remove some of the inconvenience from parameter setting and selects the most adaptive parameters during the iterative solution search process. To verify the proposed algorithm, an optimal least cost water distribution network design problem is applied to three different target networks. The results are compared with other well-known algorithms such as multi-objective harmony search and the non-dominated sorting genetic algorithm-II. The efficiency of the proposed algorithm is quantified by suitable performance indices. The results indicate that SaMOHS can be efficiently applied to the search for Pareto-optimal solutions in a multi-objective solution space.
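A plain single-objective harmony search, the base algorithm that SaMOHS extends with multi-objective handling and parameter-setting-free adaptation, can be sketched as follows. The parameter values (hms, hmcr, par, bw) are conventional defaults, not the paper's settings:

```python
import random

def harmony_search(obj, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=1):
    """Minimal single-objective harmony search sketch (minimisation).

    hmcr: probability of drawing a value from harmony memory;
    par: probability of pitch-adjusting a memorised value by +/- bw;
    otherwise a fresh random value within bounds is improvised.
    """
    rng = random.Random(seed)
    def rand_vec():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    memory = sorted((rand_vec() for _ in range(hms)), key=obj)
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                v = rng.choice(memory)[d]
                if rng.random() < par:
                    v += rng.uniform(-bw, bw)
            else:
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        if obj(new) < obj(memory[-1]):   # replace the worst harmony
            memory[-1] = new
            memory.sort(key=obj)
    return memory[0]
```

The self-adaptive variant in the paper removes the need to hand-tune hmcr, par and bw during the search.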
NASA Astrophysics Data System (ADS)
Sun, Zhan; Zhang, Hong-Fei
2018-04-01
A thorough study reveals that the only key parameter for ψ (J/ψ, ψ′) polarization at hadron colliders is the ratio ⟨O^ψ(3S1[8])⟩/⟨O^ψ(3P0[8])⟩, if the velocity scaling rule holds. A slight variation of this parameter results in substantial change of the ψ polarization. We find that with equally good description of the yield data, this parameter can vary significantly. Fitting the yield data is therefore incapable of determining this parameter, and consequently, of determining the ψ polarization. We provide a universal approach to fixing the long-distance matrix elements (LDMEs) for J/ψ and ψ′ production. Further, with the existing data, we implement this approach, obtain a favorable set of the LDMEs, and manage to reconcile the charmonia production and polarization experiments, except for two sets of CDF data on J/ψ polarization. Supported by National Natural Science Foundation of China (11405268, 11647113, 11705034)
A Mössbauer study of some new trinuclear Fe-S cluster compounds
NASA Astrophysics Data System (ADS)
Zhang, Jing-Kun; Song, Li-Cheng; Zhang, Ze-Min; Liu, Rong-Gon; Cheng, Zheng-Zhung; Wang, Ji-Tao
1988-02-01
The reaction of (μ-RS)2(XMgS)Fe2(CO)2 with CpFe(CO)2I gave thirteen new compounds (μ-RS)[CpFe(CO)2S]Fe2(CO)4. Mössbauer spectra were obtained at 80 K. Two quadrupole doublets (A set and B set) were present, and the ratio of areas between the A set and the B set was close to 2:1. The molecule of every compound contained two Fe(2+) ions in the same chemical environment, in a low-spin state with a coordination number of six; their Mössbauer parameters were IS = 0.2-0.3 mm/s and QS = 0.7-0.8 mm/s. In addition, the molecule contained one Fe(3+) in a low-spin state, which was confirmed by ESR; its Mössbauer parameters were IS = 0.4-0.5 mm/s and QS = 1.5-1.6 mm/s. The molecular structure of (μ-MeS)[μ-CpFe(CO)2S]Fe2(CO)4 was determined by X-ray diffraction: monoclinic form, space group P21/n, Z = 4, with unit cell parameters a = 7.90 Å, b = 10.77 Å, c = 22.53 Å.
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, F. J.
1989-01-01
A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed toward application of this technique for gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.
Stability analysis in tachyonic potential chameleon cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farajollahi, H.; Salehi, A.; Tayebi, F.
2011-05-01
We study general properties of attractors for a tachyonic potential chameleon scalar-field model which possesses cosmological scaling solutions. An analytic formulation is given to obtain fixed points with a discussion on their stability. The model predicts a dynamical equation of state parameter with phantom crossing behavior for an accelerating universe. We constrain the parameters of the model by best fitting to recent data sets from supernovae and to simulated data points for the redshift drift experiment generated by Monte Carlo simulations.
He, Wangli; Qian, Feng; Han, Qing-Long; Cao, Jinde
2012-10-01
This paper investigates the problem of master-slave synchronization of two delayed Lur'e systems in the presence of parameter mismatches. First, by analyzing the corresponding synchronization error system, synchronization with an error level, which is referred to as quasi-synchronization, is established. Some delay-dependent quasi-synchronization criteria are derived. An estimation of the synchronization error bound is given, and an explicit expression of error levels is obtained. Second, sufficient conditions on the existence of feedback controllers under a predetermined error level are provided. The controller gains are obtained by solving a set of linear matrix inequalities. Finally, a delayed Chua's circuit is chosen to illustrate the effectiveness of the derived results.
A possible loophole in the theorem of Bell.
Hess, K; Philipp, W
2001-12-04
The celebrated inequalities of Bell are based on the assumption that local hidden parameters exist. When combined with conflicting experimental results, these inequalities appear to prove that local hidden parameters cannot exist. This contradiction suggests to many that only instantaneous action at a distance can explain the Einstein, Podolsky, and Rosen type of experiments. We show that, in addition to the assumption that hidden parameters exist, Bell tacitly makes a variety of other assumptions that contribute to his being able to obtain the desired contradiction. For instance, Bell assumes that the hidden parameters do not depend on time and are governed by a single probability measure independent of the analyzer settings. We argue that the exclusion of time has neither a physical nor a mathematical basis but is based on Bell's translation of the concept of Einstein locality into the language of probability theory. Our additional set of local hidden variables includes time-like correlated parameters and a generalized probability density. We prove that our extended space of local hidden variables does not permit Bell-type proofs to go forward.
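The constraint that Bell-type arguments place on such models can be checked numerically: any model in which each outcome depends only on its own analyzer setting and a shared hidden parameter λ satisfies |S| ≤ 2 in expectation for the CHSH combination. A small Monte Carlo sketch (the particular sign model in the test is an illustrative choice, not a model from the paper):

```python
import math
import random

def chsh_value(hidden_model, settings_a, settings_b, trials=20000, seed=7):
    """Estimate S = E(a,b) + E(a,b') + E(a',b) - E(a',b') for a local
    hidden-variable model.

    hidden_model(angle, lam) must return +/-1 using only its own
    analyzer angle and the shared hidden parameter lam; such models
    obey |S| <= 2 up to Monte Carlo sampling noise.
    """
    rng = random.Random(seed)
    def corr(a, b):
        s = 0
        for _ in range(trials):
            lam = rng.uniform(0.0, 2.0 * math.pi)
            s += hidden_model(a, lam) * hidden_model(b, lam)
        return s / trials
    a, ap = settings_a
    b, bp = settings_b
    return corr(a, b) + corr(a, bp) + corr(ap, b) - corr(ap, bp)
```

The time-independent, setting-independent probability measure baked into this sketch is exactly the tacit assumption the abstract challenges; the authors' extended hidden-variable space does not fit this template.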
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Jing-Jy; Flood, Paul E.; LePoire, David
In this report, the results generated by RESRAD-RDD version 2.01 are compared with those produced by RESRAD-RDD version 1.7 for different scenarios with different sets of input parameters. RESRAD-RDD version 1.7 is spreadsheet-driven, performing calculations with Microsoft Excel spreadsheets. RESRAD-RDD version 2.01 revamped version 1.7 by using command-driven programs designed with Visual Basic.NET to direct calculations with data saved in a Microsoft Access database, and by re-facing the graphical user interface (GUI) to provide more flexibility and choices in guideline derivation. Because version 1.7 and version 2.01 perform the same calculations, the comparison of their results serves as verification of both versions. The verification covered calculation results for 11 radionuclides included in both versions: Am-241, Cf-252, Cm-244, Co-60, Cs-137, Ir-192, Po-210, Pu-238, Pu-239, Ra-226, and Sr-90. At first, all nuclide-specific data used in both versions were compared to ensure that they are identical. Then generic operational guidelines and measurement-based radiation doses or stay times associated with a specific operational guideline group were calculated with both versions using different sets of input parameters, and the results obtained with the same set of input parameters were compared. A total of 12 sets of input parameters were used for the verification, and the comparison was performed for each operational guideline group, from A to G, sequentially. The verification shows that RESRAD-RDD version 1.7 and RESRAD-RDD version 2.01 generate almost identical results; the slight differences could be attributed to differences in numerical precision between Microsoft Excel and Visual Basic.NET. RESRAD-RDD version 2.01 allows the selection of different units for use in reporting calculation results. Results in SI units were obtained and compared with the base results (in traditional units) used for comparison with version 1.7. The comparison shows that RESRAD-RDD version 2.01 correctly reports calculation results in the unit specified in the GUI.
NASA Astrophysics Data System (ADS)
Dumon, M.; Van Ranst, E.
2016-01-01
This paper presents a free and open-source program called PyXRD (short for Python X-ray diffraction) to improve the quantification of complex, poly-phasic mixed-layer phyllosilicate assemblages. The validity of the program was checked by comparing its output with Sybilla v2.2.2, which shares the same mathematical formalism. The novelty of this program is the ab initio incorporation of the multi-specimen method, making it possible to share phases and (a selection of) their parameters across multiple specimens. PyXRD thus allows for modelling multiple specimens side by side, and this approach speeds up the manual refinement process significantly. To check the hypothesis that this multi-specimen set-up - as it effectively reduces the number of parameters and increases the number of observations - can also improve automatic parameter refinements, we calculated X-ray diffraction patterns for four theoretical mineral assemblages. These patterns were then used as input for one refinement employing the multi-specimen set-up and one employing the single-pattern set-ups. For all of the assemblages, PyXRD was able to reproduce or approximate the input parameters with the multi-specimen approach. Diverging solutions only occurred in single-pattern set-ups, which do not contain enough information to discern all minerals present (e.g. patterns of heated samples). Assuming a correct qualitative interpretation was made and a single pattern exists in which all phases are sufficiently discernible, the obtained results indicate a good quantification can often be obtained with just that pattern. However, these results from theoretical experiments cannot automatically be extrapolated to all real-life experiments. In any case, PyXRD has proven to be useful when X-ray diffraction patterns are modelled for complex mineral assemblages containing mixed-layer phyllosilicates with a multi-specimen approach.
Robust and fast-converging level set method for side-scan sonar image segmentation
NASA Astrophysics Data System (ADS)
Liu, Yan; Li, Qingwu; Huo, Guanying
2017-11-01
A robust and fast-converging level set method is proposed for side-scan sonar (SSS) image segmentation. First, the noise in each sonar image is removed using the adaptive nonlinear complex diffusion filter. Second, k-means clustering is used to obtain the initial presegmentation image from the denoised image, and then the distance maps of the initial contours are reinitialized to guarantee the accuracy of the numerical calculation used in the level set evolution. Finally, the satisfactory segmentation is achieved using a robust variational level set model, where the evolution control parameters are generated by the presegmentation. The proposed method is successfully applied to both synthetic image with speckle noise and real SSS images. Experimental results show that the proposed method needs much less iteration and therefore is much faster than the fuzzy local information c-means clustering method, the level set method using a gamma observation model, and the enhanced region-scalable fitting method. Moreover, the proposed method can usually obtain more accurate segmentation results compared with other methods.
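The k-means presegmentation step, reduced here to two clusters of gray levels in one dimension, can be sketched as follows; a real side-scan sonar image would be flattened to a pixel list first, and the cluster means would then seed the level-set evolution parameters:

```python
def kmeans_presegment(pixels, iters=50):
    """Two-cluster 1-D k-means used as a rough presegmentation
    (e.g. shadow/highlight vs. background intensities).

    Returns the two cluster centers and a 0/1 label per pixel.
    """
    lo, hi = min(pixels), max(pixels)
    c = [lo, hi]                         # seed with the extremes
    for _ in range(iters):
        groups = ([], [])
        for p in pixels:
            # bool index: False -> cluster 0, True -> cluster 1
            groups[abs(p - c[0]) > abs(p - c[1])].append(p)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return c, [int(abs(p - c[0]) > abs(p - c[1])) for p in pixels]
```

This is only the initialisation stage of the pipeline described above; the denoising filter and the variational level-set model are separate components.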
NASA Astrophysics Data System (ADS)
Ray, Shonket; Keller, Brad M.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina
2016-03-01
This work details a methodology to obtain optimal parameter values for a locally-adaptive texture analysis algorithm that extracts mammographic texture features representative of breast parenchymal complexity for predicting false-positive (FP) recalls from breast cancer screening with digital mammography. The algorithm has two components: (1) adaptive selection of localized regions of interest (ROIs) and (2) Haralick texture feature extraction via Gray-Level Co-Occurrence Matrices (GLCMs). The following parameters were systematically varied: mammographic views used, upper limit of the ROI window size used for adaptive ROI selection, GLCM distance offsets, and gray levels (binning) used for feature extraction. For each parameter set, logistic regression with stepwise feature selection was performed on a clinical screening cohort of 474 non-recalled women and 68 FP recalled women; FP recall prediction was evaluated using the area under the curve (AUC) of the receiver operating characteristic (ROC), and associations between the extracted features and FP recall were assessed via odds ratios (ORs). A default instance of the mediolateral oblique (MLO) view, an upper ROI size limit of 143.36 mm (2048 pixels), a GLCM distance offset range of 0.07 to 0.84 mm (1 to 12 pixels), and 16 GLCM gray levels was set. The highest ROC performance value of AUC = 0.77 [95% confidence interval: 0.71-0.83] was obtained at three specific instances: the default instance, an upper ROI window equal to 17.92 mm (256 pixels), and gray levels set to 128. The texture feature of sum average was chosen as a statistically significant (p < 0.05) predictor and was associated with higher odds of FP recall in 12 out of 14 total instances.
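The feature-extraction core, a GLCM at a given pixel offset followed by a Haralick statistic such as the sum average selected in the study, can be sketched as below. Gray levels are 0-indexed here, so the exact normalisation of the published feature values may differ:

```python
import numpy as np

def glcm_sum_average(image, offset=(0, 1), levels=16):
    """Symmetric normalised GLCM for one offset, then the Haralick
    'sum average' feature sum_k k * p_{x+y}(k).

    `image` must already be quantised to integers in [0, levels).
    """
    glcm = np.zeros((levels, levels))
    dy, dx = offset
    h, w = image.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            glcm[image[y, x], image[y + dy, x + dx]] += 1
    glcm += glcm.T                       # make co-occurrences symmetric
    p = glcm / glcm.sum()
    p_sum = np.zeros(2 * levels - 1)     # distribution of i + j
    for i in range(levels):
        for j in range(levels):
            p_sum[i + j] += p[i, j]
    return float(sum(k * p_sum[k] for k in range(2 * levels - 1)))
```

In a pipeline like the one described, this would be computed per adaptively selected ROI and per offset, then fed to the logistic regression.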
NASA Astrophysics Data System (ADS)
Książek, Judyta
2015-10-01
There is currently great interest in the development of texture-based image classification methods in many different areas. This study presents the results of research carried out to assess the usefulness of selected textural features for the detection of asbestos-cement roofs in orthophotomap classification. Two different orthophotomaps of southern Poland (with ground resolutions of 5 cm and 25 cm) were used. On both orthoimages, representative samples were selected for two classes: asbestos-cement roofing sheets and other roofing materials. The usefulness of texture analysis was estimated using machine learning methods based on decision trees (the C5.0 algorithm). For this purpose, various sets of texture parameters were calculated in the MaZda software, and different numbers of texture parameter groups were considered during the calculation of the decision trees. Cross-validation was performed in order to obtain the best settings for the decision tree models, and the models with the lowest mean classification error were selected. The accuracy of the classification was assessed on validation data sets that were not used for classifier learning. For the 5 cm ground resolution samples, the lowest mean classification error was 15.6%; for the 25 cm ground resolution it was 20.0%. The obtained results confirm the potential usefulness of texture-parameter image processing for the detection of asbestos-cement roofing sheets. To improve the accuracy, an extended study should be considered in which additional textural features as well as spectral characteristics are analyzed.
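The cross-validated error estimation described above can be illustrated with a deliberately simple stand-in classifier: a one-feature threshold rule evaluated by k-fold cross-validation. The synthetic feature values and the threshold rule are assumptions; the study itself used C5.0 decision trees on MaZda texture parameters.

```python
import numpy as np

def cv_error(x, y, folds=5):
    """Mean k-fold classification error of a one-feature threshold rule,
    standing in for the decision-tree cross-validation in the abstract."""
    rng = np.random.default_rng(0)
    idx = rng.permutation(x.size)
    errs = []
    for part in np.array_split(idx, folds):
        train = np.setdiff1d(idx, part)
        # threshold midway between the two class means of the training fold
        thr = 0.5 * (x[train][y[train] == 0].mean() + x[train][y[train] == 1].mean())
        pred = (x[part] > thr).astype(int)
        errs.append(np.mean(pred != y[part]))
    return float(np.mean(errs))

# synthetic texture feature: class 1 (asbestos-cement) shifted upward
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(3.0, 1.0, 100)])
y = np.concatenate([np.zeros(100, int), np.ones(100, int)])
err = cv_error(x, y)
```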
Yang, Chih-Cheng; Liu, Chang-Lun
2016-08-12
Cold forging is often applied in the fastener industry. Wires in coil form are used as semi-finished products for the production of billets. This process usually requires preliminary drawing of the wire coil in order to reduce the diameter of products. The wire usually has to be annealed to improve its cold formability, and the quality of the spheroidizing-annealed wire affects the forming quality of screws. In the fastener industry, most companies use a subcritical process for spheroidized annealing. Various parameters affect the spheroidized annealing quality of steel wire, such as the spheroidized annealing temperature, prolonged heating time, furnace cooling time and flow rate of nitrogen (protective atmosphere). These parameters in turn affect the quality characteristics of the steel wire, such as its tensile strength and hardness. A series of experimental tests on AISI 1022 low carbon steel wire is carried out, and the Taguchi method is used to obtain optimum spheroidized annealing conditions that improve the mechanical properties of steel wires for cold forming. The results show that the spheroidized annealing temperature and prolonged heating time have the greatest effect on the mechanical properties of the steel wires. A comparison between the results obtained under the optimum spheroidizing conditions and those measured with the original settings shows that the new spheroidizing parameter settings effectively improve the performance measures. The results presented in this paper could be used as a reference for wire manufacturers.
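The Taguchi method ranks parameter settings by signal-to-noise (S/N) ratios computed from replicate measurements. The two standard S/N formulas are shown below; the replicate values for the two hypothetical annealing settings are illustrative assumptions, not data from the paper.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio for a response to be maximized: -10*log10(mean(1/y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_is_better(y):
    """Taguchi S/N ratio for a response to be minimized: -10*log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# replicate tensile-elongation measurements for two hypothetical settings
setting_a = [62.0, 63.5, 61.8]
setting_b = [55.0, 54.2, 56.1]
best = "A" if sn_larger_is_better(setting_a) > sn_larger_is_better(setting_b) else "B"
```

A higher S/N ratio marks the more robust setting, so setting A would be preferred here.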
López, Iván; Borzacconi, Liliana
2010-10-01
A model based on the work of Angelidaki et al. (1993) was applied to simulate the anaerobic biodegradation of ruminal contents. In this study, two fractions of solids with different biodegradation rates were considered. First-order kinetics were used for the easily biodegradable fraction, and a kinetic expression that is a function of the extracellular enzyme concentration was used for the slowly biodegradable fraction. Batch experiments were performed to obtain an accumulated methane curve that was then used to obtain the model parameters. For this determination, a methodology derived from the "multiple-shooting" method was successfully used. Monte Carlo simulations allowed a confidence range to be obtained for each parameter. Simulations of a continuous reactor were performed using the optimal set of model parameters. The final steady states were determined as functions of the operational conditions (solids load and residence time). The simulations showed that methane flow peaked at 0.5-0.8 Nm³ per day per m³ of reactor at a residence time of 10-20 days. Such simulations allow the adequate selection of the operating conditions of a continuous reactor. (c) 2010 Elsevier Ltd. All rights reserved.
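Estimating kinetic parameters from an accumulated methane curve can be sketched for the easily biodegradable (first-order) fraction alone: fit B(t) = B0·(1 − e^(−kt)) by scanning the rate constant and solving the ultimate yield B0 in closed form. The model form, parameter values, and noise-free data are illustrative assumptions, not the paper's two-fraction model or multiple-shooting procedure.

```python
import numpy as np

def fit_first_order(t, B):
    """Fit B(t) = B0*(1 - exp(-k*t)) by scanning k and solving B0 in
    closed form (linear least squares) for each candidate rate."""
    best_sse, best_B0, best_k = np.inf, None, None
    for k in np.linspace(0.01, 1.0, 200):
        f = 1.0 - np.exp(-k * t)
        B0 = float(f @ B) / float(f @ f)
        sse = float(np.sum((B - B0 * f) ** 2))
        if sse < best_sse:
            best_sse, best_B0, best_k = sse, B0, k
    return best_B0, best_k

t = np.linspace(0.0, 30.0, 31)               # days
B = 250.0 * (1.0 - np.exp(-0.15 * t))        # synthetic accumulated methane curve
B0_hat, k_hat = fit_first_order(t, B)
```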
Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M
2012-08-01
This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPak Add-Ins of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of the Excel method was its inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine, and the further analysis of electrophysiological data from the compound action potential of the rodent optic nerve. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
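The Monte Carlo confidence-interval idea (refit many "virtual" data sets simulated from the fitted model, then take percentiles of the refitted parameters) translates directly to code. This sketch uses a straight line instead of the paper's non-linear models, and the data are synthetic; the procedure is the same in spirit.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_line(x, y):
    # straight line as a stand-in for the non-linear models in the paper
    return np.polyfit(x, y, 1)

x = np.linspace(0.0, 10.0, 20)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)
a_hat, b_hat = fit_line(x, y)
resid_sd = np.std(y - (a_hat * x + b_hat), ddof=2)

# Monte Carlo: refit many 'virtual' data sets simulated from the fitted model
draws = np.array([fit_line(x, a_hat * x + b_hat + rng.normal(0.0, resid_sd, x.size))
                  for _ in range(500)])
slope_ci = np.percentile(draws[:, 0], [2.5, 97.5])  # 95% interval for the slope
```

Correlations between parameters fall out of the same draws, e.g. via `np.corrcoef(draws.T)`.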
Fan, Yurui; Huang, Guohe; Veawab, Amornvadee
2012-01-01
In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve the GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation policies with a minimized system performance cost under uncertainty. The results obtained represent the amount of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP are expressed as fuzzy sets, which provide intervals for the decision variables and objective function as well as the related possibilities. The decision makers can therefore make a tradeoff between model stability and plausibility based on the solutions obtained through GFLP, and then identify the desired policies for SO2-emission control under uncertainty.
NASA Astrophysics Data System (ADS)
Vignesh, S.; Dinesh Babu, P.; Surya, G.; Dinesh, S.; Marimuthu, P.
2018-02-01
The ultimate goal of all production entities is to select process parameters that yield maximum strength and minimum wear and friction. Friction and wear are serious problems in most industries; they are influenced by the working set of parameters, the oxidation characteristics and the mechanism involved in wear formation. The experimental input parameters, namely sliding distance, applied load, and temperature, are utilized in finding an optimized solution for the desired output responses: coefficient of friction, wear rate, and volume loss. The optimization is performed with the help of the Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II), an evolutionary algorithm. The regression equations obtained using Response Surface Methodology (RSM) are used in determining the optimum process parameters. Further, the results achieved through the desirability approach in RSM are compared with the optimized solution obtained through NSGA-II. The results show that the proposed evolutionary technique is more effective and faster than the desirability approach.
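The desirability approach that NSGA-II is compared against combines each response into a score in [0, 1] and maximizes their geometric mean. A minimal sketch follows; the response values and acceptable ranges are illustrative assumptions, not the paper's measured data.

```python
import numpy as np

def desirability_min(y, lo, hi):
    """Smaller-is-better desirability: 1 at or below lo, 0 at or above hi,
    linear in between."""
    return float(np.clip((hi - y) / (hi - lo), 0.0, 1.0))

# hypothetical responses for one candidate parameter setting
d_friction = desirability_min(0.35, lo=0.2, hi=0.6)    # coefficient of friction
d_wear = desirability_min(1.2e-4, lo=5e-5, hi=3e-4)    # wear rate
D = (d_friction * d_wear) ** 0.5                       # composite desirability
```

The setting with the highest composite D across the design space is the desirability-optimal one; NSGA-II instead returns a Pareto front of non-dominated settings.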
NASA Astrophysics Data System (ADS)
Amanda, A. R.; Widita, R.
2016-03-01
The aim of this research is to compare several lung image segmentation methods based on the performance evaluation parameters Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR). The methods compared were connected threshold, neighborhood connected, and threshold level set segmentation on images of the lungs. These three methods require one important parameter, i.e., the threshold. The threshold interval was obtained from the histogram of the original image. The software used to segment the images was InsightToolkit-4.7.0 (ITK). This research used 5 lung images, and the results were compared using the performance evaluation parameters computed in MATLAB. A segmentation method is said to have good quality if it yields the smallest MSE value and the highest PSNR. The results show that for four sample images the connected threshold method satisfies these criteria, while for one sample the threshold level set segmentation does. Therefore, it can be concluded that the connected threshold method is better than the other two methods for these cases.
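The two evaluation parameters are standard and easy to compute. The sketch below shows MSE and PSNR for 8-bit images; the toy reference/segmentation pair is an assumption for illustration.

```python
import numpy as np

def mse(a, b):
    """Mean Square Error between two images (lower is better)."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB (higher is better): 10*log10(peak^2/MSE)."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak**2 / m)

reference = np.zeros((8, 8), dtype=np.uint8)
segmented = reference.copy()
segmented[0, 0] = 255          # one mislabeled pixel
m, p = mse(reference, segmented), psnr(reference, segmented)
```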
Bflinks: Reliable Bugfix Links via Bidirectional References and Tuned Heuristics
2014-01-01
Background. Data from software version archives and defect databases can be used for defect insertion circumstance analysis and defect prediction. The first step in such analyses is identifying defect-correcting changes in the version archive (bugfix commits) and enriching them with additional metadata by establishing bugfix links to corresponding entries in the defect database. Candidate bugfix commits are typically identified via heuristic string matching on the commit message. Research Questions. Which filters could be used to obtain a set of bugfix links? How to tune their parameters? What accuracy is achieved? Method. We analyze a modular set of seven independent filters, including new ones that make use of reverse links, and evaluate visual heuristics for setting cutoff parameters. For a commercial repository, a product expert manually verifies over 2500 links to validate the results with unprecedented accuracy. Results. The heuristics pick a very good parameter value for five filters and a reasonably good one for the sixth. The combined filtering, called bflinks, provides 93% precision and only 7% results loss. Conclusion. Bflinks can provide high-quality results and adapts to repositories with different properties. PMID:27433506
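Candidate bugfix commits are flagged by heuristic string matching on commit messages, as the abstract notes. The sketch below shows one such keyword/issue-id filter; the patterns and example messages are illustrative assumptions, not the paper's actual filter set.

```python
import re

# keyword filter for bugfix-like messages and an issue-id extractor
BUG_WORDS = re.compile(r"\b(fix(es|ed)?|bug|defect|patch)\b", re.IGNORECASE)
ISSUE_ID = re.compile(r"#(\d+)")

def candidate_bugfix_links(message):
    """Return referenced defect-database ids if the message looks like a bugfix."""
    if BUG_WORDS.search(message):
        return ISSUE_ID.findall(message)
    return []

messages = ["Fixed NPE in parser, closes #1234",
            "Refactor build scripts",
            "bug #77: off-by-one in pager"]
links = [candidate_bugfix_links(m) for m in messages]
```

The paper's contribution is layering several such filters (including reverse links from the defect database back to commits) and tuning their cutoffs.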
Mei, J.; Dong, P.; Kalnaus, S.; ...
2017-07-21
It has been well established that the fatigue damage process is load-path dependent under non-proportional multi-axial loading conditions. Most studies to date have focused on the interpretation of S-N based test data by constructing a path-dependent fatigue damage model. Our paper presents a two-parameter mixed-mode fatigue crack growth model which takes into account the dependency of crack growth on both the load path traversed and the maximum effective stress intensity attained in a stress intensity factor plane (e.g., the KI-KIII plane). Furthermore, by taking advantage of a path-dependent maximum range (PDMR) cycle definition (Dong et al., 2010; Wei and Dong, 2010), the two parameters are formulated by introducing a moment of load path (MLP) based equivalent stress intensity factor range (ΔKNP) and a maximum effective stress intensity parameter KMax incorporating an interaction term KI·KIII. To examine the effectiveness of the proposed model, two sets of crack growth rate test data are considered. The first set was obtained as a part of this study using 304 stainless steel disk specimens subjected to three combined non-proportional mode I and III loading conditions (i.e., with phase angles of 0°, 90°, and 180°). The second set was obtained by Feng et al. (2007) using 1070 steel disk specimens subjected to similar types of non-proportional mixed-mode conditions. Once the proposed two-parameter non-proportional mixed-mode crack growth model is used, a good correlation can be achieved for both sets of the crack growth rate test data.
Duan, Yong; Wu, Chun; Chowdhury, Shibasish; Lee, Mathew C; Xiong, Guoming; Zhang, Wei; Yang, Rong; Cieplak, Piotr; Luo, Ray; Lee, Taisung; Caldwell, James; Wang, Junmei; Kollman, Peter
2003-12-01
Molecular mechanics models have been applied extensively to study the dynamics of proteins and nucleic acids. Here we report the development of a third-generation point-charge all-atom force field for proteins. Following the earlier approach of Cornell et al., the charge set was obtained by fitting to the electrostatic potentials of dipeptides calculated using B3LYP/cc-pVTZ//HF/6-31G** quantum mechanical methods. The main-chain torsion parameters were obtained by fitting to the energy profiles of Ace-Ala-Nme and Ace-Gly-Nme dipeptides calculated using MP2/cc-pVTZ//HF/6-31G** quantum mechanical methods. All other parameters were taken from the existing AMBER database. The major departure from previous force fields is that all quantum mechanical calculations were done in the condensed phase with continuum solvent models and an effective dielectric constant of epsilon = 4. We anticipate that this force field parameter set will address certain critical shortcomings of previous force fields in condensed-phase simulations of proteins. Initial tests on peptides demonstrated a high degree of similarity between the calculated and the statistically measured Ramachandran maps for both Ace-Gly-Nme and Ace-Ala-Nme dipeptides. Some highlights of our results include (1) a well-preserved balance between the extended and helical region distributions, and (2) a favorable type-II poly-proline helical region in agreement with recent experiments. Backward compatibility between the new and Cornell et al. charge sets, as judged by the overall agreement between dipole moments, allows a smooth transition to the new force field in the area of ligand-binding calculations. Test simulations on a large set of proteins are also discussed. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 1999-2012, 2003
NASA Astrophysics Data System (ADS)
Amjad, M.; Salam, Z.; Ishaque, K.
2014-04-01
In order to design an efficient resonant power supply for an ozone gas generator, it is necessary to accurately determine the parameters of the ozone chamber. In the conventional method, information from a Lissajous plot is used to estimate the values of these parameters. However, the experimental setup for this purpose can only predict the parameters at one operating frequency, and there is no guarantee that this frequency results in the highest ozone gas yield. This paper proposes a new approach to determining the parameters using a search and optimization technique known as Differential Evolution (DE). The objective function of DE is set at the resonance condition, and the chamber parameter values can be searched regardless of experimental constraints. The chamber parameters obtained from the DE technique are validated by experiment.
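Differential Evolution itself is compact enough to sketch in full. The minimal DE/rand/1/bin variant below (with a simplified crossover that omits the forced index) minimizes a toy quadratic standing in for the resonance-condition residual; population sizes, rates, and the objective are assumptions, not the paper's settings.

```python
import numpy as np

def de_minimize(f, bounds, pop=20, gens=100, F=0.7, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin sketch: mutate with a scaled difference of two
    random members, crossover with the current member, keep the better one."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = rng.uniform(lo, hi, (pop, lo.size))
    fX = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            others = [j for j in range(pop) if j != i]
            a, b, c = X[rng.choice(others, 3, replace=False)]
            trial = np.where(rng.random(lo.size) < CR, a + F * (b - c), X[i])
            trial = np.clip(trial, lo, hi)
            ft = f(trial)
            if ft < fX[i]:
                X[i], fX[i] = trial, ft
    best = int(np.argmin(fX))
    return X[best], float(fX[best])

# toy objective standing in for the chamber-parameter search
x_best, f_best = de_minimize(lambda x: float(np.sum((x - 0.3) ** 2)),
                             bounds=[(-1.0, 1.0), (-1.0, 1.0)])
```

In practice a library implementation such as SciPy's `differential_evolution` would be used; the point here is only the structure of the search.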
Riesová, Martina; Svobodová, Jana; Ušelová, Kateřina; Tošner, Zdeněk; Zusková, Iva; Gaš, Bohuslav
2014-10-17
In this paper we determine acid dissociation constants, limiting ionic mobilities, complexation constants with β-cyclodextrin or heptakis(2,3,6-tri-O-methyl)-β-cyclodextrin, and mobilities of the resulting complexes of profens, using capillary zone electrophoresis and affinity capillary electrophoresis. Complexation parameters are determined for both the neutral and the fully charged forms of the profens and further corrected for the actual ionic strength and variable viscosity in order to obtain thermodynamic values of the complexation constants. The accuracy of the obtained complexation parameters is verified by multidimensional nonlinear regression of the affinity capillary electrophoretic data, which provides the acid dissociation and complexation parameters within one set of measurements, and by NMR. A good agreement among all discussed methods was obtained. The determined complexation parameters were used as input parameters for simulations of the electrophoretic separation of profens by Simul 5 Complex. An excellent agreement of experimental and simulated results was achieved in terms of positions, shapes, and amplitudes of the analyte peaks, confirming the applicability of Simul 5 Complex to complex systems and the accuracy of the obtained physical-chemical constants. Simultaneously, we were able to demonstrate the influence of electromigration dispersion on the separation efficiency, which is not possible using the common theoretical approaches, and to predict the electromigration order reversals of the profen peaks. We have shown that the determined acid dissociation and complexation parameters, in combination with the Simul 5 Complex software, can be used for optimization of separation conditions in capillary electrophoresis. Copyright © 2014 Elsevier B.V. All rights reserved.
Lyapunov dimension formula for the global attractor of the Lorenz system
NASA Astrophysics Data System (ADS)
Leonov, G. A.; Kuznetsov, N. V.; Korzhemanova, N. A.; Kusakin, D. V.
2016-12-01
The exact Lyapunov dimension formula for the Lorenz system for a positive-measure set of parameters, including the classical values, was first obtained analytically by G.A. Leonov in 2002. Leonov used a construction technique based on special Lyapunov-type functions, which he had developed in 1991. It was later shown that considering a larger class of Lyapunov-type functions permits proving the validity of this formula for all parameters of the system such that all equilibria of the system are hyperbolically unstable. In the present work, the validity of the formula for the Lyapunov dimension is proved for a wider variety of parameter values, including all parameters which satisfy the classical physical limitations.
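Leonov's formula for the Lyapunov dimension of the Lorenz attractor is a closed-form expression in the three system parameters, D = 3 − 2(σ + b + 1)/(σ + 1 + √((σ − 1)² + 4σr)). Evaluating it at the classical parameters is a one-liner; the formula is quoted from the literature this abstract summarizes.

```python
from math import sqrt

def lorenz_lyapunov_dimension(sigma, b, r):
    """Leonov's exact Lyapunov dimension formula for the Lorenz system."""
    return 3.0 - 2.0 * (sigma + b + 1.0) / (
        sigma + 1.0 + sqrt((sigma - 1.0) ** 2 + 4.0 * sigma * r))

# classical Lorenz parameters sigma=10, b=8/3, r=28
D = lorenz_lyapunov_dimension(10.0, 8.0 / 3.0, 28.0)
```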
Lalinet status - station expansion and lidar ratio systematic measurements
NASA Astrophysics Data System (ADS)
Landulfo, Eduardo; Lopes, Fabio; Moreira, Gregori Arruda; da Silva, Jonatan; Ristori, Pablo; Quel, Eduardo; Otero, Lidia; Pallota, Juan Vicente; Herrera, Milagros; Salvador, Jacobo; Bali, Juan Lucas; Wolfram, Eliam; Etala, Paula; Barbero, Albane; Forno, Ricardo; Sanchez, Maria Fernanda; Barbosa, Henrique; Gouveia, Diego; Santos, Amanda Vieira; Hoelzemann, Judith; Fernandez, Jose Henrique; Guedes, Anderson; Silva, Antonieta; Barja, Boris; Zamorano, Felix; Legue, Raul Perez; Bastidas, Alvaro; Zabala, Maribel Vellejo; Velez, Juan; Nisperuza, Daniel; Montilla, Elena; Arredondo, Rene Estevam; Marrero, Juan Carlos Antuña; Vega, Alberth Rodriguez; Alados-Arboledas, Lucas; Guerrero-Rascado, Juan Luis; Sugimoto, Nobuo; Yoshitaka, Jin
2018-04-01
LALINET is expanding regionally to guarantee spatial coverage over South and Central America. One of the network goals is to obtain a set of regionally representative aerosol optical properties such as particle backscatter, extinction and lidar ratio. Given the network's North-South extension and the influence of distinct airmass circulation patterns, it is paramount to distinguish these optical parameters in order to obtain better performance in radiative transfer models. A set of lidar ratio data is presented.
Ring rolling process simulation for geometry optimization
NASA Astrophysics Data System (ADS)
Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio
2017-10-01
Ring rolling is a complex hot forming process in which different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during deformation in order to ensure correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components for aerospace applications. In particular, the influence of the process input parameters (feed rate of the mandrel and angular speed of the main roll) on the geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem. The optimization procedure has been implemented in the commercial software DS ISight in order to find the combination of process parameters that minimizes the percentage error of each obtained dimension with respect to its nominal value. The software finds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of the output parameters in the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. After the calculation of the response surfaces for the selected output parameters, an optimization procedure based on genetic algorithms has been applied, minimizing the error between each obtained dimension and its nominal value. The constraints imposed were the maximum values of the standard deviations of the dimensions obtained for the final ring.
Optimization-Based Inverse Identification of the Parameters of a Concrete Cap Material Model
NASA Astrophysics Data System (ADS)
Král, Petr; Hokeš, Filip; Hušek, Martin; Kala, Jiří; Hradil, Petr
2017-10-01
Issues concerning the advanced numerical analysis of concrete building structures in sophisticated computing systems currently require the involvement of nonlinear mechanics tools. Efforts to design safer, more durable and, above all, more economically efficient concrete structures are supported by the use of advanced nonlinear concrete material models and the geometrically nonlinear approach. The application of nonlinear mechanics tools undoubtedly presents another step towards approximating the real behaviour of concrete building structures within computer numerical simulations. However, the success of this application depends on a perfect understanding of the behaviour of the concrete material models used and of the meaning of their material parameters. The effective application of nonlinear concrete material models within computer simulations often becomes very problematic because these material models frequently contain parameters (material constants) whose values are difficult to obtain, yet obtaining correct parameter values is essential to ensure the proper function of the material model. Today, one possibility for solving this problem is the use of optimization algorithms for optimization-based inverse material parameter identification. Parameter identification goes hand in hand with experimental investigation: it tries to find the parameter values of the used material model such that the resulting data obtained from the computer simulation best approximate the experimental data. This paper is focused on the optimization-based inverse identification of the parameters of a concrete cap material model known as the Continuous Surface Cap Model. Within this paper, the material parameters of the model are identified on the basis of interaction between nonlinear computer simulations, gradient-based and nature-inspired optimization algorithms, and experimental data, the latter of which take the form of a load-extension curve obtained from the evaluation of uniaxial tensile test results. The aim of this research was to obtain material model parameters corresponding to quasi-static tensile loading which may be further used for research involving dynamic and high-speed tensile loading. Based on the obtained results it can be concluded that the set goal has been reached.
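The core loop of optimization-based inverse identification is: run the simulation with candidate parameters, measure the misfit to the experimental curve, and let an optimizer shrink that misfit. The sketch below identifies a single parameter of a toy softening law by grid search; the model form, modulus, and synthetic "experimental" curve are assumptions standing in for the Continuous Surface Cap Model and the real test data.

```python
import numpy as np

# synthetic "experimental" load-extension data from a toy softening law
eps = np.linspace(0.0, 0.01, 50)               # strain samples
E = 30e3                                       # assumed modulus, MPa
experiment = E * eps * np.exp(-eps / 0.004)    # stress, with eps0 = 0.004

def misfit(eps0):
    """Sum-of-squares mismatch between 'simulation' and 'experiment'."""
    simulated = E * eps * np.exp(-eps / eps0)  # the computer simulation
    return float(np.sum((simulated - experiment) ** 2))

candidates = np.linspace(0.001, 0.01, 91)      # parameter search grid
eps0_hat = candidates[int(np.argmin([misfit(p) for p in candidates]))]
```

Real identifications replace the grid search with gradient-based or nature-inspired optimizers and each `misfit` evaluation with a full nonlinear FE simulation.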
Surprises and insights from long-term aquatic datasets and experiments
Walter K. Dodds; Christopher T. Robinson; Evelyn E. Gaiser; Gretchen J.A. Hansen; Heather Powell; Joseph M. Smith; Nathaniel B. Morse; Sherri L. Johnson; Stanley V. Gregory; Tisza Bell; Timothy K. Kratz; William H. McDowell
2012-01-01
Long-term research on freshwater ecosystems provides insights that can be difficult to obtain from other approaches. Widespread monitoring of ecologically relevant water-quality parameters spanning decades can facilitate important tests of ecological principles. Unique long-term data sets and analytical tools are increasingly available, allowing for powerful and...
Magsat vector magnetometer calibration using Magsat geomagnetic field measurements
NASA Technical Reports Server (NTRS)
Lancaster, E. R.; Jennings, T.; Morrissey, M.; Langel, R. A.
1980-01-01
From the time of its launch on Oct. 30, 1979 into a nearly polar, Sun-synchronous orbit, until it reentered the Earth's atmosphere on June 11, 1980, Magsat measured and transmitted more than three complete sets of global magnetic field data. The data obtained from the mission will be used primarily to compute a currently accurate model of the Earth's main magnetic field, to update and refine world and regional magnetic charts, and to develop a global scalar and vector crustal magnetic anomaly map. The in-flight calibration procedure used for 39 vector magnetometer system parameters is described, as well as results obtained from some data sets and the numerical studies designed to evaluate the results.
Advanced approach to the analysis of a series of in-situ nuclear forward scattering experiments
NASA Astrophysics Data System (ADS)
Vrba, Vlastimil; Procházka, Vít; Smrčka, David; Miglierini, Marcel
2017-03-01
This study introduces a sequential fitting procedure as a specific approach to nuclear forward scattering (NFS) data evaluation. The principles and usage of this advanced evaluation method are described in detail and its utilization is demonstrated on NFS in-situ investigations of fast processes. Such experiments frequently consist of hundreds of time spectra which need to be evaluated. The introduced procedure allows the analysis of these experiments and significantly decreases the time needed for data evaluation. The key contributions of the study are the sequential use of the output fitting parameters of a previous data set as the input parameters for the next data set, and the option to crosscheck model suitability by applying the procedure in the ascending and descending directions of the data sets. The described fitting methodology is beneficial for checking model validity and the reliability of the obtained results.
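The sequential (warm-started) idea is simply that each data set's fitted parameters seed the fit of the next one, which works well when the underlying process evolves slowly between spectra. The sketch below applies it to synthetic exponential-decay "spectra" with a grid search centred on the previous rate; the model and data are assumptions, not NFS physics.

```python
import numpy as np

def fit_rate(t, y, k_prev, width=0.5, n=201):
    """Fit y = exp(-k*t) by a grid search centred on the rate fitted for
    the previous data set -- the warm start used by sequential fitting."""
    ks = np.linspace(max(1e-6, k_prev - width), k_prev + width, n)
    sse = [float(np.sum((np.exp(-k * t) - y) ** 2)) for k in ks]
    return float(ks[int(np.argmin(sse))])

t = np.linspace(0.0, 5.0, 40)
true_rates = [0.5, 0.6, 0.7, 0.8]      # a slowly evolving in-situ process
k = 0.4                                # initial guess for the first spectrum
fitted = []
for rate in true_rates:
    y = np.exp(-rate * t)              # one synthetic "time spectrum"
    k = fit_rate(t, y, k)              # previous output -> next input
    fitted.append(k)
```

Running the same loop over the data sets in reverse order and comparing the two parameter sequences is the model-suitability crosscheck the abstract describes.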
NASA Astrophysics Data System (ADS)
Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.
2009-08-01
Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.
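A highly parameterized inversion has far more unknowns than observations, so regularization is what yields a smooth estimated field. A Tikhonov (ridge) solve is a minimal stand-in for the Bayesian geostatistical approach; the operator, "conductivity" field, and noise level below are all synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_par = 15, 40                            # fewer observations than parameters
G = rng.normal(size=(n_obs, n_par))              # linearized forward operator
m_true = np.sin(np.linspace(0.0, np.pi, n_par))  # smooth "conductivity" field
d = G @ m_true + rng.normal(0.0, 0.01, n_obs)    # heads/isotope/tritium-like data

# ridge solution: minimize ||G m - d||^2 + lam * ||m||^2
lam = 1.0
m_hat = np.linalg.solve(G.T @ G + lam * np.eye(n_par), G.T @ d)
```

The geostatistical version replaces the identity penalty with a covariance model and lets the data choose the regularization strength, but the structure of the solve is the same.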
2016-06-13
motional ground state, the ratio of Rabi frequencies of carrier and sideband couplings is given by the Lamb-Dicke parameter, which is for u1 and Dkx... carrier Rabi frequencies determine Lamb-Dicke parameters and allow for finding the orientation of modes. We use a single ion near T0 to determine the... and find corresponding coefficient settings where we obtain a maximal Rabi rate of the detection transition and/or minimal Rabi rates of micromotion
Control mechanisms for stochastic biochemical systems via computation of reachable sets.
Lakatos, Eszter; Stumpf, Michael P H
2017-08-01
Controlling the behaviour of cells by rationally guiding molecular processes is an overarching aim of much of synthetic biology. Molecular processes, however, are notoriously noisy and frequently nonlinear. We present an approach to studying the impact of control measures on motifs of molecular interactions that addresses the problems faced in many biological systems: stochasticity, parameter uncertainty and nonlinearity. We show that our reachability analysis formalism can describe the potential behaviour of biological (naturally evolved as well as engineered) systems, and provides a set of bounds on their dynamics at the level of population statistics: for example, we can obtain the possible ranges of means and variances of mRNA and protein expression levels, even in the presence of uncertainty about model parameters.
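The kind of moment bound described above can be illustrated with a deliberately simple case (a sketch, not the authors' reachability formalism): for a birth-death model of mRNA with uncertain production and degradation rates, the stationary law is Poisson, so bounds on the mean and variance follow directly from the parameter box.

```python
# Toy sketch (not the paper's reachability analysis): for a birth-death
# mRNA model with uncertain production rate k and degradation rate g, the
# stationary copy-number distribution is Poisson with mean k/g, so the
# reachable stationary means/variances over a parameter box follow from
# the extreme parameter combinations.
def stationary_moment_bounds(k_range, g_range):
    k_lo, k_hi = k_range
    g_lo, g_hi = g_range
    mean_lo = k_lo / g_hi  # least production, fastest degradation
    mean_hi = k_hi / g_lo  # most production, slowest degradation
    # for a Poisson stationary law, variance equals mean
    return (mean_lo, mean_hi), (mean_lo, mean_hi)

mean_bounds, var_bounds = stationary_moment_bounds((2.0, 4.0), (0.1, 0.2))
```

Richer motifs need the full machinery, but the toy case shows how parameter uncertainty translates into ranges of population statistics rather than point values.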
A methodology for the transfer of probabilities between accident severity categories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitlow, J. D.; Neuhauser, K. S.
A methodology has been developed which allows the accident probabilities associated with one accident-severity category scheme to be transferred to another severity category scheme. The methodology requires that the schemes use a common set of parameters to define the categories. The transfer of accident probabilities is based on the relationships between probability of occurrence and each of the parameters used to define the categories. Because of the lack of historical data describing accident environments in engineering terms, these relationships may be difficult to obtain directly for some parameters. Numerical models or experience-based judgement are often needed to obtain the relationships. These relationships, even if they are not exact, allow the accident probability associated with any severity category to be distributed within that category in a manner consistent with accident experience, which in turn will allow the accident probability to be appropriately transferred to a different category scheme.
He, Ning; Sun, Hechun; Dai, Miaomiao
2014-05-01
To evaluate the influence of temperature and humidity on drug stability by the initial average rate experiment, and to obtain the kinetic parameters. The effects of concentration error, extent of drug degradation, number of humidity and temperature settings, humidity and temperature range, and average humidity and temperature on the accuracy and precision of the kinetic parameters in the initial average rate experiment were explored. The stability of vitamin C, as a solid-state model, was investigated by an initial average rate experiment. Under the same experimental conditions, the kinetic parameters obtained from the proposed method were comparable to those from a classical isothermal experiment at constant humidity. The estimates were more accurate and precise when controlling the extent of drug degradation, changing the humidity and temperature range, or setting the average temperature closer to room temperature. Compared with isothermal experiments at constant humidity, the proposed method saves time, labor, and materials.
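Kinetic parameters of this kind are typically Arrhenius parameters; a minimal two-temperature sketch (hypothetical rate values, not data from the study) recovers the activation energy and pre-exponential factor from rate constants:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_from_two_rates(k1, T1, k2, T2):
    """Activation energy (J/mol) and pre-exponential factor from rate
    constants at two temperatures. Illustrative helper, not the paper's
    initial-average-rate protocol."""
    Ea = R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)
    A = k1 * math.exp(Ea / (R * T1))
    return Ea, A

# consistency check: rates generated from a known Ea are recovered
Ea_true, A_true = 8.0e4, 1.0e9
k = lambda T: A_true * math.exp(-Ea_true / (R * T))
Ea_est, A_est = arrhenius_from_two_rates(k(298.15), 298.15, k(313.15), 313.15)
```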
Assessment of image quality in x-ray radiography imaging using a small plasma focus device
NASA Astrophysics Data System (ADS)
Kanani, A.; Shirani, B.; Jabbari, I.; Mokhtari, J.
2014-08-01
This paper offers a comprehensive investigation of image quality parameters for a small plasma focus used as a pulsed hard x-ray source for radiography applications. A set of images was captured from metal objects and electronic circuits using a low-energy plasma focus at different capacitor-bank voltages and different argon gas pressures. The focal spot of the x-ray source was measured to be about 0.6 mm using the penumbra imaging method. Image quality was studied through several parameters, such as image contrast, line spread function (LSF) and modulation transfer function (MTF). Results showed that the contrast changes with gas pressure; the best contrast was obtained at a pressure of 0.5 mbar and 3.75 kJ stored energy. Dose measurements showed that about 0.6 mGy is sufficient to obtain acceptable images on the film. The LSF and MTF were measured by means of a thin stainless steel wire 0.8 mm in diameter, and the cut-off frequency was found to be about 1.5 cycles/mm.
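The LSF-to-MTF step can be sketched as the normalized magnitude of the Fourier transform of the line spread function; the Gaussian LSF width below is illustrative, not a measured value from the paper:

```python
import numpy as np

def mtf_from_lsf(x_mm, lsf):
    """MTF as the normalized magnitude of the Fourier transform of the
    line spread function (standard relation; sketch only)."""
    dx = x_mm[1] - x_mm[0]
    lsf = lsf / (lsf.sum() * dx)             # normalize LSF to unit area
    mtf = np.abs(np.fft.rfft(lsf)) * dx      # approximate continuous FT
    freqs = np.fft.rfftfreq(len(lsf), d=dx)  # spatial frequency, cycles/mm
    return freqs, mtf / mtf[0]

# Gaussian LSF with an assumed sigma of 0.3 mm
x = np.linspace(-5.0, 5.0, 1024)
freqs, mtf = mtf_from_lsf(x, np.exp(-x**2 / (2 * 0.3**2)))
f10 = freqs[np.argmax(mtf < 0.1)]            # ~10% cut-off frequency
```

For a Gaussian LSF the MTF is itself Gaussian, so the cut-off frequency scales inversely with the focal-spot blur.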
NASA Astrophysics Data System (ADS)
Lach, Adeline; Boulahya, Faïza; André, Laurent; Lassin, Arnault; Azaroual, Mohamed; Serin, Jean-Paul; Cézac, Pierre
2016-07-01
The thermal and volumetric properties of complex aqueous solutions are described according to the Pitzer equation, explicitly taking into account the speciation in the aqueous solutions. The thermal properties are the apparent relative molar enthalpy (Lϕ) and the apparent molar heat capacity (Cp,ϕ). The volumetric property is the apparent molar volume (Vϕ). Equations describing these properties are obtained from the temperature or pressure derivatives of the excess Gibbs energy and make it possible to calculate the dilution enthalpy (∆HD), the heat capacity (cp) and the density (ρ) of aqueous solutions up to high concentrations. Their implementation in PHREEQC V.3 (Parkhurst and Appelo, 2013) is described and has led to a new numerical tool, called PhreeSCALE. It was first tested using a set of parameters (specific interaction parameters and standard properties) from the literature for two binary systems (Na2SO4-H2O and MgSO4-H2O), for the quaternary K-Na-Cl-SO4 system (heat capacity only) and for the Na-K-Ca-Mg-Cl-SO4-HCO3 system (density only). The results obtained with PhreeSCALE are in agreement with the literature data when the same standard solution heat capacity (Cp0) and volume (V0) values are used. For further applications of this improved computation tool, these standard solution properties were calculated independently, using the Helgeson-Kirkham-Flowers (HKF) equations. With this kind of approach, most of the Pitzer interaction parameters from the literature become obsolete, since they are not consistent with the standard properties calculated according to the HKF formalism. Consequently, a new set of interaction parameters must be determined. This approach was successfully applied to the Na2SO4-H2O and MgSO4-H2O binary systems, providing a new set of optimized interaction parameters, consistent with the standard solution properties derived from the HKF equations.
Analysis of aerobic granular sludge formation based on grey system theory.
Zhang, Cuiya; Zhang, Hanmin
2013-04-01
Based on grey entropy analysis, the relational grades of operational parameters with the granulation indicators of aerobic granular sludge were studied. The former consisted of settling time (ST), aeration time (AT), superficial gas velocity (SGV), height/diameter (H/D) ratio and organic loading rate (OLR); the latter included the sludge volume index (SVI) and set-up time. The calculated results showed that for SVI and set-up time, the influence orders and the corresponding grey entropy relational grades (GERG) were: SGV (0.9935) > AT (0.9921) > OLR (0.9894) > ST (0.9876) > H/D (0.9857) and SGV (0.9928) > H/D (0.9914) > AT (0.9909) > OLR (0.9897) > ST (0.9878). The chosen parameters were all key impact factors, as each GERG was larger than 0.98. SGV played an important role in improving SVI transformation and facilitating the set-up process. The influence of ST on SVI and set-up time was relatively low due to its dual functions. SVI transformation and rapid set-up demanded different optimal H/D ratio ranges (10-20 and 16-20). Meanwhile, different functions could be obtained by adjusting the ranges of certain factors.
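Deng's classical grey relational grade conveys the flavor of this analysis (the paper uses a grey-entropy variant; the sequences below are made up):

```python
# Deng's grey relational grade between a reference sequence and a
# comparison sequence; rho is the conventional distinguishing coefficient.
# Illustrative sketch, not the paper's grey-entropy formulation.
def grey_relational_grade(reference, comparison, rho=0.5):
    deltas = [abs(r - c) for r, c in zip(reference, comparison)]
    d_min, d_max = min(deltas), max(deltas)
    if d_max == 0.0:                      # identical sequences
        return 1.0
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coeffs) / len(coeffs)

g_same = grey_relational_grade([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
g_near = grey_relational_grade([1.0, 2.0, 3.0], [1.0, 2.2, 2.9])
```

A grade of 1 indicates identical sequences; in practice the operational-parameter and indicator sequences are normalized before the grades are ranked.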
NASA Astrophysics Data System (ADS)
Gowtham, K. N.; Vasudevan, M.; Maduraimuthu, V.; Jayakumar, T.
2011-04-01
Modified 9Cr-1Mo ferritic steel is used as a structural material for steam generator components of power plants. Generally, tungsten inert gas (TIG) welding is preferred for welding of these steels, in which the depth of penetration achievable during autogenous welding is limited. Therefore, activated flux TIG (A-TIG) welding, a novel welding technique, has been developed in-house to increase the depth of penetration. In modified 9Cr-1Mo steel joints produced by the A-TIG welding process, weld bead width, depth of penetration, and heat-affected zone (HAZ) width play an important role in determining the mechanical properties as well as the performance of the weld joints during service. To obtain the desired weld bead geometry and HAZ width, it becomes important to set the welding process parameters. In this work, an adaptive neuro-fuzzy inference system is used to develop independent models correlating the welding process parameters (current, voltage, and torch speed) with the weld bead shape parameters (depth of penetration, bead width, and HAZ width). A genetic algorithm is then employed to determine the optimum A-TIG welding process parameters to obtain the desired weld bead shape parameters and HAZ width.
Extreme data compression for the CMB
NASA Astrophysics Data System (ADS)
Zablocki, Alan; Dodelson, Scott
2016-04-01
We apply the Karhunen-Loève methods to cosmic microwave background (CMB) data sets, and show that we can recover the input cosmology and obtain the marginalized likelihoods in Λ cold dark matter cosmologies in under a minute, much faster than Markov chain Monte Carlo methods. This is achieved by forming a linear combination of the power spectra at each multipole l , and solving a system of simultaneous equations such that the Fisher matrix is locally unchanged. Instead of carrying out a full likelihood evaluation over the whole parameter space, we need to evaluate the likelihood only for the parameter of interest, with the data compression effectively marginalizing over all other parameters. The weighting vectors contain insight about the physical effects of the parameters on the CMB anisotropy power spectrum Cl . The shape and amplitude of these vectors give an intuitive feel for the physics of the CMB, the sensitivity of the observed spectrum to cosmological parameters, and the relative sensitivity of different experiments to cosmological parameters. We test this method on exact theory Cl as well as on a Wilkinson Microwave Anisotropy Probe (WMAP)-like CMB data set generated from a random realization of a fiducial cosmology, comparing the compression results to those from a full likelihood analysis using CosmoMC. After showing that the method works, we apply it to the temperature power spectrum from the WMAP seven-year data release, and discuss the successes and limitations of our method as applied to a real data set.
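The Fisher-preserving character of such linear compression can be checked in a toy single-parameter Gaussian setting (a MOPED-style weighting, assumed here for illustration rather than taken from the paper):

```python
import numpy as np

# For data x ~ N(mu(theta), C) with parameter-independent covariance C,
# the weighting vector b = C^{-1} mu' compresses x to a single number
# t = b.x while leaving the Fisher information on theta unchanged.
# Toy dimensions and random mu' are illustrative.
rng = np.random.default_rng(0)
n = 20
mu_prime = rng.normal(size=n)            # d(mean)/d(theta)
A = rng.normal(size=(n, n))
C = A @ A.T + n * np.eye(n)              # a well-conditioned covariance
b = np.linalg.solve(C, mu_prime)         # weighting vector

F_full = mu_prime @ np.linalg.solve(C, mu_prime)  # Fisher info, full data
F_comp = (b @ mu_prime) ** 2 / (b @ C @ b)        # Fisher info of t = b.x
```

The compressed statistic has mean b·mu and variance b·Cb, and the two Fisher informations agree identically, which is why one number per parameter suffices.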
Estimates of the atmospheric parameters of M-type stars: a machine-learning perspective
NASA Astrophysics Data System (ADS)
Sarro, L. M.; Ordieres-Meré, J.; Bello-García, A.; González-Marcos, A.; Solano, E.
2018-05-01
Estimating the atmospheric parameters of M-type stars has been a difficult task due to the lack of simple diagnostics in the stellar spectra. We aim at uncovering good sets of predictive features of stellar atmospheric parameters (Teff, log (g), [M/H]) in spectra of M-type stars. We define two types of potential features (equivalent widths and integrated flux ratios) able to explain the atmospheric physical parameters. We search the space of feature sets using a genetic algorithm that evaluates solutions by their prediction performance in the framework of the BT-Settl library of stellar spectra. Thereafter, we construct eight regression models using different machine-learning techniques and compare their performances with those obtained using the classical χ2 approach and independent component analysis (ICA) coefficients. Finally, we validate the various alternatives using two sets of real spectra from the NASA Infrared Telescope Facility (IRTF) and Dwarf Archives collections. We find that the cross-validation errors are poor measures of the performance of regression models in the context of physical parameter prediction in M-type stars. For R ∼ 2000 spectra with signal-to-noise ratios typical of the IRTF and Dwarf Archives, feature selection with genetic algorithms or alternative techniques produces only marginal advantages with respect to representation spaces that are unconstrained in wavelength (full spectrum or ICA). We make available the atmospheric parameters for the two collections of observed spectra as online material.
Position control of an industrial robot using fractional order controller
NASA Astrophysics Data System (ADS)
Clitan, Iulia; Muresan, Vlad; Abrudean, Mihail; Clitan, Andrei; Miron, Radu
2017-02-01
This paper presents the design of a control structure that ensures no overshoot in the movement of an industrial robot used for the evacuation of round steel blocks from inside a rotary hearth furnace. First, a mathematical model of the positioning system is derived from a set of experimental data; the paper then focuses on obtaining a PID-type controller, using the relay method for tuning, in order to obtain a stable closed-loop system. The controller parameters are further tuned through computer simulation, by trial and error, to achieve the imposed set of performance requirements for the positioning of the industrial robot. Finally, a fractional-order PID controller is obtained to improve the control signal variation so that it fits within the unified current range of 4 to 20 mA.
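The relay tuning step can be sketched with the standard describing-function relation, which gives Ziegler-Nichols PID settings from the relay amplitude, the measured oscillation amplitude, and the oscillation period (values below are illustrative, not from the paper):

```python
import math

def relay_tuning_pid(d, a, Tu):
    """Ziegler-Nichols PID settings from a relay (Astrom-Hagglund)
    experiment. d: relay amplitude, a: oscillation amplitude of the
    process output, Tu: oscillation period. Sketch only; the paper
    refines the settings further by trial and error."""
    Ku = 4.0 * d / (math.pi * a)   # ultimate gain, describing function
    Kp = 0.6 * Ku                  # classic Ziegler-Nichols PID rules
    Ti = Tu / 2.0
    Td = Tu / 8.0
    return Kp, Ti, Td

Kp, Ti, Td = relay_tuning_pid(d=1.0, a=0.5, Tu=8.0)
```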
NASA Astrophysics Data System (ADS)
Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.
2016-01-01
Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the computational parameters inherent to the MC simulation codes GATE, PHITS and FLUKA, previously examined for uniform scanning proton beams, need to be evaluated for spot scanning. This means that the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes, using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm3, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm3 voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and minimization of computational time. Our results indicate that the optimized parameters differ from those for uniform scanning, suggesting that a gold standard for setting computational parameters cannot be established consistently for all proton therapy applications, since the impact of the parameter settings depends on the proton irradiation technique. We therefore conclude that computational parameters must be customized with reference to the optimized parameters of the corresponding irradiation technique in order to achieve artifact-free MC simulation for use in computational experiments and clinical treatments.
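A typical range metric used when comparing PDD curves, the depth distal to the peak where the dose falls to a given level, can be sketched as follows (synthetic PDD, not data from the study):

```python
import numpy as np

def distal_range(depth_mm, pdd, level=80.0):
    """Depth at which the PDD falls to `level` percent distal to the
    peak, by linear interpolation between bracketing samples. A common
    range metric; illustrative, not the study's exact criterion."""
    i_peak = int(np.argmax(pdd))
    distal, z = pdd[i_peak:], depth_mm[i_peak:]
    j = int(np.argmax(distal < level))     # first sample below the level
    z0, z1, d0, d1 = z[j - 1], z[j], distal[j - 1], distal[j]
    return z0 + (d0 - level) * (z1 - z0) / (d0 - d1)

# synthetic Gaussian-peaked PDD with its maximum at 150 mm depth
depth = np.linspace(0.0, 300.0, 601)
pdd = 100.0 * np.exp(-0.5 * ((depth - 150.0) / 20.0) ** 2)
r80 = distal_range(depth, pdd, level=80.0)
```

Comparing such range values between codes, together with point-by-point dose deviation, is one simple way to quantify the agreement described above.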
Kinematics of our Galaxy from the PMA and TGAS catalogues
NASA Astrophysics Data System (ADS)
Velichko, Anna B.; Akhmetov, Volodymyr S.; Fedorov, Peter N.
2018-04-01
We derive and compare kinematic parameters of the Galaxy using the PMA and Gaia TGAS data. Two methods are used in the calculations: evaluation of the Ogorodnikov-Milne model (OMM) parameters by the least squares method (LSM), and a decomposition on a set of vector spherical harmonics (VSH). We trace the dependence on distance of the derived parameters, including the Oort constants A and B and the rotational velocity of the Galaxy V_rot at the solar distance, for the common sample of stars of mixed spectral composition of the PMA and TGAS catalogues. The distances were obtained from the TGAS parallaxes or from reduced proper motions for fainter stars. The A, B and V_rot parameters derived from the proper motions of both catalogues show identical behaviour, but the values are systematically shifted by about 0.5 mas/yr. The Oort B parameter derived from the PMA sample of red giants shows a gradual decrease with increasing distance, while the Oort A has a minimum at about 2 kpc and then gradually increases. As for the models chosen for the calculations, first, we confirm the conclusions of other authors about the existence of extra-model harmonics in the stellar velocity field. Secondly, not all parameters of the OMM are statistically significant, and the set of significant parameters depends on the stellar sample used.
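The least-squares step can be sketched with the classical one-dimensional Oort relation for proper motions in galactic longitude (the full OMM has many more terms; the constants and noise level below are synthetic):

```python
import numpy as np

# Least-squares sketch of the classical Oort relation for proper motions
# in galactic longitude, mu_l(l) = A*cos(2l) + B (in suitable units).
# A_true/B_true are plausible illustrative values, not fitted results.
rng = np.random.default_rng(1)
A_true, B_true = 15.3, -11.9                 # km/s/kpc
l = rng.uniform(0.0, 2.0 * np.pi, 500)       # galactic longitudes, rad
mu_l = A_true * np.cos(2.0 * l) + B_true + rng.normal(0.0, 0.5, l.size)

# design matrix for the two-parameter model, solved by least squares
M = np.column_stack([np.cos(2.0 * l), np.ones_like(l)])
(A_est, B_est), *_ = np.linalg.lstsq(M, mu_l, rcond=None)
```

The real OMM fit solves an analogous, much larger linear system in all model parameters simultaneously.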
NASA Astrophysics Data System (ADS)
Koller, Thomas; Ramos, Javier; Garrido, Nuno M.; Fröba, Andreas P.; Economou, Ioannis G.
2012-06-01
Three united-atom (UA) force fields are presented for the ionic liquid 1-ethyl-3-methylimidazolium tetracyanoborate, abbreviated as [EMIM]+[B(CN)4]-. The atomistic charges were calculated based on the restrained electrostatic potential (RESP) of the isolated ions (abbreviated as force field 1, FF-1) and the ensemble averaged RESP (EA-RESP) method from the most stable ion pair configurations obtained by MP2/6-31G*+ calculations (abbreviated as FF-2 and FF-3). Non-electrostatic parameters for both ions were taken from the literature and Lennard-Jones parameters for the [B(CN)4]- anion were fitted in two different ways to reproduce the experimental liquid density. Molecular dynamics (MD) simulations were performed over a wide temperature range to identify the effect of the electrostatic and non-electrostatic potential on the liquid density and on transport properties such as self-diffusion coefficient and viscosity. Predicted liquid densities for the three parameter sets deviate less than 0.5% from experimental data. The molecular mobility with FF-2 and FF-3 using reduced charge sets is appreciably faster than that obtained with FF-1. FF-3 presents a refined non-electrostatic potential that leads to a notable improvement in both transport properties when compared to experimental data.
Matching initial torque with different stimulation parameters influences skeletal muscle fatigue.
Bickel, C Scott; Gregory, Chris M; Azuero, Andres
2012-01-01
A fundamental barrier to using electrical stimulation in the clinical setting is an inability to maintain torque production secondary to muscle fatigue. Electrical stimulation parameters are manipulated to influence muscle torque production, and they may also influence fatigability during repetitive stimulation. Our purpose was to determine the response of the quadriceps femoris to three different fatigue protocols using the same initial torque obtained by altering stimulator parameter settings. Participants underwent fatigue protocols in which either pulse frequency (lowHz), pulse duration (lowPD), or voltage (lowV) was manipulated to obtain an initial torque that equaled 25% of maximum voluntary isometric contraction. Muscle soreness was reported on a visual analog scale 48 h after each fatigue test. The lowHz protocol resulted in the least fatigue (25% +/- 14%); the lowPD (50% +/- 13%) and lowV (48% +/- 14%) protocols had similar levels of fatigue. The lowHz protocol resulted in significantly less muscle soreness than the higher frequency protocols. Stimulation protocols that use a lower frequency coupled with long pulse durations and high voltages result in lesser amounts of muscle fatigue and perceived soreness. The identification of optimal stimulation patterns to maximize muscle performance will reduce the effect of muscle fatigue and potentially improve clinical efficacy.
Rain-rate data base development and rain-rate climate analysis
NASA Technical Reports Server (NTRS)
Crane, Robert K.
1993-01-01
The single-year rain-rate distribution data available within the archives of Consultative Committee for International Radio (CCIR) Study Group 5 were compiled into a data base for use in rain-rate climate modeling and for the preparation of predictions of attenuation statistics. The four-year set of tip-time sequences provided by J. Goldhirsh for locations near Wallops Island was processed to compile monthly and annual distributions of rain rate and of event durations for intervals above and below preset thresholds. A four-year data set of tropical rain-rate tip-time sequences was acquired from the NASA TRMM program for 30 gauges near Darwin, Australia. These were also processed for inclusion in the CCIR data base and the expanded data base for monthly observations at the University of Oklahoma. The empirical rain-rate distribution functions (EDFs) accepted for inclusion in the CCIR data base were used to estimate parameters for several rain-rate distribution models: the lognormal model, the Crane two-component model, and the three-parameter model proposed by Moupfuma. The intent of this segment of the study is to obtain a limited set of parameters that can be mapped globally for use in rain attenuation predictions. If the form of the distribution can be established, then perhaps available climatological data can be used to estimate the parameters rather than requiring years of rain-rate observations to set them. The two-component model provided the best fit to the Wallops Island data, but the Moupfuma model provided the best fit to the Darwin data.
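Fitting the lognormal model by moments of the log-rates is the simplest of the estimation steps described; a sketch on made-up rates (illustrative, not the CCIR data):

```python
import math

def fit_lognormal(rates):
    """Moment fit of a lognormal rain-rate model: returns the mean and
    standard deviation of ln(rate). Illustrative stand-in for fitting
    the CCIR empirical distribution functions."""
    logs = [math.log(r) for r in rates]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / len(logs)
    return mu, math.sqrt(var)

# synthetic rates whose logs are symmetric about 2.0
mu, sigma = fit_lognormal(
    [math.exp(2.0 + s) for s in (-0.3, -0.1, 0.0, 0.1, 0.3)])
```

The two-component and Moupfuma models have extra parameters and are usually fitted to the exceedance curve directly rather than by moments.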
Maas, Anne H; Rozendaal, Yvonne J W; van Pul, Carola; Hilbers, Peter A J; Cottaar, Ward J; Haak, Harm R; van Riel, Natal A W
2015-03-01
Current diabetes education methods are costly, time-consuming, and do not actively engage the patient. Here, we describe the development and verification of the physiological model for healthy subjects that forms the basis of the Eindhoven Diabetes Education Simulator (E-DES). E-DES shall provide diabetes patients with an individualized virtual practice environment incorporating the main factors that influence glycemic control: food, exercise, and medication. The physiological model consists of 4 compartments for which the inflow and outflow of glucose and insulin are calculated using 6 nonlinear coupled differential equations and 14 parameters. These parameters are estimated on 12 sets of oral glucose tolerance test (OGTT) data (226 healthy subjects) obtained from the literature. The resulting parameter set is verified on 8 separate literature OGTT data sets (229 subjects). The model is considered verified if 95% of the glucose data points lie within an acceptance range of ±20% of the corresponding model value. All glucose data points of the verification data sets lie within the predefined acceptance range. Physiological processes represented in the model include insulin resistance and β-cell function. Adjusting the corresponding parameters makes it possible to describe heterogeneity in the data and shows the capabilities of this model for individualization. We have verified the physiological model of the E-DES for healthy subjects. Heterogeneity of the data has successfully been modeled by adjusting the 4 parameters describing insulin resistance and β-cell function. Our model will form the basis of a simulator providing individualized education on glucose control. © 2014 Diabetes Technology Society.
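The acceptance criterion described above is easy to state in code (the glucose values below are illustrative, not the study's data):

```python
def verified(data, model, tolerance=0.20, required=0.95):
    """Acceptance criterion as stated in the abstract: at least
    `required` fraction of data points must lie within +/-`tolerance`
    of the corresponding model value."""
    inside = sum(1 for d, m in zip(data, model)
                 if abs(d - m) <= tolerance * m)
    return inside / len(data) >= required

# hypothetical glucose samples (mmol/L) against model predictions
model = [5.0, 7.5, 9.0, 6.5]
data = [5.5, 7.0, 10.0, 6.0]
ok = verified(data, model)
```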
Masci, Ilaria; Vannozzi, Giuseppe; Bergamini, Elena; Pesce, Caterina; Getchell, Nancy; Cappozzo, Aurelio
2013-04-01
Objective quantitative evaluation of motor skill development is of increasing importance to carefully drive physical exercise programs in childhood. Running is a fundamental motor skill humans adopt to accomplish locomotion and is linked to physical activity levels, although its assessment is traditionally carried out using qualitative evaluation tests. The present study aimed at investigating the feasibility of using inertial sensors to quantify developmental differences in the running pattern of young children. Qualitative and quantitative assessment tools were adopted to identify a skill-sensitive set of biomechanical parameters for running and to further our understanding of the factors that determine progression to skilled running performance. Running performances of 54 children between the ages of 2 and 12 years were submitted to both qualitative and quantitative analysis, the former using sequences of developmental level, the latter estimating temporal and kinematic parameters from inertial sensor measurements. Discriminant analysis with running developmental level as the dependent variable made it possible to identify a set of temporal and kinematic parameters, among those obtained with the sensor, that best classified children into the qualitative developmental levels (accuracy higher than 67%). Multivariate analysis of variance with the quantitative parameters as dependent variables made it possible to identify whether and which specific parameters or parameter subsets were differentially sensitive to specific transitions between contiguous developmental levels. The findings showed that different sets of temporal and kinematic parameters are able to tap all steps of the transitional process in running skill described through qualitative observation and can prospectively be used for applied diagnostic and sport training purposes. Copyright © 2012 Elsevier B.V. All rights reserved.
A preliminary MTD-PLS study for androgen receptor binding of steroid compounds
NASA Astrophysics Data System (ADS)
Bora, Alina; Seclaman, E.; Kurunczi, L.; Funar-Timofei, Simona
The relative binding affinities (RBA) of a series of 30 steroids for the Human Androgen Receptor (AR) were used to initiate a MTD-PLS study. The 3D structures of all the compounds were obtained through geometry optimization in the framework of the AM1 semiempirical quantum chemical method. The MTD hypermolecule (HM) was constructed by superposing these structures on the AR-bound dihydrotestosterone (DHT) skeleton obtained from the PDB (AR complex, ID 1I37). The parameters characterizing the HM vertices were collected using AM1 charges, XlogP fragmental values, calculated fragmental polarizabilities (from refractivities), volumes, and H-bond parameters (Raevsky's thermodynamically derived scale). The resulting QSAR data matrix was submitted to PCA (Principal Component Analysis) and the PLS (Projections to Latent Structures) procedure (SIMCA P 9.0); five compounds were selected as the test set, and the remaining 25 molecules were used as the training set. In the PLS procedure, supplementary chemical information was introduced: the steric effect was always considered detrimental, and the hydrophobic and van der Waals interactions were imposed to be beneficial. The initial PLS model using the entire training set has the following characteristics: R2Y = 0.584, Q2 = 0.344. Based on distance-to-model criteria (DMODX and DMODY), five compounds were eliminated, and the final model obtained has the following characteristics: R2Y = 0.891, Q2 = 0.591. However, the external predictivity on the test set was unsatisfactory. A tentative explanation for this behavior is the weak information content of the input QSAR matrix for the present series compared with other successful MTD-PLS modeling published elsewhere.
Wieser, Stefan; Axmann, Markus; Schütz, Gerhard J.
2008-01-01
We propose here an approach for the analysis of single-molecule trajectories which is based on a comprehensive comparison of an experimental data set with multiple Monte Carlo simulations of the diffusion process. It allows quantitative data analysis, particularly whenever analytical treatment of a model is infeasible. Simulations are performed on a discrete parameter space and compared with the experimental results by a nonparametric statistical test. The method provides a matrix of p-values that assess the probability for having observed the experimental data at each setting of the model parameters. We show the testing approach for three typical situations observed in the cellular plasma membrane: i), free Brownian motion of the tracer, ii), hop diffusion of the tracer in a periodic meshwork of squares, and iii), transient binding of the tracer to slowly diffusing structures. By plotting the p-value as a function of the model parameters, one can easily identify the most consistent parameter settings but also recover mutual dependencies and ambiguities which are difficult to determine by standard fitting routines. Finally, we used the test to reanalyze previous data obtained on the diffusion of the glycosylphosphatidylinositol-protein CD59 in the plasma membrane of the human T24 cell line. PMID:18805933
NASA Astrophysics Data System (ADS)
Maltz, Jonathan S.
2000-11-01
We present an algorithm of reduced computational cost which is able to estimate kinetic model parameters directly from dynamic ECT sinograms made up of temporally inconsistent projections. The algorithm exploits the extreme degree of parameter redundancy inherent in linear combinations of the exponential functions which represent the modes of first-order compartmental systems. The singular value decomposition is employed to find a small set of orthogonal functions, the linear combinations of which are able to accurately represent all modes within the physiologically anticipated range in a given study. The reduced-dimension basis is formed as the convolution of this orthogonal set with a measured input function. The Moore-Penrose pseudoinverse is used to find coefficients of this basis. Algorithm performance is evaluated at realistic count rates using MCAT phantom and clinical 99mTc-teboroxime myocardial study data. Phantom data are modelled as originating from a Poisson process. For estimates recovered from a single slice projection set containing 2.5 × 10^5 total counts, recovered tissue responses compare favourably with those obtained using more computationally intensive methods. The corresponding kinetic parameter estimates (coefficients of the new basis) exhibit negligible bias, while parameter variances are low, falling within 30% of the Cramér-Rao lower bound.
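The core dimension-reduction idea can be sketched directly: sample the exponential modes over an anticipated range of rate constants, keep a few leading left singular vectors, and fit coefficients with the pseudoinverse (the grids and target mode below are illustrative, and the convolution with a measured input function is omitted):

```python
import numpy as np

# Exponential modes exp(-lam*t) are highly redundant: a handful of
# singular vectors spans the whole family to high accuracy.
t = np.linspace(0.0, 10.0, 200)
lams = np.linspace(0.05, 2.0, 60)            # anticipated rate constants
modes = np.exp(-np.outer(t, lams))           # columns: candidate modes
U, s, _ = np.linalg.svd(modes, full_matrices=False)
basis = U[:, :4]                             # reduced-dimension basis

# fit a mode inside the anticipated range with the Moore-Penrose
# pseudoinverse and measure the relative representation error
target = np.exp(-0.7 * t)
coeffs = np.linalg.pinv(basis) @ target
resid = np.linalg.norm(basis @ coeffs - target) / np.linalg.norm(target)
```

The rapid singular-value decay of the exponential family is what makes a four-dimensional basis adequate, and hence what keeps the direct-from-sinogram estimation cheap.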
Determination of polarimetric parameters of honey by near-infrared transflectance spectroscopy.
García-Alvarez, M; Ceresuela, S; Huidobro, J F; Hermida, M; Rodríguez-Otero, J L
2002-01-30
NIR transflectance spectroscopy was used to determine polarimetric parameters (direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides) and sucrose in honey. In total, 156 honey samples were collected during 1992 (45 samples), 1995 (56 samples), and 1996 (55 samples). Samples were analyzed by NIR spectroscopy and polarimetric methods. Calibration (118 samples) and validation (38 samples) sets were made up; honeys from the three years were included in both sets. Calibrations were performed by modified partial least-squares regression, with scatter correction by the standard normal variate and detrend methods. For direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides, good statistics (bias, SEV, and R^2) were obtained for the validation set, and no statistically (p = 0.05) significant differences were found between the instrumental and polarimetric methods for these parameters. Statistical data for sucrose were not as good as those of the other parameters. Therefore, NIR spectroscopy is not an effective method for quantitative analysis of sucrose in these honey samples. However, NIR spectroscopy may be an acceptable method for semiquantitative evaluation of sucrose for honeys, such as those in our study, containing up to 3% of sucrose. Further work is necessary to validate the uncertainty at higher levels.
Lunar tidal acceleration obtained from satellite-derived ocean tide parameters
NASA Technical Reports Server (NTRS)
Goad, C. C.; Douglas, B. C.
1978-01-01
One hundred sets of mean elements of GEOS-3 computed at 2-day intervals yielded observation equations for the M sub 2 ocean tide from the long periodic variations of the inclination and node of the orbit. The 2nd degree Love number was given the value k sub 2 = 0.30 and the solid tide phase angle was taken to be zero. Combining the obtained equations with results for the satellite 1967-92A gives the M sub 2 ocean tide parameter values. Under the same assumption of zero solid tide phase lag, the lunar tidal acceleration was found to be mostly due to the C sub 22 term in the expansion of the M sub 2 tide with additional small contributions from the O sub 1 and N sub 2 tides. Using Lambeck's (1975) estimates for the latter, the obtained acceleration in lunar longitude is in excellent agreement with the most recent determinations from ancient and modern astronomical data.
A Gröbner Basis Solution for Lightning Ground Flash Fraction Retrieval
NASA Technical Reports Server (NTRS)
Solakiewicz, Richard; Attele, Rohan; Koshak, William
2011-01-01
A Bayesian inversion method was previously introduced for retrieving the fraction of ground flashes in a set of flashes observed from a (low earth orbiting or geostationary) satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters, a scalar function was minimized by a numerical method. In order to improve this optimization, we introduce a Gröbner basis solution to obtain analytic representations of the model parameters that serve as a refined initialization scheme to the numerical optimization. Using the Gröbner basis, we show that there are exactly 2 solutions involving the first 3 moments of the (exponentially distributed) data. When the mean of the ground flash optical characteristic (e.g., such as the Maximum Group Area, MGA) is larger than that for cloud flashes, then a unique solution can be obtained.
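A hedged sketch of the moment-based solution structure: for a two-component mixed exponential, the first three sample moments determine the two component means as roots of a quadratic (hence exactly two solutions, differing only in labelling), after which the ground flash fraction follows. The mixture values below are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented illustration: ground flashes (fraction alpha) have a larger mean
# optical characteristic (e.g. MGA) than cloud flashes.
alpha, mu_g, mu_c = 0.3, 5.0, 1.0
n = 500_000
ground = rng.random(n) < alpha
x = np.where(ground, rng.exponential(mu_g, n), rng.exponential(mu_c, n))

# Method of moments: for exponentials E[X^k] = k! * m^k, so define
# a = M1, b = M2/2, c = M3/6, which are power sums of the component means.
a = x.mean()
b = (x**2).mean() / 2.0
c = (x**3).mean() / 6.0

# m1 + m2 and m1*m2 satisfy a linear 2x2 system (b = a*e1 - e2, c = b*e1 - a*e2);
# the means are then the two roots of a quadratic: exactly two solutions,
# differing only by which component is labelled "ground".
e1, e2 = np.linalg.solve([[a, -1.0], [b, -a]], [b, c])
m1, m2 = np.roots([1.0, -e1, e2]).real
mg, mc = max(m1, m2), min(m1, m2)          # ground = larger-mean component
alpha_hat = (a - mc) / (mg - mc)           # recovered ground flash fraction
```

This closed-form estimate is exactly the kind of analytic initialization a Gröbner-basis treatment makes systematic for the constrained model of the paper.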
Application of Artificial Neural Network to Optical Fluid Analyzer
NASA Astrophysics Data System (ADS)
Kimura, Makoto; Nishida, Katsuhiko
1994-04-01
A three-layer artificial neural network has been applied to the presentation of optical fluid analyzer (OFA) raw data, and the accuracy of oil fraction determination has been significantly improved compared to previous approaches. To apply the artificial neural network approach to solving a problem, the first step is training to determine the appropriate weight set for calculating the target values. This involves using a series of data sets (each comprising a set of input values and an associated set of output values that the artificial neural network is required to determine) to tune artificial neural network weighting parameters so that the output of the neural network to the given set of input values is as close as possible to the required output. The physical model used to generate the series of learning data sets was the effective flow stream model, developed for OFA data presentation. The effectiveness of the training was verified by reprocessing the same input data as were used to determine the weighting parameters and then by comparing the results of the artificial neural network to the expected output values. The standard deviation of the expected and obtained values was approximately 10% (two sigma).
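The training step described above (tune the weights so that network outputs match a physical model's outputs, then verify by reprocessing the training inputs) can be illustrated with a small NumPy three-layer network trained by gradient descent. The target function is an invented smooth stand-in for the effective flow stream model, and the architecture and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data from an assumed "physical model" y = f(inputs).
X = rng.uniform(-1.0, 1.0, (500, 3))
y = (np.sin(np.pi * X[:, 0]) + X[:, 1] * X[:, 2]).reshape(-1, 1)

# Three-layer network: 3 inputs -> 16 tanh hidden units -> 1 linear output.
W1 = rng.normal(scale=0.5, size=(3, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)              # hidden activations
    out = H @ W2 + b2                     # network output
    err = out - y
    # Backpropagate the mean-squared-error gradient.
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H**2)      # tanh derivative
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Verification as in the abstract: reprocess the training inputs and
# compare network outputs with the expected values.
mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

After training, the residual error is a small fraction of the target variance, which is the kind of agreement the abstract's verification step checks for.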
Yang, Yongji; Moser, Michael A J; Zhang, Edwin; Zhang, Wenjun; Zhang, Bing
2018-01-01
The aim of this study was to develop a statistical model for cell death by irreversible electroporation (IRE) and to show that the statistical model is more accurate than the electric field threshold model in the literature using cervical cancer cells in vitro. The HeLa cell line was cultured and treated with different IRE protocols in order to obtain data for modeling the statistical relationship between cell death and pulse-setting parameters. In total, 340 in vitro experiments were performed with a commercial IRE pulse system, including a pulse generator and an electric cuvette. The trypan blue staining technique was used to evaluate cell death after 4 hours of incubation following IRE treatment. The Peleg-Fermi model was used in the study to build the statistical relationship using the cell viability data obtained from the in vitro experiments. A finite element model of IRE for the electric field distribution was also built. Comparison of ablation zones between the statistical model and the electric threshold model (drawn from the finite element model) was used to show the accuracy of the proposed statistical model in the description of the ablation zone and its applicability to different pulse-setting parameters. The statistical models describing the relationships between HeLa cell death and pulse length and the number of pulses, respectively, were built. The values of the curve fitting parameters were obtained using the Peleg-Fermi model for the treatment of cervical cancer with IRE. The difference in the ablation zone between the statistical model and the electric threshold model was also illustrated to show the accuracy of the proposed statistical model in the representation of the ablation zone in IRE.
This study concluded that: (1) the proposed statistical model accurately described the ablation zone of IRE with cervical cancer cells, and was more accurate compared with the electric field model; (2) the proposed statistical model was able to estimate the value of electric field threshold for the computer simulation of IRE in the treatment of cervical cancer; and (3) the proposed statistical model was able to express the change in ablation zone with the change in pulse-setting parameters.
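A hedged sketch of fitting the Peleg-Fermi survival curve to viability data. The sigmoid form S(E) = 1/(1 + exp((E - Ec)/A)) is the standard Peleg-Fermi shape used in the IRE literature; the field strengths, true parameters, and the brute-force grid fit below are illustrative assumptions, not the study's protocol.

```python
import numpy as np

def peleg_fermi(E, Ec, A):
    """Cell survival fraction vs electric field strength E (kV/cm)."""
    return 1.0 / (1.0 + np.exp((E - Ec) / A))

rng = np.random.default_rng(2)

# Synthetic viability measurements at assumed field strengths, with noise.
E = np.linspace(0.5, 3.0, 11)
S_obs = peleg_fermi(E, 1.5, 0.2) + rng.normal(scale=0.02, size=E.size)

# Brute-force least-squares fit over a parameter grid (no SciPy needed).
Ecs = np.linspace(0.5, 2.5, 201)
As = np.linspace(0.05, 0.5, 91)
sse = [[np.sum((peleg_fermi(E, ec, a) - S_obs) ** 2) for a in As] for ec in Ecs]
i, j = np.unravel_index(np.argmin(sse), (len(Ecs), len(As)))
Ec_hat, A_hat = float(Ecs[i]), float(As[j])
```

In the study's framework, Ec and A would additionally be modelled as functions of pulse length and pulse number; the fit above shows only the innermost curve-fitting step.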
Parametric study of the swimming performance of a fish robot propelled by a flexible caudal fin.
Low, K H; Chong, C W
2010-12-01
In this paper, we aim to study the swimming performance of fish robots by using a statistical approach. A fish robot employing a carangiform swimming mode had been used as an experimental platform for the performance study. The experiments conducted aim to investigate the effect of various design parameters on the thrust capability of the fish robot with a flexible caudal fin. The controllable parameters associated with the fin include frequency, amplitude of oscillation, aspect ratio and the rigidity of the caudal fin. The significance of these parameters was determined in the first set of experiments by using a statistical approach. A more detailed parametric experimental study was then conducted with only those significant parameters. As a result, the parametric study could be completed with a reduced number of experiments and time spent. With the obtained experimental result, we were able to understand the relationship between various parameters and a possible adjustment of parameters to obtain a higher thrust. The proposed statistical method for experimentation provides an objective and thorough analysis of the effects of individual or combinations of parameters on the swimming performance. Such an efficient experimental design helps to optimize the process and determine factors that influence variability.
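The screening step (determine which fin parameters significantly affect thrust before running the detailed study) can be sketched as a two-level full factorial with main-effect estimation. The response model below is invented; only the four factor names come from the abstract.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)

# Two-level full factorial in coded units (-1/+1) for the four fin parameters.
factors = ["frequency", "amplitude", "aspect_ratio", "rigidity"]
design = np.array(list(product([-1, 1], repeat=4)), dtype=float)

# Assumed response: only frequency and amplitude really drive thrust.
def thrust(row):
    f, a, ar, r = row
    return 2.0 * f + 1.5 * a + 0.05 * ar + 0.05 * r + rng.normal(scale=0.1)

y = np.array([thrust(row) for row in design])

# Main effect of each factor = mean(y at +1) - mean(y at -1).
effects = {name: float(y[design[:, k] == 1].mean() - y[design[:, k] == -1].mean())
           for k, name in enumerate(factors)}
significant = [name for name, e in effects.items() if abs(e) > 0.5]
```

Only the factors with large main effects survive the screen, so the follow-up parametric study needs far fewer runs, which is the efficiency argument made in the abstract.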
NASA Astrophysics Data System (ADS)
Rudowicz, C.; Gnutek, P.
2010-01-01
Central quantities in the spectroscopy and magnetism of transition ions in crystals are crystal (ligand) field parameters (CFPs). For orthorhombic, monoclinic, and triclinic site symmetry, CF analysis is prone to misinterpretations due to the large number of CFPs and the existence of correlated sets of alternative CFPs. In this review, we elucidate the intrinsic features of orthorhombic and lower symmetry CFPs and their implications. The alternative CFP sets, which yield identical energy levels, belong to different regions of CF parameter space and hence are intrinsically incompatible. Only their ‘images’ representing CFP sets expressed in the same region of CF parameter space may be directly compared. Implications of these features for fitting procedures and the meaning of fitted CFPs are categorized into negative: pitfalls and positive: blessings. As a case study, the CFP sets for Tm3+ ions in KLu(WO4)2 are analysed and shown to be intrinsically incompatible. Inadvertent, and so meaningless, comparisons of incompatible CFP sets result in various pitfalls, e.g., controversial claims about the values of CFPs obtained by other researchers as well as incorrect structural conclusions or faulty systematics of CF parameters across the rare-earth ion series based on relative magnitudes of incompatible CFPs. Such pitfalls bear on the interpretation of, e.g., optical spectroscopy, inelastic neutron scattering, and magnetic susceptibility data. An extensive survey of the pertinent literature was carried out to assess recognition of compatibility problems. A great portion of the available orthorhombic and lower symmetry CFP sets are found to be intrinsically incompatible, yet these problems and their implications appear barely recognized. The considerable extent and consequences of the pitfalls revealed by our survey call for concerted remedial actions by researchers. A general approach based on the rhombicity ratio standardization may solve compatibility problems.
Wider utilization of alternative CFP sets in the multiple correlated fitting techniques may improve the reliability (blessing) of fitted CFPs. This review may be of interest to a broad range of researchers from condensed matter physicists to physical chemists working on, e.g., high temperature superconductors, luminescent, optoelectronic, laser, and magnetic materials.
Parameter optimization of electrochemical machining process using black hole algorithm
NASA Astrophysics Data System (ADS)
Singh, Dinesh; Shukla, Rajkamal
2017-12-01
Advanced machining processes are significant as higher accuracy in machined components is required in the manufacturing industries. Parameter optimization of machining processes provides the control needed to achieve the desired goals. In this paper, the electrochemical machining (ECM) process is considered to evaluate the performance of the considered process using the black hole algorithm (BHA). BHA is based on the fundamental idea of black hole theory and has few operating parameters to tune. The two performance parameters, material removal rate (MRR) and overcut (OC), are considered separately to obtain optimum machining parameter settings using BHA. The variations of process parameters with respect to the performance parameters are reported for a better and more effective understanding of the considered process using a single objective at a time. The results obtained using BHA are found to be better when compared with the results of other metaheuristic algorithms, such as the genetic algorithm (GA), artificial bee colony (ABC) and biogeography-based optimization (BBO), attempted by previous researchers.
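A minimal sketch of the black hole algorithm itself, assuming the usual formulation: stars drift toward the current best solution (the black hole), and any star crossing the event horizon is absorbed and replaced by a fresh random candidate. The sphere function stands in for the empirical MRR/overcut models of the study.

```python
import numpy as np

rng = np.random.default_rng(6)

def cost(x):
    """Stand-in objective (sphere function); the ECM study would instead
    minimize/maximize empirical models of overcut or MRR."""
    return float(np.sum(x * x))

dim, n_stars, iters = 2, 25, 500
low, high = -5.0, 5.0
stars = rng.uniform(low, high, (n_stars, dim))
fitness = np.array([cost(s) for s in stars])
best = stars[np.argmin(fitness)].copy()
best_f = float(fitness.min())

for _ in range(iters):
    for i in range(n_stars):
        # Each star drifts a random fraction of the way toward the black hole.
        stars[i] += rng.random(dim) * (best - stars[i])
        fitness[i] = cost(stars[i])
        if fitness[i] < best_f:            # a star overtakes the black hole
            best, best_f = stars[i].copy(), float(fitness[i])
    # Stars inside the event horizon are absorbed and reborn at random,
    # which keeps the population exploring.
    radius = best_f / (float(fitness.sum()) + 1e-12)
    for i in range(n_stars):
        if np.linalg.norm(stars[i] - best) < radius:
            stars[i] = rng.uniform(low, high, dim)
            fitness[i] = cost(stars[i])
```

The appeal noted in the abstract is visible here: apart from population size and iteration count, there are no tuning parameters.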
NASA Astrophysics Data System (ADS)
Vergara, Maximiliano R.; Van Sint Jan, Michel; Lorig, Loren
2016-04-01
The mechanical behavior of rock containing parallel non-persistent joint sets was studied using a numerical model. The numerical analysis was performed using the discrete element software UDEC. The use of fictitious joints allowed the inclusion of non-persistent joints in the model domain and the simulation of progressive failure due to propagation of existing fractures. The material and joint mechanical parameters used in the model were obtained from experimental results. The results of the numerical model showed good agreement with the strength and failure modes observed in the laboratory. The results showed a large anisotropy in strength resulting from variation of the joint orientation. Lower strength of the specimens was caused by the coalescence of fractures belonging to parallel joint sets. A correlation was found between geometrical parameters of the joint sets and their contribution to the global strength of the specimen. The results suggest that, for the same dip angle with respect to the principal stresses, the uniaxial strength depends primarily on the joint spacing and the angle between joint tips and less on the length of the rock bridges (persistency). A relation between joint geometrical parameters was found from which the resulting failure mode can be predicted.
CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila
2015-03-10
We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region, however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
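The PSO calibration loop (compare model outputs against observables, minimize the discrepancy) can be sketched as follows. The two-parameter toy model and its "observed" constraints are invented; the inertia and acceleration coefficients are common textbook values, not necessarily those used with SAG.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "semi-analytic model": two free parameters map to two observables.
def model(p):
    a, b = p
    return np.array([a * np.exp(-b), a + b**2])

obs = model(np.array([2.0, 0.5]))          # pretend these were observed

def discrepancy(p):
    """Chi-square-like mismatch between model outputs and observables."""
    return float(np.sum((model(p) - obs) ** 2))

# Particle swarm with inertia plus cognitive/social pulls.
n_part, iters, dim = 30, 200, 2
pos = rng.uniform(0.0, 5.0, (n_part, dim))
vel = np.zeros((n_part, dim))
pbest = pos.copy()
pbest_f = np.array([discrepancy(p) for p in pos])
g = pbest[np.argmin(pbest_f)].copy()       # global best
g_f = float(pbest_f.min())
w, c1, c2 = 0.72, 1.5, 1.5

for _ in range(iters):
    r1, r2 = rng.random((n_part, dim)), rng.random((n_part, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([discrepancy(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    if pbest_f.min() < g_f:
        g, g_f = pbest[np.argmin(pbest_f)].copy(), float(pbest_f.min())
```

Each iteration costs one model evaluation per particle, so total cost is n_part × iters evaluations; the abstract's point is that this budget is roughly an order of magnitude below an MCMC exploration of the same space.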
Bittante, G; Ferragina, A; Cipolat-Gotet, C; Cecchinato, A
2014-10-01
Cheese yield is an important technological trait in the dairy industry. The aim of this study was to infer the genetic parameters of some cheese yield-related traits predicted using Fourier-transform infrared (FTIR) spectral analysis and compare the results with those obtained using an individual model cheese-producing procedure. A total of 1,264 model cheeses were produced using 1,500-mL milk samples collected from individual Brown Swiss cows, and individual measurements were taken for 10 traits: 3 cheese yield traits (fresh curd, curd total solids, and curd water as a percent of the weight of the processed milk), 4 milk nutrient recovery traits (fat, protein, total solids, and energy of the curd as a percent of the same nutrient in the processed milk), and 3 daily cheese production traits per cow (fresh curd, total solids, and water weight of the curd). Each unprocessed milk sample was analyzed using a MilkoScan FT6000 (Foss, Hillerød, Denmark) over the spectral range, from 5,000 to 900 wavenumber × cm(-1). The FTIR spectrum-based prediction models for the previously mentioned traits were developed using modified partial least-square regression. Cross-validation of the whole data set yielded coefficients of determination between the predicted and measured values in cross-validation of 0.65 to 0.95 for all traits, except for the recovery of fat (0.41). A 3-fold external validation was also used, in which the available data were partitioned into 2 subsets: a training set (one-third of the herds) and a testing set (two-thirds). The training set was used to develop calibration equations, whereas the testing subsets were used for external validation of the calibration equations and to estimate the heritabilities and genetic correlations of the measured and FTIR-predicted phenotypes. 
The coefficients of determination between the predicted and measured values in cross-validation obtained from the training sets were very similar to those obtained from the whole data set, but the coefficients of determination for the external validation sets were much lower for all traits (0.30 to 0.73), and particularly for fat recovery (0.05 to 0.18), than for the full data set. For each testing subset, the (co)variance components for the measured and FTIR-predicted phenotypes were estimated using bivariate Bayesian analyses and linear models. The intraherd heritabilities for the predicted traits obtained from our internal cross-validation using the whole data set ranged from 0.085 for daily yield of curd solids to 0.576 for protein recovery, and were similar to those obtained from the measured traits (0.079 to 0.586, respectively). The heritabilities estimated from the testing data set used for external validation were more variable but similar (on average) to the corresponding values obtained from the whole data set. Moreover, the genetic correlations between the predicted and measured traits were high in general (0.791 to 0.996), and they were always higher than the corresponding phenotypic correlations (0.383 to 0.995), especially for the external validation subset. In conclusion, we herein report that application of the cross-validation technique to the whole data set tended to overestimate the predictive ability of FTIR spectra, give more precise phenotypic predictions than the calibrations obtained using smaller data sets, and yield genetic correlations similar to those obtained from the measured traits. Collectively, our findings indicate that FTIR predictions have the potential to be used as indicator traits for the rapid and inexpensive selection of dairy populations for improvement of cheese yield, milk nutrient recovery in curd, and daily cheese production per cow.
Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Ismail, Ahmad Muhaimin; Mohamad, Mohd Saberi; Abdul Majid, Hairudin; Abas, Khairul Hamimah; Deris, Safaai; Zaki, Nazar; Mohd Hashim, Siti Zaiton; Ibrahim, Zuwairie; Remli, Muhammad Akmal
2017-12-01
Mathematical modelling is fundamental to understand the dynamic behavior and regulation of the biochemical metabolisms and pathways that are found in biological systems. Pathways are used to describe complex processes that involve many parameters. It is important to have an accurate and complete set of parameters that describe the characteristics of a given model. However, measuring these parameters is typically difficult and even impossible in some cases. Furthermore, the experimental data are often incomplete and also suffer from experimental noise. These shortcomings make it challenging to identify the best-fit parameters that can represent the actual biological processes involved in biological systems. Computational approaches are required to estimate these parameters. The estimation is converted into multimodal optimization problems that require a global optimization algorithm that can avoid local solutions. These local solutions can lead to a bad fit when calibrating with a model. Although the model itself can potentially match a set of experimental data, a high-performance estimation algorithm is required to improve the quality of the solutions. This paper describes an improved hybrid of particle swarm optimization and the gravitational search algorithm (IPSOGSA) to improve the efficiency of a global optimum (the best set of kinetic parameter values) search. The findings suggest that the proposed algorithm is capable of narrowing down the search space by exploiting the feasible solution areas. Hence, the proposed algorithm is able to achieve a near-optimal set of parameters at a fast convergence speed. The proposed algorithm was tested and evaluated based on two aspartate pathways that were obtained from the BioModels Database. The results show that the proposed algorithm outperformed other standard optimization algorithms in terms of accuracy and near-optimal kinetic parameter estimation. 
Nevertheless, the proposed algorithm is only expected to work well in small-scale systems. In addition, the results of this study can be used to estimate kinetic parameter values in the stage of model selection for different experimental conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
Experimental design and efficient parameter estimation in preclinical pharmacokinetic studies.
Ette, E I; Howie, C A; Kelman, A W; Whiting, B
1995-05-01
A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide the basis for discussing some structural aspects involved in designing a destructive ("quantic") preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on the three and four time point designs was evaluated in terms of the percent prediction error, design number, individual and joint confidence interval coverage for parameter estimates, and correlation analysis. The data sets contained random terms for both inter- and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while interanimal variability (the only random effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time point for the three and four time point designs, respectively, was not critical to the efficiency of overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
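The simulation setup can be sketched for the one-compartment intravenous bolus case: sample animals with inter-animal variability, apply a destructive three-time-point design (each animal contributes one sample), estimate parameters, and summarize percent prediction error over replicates. All numerical values and the naive pooled log-linear estimator below are assumptions for illustration; the study itself used population (mixed-effect) estimation.

```python
import numpy as np

rng = np.random.default_rng(4)
CL_pop, V_pop, dose = 1.0, 10.0, 100.0     # assumed typical values
times = np.array([0.25, 1.0, 4.0])         # three-time-point design (h)

def simulate_study(n_per_time=4):
    """Destructive design: each animal gives one sample at one time point."""
    t = np.repeat(times, n_per_time)
    CL = CL_pop * np.exp(rng.normal(scale=0.2, size=t.size))   # inter-animal
    V = V_pop * np.exp(rng.normal(scale=0.2, size=t.size))
    conc = dose / V * np.exp(-CL / V * t)                       # C(t) = D/V e^(-CL/V t)
    conc *= np.exp(rng.normal(scale=0.1, size=t.size))          # residual error
    return t, conc

# Monte Carlo: replicate the study, estimate CL and V each time by a naive
# pooled log-linear regression, and accumulate percent prediction errors.
pe_cl, pe_v = [], []
for _ in range(500):
    t, conc = simulate_study()
    slope, intercept = np.polyfit(t, np.log(conc), 1)
    V_hat = dose / np.exp(intercept)       # from C(0) = dose / V
    CL_hat = -slope * V_hat                # from slope = -CL / V
    pe_cl.append(100.0 * (CL_hat - CL_pop) / CL_pop)
    pe_v.append(100.0 * (V_hat - V_pop) / V_pop)

mean_pe_cl = float(np.mean(pe_cl))
mean_pe_v = float(np.mean(pe_v))
```

Comparing such percent-prediction-error summaries across alternative arrangements of the sampling times is exactly the design evaluation the abstract describes.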
Field-Scale Evaluation of Infiltration Parameters From Soil Texture for Hydrologic Analysis
NASA Astrophysics Data System (ADS)
Springer, Everett P.; Cundy, Terrance W.
1987-02-01
Recent interest in predicting soil hydraulic properties from simple physical properties such as texture has major implications in the parameterization of physically based models of surface runoff. This study was undertaken to (1) compare, on a field scale, soil hydraulic parameters predicted from texture to those derived from field measurements and (2) compare simulated overland flow response using these two parameter sets. The parameters for the Green-Ampt infiltration equation were obtained from field measurements and using texture-based predictors for two agricultural fields, which were mapped as single soil units. Results of the analyses were that (1) the mean and variance of the field-based parameters were not preserved by the texture-based estimates, (2) spatial and cross correlations between parameters were induced by the texture-based estimation procedures, (3) the overland flow simulations using texture-based parameters were significantly different than those from field-based parameters, and (4) simulations using field-measured hydraulic conductivities and texture-based storage parameters were very close to simulations using only field-based parameters.
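The Green-Ampt computation that both parameter sets feed can be sketched directly: cumulative infiltration F(t) solves the implicit equation F - ψΔθ ln(1 + F/(ψΔθ)) = Kt, and the infiltration capacity is f = K(1 + ψΔθ/F). The loam-like parameter values below are illustrative assumptions, not the texture-based estimates of the paper.

```python
import numpy as np

def green_ampt_F(t, K, psi, dtheta, tol=1e-10):
    """Cumulative infiltration F(t) (cm) from the implicit Green-Ampt
    equation F - psi*dtheta*ln(1 + F/(psi*dtheta)) = K*t, solved by
    fixed-point iteration (the map is a contraction for F > 0)."""
    s = psi * dtheta                       # suction times moisture deficit
    F = K * t + s                          # starting guess
    for _ in range(200):
        F_new = K * t + s * np.log(1.0 + F / s)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return float(F)

# Illustrative loam-like parameters (assumed values; cm and hours):
# K = saturated conductivity, psi = wetting front suction, dtheta = deficit.
K, psi, dtheta = 0.65, 8.9, 0.35
F1 = green_ampt_F(1.0, K, psi, dtheta)
F2 = green_ampt_F(2.0, K, psi, dtheta)
rate1 = K * (1.0 + psi * dtheta / F1)      # infiltration capacity at t = 1 h
rate2 = K * (1.0 + psi * dtheta / F2)      # capacity decays toward K
```

Because the simulated infiltration (and hence overland flow) depends nonlinearly on K, ψ, and Δθ together, biases in texture-based estimates of any one of them propagate into the runoff comparison the study reports.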
Optimal SVM parameter selection for non-separable and unbalanced datasets.
Jiang, Peng; Missoum, Samy; Chen, Zhao
2014-10-01
This article presents a study of three validation metrics used for the selection of optimal parameters of a support vector machine (SVM) classifier in the case of non-separable and unbalanced datasets. This situation is often encountered when the data is obtained experimentally or clinically. The three metrics selected in this work are the area under the ROC curve (AUC), accuracy, and balanced accuracy. These validation metrics are tested using computational data only, which enables the creation of fully separable sets of data. This way, non-separable datasets, representative of a real-world problem, can be created by projection onto a lower dimensional sub-space. The knowledge of the separable dataset, unknown in real-world problems, provides a reference to compare the three validation metrics using a quantity referred to as the "weighted likelihood". As an application example, the study investigates a classification model for hip fracture prediction. The data is obtained from a parameterized finite element model of a femur. The performance of the various validation metrics is studied for several levels of separability, ratios of unbalance, and training set sizes.
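The three validation metrics are easy to state concretely. Below is a sketch with an invented 90/10 unbalanced set showing why plain accuracy is misleading there while balanced accuracy is not; the AUC uses the rank-sum (Mann-Whitney) identity and, for simplicity, does not handle tied scores.

```python
import numpy as np

def accuracy(y, yhat):
    return float(np.mean(y == yhat))

def balanced_accuracy(y, yhat):
    """Mean of per-class recalls: insensitive to class imbalance."""
    recalls = [np.mean(yhat[y == c] == c) for c in np.unique(y)]
    return float(np.mean(recalls))

def auc(y, scores):
    """Area under the ROC curve via the rank-sum identity (no ties)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(np.sum(y == 1))
    n_neg = len(y) - n_pos
    return float((ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

# Unbalanced set: 90 negatives, 10 positives; a trivial majority-class model.
y = np.array([0] * 90 + [1] * 10)
yhat_majority = np.zeros(100, dtype=int)
acc = accuracy(y, yhat_majority)           # misleadingly high
bal = balanced_accuracy(y, yhat_majority)  # exposes the failure
```

The majority-class classifier scores 0.9 accuracy while its balanced accuracy is the chance level 0.5, which is the basic reason the article compares these metrics for unbalanced SVM selection.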
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pugmire, R.J.; Solum, M.S.
This study was designed to apply {sup 13}C-nuclear magnetic resonance (NMR) spectrometry to the analysis of direct coal liquefaction process-stream materials. {sup 13}C-NMR was shown to have a high potential for application to direct coal liquefaction-derived samples in Phase II of this program. In this Phase III project, {sup 13}C-NMR was applied to a set of samples derived from the HRI Inc. bench-scale liquefaction Run CC-15. The samples include the feed coal, net products and intermediate streams from three operating periods of the run. High-resolution {sup 13}C-NMR data were obtained for the liquid samples and solid-state CP/MAS {sup 13}C-NMR data were obtained for the coal and filter-cake samples. The {sup 13}C-NMR technique is used to derive a set of twelve carbon structural parameters for each sample (CONSOL Table A). Average molecular structural descriptors can then be derived from these parameters (CONSOL Table B).
Tangen, C M; Koch, G G
1999-03-01
In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is a (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.
Experimental Investigation and Optimization of Response Variables in WEDM of Inconel - 718
NASA Astrophysics Data System (ADS)
Karidkar, S. S.; Dabade, U. A.
2016-02-01
Effective utilisation of Wire Electrical Discharge Machining (WEDM) technology is a challenge for modern manufacturing industries. New materials with high strength and enhanced capabilities are continually being developed to fulfil customers' needs. Inconel - 718 is one such material, extensively used in aerospace applications such as gas turbines, rocket motors, and spacecraft, as well as in nuclear reactors and pumps. This paper deals with the experimental investigation of optimal machining parameters in WEDM for Surface Roughness, Kerf Width and Dimensional Deviation using design of experiments (DoE) based on the Taguchi methodology with an L9 orthogonal array. Keeping the peak current constant at 70 A, the effect of the other process parameters on the above response variables was analysed. The experimental results obtained were statistically analysed using Minitab-16 software. Analysis of Variance (ANOVA) shows pulse on time to be the most influential parameter, followed by wire tension, whereas spark gap set voltage is observed to be non-influencing. The multi-objective optimization technique Grey Relational Analysis (GRA) yields optimal machining parameters of pulse on time 108 machine units, spark gap set voltage 50 V and wire tension 12 g for the response variables considered in the experimental analysis.
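The GRA ranking step can be sketched as follows; the L9 response values are invented, and all three responses are treated as smaller-the-better with the customary distinguishing coefficient ζ = 0.5.

```python
import numpy as np

# Mock L9 response matrix: rows = runs, cols = [Ra (um), kerf (mm), deviation (mm)].
# All three are smaller-the-better responses; the values are illustrative only.
Y = np.array([
    [2.1, 0.32, 0.040],
    [1.8, 0.30, 0.035],
    [1.2, 0.25, 0.020],   # this run dominates on every response
    [2.5, 0.33, 0.050],
    [2.0, 0.31, 0.042],
    [1.9, 0.29, 0.038],
    [2.2, 0.34, 0.045],
    [1.6, 0.28, 0.030],
    [2.4, 0.35, 0.048],
])

# 1) Smaller-the-better normalization of each response to [0, 1].
norm = (Y.max(axis=0) - Y) / (Y.max(axis=0) - Y.min(axis=0))
# 2) Grey relational coefficients with distinguishing coefficient zeta = 0.5.
delta = 1.0 - norm
zeta = 0.5
xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
# 3) Grey relational grade = mean coefficient per run; higher is better.
grade = xi.mean(axis=1)
best_run = int(np.argmax(grade))
```

The grade collapses the three response variables into one figure of merit per run, so the multi-objective choice reduces to picking the run (and its parameter levels) with the highest grade.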
Theoretical Tools and Software for Modeling, Simulation and Control Design of Rocket Test Facilities
NASA Technical Reports Server (NTRS)
Richter, Hanz
2004-01-01
A rocket test stand and associated subsystems are complex devices whose operation requires that certain preparatory calculations be carried out before a test. In addition, real-time control calculations must be performed during the test, and further calculations are carried out after a test is completed. The latter may be required in order to evaluate if a particular test conformed to specifications. These calculations are used to set valve positions, pressure setpoints, control gains and other operating parameters so that a desired system behavior is obtained and the test can be successfully carried out. Currently, calculations are made in an ad-hoc fashion and involve trial-and-error procedures that may involve activating the system with the sole purpose of finding the correct parameter settings. The goals of this project are to develop mathematical models, control methodologies and associated simulation environments to provide a systematic and comprehensive prediction and real-time control capability. The models and controller designs are expected to be useful in two respects: 1) As a design tool, a model is the only way to determine the effects of design choices without building a prototype, which is, in the context of rocket test stands, impracticable; 2) As a prediction and tuning tool, a good model allows system parameters to be set off-line, so that the expected system response conforms to specifications. This includes the setting of physical parameters, such as valve positions, and the configuration and tuning of any feedback controllers in the loop.
Standardless quantification by parameter optimization in electron probe microanalysis
NASA Astrophysics Data System (ADS)
Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.
2012-11-01
A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested in a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the method proposed is compared with the first principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 for 66% of the cases for POEMA, GENESIS and DTSA, respectively.
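The core idea (minimize the quadratic difference between an experimental spectrum and an analytical model by adjusting its parameters) can be sketched for the simplest case in which the free parameters enter linearly. Peak positions, widths, the background shape, and the element assignments below are invented; POEMA itself optimizes a much richer nonlinear parameter set.

```python
import numpy as np

rng = np.random.default_rng(5)
E = np.linspace(0.5, 10.0, 500)            # energy axis (keV), assumed

def peak(center, width=0.08):
    """Gaussian line profile at the given centre energy."""
    return np.exp(-0.5 * ((E - center) / width) ** 2)

# Analytical model: continuum background plus characteristic peaks scaled
# by abundance-like amplitudes (illustrative line energies, e.g. K-lines).
background = 50.0 / E                      # crude bremsstrahlung-like shape
lines = np.stack([peak(1.74), peak(3.69), peak(6.40)])
true_amps = np.array([120.0, 60.0, 200.0])
spectrum = rng.poisson(background + true_amps @ lines).astype(float)

# Minimize the quadratic difference: with linear parameters (three peak
# amplitudes plus a background scale) this is ordinary least squares.
A = np.vstack([lines, background]).T
params, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
amps_hat = params[:3]
```

In the full method the peak widths, detector response, and matrix-correction terms are optimized too, turning this into the nonlinear minimization described in the abstract; the recovered amplitudes here play the role of the concentration-dependent quantities.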
On the ab initio evaluation of Hubbard parameters. II. The κ-(BEDT-TTF)2Cu[N(CN)2]Br crystal
NASA Astrophysics Data System (ADS)
Fortunelli, Alessandro; Painelli, Anna
1997-05-01
A previously proposed approach for the ab initio evaluation of Hubbard parameters is applied to BEDT-TTF dimers. The dimers are positioned according to four geometries taken as the first neighbors from the experimental data on the κ-(BEDT-TTF)2Cu[N(CN)2]Br crystal. RHF-SCF, CAS-SCF and frozen-orbital calculations using the 6-31G** basis set are performed with different values of the total charge, allowing us to derive all the relevant parameters. It is found that the electronic structure of the BEDT-TTF planes is adequately described by the standard Extended Hubbard Model, with the off-diagonal electron-electron interaction terms (X and W) of negligible size. The derived parameters are in good agreement with available experimental data. Comparison with previous theoretical estimates shows that the t values compare well with those obtained from Extended Hückel Theory (whereas the minimal basis set estimates are completely unreliable). On the other hand, the Uaeff values exhibit an appreciable dependence on the chemical environment.
Single Object & Time Series Spectroscopy with JWST NIRCam
NASA Technical Reports Server (NTRS)
Greene, Tom; Schlawin, Everett A.
2017-01-01
JWST will enable high signal-to-noise spectroscopic observations of the atmospheres of transiting planets with high sensitivity at wavelengths that are inaccessible with HST or other existing facilities. We plan to exploit this by measuring abundances, chemical compositions, cloud properties, and temperature-pressure parameters of a set of mostly warm (T ≈ 600-1200 K) and low-mass (14-200 Earth mass) planets in our guaranteed time program. These planets are expected to have significant molecular absorptions of H2O, CH4, CO2, CO, and other molecules that are key for determining these parameters and illuminating how and where the planets formed. We describe how we will use the NIRCam grisms to observe slitless transmission and emission spectra of these planets over the 2.4-5.0 micron wavelength range and how well these observations can measure our desired parameters. This includes how we set integration times and exposure parameters, and how we obtain simultaneous shorter-wavelength images to track telescope pointing and stellar variability. We illustrate this with specific examples showing model spectra, simulated observations, expected information retrieval results, completed Astronomer's Proposal Tool observing templates, target visibility, and other considerations.
Ivanov, J.; Miller, R.D.; Markiewicz, R.D.; Xia, J.
2008-01-01
We apply the P-wave refraction-tomography method to seismic data collected with a landstreamer. Refraction-tomography inversion solutions were determined using regularization parameters that provided the most realistic near-surface solutions, i.e., those that best matched the dipping-layer structure of nearby outcrops. A reasonably well-matched solution was obtained using an unusual set of optimal regularization parameters. In comparison, the use of conventional regularization parameters did not provide as realistic results. Thus, we consider that even if there is only qualitative (i.e., visual) a priori information about a site, as in the case of the East Canyon Dam, Utah, it might be possible to minimize the refraction nonuniqueness by estimating the most appropriate regularization parameters.
NASA Astrophysics Data System (ADS)
Bermeo Varon, L. A.; Orlande, H. R. B.; Eliçabe, G. E.
2016-09-01
Particle filter methods have been widely used to solve inverse problems by sequential Bayesian inference in dynamic models, simultaneously estimating sequential state variables and fixed model parameters. These methods approximate the sequence of probability distributions of interest using a large set of random samples, in the presence of uncertainties in the model, measurements and parameters. In this paper the main focus is the solution of the combined parameter and state estimation problem in radiofrequency hyperthermia with nanoparticles in a complex domain. This domain contains different tissues, such as muscle, pancreas, lungs and small intestine, as well as a tumor loaded with iron oxide nanoparticles. The results indicate that excellent agreement between estimated and exact values is obtained.
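As a rough illustration of the approach described above (not the authors' hyperthermia model), the following sketch runs a bootstrap particle filter on a synthetic scalar system, jointly estimating the state and a fixed dynamics parameter by augmenting each particle with a jittered copy of that parameter. The model, noise levels and jitter size are all assumptions for the example.

```python
import math
import random

def particle_filter(ys, n=500, q=0.1, r=0.2, seed=1):
    """Bootstrap particle filter jointly estimating the state x and a fixed
    dynamics parameter a (augmented into each particle with small jitter)."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]     # state particles
    pas = [rng.uniform(0.5, 1.0) for _ in range(n)]  # parameter particles
    for y in ys:
        # propagate: x' = a*x + process noise; jitter the static parameter
        xs = [a * x + rng.gauss(0.0, q) for x, a in zip(xs, pas)]
        pas = [a + rng.gauss(0.0, 0.005) for a in pas]
        # weight by the Gaussian measurement likelihood of y given x
        ws = [math.exp(-((y - x) ** 2) / (2.0 * r * r)) for x in xs]
        total = sum(ws)
        if total == 0.0:
            continue  # every particle improbable: skip this update
        ws = [w / total for w in ws]
        # systematic resampling
        u = rng.random() / n
        idx, j, cum = [], 0, ws[0]
        for i in range(n):
            while cum < u + i / n and j < n - 1:
                j += 1
                cum += ws[j]
            idx.append(j)
        xs = [xs[k] for k in idx]
        pas = [pas[k] for k in idx]
    return sum(xs) / n, sum(pas) / n

# synthetic data from x_{k+1} = 0.9 x_k + noise, observed with noise
rng = random.Random(2)
truth, data = 1.0, []
for _ in range(100):
    truth = 0.9 * truth + rng.gauss(0.0, 0.1)
    data.append(truth + rng.gauss(0.0, 0.2))
x_hat, a_hat = particle_filter(data)
```

The parameter estimate concentrates near the true value 0.9 as observations accumulate; the artificial jitter on the static parameter is a standard, if simple, way to avoid sample impoverishment.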
Mackenzie, C J; McGowan, C M; Pinchbeck, G; Carslake, H B
2018-05-01
Evaluation of coagulation status is an important component of critical care. Ongoing monitoring of coagulation status in hospitalised horses has previously been via serial venipuncture due to concerns that sampling directly from the intravenous catheter (IVC) may alter the accuracy of the results. Adverse effects such as patient anxiety and trauma to the sampled vessel could be avoided by the use of an indwelling IVC for repeat blood sampling. To compare coagulation parameters from blood obtained by jugular venipuncture with IVC sampling in critically ill horses. Prospective observational study. A single set of paired blood samples was obtained from horses (n = 55) admitted to an intensive care unit by direct jugular venipuncture and, following removal of a presample, via an indwelling IVC. The following coagulation parameters were measured on venipuncture and IVC samples: whole blood prothrombin time (PT), fresh plasma PT and activated partial thromboplastin time (aPTT), and stored plasma antithrombin activity (AT) and fibrinogen concentration. D-dimer concentration was also measured in some horses (n = 22). Comparison of venipuncture and IVC results was performed using Lin's concordance correlation coefficient. Agreement between paired results was assessed using Bland-Altman analysis. Correlation was substantial and agreement was good between sampling methods for all parameters except AT and D-dimers. Each coagulation parameter was tested using only one assay. Sampling was limited to a convenience sample, and the timing of sample collection was not standardised in relation to when the catheter was flushed with heparinised saline. With the exception of AT and D-dimers, coagulation parameters measured on blood samples obtained via an IVC have clinically equivalent values to those obtained by jugular venipuncture. © 2017 EVJ Ltd.
Uloza, Virgilijus; Padervinskis, Evaldas; Vegiene, Aurelija; Pribuisiene, Ruta; Saferis, Viktoras; Vaiciukynas, Evaldas; Gelzinis, Adas; Verikas, Antanas
2015-11-01
The objective of this study is to evaluate the reliability of acoustic voice parameters obtained using smartphone (SP) microphones and to investigate the utility of SP voice recordings for voice screening. Voice samples of the sustained vowel /a/ obtained from 118 subjects (34 normal and 84 pathological voices) were recorded simultaneously through two microphones: an oral AKG Perception 220 microphone and an SP Samsung Galaxy Note3 microphone. Acoustic voice signal data were measured for fundamental frequency, jitter and shimmer, normalized noise energy (NNE), signal-to-noise ratio and harmonic-to-noise ratio using Dr. Speech software. Discriminant analysis-based Correct Classification Rate (CCR) and Random Forest Classifier (RFC) based Equal Error Rate (EER) were used to evaluate the feasibility of the acoustic voice parameters for classifying normal and pathological voice classes. The Lithuanian version of the Glottal Function Index (LT_GFI) questionnaire was utilized for self-assessment of the severity of voice disorder. The correlations of acoustic voice parameters obtained with the two types of microphones were statistically significant and strong (r = 0.73-1.0) for all measurements. When classifying into normal/pathological voice classes, the oral-NNE revealed a CCR of 73.7% and the pair of SP-NNE and SP-shimmer parameters revealed a CCR of 79.5%. However, fusion of the results obtained from SP voice recordings and GFI data provided a CCR of 84.6%, and the RFC revealed an EER of 7.9%. In conclusion, measurements of acoustic voice parameters using the SP microphone were shown to be reliable in clinical settings, demonstrating high CCR and low EER when distinguishing normal and pathological voice classes, and validated the suitability of the SP microphone signal for the task of automatic voice analysis and screening.
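Jitter and shimmer of the kind measured above are simple relative-perturbation statistics on glottal periods and peak amplitudes. The sketch below shows one common local variant (mean absolute consecutive difference divided by the mean); the exact formulas implemented in Dr. Speech may differ, so treat this as an assumed definition.

```python
def jitter_local(periods):
    """Relative jitter: mean absolute difference of consecutive glottal
    periods divided by the mean period (often quoted as a percentage)."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer_local(amps):
    """Relative shimmer: the same ratio computed on peak amplitudes."""
    diffs = [abs(a - b) for a, b in zip(amps, amps[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amps) / len(amps))

# perfectly periodic voice -> zero jitter; alternating amplitudes -> high shimmer
print(jitter_local([5.0, 5.0, 5.0, 5.0]))   # 0.0
print(shimmer_local([1.0, 0.8, 1.0, 0.8]))  # ≈ 0.222
```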
A theoretical study of hydrogen complexes of the X-H-π type between propyne and HF, HCl or HCN.
Tavares, Alessandra M; da Silva, Washington L V; Lopes, Kelson C; Ventura, Elizete; Araújo, Regiane C M U; do Monte, Silmar A; da Silva, João Bosco P; Ramos, Mozart N
2006-05-15
The present manuscript reports a systematic investigation of the basis set dependence of some properties of hydrogen-bonded (π type) complexes formed by propyne and an HX molecule, where X = F, Cl and CN. The calculations have been performed at the Hartree-Fock, MP2 and B3LYP levels. Geometries, H-bond energies and vibrational properties have been considered. The most pronounced effects on the structural parameters of the isolated molecules, as a result of complexation, are observed for the C≡C and HX bond lengths. Compared with the double-zeta (6-31G**) basis set, the triple-zeta (6-311G**) basis set leads to an increase of the C≡C bond distance at all three computational levels. When diffuse functions are added to both hydrogen and 'heavy' atoms, the effect is more pronounced. The propyne-HX structural parameters are quite similar to the corresponding parameters of the acetylene-HX complexes, at all levels. The largest difference is obtained for the hydrogen bond distance, RH, with a smaller value for the propyne-HX complex, indicating a stronger bond. Concerning the electronic properties, the results yield the following ordering for the H-bond energies, ΔE: propyne⋯HF > propyne⋯HCl > propyne⋯HCN. It is also important to point out that the inclusion of BSSE and zero-point energy (ZPE) corrections causes significant changes in ΔE. The smallest ZPE effect is obtained for propyne⋯HCN at the HF/6-311++G** level, while the greatest difference is obtained at the MP2/6-31G** level for the propyne⋯HF system. Concerning the IR vibrational shifts, it was found that larger shifts can be associated with stronger hydrogen bonds. The most pronounced effect on the normal modes of the isolated molecules after complexation is obtained for the H-X stretching frequency, which is shifted downward.
A theoretical study of hydrogen complexes of the X-H-π type between propyne and HF, HCl or HCN
NASA Astrophysics Data System (ADS)
Tavares, Alessandra M.; da Silva, Washington L. V.; Lopes, Kelson C.; Ventura, Elizete; Araújo, Regiane C. M. U.; do Monte, Silmar A.; da Silva, João Bosco P.; Ramos, Mozart N.
2006-05-01
The present manuscript reports a systematic investigation of the basis set dependence of some properties of hydrogen-bonded (π type) complexes formed by propyne and an HX molecule, where X = F, Cl and CN. The calculations have been performed at the Hartree-Fock, MP2 and B3LYP levels. Geometries, H-bond energies and vibrational properties have been considered. The most pronounced effects on the structural parameters of the isolated molecules, as a result of complexation, are observed for the C≡C and HX bond lengths. Compared with the double-ζ (6-31G**) basis set, the triple-ζ (6-311G**) basis set leads to an increase of the C≡C bond distance at all three computational levels. When diffuse functions are added to both hydrogen and 'heavy' atoms, the effect is more pronounced. The propyne-HX structural parameters are quite similar to the corresponding parameters of the acetylene-HX complexes, at all levels. The largest difference is obtained for the hydrogen bond distance, RH, with a smaller value for the propyne-HX complex, indicating a stronger bond. Concerning the electronic properties, the results yield the following ordering for the H-bond energies, ΔE: propyne⋯HF > propyne⋯HCl > propyne⋯HCN. It is also important to point out that the inclusion of BSSE and zero-point energy (ZPE) corrections causes significant changes in ΔE. The smallest ZPE effect is obtained for propyne⋯HCN at the HF/6-311++G** level, while the greatest difference is obtained at the MP2/6-31G** level for the propyne⋯HF system. Concerning the IR vibrational shifts, it was found that larger shifts can be associated with stronger hydrogen bonds. The most pronounced effect on the normal modes of the isolated molecules after complexation is obtained for the H-X stretching frequency, which is shifted downward.
NASA Technical Reports Server (NTRS)
Reph, M. G.
1984-01-01
This document provides a summary of information available in the NASA Climate Data Catalog. The catalog provides scientific users with technical information about selected climate parameter data sets and the associated sensor measurements from which they are derived. It is an integral part of the Pilot Climate Data System (PCDS), an interactive, scientific management system for locating, obtaining, manipulating, and displaying climate research data. The catalog is maintained in a machine readable representation which can easily be accessed via the PCDS. The purposes, format and content of the catalog are discussed. Summarized information is provided about each of the data sets currently described in the catalog. Sample detailed descriptions are included for individual data sets or families of related data sets.
Soft cooperation systems and games
NASA Astrophysics Data System (ADS)
Fernández, J. R.; Gallego, I.; Jiménez-Losada, A.; Ordóñez, M.
2018-04-01
A cooperative game for a set of agents establishes a fair allocation of the profit obtained from their cooperation. In order to obtain this allocation, a characteristic function is known. It establishes the profit of each coalition of agents if this coalition decides to act alone. Originally players are considered symmetric, and the allocation then depends only on the characteristic function; this paper is about cooperative games with an asymmetric set of agents. We introduce cooperative games with a soft set of agents, which describes the parameters determining the asymmetry among them in the cooperation. The characteristic function is now defined not over the coalitions but over the soft coalitions, namely the profit depends not only on the formed coalition but also on the attributes considered for the players in the coalition. The best known of the allocation rules for cooperative games is the Shapley value. We propose a Shapley-type solution for soft games.
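For a plain (non-soft) cooperative game, the Shapley value mentioned above averages each player's marginal contribution over all orderings of the player set. A brute-force sketch for small games (the paper's soft-coalition extension is not attempted here):

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value by averaging marginal contributions v(S ∪ {p}) − v(S)
    over all orderings of the player set (fine for small games)."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            phi[p] += v(with_p) - v(coalition)
            coalition = with_p
    n = len(orders)
    return {p: x / n for p, x in phi.items()}

# three-player example: any pair (or all three) earns 1, singletons earn 0
v = lambda s: 1.0 if len(s) >= 2 else 0.0
print(shapley(["a", "b", "c"], v))  # symmetric game: 1/3 each
```

Because the game is symmetric and v of the grand coalition is 1, efficiency and symmetry force the value 1/3 for every player, which the enumeration reproduces.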
Interactive Database of Pulsar Flux Density Measurements
NASA Astrophysics Data System (ADS)
Koralewska, O.; Krzeszowski, K.; Kijak, J.; Lewandowski, W.
2012-12-01
The number of astronomical observations is steadily growing, giving rise to the need to catalogue the obtained results. There are many databases, created to store different types of data and serve a variety of purposes, e.g. databases providing basic data for astronomical objects (SIMBAD Astronomical Database), databases devoted to one type of astronomical object (ATNF Pulsar Database) or to a set of values of a specific parameter (Lorimer 1995, a database of flux density measurements for 280 pulsars at frequencies up to 1606 MHz), etc. We found that creating an online database of pulsar flux measurements, provided with facilities for plotting diagrams and histograms, calculating mean values for a chosen set of data, filtering parameter values and adding new measurements by registered users, could be useful in further studies of pulsar spectra.
Sparse Gaussian elimination with controlled fill-in on a shared memory multiprocessor
NASA Technical Reports Server (NTRS)
Alaghband, Gita; Jordan, Harry F.
1989-01-01
It is shown that in sparse matrices arising from electronic circuits, it is possible to do computations on many diagonal elements simultaneously. A technique for obtaining an ordered compatible set directly from the ordered incompatible table is given. The ordering is based on the Markowitz number of the pivot candidates. This technique generates a set of compatible pivots with the property of generating few fills. A novel heuristic algorithm is presented that combines the idea of an order-compatible set with a limited binary tree search to generate several sets of compatible pivots in linear time. An elimination set for reducing the matrix is generated and selected on the basis of a minimum Markowitz sum number. The parallel pivoting technique presented is a stepwise algorithm and can be applied to any submatrix of the original matrix. Thus, it is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds. Parameters are suggested to obtain a balance between parallelism and fill-ins. Results of applying the proposed algorithms on several large application matrices using the HEP multiprocessor (Kowalik, 1985) are presented and analyzed.
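The Markowitz number that drives the pivot ordering above is (r_i − 1)(c_j − 1), where r_i and c_j count the nonzeros in a candidate pivot's row and column. A minimal sketch on a structural-nonzero pattern (the compatibility tables and tree search of the paper are omitted; the example matrix is illustrative):

```python
def markowitz_numbers(nonzeros):
    """Markowitz number (r_i - 1)*(c_j - 1) for every structural nonzero.
    The pivot with the smallest number tends to generate the fewest fill-ins."""
    rows, cols = {}, {}
    for i, j in nonzeros:
        rows[i] = rows.get(i, 0) + 1
        cols[j] = cols.get(j, 0) + 1
    return {(i, j): (rows[i] - 1) * (cols[j] - 1) for i, j in nonzeros}

# arrow-shaped pattern: dense first row and column plus the diagonal
nz = {(0, 0), (0, 1), (0, 2), (1, 0), (2, 0), (1, 1), (2, 2)}
scores = markowitz_numbers(nz)
best = min(scores, key=scores.get)  # a trailing diagonal pivot, score 1
```

Pivoting on (0, 0), the head of the arrow, scores (3−1)(3−1) = 4 and would fill in the whole matrix, while the trailing diagonal entries score 1; this is exactly the fill-avoidance intuition behind the selection of compatible pivot sets.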
On the formation of TW Crv optical radiation
NASA Astrophysics Data System (ADS)
Shimansky, V. V.; Mitrofanova, A. A.; Borisov, N. V.; Fabrika, S. N.; Galeev, A. I.
2016-10-01
We present an analysis of the optical radiation of the young pre-cataclysmic variable TW Crv. Spectroscopic and photometric observations were obtained at the SAO RAS 6-m BTA telescope and at the Russian-Turkish RTT-150 telescope. The light curves of the system possess nearly sinusoidal shapes with amplitudes of Δm > 0.7 mag, which is typical for young pre-cataclysmic variables with sdO subdwarfs and orbit inclinations of less than 45°. The optical spectrum is dominated by radiation of the hot subdwarf, with H I and He II absorption lines and strong emission lines that are formed in the atmosphere of the secondary owing to reflection effects. Radial velocities of the cool star were measured by analyzing the λλ4630-4650 Å Bowen blend, which for the first time allowed the component masses to be determined. A numerical simulation of the light curves and spectra of TW Crv was carried out, yielding a complete set of fundamental parameters of the system. The parameters of the hot star indicate that it belongs to the sdO subdwarf class at the stage of transition to the cooling white dwarf sequence. The absence of an observable planetary nebula is explained by the long evolution of the system after the common-envelope stage. The secondary component has a luminosity excess, which is typical for other young sdO-subdwarf pre-cataclysmic variables. Its position on the "age-luminosity excess" diagram supports the accuracy of the obtained set of TW Crv fundamental parameters and the similarity of its evolutionary and physical conditions to those of other BE UMa-type objects.
Test of Parameterized Post-Newtonian Gravity with Galaxy-scale Strong Lensing Systems
NASA Astrophysics Data System (ADS)
Cao, Shuo; Li, Xiaolei; Biesiada, Marek; Xu, Tengpeng; Cai, Yongzhi; Zhu, Zong-Hong
2017-01-01
Based on a mass-selected sample of galaxy-scale strong gravitational lenses from the SLACS, BELLS, LSD, and SL2S surveys and using a well-motivated fiducial set of lens-galaxy parameters, we tested the weak-field metric on kiloparsec scales and found a constraint on the post-Newtonian parameter γ = 0.995 (+0.037, −0.047) under the assumption of a flat ΛCDM universe with parameters taken from Planck observations. General relativity (GR) predicts exactly γ = 1. Uncertainties concerning the total mass density profile, anisotropy of the velocity dispersion, and the shape of the light profile combine to systematic uncertainties of ~25%. By applying a cosmological model-independent method to the simulated future LSST data, we found a significant degeneracy between the PPN γ parameter and the spatial curvature of the universe. Setting a prior on the cosmic curvature parameter -0.007 < Ωk < 0.006, we obtained the constraint on the PPN parameter γ = 1.000 (+0.0023, −0.0025). We conclude that strong lensing systems with measured stellar velocity dispersions may serve as another important probe to investigate the validity of GR, provided that the mass-dynamical structure of the lensing galaxies is accurately constrained in future lens surveys.
Yi, Jinhua; Yu, Hongliu; Zhang, Ying; Hu, Xin; Shi, Ping
2015-12-01
The present paper proposes a central-driven structure for an upper limb rehabilitation robot, in order to reduce the volume of the robotic arm and to reduce the influence of motor noise, radiation and other adverse factors on patients with upper limb dysfunction. The forward and inverse kinematics equations were obtained using the Denavit-Hartenberg (D-H) parameter method. A motion simulation was performed in SolidWorks to obtain the angle-time curve of each joint and the position-time curve of the handle under a set rehabilitation path. Experimental results showed that the handle could move along the set rehabilitation path, verifying the rationality of the central-driven structure design. The effectiveness of the kinematics equations was confirmed, with an error of less than 3° when comparing the angle-time curves obtained from calculation with those from the motion simulation.
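Forward kinematics from D-H parameters, as used above, chains one homogeneous transform per link. A minimal sketch with a hypothetical planar two-link arm (the actual robot's D-H table is not given in the abstract, so the link lengths and joint angles here are assumptions):

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def matmul(A, B):
    # 4x4 homogeneous-matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_rows):
    """Chain the link transforms; returns the end-effector pose matrix."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for row in dh_rows:
        T = matmul(T, dh_matrix(*row))
    return T

# hypothetical planar 2-link arm (links 0.3 m and 0.25 m), both joints at 90°
pose = forward_kinematics([(math.pi / 2, 0.0, 0.30, 0.0),
                           (math.pi / 2, 0.0, 0.25, 0.0)])
x, y = pose[0][3], pose[1][3]  # analytic result: (-0.25, 0.30)
```

For a planar arm this reduces to x = l1·cos θ1 + l2·cos(θ1 + θ2), y = l1·sin θ1 + l2·sin(θ1 + θ2), which the matrix chain reproduces.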
Spectroscopic Survey of Circumstellar Disks in Orion
NASA Astrophysics Data System (ADS)
Contreras, Maria; Hernandez, Jesus; Olguin, Lorenzo; Briceno, Cesar
2013-07-01
As the second stage of a project focused on characterizing candidate disk-bearing stars in Orion, we present a spectroscopic follow-up of a set of about 170 bright stars. The present set of stars was selected by their optical (UBVRI) and infrared behavior in different color-color and color-magnitude diagrams. Observations were carried out at the Observatorio Astronomico Nacional located at the Sierra San Pedro Martir in B.C., Mexico, and at the Observatorio Guillermo Haro in Cananea, Sonora, Mexico. Low-resolution spectra were obtained for all candidates in the sample. Using the SPTCLASS code, we have obtained spectral types and equivalent widths of the Li I 6707 and Hα lines for each of the stars. This project is a cornerstone of a large-scale survey aimed at obtaining stellar parameters in a homogeneous way using spectroscopic data. This work was partially supported by UNAM-PAPIIT grant IN-109311.
Constraints on CDM cosmology from galaxy power spectrum, CMB and SNIa evolution
NASA Astrophysics Data System (ADS)
Ferramacho, L. D.; Blanchard, A.; Zolnierowski, Y.
2009-05-01
Aims: We examine the constraints that can be obtained on standard cold dark matter models from the most commonly used data sets: CMB anisotropies, type Ia supernovae and the SDSS luminous red galaxies. We also examine how these constraints are widened when the equation of state parameter w and the curvature parameter Ωk are left as free parameters. Finally, we investigate the impact on these constraints of a possible form of evolution in SNIa intrinsic luminosity. Methods: We obtained our results from MCMC analysis using the full likelihood of each data set. Results: For the ΛCDM model, our “vanilla” model, cosmological parameters are tightly constrained and consistent with current estimates from various methods. When the dark energy parameter w is free, we find that the constraints remain mostly unchanged, i.e. changes are smaller than the 1-sigma uncertainties. Similarly, relaxing the assumption of a flat universe leads to nearly identical constraints on the dark energy density parameter Ω_Λ, the baryon density Ω_b, the optical depth τ, and the index of the power spectrum of primordial fluctuations n_S, with most one-sigma uncertainties better than 5%. More significant changes appear for other parameters: while preferred values are almost unchanged, uncertainties for the physical dark matter density Ω_ch^2, the Hubble constant H0 and σ8 are typically twice as large. The constraint on the age of the Universe, which is very accurate for the vanilla model, is the most degraded. We found that different methodological approaches to large-scale structure estimates lead to appreciable differences in preferred values and uncertainty widths. We found that possible evolution in SNIa intrinsic luminosity does not alter these constraints by much, except for w, for which the uncertainty is twice as large. At the same time, this possible evolution is severely constrained.
Conclusions: We conclude that systematic uncertainties for some estimated quantities are similar or larger than statistical ones.
Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J
2016-01-01
A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimate error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions, and a higher weight to priority zones. The objective for the monitoring network in this specific application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and for the selection of the optimal monitoring sites, a low-uncertainty estimate of these parameters was sought for the entire aquifer, with more certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter and a heuristic optimization method. Results show that when monitoring the 69 locations with the highest priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90% of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help to reduce monitoring costs by avoiding redundancy in data acquisition.
Ting, T O; Man, Ka Lok; Lim, Eng Gee; Leach, Mark
2014-01-01
In this work, a state-space battery model is derived mathematically to estimate the state-of-charge (SoC) of a battery system. Subsequently, a Kalman filter (KF) is applied to predict the dynamical behavior of the battery model. Results show an accurate prediction, as the accumulated error, in terms of root-mean-square (RMS), is very small. From this work, it is found that different sets of Q and R values (the KF's parameters) can be applied for better performance and hence a lower RMS error. This is the motivation for the application of a metaheuristic algorithm. Hence, the result is further improved by applying a genetic algorithm (GA) to tune the Q and R parameters of the KF. In an online application, a GA can be applied to obtain the optimal parameters of the KF before its application to a real plant (system). This simply means that the instantaneous response of the KF is not affected by the time-consuming GA, as this approach is applied only once to obtain the optimal parameters. The relevant workable MATLAB source codes are given in the appendix to ease future work and analysis in this area.
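A scalar version of the KF described above makes the role of Q and R concrete: they are exactly the knobs a GA would tune for a lower RMS error. This sketch uses a trivial constant-state "battery" observed through noisy readings, not the authors' state-space model; all values are assumptions for the example.

```python
import random

def kalman_1d(zs, a=1.0, h=1.0, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """Scalar Kalman filter: predict with x' = a*x, correct with z = h*x + v.
    q and r are the process/measurement noise variances a GA could tune."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        # predict
        x = a * x
        p = a * a * p + q
        # update with gain k
        k = p * h / (h * h * p + r)
        x = x + k * (z - h * x)
        p = (1.0 - k * h) * p
        estimates.append(x)
    return estimates

# constant true SoC of 0.8 observed through noisy readings (std 0.2)
rng = random.Random(0)
zs = [0.8 + rng.gauss(0.0, 0.2) for _ in range(200)]
est = kalman_1d(zs)
```

With q much smaller than r, the steady-state gain is small and the filter averages heavily over past readings; a GA tuning (q, r) would search for the pair minimizing the RMS error against reference data.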
Barry, U; Choubert, J-M; Canler, J-P; Héduit, A; Robin, L; Lessard, P
2012-01-01
This work suggests a procedure to correctly calibrate the parameters of a one-dimensional MBBR dynamic model in nitrification treatment. The study deals with the MBBR configuration with two reactors in series, one for carbon treatment and the other for nitrogen treatment. Because of the influence of the first reactor on the second one, the approach needs a specific calibration strategy. Firstly, a comparison between measured values and simulated ones obtained with default parameters has been carried out. Simulated values of filtered COD, NH(4)-N and dissolved oxygen are underestimated and nitrates are overestimated compared with observed data. Thus, nitrifying rate and oxygen transfer into the biofilm are overvalued. Secondly, a sensitivity analysis was carried out for parameters and for COD fractionation. It revealed three classes of sensitive parameters: physical, diffusional and kinetic. Then a calibration protocol of the MBBR dynamic model was proposed. It was successfully tested on data recorded at a pilot-scale plant and a calibrated set of values was obtained for four parameters: the maximum biofilm thickness, the detachment rate, the maximum autotrophic growth rate and the oxygen transfer rate.
Revisiting gamma-ray burst afterglows with time-dependent parameters
NASA Astrophysics Data System (ADS)
Yang, Chao; Zou, Yuan-Chuan; Chen, Wei; Liao, Bin; Lei, Wei-Hua; Liu, Yu
2018-02-01
The relativistic external shock model of gamma-ray burst (GRB) afterglows has been established with five free parameters, i.e., the total kinetic energy E, the equipartition parameters for electrons ε_e and for the magnetic field ε_B, the number density of the environment n and the index of the power-law distribution of shocked electrons p. Many modified models have been constructed to account for the variety of GRB afterglows, such as the wind-medium environment obtained by letting n change with radius, the energy injection model obtained by letting the kinetic energy change with time, and so on. In this paper, by assuming that all four parameters (except p) change with time, we obtain a set of formulas for the dynamics and radiation, which can be used as a reference for modeling GRB afterglows. Some interesting results are obtained. For example, in some spectral segments, the radiated flux density does not depend on the number density or the profile of the environment. As an application, through modeling the afterglow of GRB 060607A, we find that it can be interpreted in the framework of the time-dependent parameter model within a reasonable range.
Image parameters for maturity determination of a composted material containing sewage sludge
NASA Astrophysics Data System (ADS)
Kujawa, S.; Nowakowski, K.; Tomczak, R. J.; Boniecki, P.; Dach, J.
2013-07-01
Composting is one of the best methods for management of sewage sludge. In a properly conducted composting process it is important to identify early the moment at which the material reaches the young compost stage. The objective of this study was to determine parameters contained in images of samples of composted material that can be used to evaluate the degree of compost maturity. The study focused on two types of compost: sewage sludge with corn straw and sewage sludge with rapeseed straw. The photographing of the samples was carried out on a stand prepared for image acquisition using VIS, UV-A and mixed (VIS + UV-A) light. In the case of UV-A light, three values of the exposure time were used. The values of 46 parameters were estimated for each of the images extracted from the photographs of the composted material samples. Exemplary averaged values of selected parameters obtained from images of the composted material on successive sampling days are presented. All of the parameters obtained from the images form the basis for preparing the training, validation and test data sets necessary for the development of neural models for classification of the young compost stage.
Optimizing the Determination of Roughness Parameters for Model Urban Canopies
NASA Astrophysics Data System (ADS)
Huq, Pablo; Rahman, Auvi
2018-05-01
We present an objective optimization procedure to determine the roughness parameters for very rough boundary-layer flow over model urban canopies. For neutral stratification the mean velocity profile above a model urban canopy is described by the logarithmic law together with the set of roughness parameters of displacement height d, roughness length z_0, and friction velocity u_*. Traditionally, values of these roughness parameters are obtained by fitting the logarithmic law through (all) the data points comprising the velocity profile. The new procedure generates unique velocity profiles from subsets or combinations of the data points of the original velocity profile, after which all possible profiles are examined. Each of the generated profiles is fitted to the logarithmic law for a sequence of values of d, with the representative value of d obtained from the minima of the summed least-squares errors for all the generated profiles. The representative values of z_0 and u_* are identified by the peak in the bivariate histogram of z_0 and u_*. The methodology has been verified against laboratory datasets of flow above model urban canopies.
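The procedure described above can be sketched numerically: generate all sufficiently large subsets of a velocity profile, fit the logarithmic law for a sweep of candidate displacement heights, and take the representative d at the minimum of the summed least-squares error. The profile values and the d grid below are illustrative assumptions, not the authors' data.

```python
import numpy as np
from itertools import combinations

KAPPA = 0.4  # von Karman constant

# Synthetic mean-velocity profile above a model canopy (assumed values)
z = np.array([0.15, 0.20, 0.25, 0.30, 0.40, 0.50, 0.70, 1.00])  # heights (m)
d_true, z0_true, us_true = 0.07, 0.01, 0.3
u = (us_true / KAPPA) * np.log((z - d_true) / z0_true)

def fit_log_law(z_sub, u_sub, d):
    # For fixed d the log law is linear in ln(z - d):
    #   u = (u*/kappa) ln(z - d) - (u*/kappa) ln(z0)
    x = np.log(z_sub - d)
    slope, intercept = np.polyfit(x, u_sub, 1)
    u_star = slope * KAPPA
    z0 = np.exp(-intercept / slope)
    sse = np.sum((u_sub - (slope * x + intercept)) ** 2)
    return u_star, z0, sse

# Generate profiles from all subsets of >= 4 points and sweep candidate d;
# the representative d minimizes the summed least-squares error.
idx = range(len(z))
subsets = [list(c) for r in range(4, len(z) + 1) for c in combinations(idx, r)]
d_grid = np.linspace(0.0, 0.12, 61)
sse_total = [sum(fit_log_law(z[s], u[s], d)[2] for s in subsets) for d in d_grid]
d_best = d_grid[int(np.argmin(sse_total))]
u_star_best, z0_best, _ = fit_log_law(z, u, d_best)
```

With noiseless synthetic data the minimum falls at the true displacement height; with laboratory data the paper's bivariate histogram of z_0 and u_* would replace the single final fit.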
NASA Astrophysics Data System (ADS)
Sethuramalingam, Prabhu; Vinayagam, Babu Kupusamy
2016-07-01
A carbon-nanotube-mixed grinding wheel is used in the grinding process to analyze the surface characteristics of AISI D2 tool steel. Until now, no work has been carried out using a carbon-nanotube-based grinding wheel. Such a wheel has excellent thermal conductivity and good mechanical properties, which are used to improve the surface finish of the workpiece. In the present study, multi-response optimization of process parameters, namely the surface roughness and metal removal rate of the grinding process with single-wall carbon nanotube (CNT) mixed cutting fluids, is undertaken using an orthogonal array with grey relational analysis. Experiments are performed under the grinding conditions designated by the L9 orthogonal array. Based on the results of the grey relational analysis, a set of optimum grinding parameters is obtained. The significant machining parameters are found using the analysis-of-variance approach. An empirical model for the prediction of the output parameters has been developed using regression analysis, and the results are compared for grinding with and without the CNT grinding wheel.
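The grey relational step described above can be sketched as follows: normalize each response, compute grey relational coefficients against the ideal sequence, and average them into a grade per run. The nine response values for the L9 runs are hypothetical placeholders, not the paper's measurements.

```python
import numpy as np

# Two responses over an L9 array: surface roughness Ra (smaller-is-better)
# and metal removal rate MRR (larger-is-better); values are illustrative.
Ra  = np.array([0.42, 0.38, 0.45, 0.33, 0.36, 0.40, 0.31, 0.35, 0.39])
MRR = np.array([12.0, 14.5, 11.0, 16.0, 15.2, 13.1, 17.4, 15.8, 12.9])

def normalize(x, larger_better):
    span = x.max() - x.min()
    return (x - x.min()) / span if larger_better else (x.max() - x) / span

def grey_relational_grade(responses, zeta=0.5):
    coeffs = []
    for x, larger_better in responses:
        delta = 1.0 - normalize(x, larger_better)   # deviation from ideal
        coeffs.append((delta.min() + zeta * delta.max())
                      / (delta + zeta * delta.max()))
    return np.mean(coeffs, axis=0)                  # equal response weights

grade = grey_relational_grade([(Ra, False), (MRR, True)])
best_run = int(np.argmax(grade)) + 1                # 1-based experiment number
```

The run with the highest grade gives the multi-response optimum; ANOVA on the grades would then rank parameter significance.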
NASA Astrophysics Data System (ADS)
Azmi, H.; Haron, C. H. C.; Ghani, J. A.; Suhaily, M.; Yuzairi, A. R.
2018-04-01
The surface roughness (Ra) and delamination factor (Fd) of milled kenaf-reinforced plastic composite materials depend on the milling parameters (spindle speed, feed rate and depth of cut). Therefore, a study was carried out to investigate the relationship between the milling parameters and their effects on kenaf-reinforced plastic composite materials. The composite panels were fabricated using the vacuum-assisted resin transfer moulding (VARTM) method. A full factorial design of experiments was used as an initial step to screen the significance of the parameters on the defects using analysis of variance (ANOVA). If the curvature of the collected data is significant, response surface methodology (RSM) is then applied to obtain a quadratic modelling equation that is more reliable for expressing the optimization. Thus, the objective of this research is to obtain an optimum setting of milling parameters and modelling equations to minimize the surface roughness (Ra) and delamination factor (Fd) of milled kenaf-reinforced plastic composite materials. The spindle speed and feed rate contributed the most to the surface roughness and the delamination factor of the kenaf composite materials.
NASA Astrophysics Data System (ADS)
Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko
2014-05-01
Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands. For most of the country this led to over 15 hours of near-continuous precipitation, which resulted in total event accumulations exceeding 150 mm in the eastern part of the Netherlands. Such accumulations are among the largest sums ever recorded in this country and gave rise to local flooding. Measuring precipitation by weather radar within such mesoscale convective systems is known to be a challenge, since measurements are affected by multiple sources of error. For the current event the operational weather radar rainfall product estimated only about 30% of the actual amount of precipitation as measured by rain gauges. In the current presentation we try to identify what gave rise to such large underestimation. In general, weather radar measurement errors can be subdivided into two groups: 1) errors affecting the volumetric reflectivity measurements, and 2) errors related to the conversion of reflectivity values into rainfall intensity and attenuation estimates. To correct for the first group of errors, the quality of the weather radar reflectivity data was improved by successively correcting for 1) clutter and anomalous propagation, 2) radar calibration, 3) wet radome attenuation, 4) signal attenuation and 5) the vertical profile of reflectivity. Such consistent corrections are generally not performed by operational meteorological services. The results show a large improvement in the quality of the precipitation data; however, still only ~65% of the actually observed accumulations was estimated. To further improve the quality of the precipitation estimates, the second group of errors is corrected for by making use of disdrometer measurements taken in close vicinity of the radar.
Based on these data, the parameters of a normalized drop size distribution are estimated for the total event as well as for each precipitation type separately (convective, stratiform and undefined). These are then used to obtain coherent parameter sets for the radar reflectivity-rainfall rate (Z-R) and radar reflectivity-attenuation (Z-k) relationships, specifically applicable to this event. By applying a single parameter set to correct for both sources of error, the quality of the rainfall product improves further, reaching >80% of the observed accumulations. However, differentiating between precipitation types yields no better results than using the operational relationships. This leads to the question: how representative are local disdrometer observations for correcting large-scale weather radar measurements? To tackle this question, a Monte Carlo approach was used to generate >10000 sets of normalized drop size distribution parameters and to assess their impact on the estimated precipitation amounts. The results show that a large number of parameter sets yield improved precipitation estimates by the weather radar that closely resemble the observations. However, these optimal sets differ considerably from those obtained from the local disdrometer measurements.
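The Z-R conversion at the heart of the second error group can be sketched with a standard power law. The Marshall-Palmer coefficients (a=200, b=1.6) shown are textbook defaults; the event-specific fits derived from the disdrometer data would replace them.

```python
import numpy as np

# Power-law Z-R relation, Z = a * R^b, inverted to get rain rate from
# measured reflectivity (defaults are the classical Marshall-Palmer values).
def rain_rate(dbz, a=200.0, b=1.6):
    z_lin = 10.0 ** (np.asarray(dbz) / 10.0)   # dBZ -> linear Z (mm^6 m^-3)
    return (z_lin / a) ** (1.0 / b)            # rain rate (mm/h)

r40 = rain_rate(40.0)   # moderate-to-heavy rain at 40 dBZ
```

Varying a and b over Monte Carlo-sampled drop size distribution parameters, as in the study, changes the accumulations this conversion produces.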
An algorithm for surface smoothing with rational splines
NASA Technical Reports Server (NTRS)
Schiess, James R.
1987-01-01
Discussed is an algorithm for smoothing surfaces with spline functions containing tension parameters. The bivariate spline functions used are tensor products of univariate rational-spline functions. A distinct tension parameter corresponds to each rectangular strip defined by a pair of consecutive spline knots along either axis. Equations are derived for writing the bivariate rational spline in terms of functions and derivatives at the knots. Estimates of these values are obtained via weighted least squares subject to continuity constraints at the knots. The algorithm is illustrated on a set of terrain elevation data.
Data free inference with processed data products
Chowdhary, K.; Najm, H. N.
2014-07-12
Here, we consider the context of probabilistic inference of model parameters given error bars or confidence intervals on model output values, when the data is unavailable. We introduce a class of algorithms in a Bayesian framework, relying on maximum entropy arguments and approximate Bayesian computation methods, to generate consistent data with the given summary statistics. Once we obtain consistent data sets, we pool the respective posteriors, to arrive at a single, averaged density on the parameters. This approach allows us to perform accurate forward uncertainty propagation consistent with the reported statistics.
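A minimal sketch of this idea follows, assuming a toy linear model and normally distributed surrogate data consistent with a reported mean and error bar (the maximum-entropy choice given two moments). All names and numbers are illustrative, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model y = a * x observed at x = 2.0; only a reported mean and error
# bar for y are available (hypothetical values standing in for missing data).
y_mean, y_err, n_obs, x = 4.0, 0.5, 10, 2.0

a_grid = np.linspace(0.5, 3.5, 601)
pooled = np.zeros_like(a_grid)
n_sets = 200

for _ in range(n_sets):
    # Generate a data set consistent with the reported summary statistics
    y = rng.normal(y_mean, y_err, size=n_obs)
    # Gaussian likelihood over the parameter grid (flat prior)
    loglike = -0.5 * np.sum(
        (y[None, :] - a_grid[:, None] * x) ** 2 / y_err**2, axis=1)
    post = np.exp(loglike - loglike.max())
    pooled += post / post.sum()          # normalized posterior for this set

pooled /= n_sets                         # averaged posterior density over a
a_hat = a_grid[int(np.argmax(pooled))]
```

Pooling the per-set posteriors yields a single averaged density on the parameter, which can then drive forward uncertainty propagation.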
Bustamante, P; Pena, M A; Barra, J
2000-01-20
Sodium salts are often used in drug formulation but their partial solubility parameters are not available. Sodium alters the physical properties of the drug, and knowledge of these parameters would help to predict adhesion properties that cannot be estimated using the solubility parameters of the parent acid. This work tests the applicability of the modified extended Hansen method to determine partial solubility parameters of sodium salts of acidic drugs containing a single hydrogen bonding group (ibuprofen, sodium ibuprofen, benzoic acid and sodium benzoate). The method uses a regression analysis of the logarithm of the experimental mole fraction solubility of the drug against the partial solubility parameters of the solvents, using models with three and four parameters. The solubility of the drugs was determined in a set of solvents representative of several chemical classes, ranging from low to high solubility parameter values. The best results were obtained with the four-parameter model for the acidic drugs and with the three-parameter model for the sodium derivatives. The four-parameter model includes both a Lewis-acid and a Lewis-base term. Since the Lewis-acid properties of the sodium derivatives are blocked by sodium, the three-parameter model is recommended for this kind of compound. Comparison of the parameters obtained shows that sodium greatly changes the polar parameters whereas the dispersion parameter is not much affected. Consequently, the total solubility parameters of the salts are larger than those of the parent acids, in good agreement with the larger hydrophilicity expected from the introduction of sodium. The results indicate that the modified extended Hansen method can be applied to determine the partial solubility parameters of acidic drugs and their sodium salts.
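The regression step of such an extended Hansen-style approach can be sketched as an ordinary least-squares fit of ln x against the solvents' partial parameters. The solvent table and coefficients below are synthetic, chosen only to illustrate the fit, not the paper's data or exact model form.

```python
import numpy as np

# Hypothetical partial solubility parameters (dispersion, polar,
# hydrogen-bonding, MPa^0.5) for six solvents.
delta = np.array([
    [15.5,  2.0,  5.0],
    [16.0,  8.0, 10.0],
    [17.5, 11.0, 26.0],
    [15.8,  5.0,  7.0],
    [18.0, 12.0, 22.0],
    [16.5,  9.5, 14.0],
])
b_true = np.array([-12.0, 0.10, 0.20, 0.15])  # intercept + three slopes (assumed)

# Synthesize ln(mole-fraction solubility) from the assumed linear model,
# then recover the coefficients by regression (the three-parameter form).
X = np.column_stack([np.ones(len(delta)), delta])
ln_x = X @ b_true
coef, *_ = np.linalg.lstsq(X, ln_x, rcond=None)
```

A four-parameter variant would split the hydrogen-bonding column into separate Lewis-acid and Lewis-base terms.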
Gilliom, Robert J.; Helsel, Dennis R.
1986-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
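The log-probability regression method described above can be sketched on synthetic data: regress the logs of the uncensored observations against normal quantiles of their plotting positions, then fill the censored portion from the fitted lognormal's lower tail. The detection limit, sample, and plotting-position constants are illustrative assumptions.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
ppf = np.vectorize(NormalDist().inv_cdf)       # standard normal quantiles

# Synthetic trace-level sample: lognormal concentrations censored at a
# detection limit (all values assumed for illustration).
sample = rng.lognormal(mean=1.0, sigma=0.8, size=50)
dl = 2.0
uncensored = np.sort(sample[sample >= dl])
n = len(sample)
n_cens = int(np.sum(sample < dl))

# Regress logs of uncensored observations on z-scores of plotting positions
ranks = np.arange(n_cens + 1, n + 1)
z = ppf((ranks - 0.375) / (n + 0.25))          # Blom-type positions
slope, intercept = np.polyfit(z, np.log(uncensored), 1)

# Fill censored observations from the fitted distribution's lower tail
z_cens = ppf((np.arange(1, n_cens + 1) - 0.375) / (n + 0.25))
filled = np.exp(intercept + slope * z_cens)
est_mean = float(np.mean(np.concatenate([filled, uncensored])))
```

Distributional statistics (mean, standard deviation, quantiles) are then computed from the combined filled and uncensored values.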
Estimation of distributional parameters for censored trace-level water-quality data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliom, R.J.; Helsel, D.R.
1984-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification. 6 figs., 6 tabs.
Extreme data compression for the CMB
Zablocki, Alan; Dodelson, Scott
2016-04-28
We apply the Karhunen-Loève methods to cosmic microwave background (CMB) data sets, and show that we can recover the input cosmology and obtain the marginalized likelihoods in Λ cold dark matter cosmologies in under a minute, much faster than Markov chain Monte Carlo methods. This is achieved by forming a linear combination of the power spectra at each multipole l, and solving a system of simultaneous equations such that the Fisher matrix is locally unchanged. Instead of carrying out a full likelihood evaluation over the whole parameter space, we need to evaluate the likelihood only for the parameter of interest, with the data compression effectively marginalizing over all other parameters. The weighting vectors contain insight about the physical effects of the parameters on the CMB anisotropy power spectrum C_l. The shape and amplitude of these vectors give an intuitive feel for the physics of the CMB, the sensitivity of the observed spectrum to cosmological parameters, and the relative sensitivity of different experiments to cosmological parameters. We test this method on exact theory C_l as well as on a Wilkinson Microwave Anisotropy Probe (WMAP)-like CMB data set generated from a random realization of a fiducial cosmology, comparing the compression results to those from a full likelihood analysis using CosmoMC. Furthermore, after showing that the method works, we apply it to the temperature power spectrum from the WMAP seven-year data release, and discuss the successes and limitations of our method as applied to a real data set.
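The single-parameter core of such Karhunen-Loève (MOPED-style) compression can be sketched on a toy spectrum: weight the data by the noise-weighted derivative of the model mean, so one compressed number carries the Fisher information for that parameter. The power-law "spectrum", noise level, and fiducial values are stand-ins, not CMB physics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy spectrum model mu_l(A, n) = A * (l/100)^n with diagonal noise
ls = np.arange(2, 200)
A_fid, n_fid = 10.0, -0.5
sigma = 0.5 * np.ones(ls.size)            # noise standard deviation per l

def mu(A, n):
    return A * (ls / 100.0) ** n

# Weight vector for A: b proportional to C^{-1} dmu/dA, normalized so the
# compressed datum has unit noise variance.
dmu_dA = mu(1.0, n_fid)
b = dmu_dA / sigma**2
b = b / np.sqrt(np.sum(b**2 * sigma**2))

data = mu(A_fid, n_fid) + rng.normal(0.0, sigma)
y = float(b @ data)                       # one number instead of 198

# Estimate A by matching y to the compressed model prediction
A_grid = np.linspace(8.0, 12.0, 401)
y_model = np.array([b @ mu(A, n_fid) for A in A_grid])
A_hat = A_grid[int(np.argmin((y - y_model) ** 2))]
```

In the paper's multi-parameter setting, additional weighting vectors are constructed so the compression also marginalizes over the remaining parameters.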
Impact of uncertainties in free stream conditions on the aerodynamics of a rectangular cylinder
NASA Astrophysics Data System (ADS)
Mariotti, Alessandro; Shoeibi Omrani, Pejman; Witteveen, Jeroen; Salvetti, Maria Vittoria
2015-11-01
The BARC benchmark deals with the flow around a rectangular cylinder with a chord-to-depth ratio equal to 5. This flow configuration is of practical interest for civil and industrial structures and is characterized by massively separated flow and unsteadiness. In a recent review of BARC results, significant dispersion was observed in both experimental and numerical predictions of some flow quantities, which are extremely sensitive to the various uncertainties that may be present in experiments and simulations. Besides modeling and numerical errors, it is difficult in simulations to exactly reproduce the experimental conditions due to uncertainties in the set-up parameters, which sometimes cannot be exactly controlled or characterized. Probabilistic methods and URANS simulations are used to investigate the impact of uncertainties in the following set-up parameters: the angle of incidence, and the free stream longitudinal turbulence intensity and length scale. Stochastic collocation is employed to perform the probabilistic propagation of the uncertainty. The discretization and modeling errors are estimated by repeating the same analysis for different grids and turbulence models. The results obtained for different assumed PDFs of the set-up parameters are also compared.
Links between the charge model and bonded parameter force constants in biomolecular force fields
NASA Astrophysics Data System (ADS)
Cerutti, David S.; Debiec, Karl T.; Case, David A.; Chong, Lillian T.
2017-10-01
The ff15ipq protein force field is a fixed charge model built by automated tools based on the two charge sets of the implicitly polarized charge method: one set (appropriate for vacuum) for deriving bonded parameters and the other (appropriate for aqueous solution) for running simulations. The duality is intended to treat water-induced electronic polarization with an understanding that fitting data for bonded parameters will come from quantum mechanical calculations in the gas phase. In this study, we compare ff15ipq to two alternatives produced with the same fitting software and a further expanded data set but following more conventional methods for tailoring bonded parameters (harmonic angle terms and torsion potentials) to the charge model. First, ff15ipq-Qsolv derives bonded parameters in the context of the ff15ipq solution phase charge set. Second, ff15ipq-Vac takes ff15ipq's bonded parameters and runs simulations with the vacuum phase charge set used to derive those parameters. The IPolQ charge model and associated protocol for deriving bonded parameters are shown to be an incremental improvement over protocols that do not account for the material phases of each source of their fitting data. Both force fields incorporating the polarized charge set depict stable globular proteins and have varying degrees of success modeling the metastability of short (5-19 residues) peptides. In this particular case, ff15ipq-Qsolv increases stability in a number of α-helices, correctly obtaining 70% helical character in the K19 system at 275 K and showing appropriately diminishing content up to 325 K, but overestimating the helical fraction of AAQAA3 by 50% or more, forming long-lived α-helices in simulations of a β-hairpin, and increasing the likelihood that the disordered p53 N-terminal peptide will also form a helix.
This may indicate a systematic bias imparted by the ff15ipq-Qsolv parameter development strategy, which has the hallmarks of strategies used to develop other popular force fields, and may explain some of the need for manual corrections in this force field's evolution. In contrast, ff15ipq-Vac incorrectly depicts globular protein unfolding in numerous systems tested, including Trp cage, villin, lysozyme, and GB3, and does not perform any better than ff15ipq or ff15ipq-Qsolv in tests on short peptides. We analyze the free energy surfaces of individual amino acid dipeptides and the electrostatic potential energy surfaces of each charge model to explain the differences.
Taccheo, Stefano; Gebavi, Hrvoje; Monteville, Achille; Le Goffic, Olivier; Landais, David; Mechin, David; Tregoat, Denis; Cadier, Benoit; Robin, Thierry; Milanese, Daniel; Durrant, Tim
2011-09-26
We report on an extensive investigation of photodarkening in Yb-doped silica fibers. A set of similar fibers, covering a large Yb concentration range, was made so as to compare the photodarkening-induced losses. Careful measurements were made to ensure equal and uniform inversion for all the tested fibers. The results show that, with the specific set-up, the stretching parameter obtained through fitting has a very limited variation. This gives more meaning to the fitting parameters. The results tend to indicate a square-law dependence of the final saturated loss on the concentration of excited ions. We also demonstrate self-similarity of the loss evolution when experimental curves are simply normalized to the fitting parameters. This evidence of self-similarity also supports the possibility of introducing a preliminary figure of merit for Yb-doped fiber. This will allow the impact of photodarkening on laser/amplifier devices to be evaluated. © 2011 Optical Society of America
Simple method for quick estimation of aquifer hydrogeological parameters
NASA Astrophysics Data System (ADS)
Ma, C.; Li, Y. Y.
2017-08-01
Development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. To estimate aquifer parameters from partial unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a simple linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown show that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters. The proposed method can reliably identify the aquifer parameters from long-distance observed drawdowns and from early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
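The paper's specific fitting function is not reproduced here; as an analogous illustration of reducing well-function estimation to a linear regression, the Cooper-Jacob straight-line approximation to the Theis solution is linear in ln(t), so transmissivity and storativity follow directly from a regression on log-time. All values below are assumed.

```python
import numpy as np

# Cooper-Jacob approximation to the Theis solution:
#   s = (Q / (4 pi T)) * ln(2.25 T t / (r^2 S)),
# linear in ln(t): slope = Q/(4 pi T), intercept = slope * ln(2.25 T/(r^2 S)).
Q, r = 0.01, 30.0                    # pumping rate (m^3/s), observation radius (m)
T_true, S_true = 5e-3, 2e-4          # transmissivity (m^2/s), storativity (-)

t = np.logspace(2, 5, 20)            # late times (s) where the approximation holds
s = (Q / (4 * np.pi * T_true)) * np.log(2.25 * T_true * t / (r**2 * S_true))

slope, intercept = np.polyfit(np.log(t), s, 1)
T_est = Q / (4 * np.pi * slope)
S_est = 2.25 * T_est / r**2 * np.exp(-intercept / slope)
```

With drawdowns generated exactly from the straight-line model, the regression recovers T and S to machine precision; field data would scatter about the line.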
Snyder, James A; Abramyan, Tigran; Yancey, Jeremy A; Thyparambil, Aby A; Wei, Yang; Stuart, Steven J; Latour, Robert A
2012-12-01
Adsorption free energies for eight host-guest peptides (TGTG-X-GTGT, with X = N, D, G, K, F, T, W, and V) on two different silica surfaces [quartz (100) and silica glass] were calculated using umbrella sampling and replica exchange molecular dynamics and compared with experimental values determined by atomic force microscopy. Using the CHARMM force field, adsorption free energies were found to be overestimated (i.e., too strongly adsorbing) by about 5-9 kcal/mol compared to the experimental data for both types of silica surfaces. Peptide adsorption behavior for the silica glass surface was then adjusted using a modified version of the CHARMM program, which we call dual force-field CHARMM, which allows separate sets of nonbonded parameters (i.e., partial charge and Lennard-Jones parameters) to be used to represent intra-phase and inter-phase interactions within a given molecular system. Using this program, interfacial force field (IFF) parameters for the peptide-silica glass systems were corrected to obtain adsorption free energies within about 0.5 kcal/mol of their respective experimental values, while IFF tuning for the quartz (100) surface remains for future work. The tuned IFF parameter set for silica glass will subsequently be used for simulations of protein adsorption behavior on silica glass with greater confidence in the balance between relative adsorption affinities of amino acid residues and the aqueous solution for the silica glass surface.
Snyder, James A.; Abramyan, Tigran; Yancey, Jeremy A.; Thyparambil, Aby A.; Wei, Yang; Stuart, Steven J.; Latour, Robert A.
2012-01-01
Adsorption free energies for eight host–guest peptides (TGTG-X-GTGT, with X = N, D, G, K, F, T, W, and V) on two different silica surfaces [quartz (100) and silica glass] were calculated using umbrella sampling and replica exchange molecular dynamics and compared with experimental values determined by atomic force microscopy. Using the CHARMM force field, adsorption free energies were found to be overestimated (i.e., too strongly adsorbing) by about 5–9 kcal/mol compared to the experimental data for both types of silica surfaces. Peptide adsorption behavior for the silica glass surface was then adjusted using a modified version of the CHARMM program, which we call dual force-field CHARMM, which allows separate sets of nonbonded parameters (i.e., partial charge and Lennard-Jones parameters) to be used to represent intra-phase and inter-phase interactions within a given molecular system. Using this program, interfacial force field (IFF) parameters for the peptide-silica glass systems were corrected to obtain adsorption free energies within about 0.5 kcal/mol of their respective experimental values, while IFF tuning for the quartz (100) surface remains for future work. The tuned IFF parameter set for silica glass will subsequently be used for simulations of protein adsorption behavior on silica glass with greater confidence in the balance between relative adsorption affinities of amino acid residues and the aqueous solution for the silica glass surface. PMID:22941539
Cooperative inversion of magnetotelluric and seismic data sets
NASA Astrophysics Data System (ADS)
Markovic, M.; Santos, F.
2012-04-01
Milenko Markovic, Fernando Monteiro Santos (IDL, Faculdade de Ciências da Universidade de Lisboa, 1749-016 Lisboa). Inversion of a single geophysical data set has well-known limitations due to the non-linearity of the fields and the non-uniqueness of the model. There is a growing need, both in academia and in industry, to use two or more different data sets and thus obtain the subsurface property distribution. In our case, we are dealing with magnetotelluric and seismic data sets. In our approach, we are developing an algorithm based on the fuzzy c-means clustering technique for pattern recognition of geophysical data. Separate inversions are performed at every step, and information is exchanged for model integration. Interrelationships between parameters from different models are not required in analytical form. We are investigating how different numbers of clusters affect the zonation and the spatial distribution of parameters. In our study, optimization in fuzzy c-means clustering (for magnetotelluric and seismic data) is compared for two cases: first alternating optimization, and then a hybrid method (alternating optimization + quasi-Newton). Acknowledgment: This work is supported by FCT Portugal.
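The alternating-optimization fuzzy c-means step can be sketched on synthetic joint features; the data, cluster count, and fuzzifier m below are assumptions standing in for co-located resistivity and velocity model parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

def fuzzy_c_means(X, c, m=2.0, n_iter=100):
    """Alternating-optimization fuzzy c-means (illustrative sketch)."""
    U = rng.dirichlet(np.ones(c), size=len(X))        # random memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)      # membership update
    return U, centers

# Synthetic joint (resistivity-like, velocity-like) features for two zones
X = np.vstack([
    rng.normal([1.0, 2.0], 0.1, size=(50, 2)),
    rng.normal([3.0, 4.5], 0.1, size=(50, 2)),
])
U, centers = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)   # crisp zonation from fuzzy memberships
```

In a cooperative inversion, the cluster centers and memberships obtained after each separate inversion step would guide the model integration.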
Interaction of cadmium with phosphate on goethite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venema, P.; Hiemstra, T.; Riemsdijk, W.H. van
1997-08-01
Interactions between different ions are of importance in understanding chemical processes in natural systems. In this study simultaneous adsorption of phosphate and cadmium on goethite is studied in detail. The charge distribution (CD)-multisite complexation (MUSIC) model has been successful in describing extended data sets of cadmium adsorption and phosphate adsorption on goethite. In this study, the parameters of this model for these two data sets were combined to describe a new data set of simultaneous adsorption of cadmium and phosphate on goethite. Attention is focused on the surface speciation of cadmium. With the extra information that can be obtained from the interaction experiments, the cadmium adsorption model is refined. For a perfect description of the data, the singly coordinated surface groups at the 110 face of goethite were assumed to form both monodentate and bidentate surface species with cadmium. The CD-MUSIC model is able to describe data sets of both simultaneous and single adsorption of cadmium and phosphate with the same parameters. The model calculations confirmed the idea that only singly coordinated surface groups are reactive for specific ion binding.
Optimal Control for Fast and Robust Generation of Entangled States in Anisotropic Heisenberg Chains
NASA Astrophysics Data System (ADS)
Zhang, Xiong-Peng; Shao, Bin; Zou, Jian
2017-05-01
Motivated by some recent results of optimal control (OC) theory, we study anisotropic XXZ Heisenberg spin-1/2 chains with control fields acting on a single spin, with the aim of exploring how a maximally entangled state can be prepared. To achieve this goal, we use a numerical optimization algorithm (the Krotov algorithm, which was shown to be capable of reaching the quantum speed limit) to search for an optimal set of control parameters, and then obtain OC pulses corresponding to the target fidelity. We find that the minimum time for implementing our target state depends on the anisotropy parameter Δ of the model. Finally, we analyze the robustness of the obtained results for the optimal fidelities and the effectiveness of the Krotov method under some realistic conditions.
Sanz, J M; Saiz, J M; González, F; Moreno, F
2011-07-20
In this research, the polar decomposition (PD) method is applied to experimental Mueller matrices (MMs) measured on two-dimensional microstructured surfaces. Polarization information is expressed through a set of parameters with an easier physical interpretation. It is shown that, by evaluating the first derivative of the retardation parameter δ, a clear indication of the presence of defects either built on or dug into the scattering flat surface (a silicon wafer in our case) can be obtained. Although the rule of thumb thus obtained is established through PD, it can be easily implemented in conventional surface polarimetry. These results constitute an example of the capabilities of the PD approach to MM analysis, and show a direct application in surface characterization. © 2011 Optical Society of America
Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks.
Rumschinski, Philipp; Borchers, Steffen; Bosio, Sandro; Weismantel, Robert; Findeisen, Rolf
2010-05-25
Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process, and to benefit from the analytical tools at hand. In this work we present a set-based framework that allows us to discriminate between competing model hypotheses and to provide guaranteed outer estimates of the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. The practicability of our approach is illustrated with two case studies. The first study shows that our approach allows us to conclusively rule out wrong model hypotheses. The second study focuses on parameter estimation, and shows that the proposed method allows us to evaluate the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates.
NASA Astrophysics Data System (ADS)
Song, Huan; Hu, Yaogai; Jiang, Chunhua; Zhou, Chen; Zhao, Zhengyu; Zou, Xianjian
2016-12-01
Scaling the oblique ionogram plays an important role in obtaining the ionospheric structure at the midpoint of the oblique sounding path. This paper proposes an automatic scaling method to extract the trace and parameters of the oblique ionogram based on a hybrid genetic algorithm (HGA). The 10 extracted parameters come from the F2 and Es layers, and include the maximum observation frequency, critical frequency, and virtual height. The method adopts the quasi-parabolic (QP) model to describe the F2 layer's electron density profile, which is used to synthesize the trace. It uses the secant theorem, Martyn's equivalent path theorem, image processing techniques, and echo characteristics to determine best-fit values for seven of the parameters and initial values for the three QP-model parameters, which define the search spaces supplied to the HGA. The HGA then searches those spaces for the three parameters' best-fit values based on the fitness between the synthesized trace and the real trace. To verify the performance of the method, 240 oblique ionograms were scaled and the results compared with manual scaling and with the inversion results of the corresponding vertical ionograms. The comparison shows that the scaling results are accurate, or at least adequate, 60-90% of the time.
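A sketch of the quasi-parabolic (QP) layer at the heart of the trace synthesis: the plasma frequency fN(r) peaks at the critical frequency fc at radius rm and falls to zero at the layer base rb = rm − ym. The F2-layer values below are representative, not taken from the paper.

```python
# Hedged sketch of the QP layer used to synthesize oblique traces.
import math

def qp_plasma_frequency(r, fc, rm, ym):
    """QP profile: fN^2 = fc^2 * (1 - ((r - rm)/ym)^2 * (rb/r)^2), rb = rm - ym."""
    rb = rm - ym
    val = fc**2 * (1.0 - ((r - rm) / ym) ** 2 * (rb / r) ** 2)
    return math.sqrt(val) if val > 0 else 0.0

# Representative F2 layer: fc = 8 MHz peaking at rm = 6670 km, ym = 100 km.
fc, rm, ym = 8.0, 6670.0, 100.0
print(qp_plasma_frequency(rm, fc, rm, ym))        # fc at the peak → 8.0
print(qp_plasma_frequency(rm - ym, fc, rm, ym))   # zero at the base → 0.0
```

An HGA fitness function would compare a trace synthesized from such a profile with the recorded echoes.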
Trushkov, V F; Perminov, K A; Sapozhnikova, V V; Ignatova, O L
2013-01-01
The connection between the thermodynamic properties and toxicity parameters of chemical substances was determined. The data obtained are used for toxicity evaluation and hygienic rate setting of chemical compounds. A relationship between the enthalpy and toxicity of chemical compounds has been established. Orthogonal planning of the experiment was carried out in the course of the investigations. An equation of unified hygienic rate setting under combined, complex, and conjunct influence on the organism is presented, together with the prospects of toxicity determination and the methodology of unified hygienic rate setting under such influence.
Theoretical study of the XP3 (X = Al, B, Ga) clusters
NASA Astrophysics Data System (ADS)
Ueno, Leonardo T.; Lopes, Cinara; Malaspina, Thaciana; Roberto-Neto, Orlando; Canuto, Sylvio; Machado, Francisco B. C.
2012-05-01
The lowest singlet and triplet states of the AlP3, GaP3 and BP3 molecules with Cs, C2v and C3v symmetries were characterized using the B3LYP functional and the aug-cc-pVTZ and aug-cc-pVQZ correlation-consistent basis sets. Geometrical parameters and vibrational frequencies were calculated and compared with existing experimental and theoretical data. Relative energies were obtained with single-point CCSD(T) calculations using the aug-cc-pVTZ, aug-cc-pVQZ and aug-cc-pV5Z basis sets, and then extrapolated to the complete basis set (CBS) limit.
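The CBS extrapolation step can be made concrete with the common three-point exponential form E(X) = E_CBS + B·exp(−aX) over cardinal numbers X = 3, 4, 5 (aug-cc-pVTZ/QZ/5Z); the closed-form limit follows from the ratio of successive energy differences. The energies below are synthetic, not the paper's CCSD(T) values.

```python
# Sketch of a three-point exponential CBS extrapolation.
import math

def cbs_extrapolate(e3, e4, e5):
    """Closed-form E_CBS from consecutive cardinal numbers X = 3, 4, 5."""
    q = (e4 - e5) / (e3 - e4)          # equals exp(-a)
    return e5 - q * (e4 - e5) / (1.0 - q)

# Synthetic test: build E(X) from a known limit and recover it.
e_cbs, b, a = -100.0, 0.5, 1.3
energies = [e_cbs + b * math.exp(-a * x) for x in (3, 4, 5)]
print(cbs_extrapolate(*energies))  # → -100.0 (up to roundoff)
```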
Bowd, Christopher; Medeiros, Felipe A.; Zhang, Zuohua; Zangwill, Linda M.; Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.; Weinreb, Robert N.; Goldbaum, Michael H.
2010-01-01
Purpose To classify healthy and glaucomatous eyes using relevance vector machine (RVM) and support vector machine (SVM) learning classifiers trained on retinal nerve fiber layer (RNFL) thickness measurements obtained by scanning laser polarimetry (SLP). Methods Seventy-two eyes of 72 healthy control subjects (average age = 64.3 ± 8.8 years, visual field mean deviation =−0.71 ± 1.2 dB) and 92 eyes of 92 patients with glaucoma (average age = 66.9 ± 8.9 years, visual field mean deviation =−5.32 ± 4.0 dB) were imaged with SLP with variable corneal compensation (GDx VCC; Laser Diagnostic Technologies, San Diego, CA). RVM and SVM learning classifiers were trained and tested on SLP-determined RNFL thickness measurements from 14 standard parameters and 64 sectors (approximately 5.6° each) obtained in the circumpapillary area under the instrument-defined measurement ellipse (total 78 parameters). Tenfold cross-validation was used to train and test RVM and SVM classifiers on unique subsets of the full 164-eye data set and areas under the receiver operating characteristic (AUROC) curve for the classification of eyes in the test set were generated. AUROC curve results from RVM and SVM were compared to those for 14 SLP software-generated global and regional RNFL thickness parameters. Also reported was the AUROC curve for the GDx VCC software-generated nerve fiber indicator (NFI). Results The AUROC curves for RVM and SVM were 0.90 and 0.91, respectively, and increased to 0.93 and 0.94 when the training sets were optimized with sequential forward and backward selection (resulting in reduced dimensional data sets). AUROC curves for optimized RVM and SVM were significantly larger than those for all individual SLP parameters. The AUROC curve for the NFI was 0.87. Conclusions Results from RVM and SVM trained on SLP RNFL thickness measurements are similar and provide accurate classification of glaucomatous and healthy eyes. 
RVM may be preferable to SVM, because it provides a Bayesian-derived probability of glaucoma as an output. These results suggest that these machine learning classifiers show good potential for glaucoma diagnosis. PMID:15790898
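The AUROC figures quoted above are equivalent to the Mann-Whitney statistic: the probability that a randomly chosen glaucomatous eye receives a higher classifier score than a randomly chosen healthy eye, counting ties as one half. A minimal sketch with invented scores and labels:

```python
# Mann-Whitney formulation of the AUROC (labels: 1 = glaucoma, 0 = healthy).

def auroc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]
print(auroc(scores, labels))  # → 0.888... (8/9)
```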
Agile convolutional neural network for pulmonary nodule classification using CT images.
Zhao, Xinzhuo; Liu, Liyao; Qi, Shouliang; Teng, Yueyang; Li, Jianhua; Qian, Wei
2018-04-01
Distinguishing benign from malignant pulmonary nodules in CT images is critical for their precise diagnosis and treatment. A new Agile convolutional neural network (CNN) framework is proposed to overcome the challenges of a small-scale medical image database and the small size of the nodules, and it improves the performance of pulmonary nodule classification using CT images. A hybrid CNN of LeNet and AlexNet is constructed by combining the layer settings of LeNet with the parameter settings of AlexNet. A dataset of 743 CT image nodule samples is built from the 1018 CT scans of LIDC to train and evaluate the Agile CNN model. By adjusting the kernel size, learning rate, and other factors, the effect of these parameters on the performance of the CNN model is investigated, and an optimized setting of the CNN is finally obtained. After finely optimizing the settings of the CNN, the estimation accuracy and the area under the curve reach 0.822 and 0.877, respectively. The accuracy of the CNN depends significantly on the kernel size, learning rate, training batch size, dropout, and weight initialization. The best performance is achieved when the kernel size is set to [Formula: see text], the learning rate is 0.005, the batch size is 32, and dropout and Gaussian initialization are used. This competitive performance demonstrates that the proposed CNN framework and parameter-optimization strategy are suitable for pulmonary nodule classification characterized by small medical datasets and small targets. The classification model might help diagnose and treat pulmonary nodules effectively.
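The parameter sweep described above amounts to evaluating a grid of configurations. A skeleton of such a sweep, with an assumed search space mirroring the factors the paper varies and a placeholder scoring function standing in for actual training and validation:

```python
# Hyperparameter-grid skeleton; evaluate() is a stand-in you would replace
# with training/validating the CNN on your own data.
from itertools import product

space = {
    "kernel_size":   [3, 5, 7],
    "learning_rate": [0.001, 0.005, 0.01],
    "batch_size":    [16, 32, 64],
    "dropout":       [True, False],
    "init":          ["gaussian", "xavier"],
}

def evaluate(cfg):
    """Placeholder score; pretends lr = 0.005 is best, as in the paper."""
    return -abs(cfg["learning_rate"] - 0.005)

configs = [dict(zip(space, values)) for values in product(*space.values())]
best = max(configs, key=evaluate)
print(len(configs), best["learning_rate"])  # → 108 0.005
```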
Limit on the photon mass deduced from Pioneer-10 observations of Jupiter's magnetic field
NASA Technical Reports Server (NTRS)
Davis, L., Jr.; Goldhaber, A. S.; Nieto, M. M.
1975-01-01
An analysis is presented of the Pioneer-10 data on Jupiter's magnetic field in which the mass of the photon was treated as a free parameter. An upper limit of 8 × 10⁻⁴⁹ grams was set for the photon mass. This is the smallest limit so far obtained from direct measurements.
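As a back-of-envelope check of the scale involved, the reduced Compton wavelength ħ/(mc) corresponding to this mass limit works out to a few hundred thousand kilometres, several Jupiter radii, which is why a planetary magnetic field probed in situ is a sensitive test:

```python
# Reduced Compton wavelength for the quoted photon-mass limit.
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
m = 8e-49 * 1e-3         # grams -> kg
lam = hbar / (m * c)
print(f"{lam:.2e} m")    # a few x 10^8 m, i.e. several Jupiter radii
```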
Item Estimates under Low-Stakes Conditions: How Should Omits Be Treated?
ERIC Educational Resources Information Center
DeMars, Christine
Using data from a pilot test of science and math from students in 30 high schools, item difficulties were estimated with a one-parameter model (partial-credit model for the multi-point items). Some items were multiple-choice items, and others were constructed-response items (open-ended). Four sets of estimates were obtained: estimates for males…
Global Seabed Materials and Habitats Mapped: The Computational Methods
NASA Astrophysics Data System (ADS)
Jenkins, C. J.
2016-02-01
What the seabed is made of has proven difficult to map on the scale of whole ocean basins. Direct sampling and observation can be augmented with proxy-parameter methods such as acoustics. Both avenues are essential to obtain enough detail and coverage, and also to validate the mapping methods. We focus on direct observations such as samples, photo and video, probes, diver and sub reports, and surveyed features. These are often in word-descriptive form: over 85% of the records for site materials are in this form, whether as sample/view descriptions or classifications, or described parameters such as consolidation, color, odor, structures and components. Descriptions are absolutely necessary for unusual materials and for processes - in other words, for research. The dbSEABED project not only has the largest collection of seafloor materials data worldwide, but it uses advanced computational methods to obtain the best possible coverage and detail. Included in those techniques are linguistic text analysis (e.g., Natural Language Processing, NLP), fuzzy set theory (FST), and machine learning (ML, e.g., Random Forest). These techniques allow efficient and accurate import of huge datasets, thereby making the most of the existing data. They merge quantitative and qualitative types of data for rich parameter sets, and extrapolate where the data are sparse for best map production. The dbSEABED data resources are now very widely used worldwide in oceanographic research, environmental management, the geosciences, engineering and survey.
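A toy illustration of the fuzzy-set step (the vocabulary and membership weights are assumptions, not dbSEABED's actual dictionaries): a word-based seabed description is turned into a normalized membership vector over material classes, which can then be merged with quantitative data.

```python
# Assumed mini-vocabulary mapping descriptive terms to fuzzy class memberships.
TERMS = {
    "mud":    {"mud": 1.0},
    "muddy":  {"mud": 0.4},
    "sand":   {"sand": 1.0},
    "sandy":  {"sand": 0.4},
    "gravel": {"gravel": 1.0},
}

def memberships(description):
    """Accumulate and normalize class memberships from a description string."""
    acc = {}
    for word in description.lower().split():
        for cls, m in TERMS.get(word, {}).items():
            acc[cls] = acc.get(cls, 0.0) + m
    total = sum(acc.values()) or 1.0
    return {cls: m / total for cls, m in acc.items()}

print(memberships("muddy sand"))  # mostly sand, with a minor mud component
```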
Estimating clinical chemistry reference values based on an existing data set of unselected animals.
Dimauro, Corrado; Bonelli, Piero; Nicolussi, Paola; Rassu, Salvatore P G; Cappio-Borlino, Aldo; Pulina, Giuseppe
2008-11-01
In an attempt to standardise the determination of biological reference values, the International Federation of Clinical Chemistry (IFCC) has published a series of recommendations on developing reference intervals. The IFCC recommends the use of an a priori sampling of at least 120 healthy individuals. However, collecting and analysing such a large number of samples is expensive, time-consuming and not always feasible, especially in veterinary medicine. In this paper, an alternative (a posteriori) method is described and used to determine reference intervals for biochemical parameters of farm animals from an existing laboratory data set. The method was based on the detection and removal of outliers to obtain, from the existing data set, a large sample of animals likely to be healthy. This allowed the estimation of reliable reference intervals for biochemical parameters in Sarda dairy sheep. The method may also prove useful for determining reference intervals for different species, ages and genders.
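The a-posteriori idea can be sketched in a few lines, with Tukey fences as a simple stand-in for the paper's outlier-detection step: strip outliers from the unselected data set, then take the central 95% of the remainder as the reference interval. The input values below are synthetic.

```python
# Sketch: outlier trimming followed by a percentile reference interval.

def tukey_trim(values, k=1.5):
    """Drop values outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR]."""
    s = sorted(values)
    q1 = s[len(s) // 4]
    q3 = s[(3 * len(s)) // 4]
    iqr = q3 - q1
    return [v for v in values if q1 - k * iqr <= v <= q3 + k * iqr]

def reference_interval(values):
    """Central 95% (2.5th-97.5th percentile) of the trimmed sample."""
    s = sorted(tukey_trim(values))
    lo = s[max(0, round(0.025 * (len(s) - 1)))]
    hi = s[round(0.975 * (len(s) - 1))]
    return lo, hi

values = [50 + (i % 21) for i in range(200)] + [300, -100]  # two gross outliers
lo, hi = reference_interval(values)
print(lo, hi)  # → 50 70: the outliers no longer distort the interval
```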
Kosmulski, Marek
2012-01-01
The numerical values of points of zero charge (PZC, obtained by potentiometric titration) and of isoelectric points (IEP) of various materials reported in the literature have been analyzed. In sets of results reported for the same chemical compound (i.e., the same chemical formula and crystallographic structure), the IEP are relatively consistent. In contrast, in materials other than metal oxides, the sets of PZC are inconsistent. In view of the inconsistency in the sets of PZC and of the discrepancies between PZC and IEP reported for the same material, it seems that IEP is more suitable than PZC as the unique number characterizing the pH-dependent surface charging of materials other than metal oxides. The present approach is opposite to the usual approach, in which the PZC and IEP are considered as two equally important parameters characterizing the pH-dependent surface charging of materials other than metal oxides. Copyright © 2012 Elsevier B.V. All rights reserved.
Automatic Differentiation in Quantum Chemistry with Applications to Fully Variational Hartree-Fock.
Tamayo-Mendoza, Teresa; Kreisbeck, Christoph; Lindh, Roland; Aspuru-Guzik, Alán
2018-05-23
Automatic differentiation (AD) is a powerful tool that allows calculating derivatives of implemented algorithms with respect to all of their parameters up to machine precision, without the need to explicitly add any additional functions. Thus, AD has great potential in quantum chemistry, where gradients are omnipresent but also difficult to obtain, and researchers typically spend a considerable amount of time finding suitable analytical forms when implementing derivatives. Here, we demonstrate that AD can be used to compute gradients with respect to any parameter throughout a complete quantum chemistry method. We present DiffiQult, a Hartree-Fock implementation, entirely differentiated with the use of AD tools. DiffiQult is a software package written in plain Python with minimal deviation from standard code which illustrates the capability of AD to save human effort and time in implementations of exact gradients in quantum chemistry. We leverage the obtained gradients to optimize the parameters of one-particle basis sets in the context of the floating Gaussian framework.
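A toy forward-mode AD (dual numbers) in the spirit described above, standing in for DiffiQult's full Hartree-Fock gradients: optimize the exponent of a single s-type Gaussian for the hydrogen atom, whose variational energy has the closed form E(a) = 1.5a − 2√(2a/π), minimized at a = 8/(9π) with E ≈ −0.4244 hartree. The Dual class and step size are illustrative choices.

```python
# Forward-mode AD with dual numbers; the .eps field carries dE/da exactly.
import math

class Dual:
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps
    def __add__(self, o): return Dual(self.val + o.val, self.eps + o.eps)
    def __sub__(self, o): return Dual(self.val - o.val, self.eps - o.eps)
    def __mul__(self, o): return Dual(self.val * o.val,
                                      self.val * o.eps + self.eps * o.val)

def dsqrt(x):
    r = math.sqrt(x.val)
    return Dual(r, x.eps / (2.0 * r))

def energy(a):
    """Variational energy of hydrogen with one s-Gaussian; a is a Dual."""
    return Dual(1.5) * a - Dual(2.0) * dsqrt(Dual(2.0 / math.pi) * a)

a = 1.0
for _ in range(200):                      # plain gradient descent
    g = energy(Dual(a, 1.0)).eps          # dE/da read off the dual part
    a -= 0.1 * g
print(round(a, 4), round(energy(Dual(a)).val, 4))  # → 0.2829 -0.4244
```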
Event generator tunes obtained from underlying event and multiparton scattering measurements.
Khachatryan, V; Sirunyan, A M; Tumasyan, A; Adam, W; Asilar, E; Bergauer, T; Brandstetter, J; Brondolin, E; Dragicevic, M; Erö, J; Friedl, M; Frühwirth, R; Ghete, V M; Hartl, C; Hörmann, N; Hrubec, J; Jeitler, M; Knünz, V; König, A; Krammer, M; Krätschmer, I; Liko, D; Matsushita, T; Mikulec, I; Rabady, D; Rahbaran, B; Rohringer, H; Schieck, J; Schöfbeck, R; Strauss, J; Treberer-Treberspurg, W; Waltenberger, W; Wulz, C-E; Mossolov, V; Shumeiko, N; Suarez Gonzalez, J; Alderweireldt, S; Cornelis, T; De Wolf, E A; Janssen, X; Knutsson, A; Lauwers, J; Luyckx, S; Van De Klundert, M; Van Haevermaet, H; Van Mechelen, P; Van Remortel, N; Van Spilbeeck, A; Abu Zeid, S; Blekman, F; D'Hondt, J; Daci, N; De Bruyn, I; Deroover, K; Heracleous, N; Keaveney, J; Lowette, S; Moreels, L; Olbrechts, A; Python, Q; Strom, D; Tavernier, S; Van Doninck, W; Van Mulders, P; Van Onsem, G P; Van Parijs, I; Barria, P; Brun, H; Caillol, C; Clerbaux, B; De Lentdecker, G; Fasanella, G; Favart, L; Grebenyuk, A; Karapostoli, G; Lenzi, T; Léonard, A; Maerschalk, T; Marinov, A; Perniè, L; Randle-Conde, A; Seva, T; Vander Velde, C; Yonamine, R; Vanlaer, P; Yonamine, R; Zenoni, F; Zhang, F; Adler, V; Beernaert, K; Benucci, L; Cimmino, A; Crucy, S; Dobur, D; Fagot, A; Garcia, G; Gul, M; Mccartin, J; Ocampo Rios, A A; Poyraz, D; Ryckbosch, D; Salva, S; Sigamani, M; Tytgat, M; Van Driessche, W; Yazgan, E; Zaganidis, N; Basegmez, S; Beluffi, C; Bondu, O; Brochet, S; Bruno, G; Caudron, A; Ceard, L; Da Silveira, G G; Delaere, C; Favart, D; Forthomme, L; Giammanco, A; Hollar, J; Jafari, A; Jez, P; Komm, M; Lemaitre, V; Mertens, A; Musich, M; Nuttens, C; Perrini, L; Pin, A; Piotrzkowski, K; Popov, A; Quertenmont, L; Selvaggi, M; Vidal Marono, M; Beliy, N; Hammad, G H; Júnior, W L Aldá; Alves, F L; Alves, G A; Brito, L; Correa Martins Junior, M; Hamer, M; Hensel, C; Moraes, A; Pol, M E; Rebello Teles, P; Belchior Batista Das Chagas, E; Carvalho, W; Chinellato, J; Custódio, A; Da Costa, E M; De Jesus Damiao, D; 
De Oliveira Martins, C; Fonseca De Souza, S; Huertas Guativa, L M; Malbouisson, H; Matos Figueiredo, D; Mora Herrera, C; Mundim, L; Nogima, H; Prado Da Silva, W L; Santoro, A; Sznajder, A; Tonelli Manganote, E J; Vilela Pereira, A; Ahuja, S; Bernardes, C A; De Souza Santos, A; Dogra, S; Fernandez Perez Tomei, T R; Gregores, E M; Mercadante, P G; Moon, C S; Novaes, S F; Padula, Sandra S; Romero Abad, D; Ruiz Vargas, J C; Aleksandrov, A; Hadjiiska, R; Iaydjiev, P; Rodozov, M; Stoykova, S; Sultanov, G; Vutova, M; Dimitrov, A; Glushkov, I; Litov, L; Pavlov, B; Petkov, P; Ahmad, M; Bian, J G; Chen, G M; Chen, H S; Chen, M; Cheng, T; Du, R; Jiang, C H; Plestina, R; Romeo, F; Shaheen, S M; Spiezia, A; Tao, J; Wang, C; Wang, Z; Zhang, H; Asawatangtrakuldee, C; Ban, Y; Li, Q; Liu, S; Mao, Y; Qian, S J; Wang, D; Xu, Z; Avila, C; Cabrera, A; Chaparro Sierra, L F; Florez, C; Gomez, J P; Gomez Moreno, B; Sanabria, J C; Godinovic, N; Lelas, D; Puljak, I; Ribeiro Cipriano, P M; Antunovic, Z; Kovac, M; Brigljevic, V; Kadija, K; Luetic, J; Micanovic, S; Sudic, L; Attikis, A; Mavromanolakis, G; Mousa, J; Nicolaou, C; Ptochos, F; Razis, P A; Rykaczewski, H; Bodlak, M; Finger, M; Finger, M; Abdelalim, A A; Awad, A; Mahrous, A; Mohammed, Y; Radi, A; Calpas, B; Kadastik, M; Murumaa, M; Raidal, M; Tiko, A; Veelken, C; Eerola, P; Pekkanen, J; Voutilainen, M; Härkönen, J; Karimäki, V; Kinnunen, R; Lampén, T; Lassila-Perini, K; Lehti, S; Lindén, T; Luukka, P; Mäenpää, T; Peltola, T; Tuominen, E; Tuominiemi, J; Tuovinen, E; Wendland, L; Talvitie, J; Tuuva, T; Besancon, M; Couderc, F; Dejardin, M; Denegri, D; Fabbro, B; Faure, J L; Favaro, C; Ferri, F; Ganjour, S; Givernaud, A; Gras, P; Hamel de Monchenault, G; Jarry, P; Locci, E; Machet, M; Malcles, J; Rander, J; Rosowsky, A; Titov, M; Zghiche, A; Antropov, I; Baffioni, S; Beaudette, F; Busson, P; Cadamuro, L; Chapon, E; Charlot, C; Dahms, T; Davignon, O; Filipovic, N; Granier de Cassagnac, R; Jo, M; Lisniak, S; Mastrolorenzo, L; Miné, P; 
Naranjo, I N; Nguyen, M; Ochando, C; Ortona, G; Paganini, P; Pigard, P; Regnard, S; Salerno, R; Sauvan, J B; Sirois, Y; Strebler, T; Yilmaz, Y; Zabi, A; Agram, J-L; Andrea, J; Aubin, A; Bloch, D; Brom, J-M; Buttignol, M; Chabert, E C; Chanon, N; Collard, C; Conte, E; Coubez, X; Fontaine, J-C; Gelé, D; Goerlach, U; Goetzmann, C; Le Bihan, A-C; Merlin, J A; Skovpen, K; Van Hove, P; Gadrat, S; Beauceron, S; Bernet, C; Boudoul, G; Bouvier, E; Carrillo Montoya, C A; Chierici, R; Contardo, D; Courbon, B; Depasse, P; El Mamouni, H; Fan, J; Fay, J; Gascon, S; Gouzevitch, M; Ille, B; Lagarde, F; Laktineh, I B; Lethuillier, M; Mirabito, L; Pequegnot, A L; Perries, S; Ruiz Alvarez, J D; Sabes, D; Sgandurra, L; Sordini, V; Vander Donckt, M; Verdier, P; Viret, S; Toriashvili, T; Lomidze, D; Autermann, C; Beranek, S; Edelhoff, M; Feld, L; Heister, A; Kiesel, M K; Klein, K; Lipinski, M; Ostapchuk, A; Preuten, M; Raupach, F; Schael, S; Schulte, J F; Verlage, T; Weber, H; Wittmer, B; Zhukov, V; Ata, M; Brodski, M; Dietz-Laursonn, E; Duchardt, D; Endres, M; Erdmann, M; Erdweg, S; Esch, T; Fischer, R; Güth, A; Hebbeker, T; Heidemann, C; Hoepfner, K; Knutzen, S; Kreuzer, P; Merschmeyer, M; Meyer, A; Millet, P; Olschewski, M; Padeken, K; Papacz, P; Pook, T; Radziej, M; Reithler, H; Rieger, M; Scheuch, F; Sonnenschein, L; Teyssier, D; Thüer, S; Cherepanov, V; Erdogan, Y; Flügge, G; Geenen, H; Geisler, M; Hoehle, F; Kargoll, B; Kress, T; Kuessel, Y; Künsken, A; Lingemann, J; Nehrkorn, A; Nowack, A; Nugent, I M; Pistone, C; Pooth, O; Stahl, A; Aldaya Martin, M; Asin, I; Bartosik, N; Behnke, O; Behrens, U; Bell, A J; Borras, K; Burgmeier, A; Campbell, A; Choudhury, S; Costanza, F; Diez Pardos, C; Dolinska, G; Dooling, S; Dorland, T; Eckerlin, G; Eckstein, D; Eichhorn, T; Flucke, G; Gallo, E; Garcia, J Garay; Geiser, A; Gizhko, A; Gunnellini, P; Hauk, J; Hempel, M; Jung, H; Kalogeropoulos, A; Karacheban, O; Kasemann, M; Katsas, P; Kieseler, J; Kleinwort, C; Korol, I; Lange, W; Leonard, J; 
Lipka, K; Lobanov, A; Lohmann, W; Mankel, R; Marfin, I; Melzer-Pellmann, I-A; Meyer, A B; Mittag, G; Mnich, J; Mussgiller, A; Naumann-Emme, S; Nayak, A; Ntomari, E; Perrey, H; Pitzl, D; Placakyte, R; Raspereza, A; Roland, B; Sahin, M Ö; Saxena, P; Schoerner-Sadenius, T; Schröder, M; Seitz, C; Spannagel, S; Trippkewitz, K D; Walsh, R; Wissing, C; Blobel, V; Centis Vignali, M; Draeger, A R; Erfle, J; Garutti, E; Goebel, K; Gonzalez, D; Görner, M; Haller, J; Hoffmann, M; Höing, R S; Junkes, A; Klanner, R; Kogler, R; Kovalchuk, N; Lapsien, T; Lenz, T; Marchesini, I; Marconi, D; Meyer, M; Nowatschin, D; Ott, J; Pantaleo, F; Peiffer, T; Perieanu, A; Pietsch, N; Poehlsen, J; Rathjens, D; Sander, C; Scharf, C; Schettler, H; Schleper, P; Schlieckau, E; Schmidt, A; Schwandt, J; Sola, V; Stadie, H; Steinbrück, G; Tholen, H; Troendle, D; Usai, E; Vanelderen, L; Vanhoefer, A; Vormwald, B; Barth, C; Baus, C; Berger, J; Böser, C; Butz, E; Chwalek, T; Colombo, F; De Boer, W; Descroix, A; Dierlamm, A; Fink, S; Frensch, F; Friese, R; Giffels, M; Gilbert, A; Haitz, D; Hartmann, F; Heindl, S M; Husemann, U; Katkov, I; Kornmayer, A; Lobelle Pardo, P; Maier, B; Mildner, H; Mozer, M U; Müller, T; Müller, Th; Plagge, M; Quast, G; Rabbertz, K; Röcker, S; Roscher, F; Sieber, G; Simonis, H J; Stober, F M; Ulrich, R; Wagner-Kuhr, J; Wayand, S; Weber, M; Weiler, T; Williamson, S; Wöhrmann, C; Wolf, R; Anagnostou, G; Daskalakis, G; Geralis, T; Giakoumopoulou, V A; Kyriakis, A; Loukas, D; Psallidas, A; Topsis-Giotis, I; Agapitos, A; Kesisoglou, S; Panagiotou, A; Saoulidou, N; Tziaferi, E; Evangelou, I; Flouris, G; Foudas, C; Kokkas, P; Loukas, N; Manthos, N; Papadopoulos, I; Paradas, E; Strologas, J; Bencze, G; Hajdu, C; Hazi, A; Hidas, P; Horvath, D; Sikler, F; Veszpremi, V; Vesztergombi, G; Zsigmond, A J; Beni, N; Czellar, S; Karancsi, J; Molnar, J; Szillasi, Z; Bartók, M; Makovec, A; Raics, P; Trocsanyi, Z L; Ujvari, B; Mal, P; Mandal, K; Sahoo, D K; Sahoo, N; Swain, S K; Bansal, S; Beri, S 
B; Bhatnagar, V; Chawla, R; Gupta, R; Bhawandeep, U; Kalsi, A K; Kaur, A; Kaur, M; Kumar, R; Mehta, A; Mittal, M; Singh, J B; Walia, G; Kumar, Ashok; Bhardwaj, A; Choudhary, B C; Garg, R B; Kumar, A; Malhotra, S; Naimuddin, M; Nishu, N; Ranjan, K; Sharma, R; Sharma, V; Bhattacharya, S; Chatterjee, K; Dey, S; Dutta, S; Jain, Sa; Majumdar, N; Modak, A; Mondal, K; Mukherjee, S; Mukhopadhyay, S; Roy, A; Roy, D; Roy Chowdhury, S; Sarkar, S; Sharan, M; Abdulsalam, A; Chudasama, R; Dutta, D; Jha, V; Kumar, V; Mohanty, A K; Pant, L M; Shukla, P; Topkar, A; Aziz, T; Banerjee, S; Bhowmik, S; Chatterjee, R M; Dewanjee, R K; Dugad, S; Ganguly, S; Ghosh, S; Guchait, M; Gurtu, A; Kole, G; Kumar, S; Mahakud, B; Maity, M; Majumder, G; Mazumdar, K; Mitra, S; Mohanty, G B; Parida, B; Sarkar, T; Sur, N; Sutar, B; Wickramage, N; Chauhan, S; Dube, S; Kapoor, A; Kothekar, K; Sharma, S; Bakhshiansohi, H; Behnamian, H; Etesami, S M; Fahim, A; Goldouzian, R; Khakzad, M; Mohammadi Najafabadi, M; Naseri, M; Paktinat Mehdiabadi, S; Rezaei Hosseinabadi, F; Safarzadeh, B; Zeinali, M; Felcini, M; Grunewald, M; Abbrescia, M; Calabria, C; Caputo, C; Colaleo, A; Creanza, D; Cristella, L; De Filippis, N; De Palma, M; Fiore, L; Iaselli, G; Maggi, G; Miniello, G; Maggi, M; My, S; Nuzzo, S; Pompili, A; Pugliese, G; Radogna, R; Ranieri, A; Selvaggi, G; Silvestris, L; Venditti, R; Verwilligen, P; Abbiendi, G; Battilana, C; Benvenuti, A C; Bonacorsi, D; Braibant-Giacomelli, S; Brigliadori, L; Campanini, R; Capiluppi, P; Castro, A; Cavallo, F R; Chhibra, S S; Codispoti, G; Cuffiani, M; Dallavalle, G M; Fabbri, F; Fanfani, A; Fasanella, D; Giacomelli, P; Grandi, C; Guiducci, L; Marcellini, S; Masetti, G; Montanari, A; Navarria, F L; Perrotta, A; Rossi, A M; Primavera, F; Rovelli, T; Siroli, G P; Tosi, N; Travaglini, R; Cappello, G; Chiorboli, M; Costa, S; Mattia, A Di; Giordano, F; Potenza, R; Tricomi, A; Tuve, C; Barbagli, G; Ciulli, V; Civinini, C; D'Alessandro, R; Focardi, E; Gonzi, S; Gori, V; Lenzi, P; 
Meschini, M; Paoletti, S; Sguazzoni, G; Tropiano, A; Viliani, L; Benussi, L; Bianco, S; Fabbri, F; Piccolo, D; Primavera, F; Calvelli, V; Ferro, F; Lo Vetere, M; Monge, M R; Robutti, E; Tosi, S; Brianza, L; Dinardo, M E; Fiorendi, S; Gennai, S; Gerosa, R; Ghezzi, A; Govoni, P; Malvezzi, S; Manzoni, R A; Marzocchi, B; Menasce, D; Moroni, L; Paganoni, M; Pedrini, D; Ragazzi, S; Redaelli, N; Tabarelli de Fatis, T; Buontempo, S; Cavallo, N; Di Guida, S; Esposito, M; Fabozzi, F; Iorio, A O M; Lanza, G; Lista, L; Meola, S; Merola, M; Paolucci, P; Sciacca, C; Thyssen, F; Azzi, P; Bacchetta, N; Benato, L; Bisello, D; Boletti, A; Branca, A; Carlin, R; Checchia, P; Dall'Osso, M; Dorigo, T; Dosselli, U; Fantinel, S; Fanzago, F; Gasparini, F; Gasparini, U; Gozzelino, A; Kanishchev, K; Lacaprara, S; Margoni, M; Meneguzzo, A T; Pazzini, J; Pozzobon, N; Ronchese, P; Simonetto, F; Torassa, E; Tosi, M; Zanetti, M; Zotto, P; Zucchetta, A; Braghieri, A; Magnani, A; Montagna, P; Ratti, S P; Re, V; Riccardi, C; Salvini, P; Vai, I; Vitulo, P; Alunni Solestizi, L; Bilei, G M; Ciangottini, D; Fanò, L; Lariccia, P; Mantovani, G; Menichelli, M; Saha, A; Santocchia, A; Androsov, K; Azzurri, P; Bagliesi, G; Bernardini, J; Boccali, T; Castaldi, R; Ciocci, M A; Dell'Orso, R; Donato, S; Fedi, G; Fiori, F; Foà, L; Giassi, A; Grippo, M T; Ligabue, F; Lomtadze, T; Martini, L; Messineo, A; Palla, F; Rizzi, A; Savoy-Navarro, A; Serban, A T; Spagnolo, P; Tenchini, R; Tonelli, G; Venturi, A; Verdini, P G; Barone, L; Cavallari, F; D'imperio, G; Del Re, D; Diemoz, M; Gelli, S; Jorda, C; Longo, E; Margaroli, F; Meridiani, P; Organtini, G; Paramatti, R; Preiato, F; Rahatlou, S; Rovelli, C; Santanastasio, F; Traczyk, P; Amapane, N; Arcidiacono, R; Argiro, S; Arneodo, M; Bellan, R; Biino, C; Cartiglia, N; Costa, M; Covarelli, R; Degano, A; Demaria, N; Finco, L; Kiani, B; Mariotti, C; Maselli, S; Migliore, E; Monaco, V; Monteil, E; Obertino, M M; Pacher, L; Pastrone, N; Pelliccioni, M; Pinna Angioni, G L; 
Ravera, F; Potenza, A; Romero, A; Ruspa, M; Sacchi, R; Solano, A; Staiano, A; Belforte, S; Candelise, V; Casarsa, M; Cossutti, F; Della Ricca, G; Gobbo, B; La Licata, C; Marone, M; Schizzi, A; Zanetti, A; Kropivnitskaya, T A; Nam, S K; Kim, D H; Kim, G N; Kim, M S; Kim, M S; Kong, D J; Lee, S; Oh, Y D; Sakharov, A; Son, D C; Brochero Cifuentes, J A; Kim, H; Kim, T J; Song, S; Choi, S; Go, Y; Gyun, D; Hong, B; Kim, H; Kim, Y; Lee, B; Lee, K; Lee, K S; Lee, S; Lee, S; Park, S K; Roh, Y; Yoo, H D; Choi, M; Kim, H; Kim, J H; Lee, J S H; Park, I C; Ryu, G; Ryu, M S; Choi, Y; Goh, J; Kim, D; Kwon, E; Lee, J; Yu, I; Dudenas, V; Juodagalvis, A; Vaitkus, J; Ahmed, I; Ibrahim, Z A; Komaragiri, J R; Md Ali, M A B; Mohamad Idris, F; Wan Abdullah, W A T; Yusli, M N; Wan Abdullah, W A T; Casimiro Linares, E; Castilla-Valdez, H; De La Cruz-Burelo, E; Heredia-De La Cruz, I; Hernandez-Almada, A; Lopez-Fernandez, R; Sanchez-Hernandez, A; Carrillo Moreno, S; Vazquez Valencia, F; Pedraza, I; Salazar Ibarguen, H A; Morelos Pineda, A; Krofcheck, D; Butler, P H; Ahmad, A; Ahmad, M; Hassan, Q; Hoorani, H R; Khan, W A; Khurshid, T; Shoaib, M; Bialkowska, H; Bluj, M; Boimska, B; Frueboes, T; Górski, M; Kazana, M; Nawrocki, K; Romanowska-Rybinska, K; Szleper, M; Zalewski, P; Brona, G; Bunkowski, K; Byszuk, A; Doroba, K; Kalinowski, A; Konecki, M; Krolikowski, J; Misiura, M; Olszewski, M; Walczak, M; Bargassa, P; Da Cruz E Silva, C Beir Ao; Di Francesco, A; Faccioli, P; Parracho, P G Ferreira; Gallinaro, M; Leonardo, N; Lloret Iglesias, L; Nguyen, F; Rodrigues Antunes, J; Seixas, J; Toldaiev, O; Vadruccio, D; Varela, J; Vischia, P; Afanasiev, S; Bunin, P; Gavrilenko, M; Golutvin, I; Gorbunov, I; Kamenev, A; Karjavin, V; Konoplyanikov, V; Lanev, A; Malakhov, A; Matveev, V; Moisenz, P; Palichik, V; Perelygin, V; Savina, M; Shmatov, S; Shulha, S; Smirnov, V; Zarubin, A; Golovtsov, V; Ivanov, Y; Kim, V; Kuznetsova, E; Levchenko, P; Murzin, V; Oreshkin, V; Smirnov, I; Sulimov, V; Uvarov, L; 
Vavilov, S; Vorobyev, A; Andreev, Yu; Dermenev, A; Gninenko, S; Golubev, N; Karneyeu, A; Kirsanov, M; Krasnikov, N; Pashenkov, A; Tlisov, D; Toropin, A; Epshteyn, V; Gavrilov, V; Lychkovskaya, N; Popov, V; Pozdnyakov, L; Safronov, G; Spiridonov, A; Vlasov, E; Zhokin, A; Bylinkin, A; Andreev, V; Azarkin, M; Dremin, I; Kirakosyan, M; Leonidov, A; Mesyats, G; Rusakov, S V; Baskakov, A; Belyaev, A; Boos, E; Dubinin, M; Dudko, L; Ershov, A; Gribushin, A; Klyukhin, V; Kodolova, O; Lokhtin, I; Myagkov, I; Obraztsov, S; Petrushanko, S; Savrin, V; Snigirev, A; Azhgirey, I; Bayshev, I; Bitioukov, S; Kachanov, V; Kalinin, A; Konstantinov, D; Krychkine, V; Petrov, V; Ryutin, R; Sobol, A; Tourtchanovitch, L; Troshin, S; Tyurin, N; Uzunian, A; Volkov, A; Adzic, P; Cirkovic, P; Milosevic, J; Rekovic, V; Alcaraz Maestre, J; Battilana, C; Calvo, E; Cerrada, M; Chamizo Llatas, M; Colino, N; De La Cruz, B; Delgado Peris, A; Escalante Del Valle, A; Fernandez Bedoya, C; Ramos, J P Fernández; Flix, J; Fouz, M C; Garcia-Abia, P; Gonzalez Lopez, O; Goy Lopez, S; Hernandez, J M; Josa, M I; Navarro De Martino, E; Yzquierdo, A Pérez-Calero; Puerta Pelayo, J; Quintario Olmeda, A; Redondo, I; Romero, L; Santaolalla, J; Soares, M S; Albajar, C; de Trocóniz, J F; Missiroli, M; Moran, D; Cuevas, J; Fernandez Menendez, J; Folgueras, S; Gonzalez Caballero, I; Palencia Cortezon, E; Vizan Garcia, J M; Cabrillo, I J; Calderon, A; Castiñeiras De Saa, J R; De Castro Manzano, P; Fernandez, M; Garcia-Ferrero, J; Gomez, G; Lopez Virto, A; Marco, J; Marco, R; Martinez Rivero, C; Matorras, F; Piedra Gomez, J; Rodrigo, T; Rodríguez-Marrero, A Y; Ruiz-Jimeno, A; Scodellaro, L; Trevisani, N; Vila, I; Vilar Cortabitarte, R; Abbaneo, D; Auffray, E; Auzinger, G; Bachtis, M; Baillon, P; Ball, A H; Barney, D; Benaglia, A; Bendavid, J; Benhabib, L; Benitez, J F; Berruti, G M; Bloch, P; Bocci, A; Bonato, A; Botta, C; Breuker, H; Camporesi, T; Castello, R; Cerminara, G; D'Alfonso, M; d'Enterria, D; Dabrowski, A; 
Daponte, V; David, A; De Gruttola, M; De Guio, F; De Roeck, A; De Visscher, S; Di Marco, E; Dobson, M; Dordevic, M; Dorney, B; du Pree, T; Duggan, D; Dünser, M; Dupont, N; Elliott-Peisert, A; Franzoni, G; Fulcher, J; Funk, W; Gigi, D; Gill, K; Giordano, D; Girone, M; Glege, F; Guida, R; Gundacker, S; Guthoff, M; Hammer, J; Harris, P; Hegeman, J; Innocente, V; Janot, P; Kirschenmann, H; Kortelainen, M J; Kousouris, K; Krajczar, K; Lecoq, P; Lourenço, C; Lucchini, M T; Magini, N; Malgeri, L; Mannelli, M; Martelli, A; Masetti, L; Meijers, F; Mersi, S; Meschi, E; Moortgat, F; Morovic, S; Mulders, M; Nemallapudi, M V; Neugebauer, H; Orfanelli, S; Orsini, L; Pape, L; Perez, E; Peruzzi, M; Petrilli, A; Petrucciani, G; Pfeiffer, A; Piparo, D; Racz, A; Reis, T; Rolandi, G; Rovere, M; Ruan, M; Sakulin, H; Schäfer, C; Schwick, C; Seidel, M; Sharma, A; Silva, P; Simon, M; Sphicas, P; Steggemann, J; Stieger, B; Stoye, M; Takahashi, Y; Treille, D; Triossi, A; Tsirou, A; Veres, G I; Wardle, N; Wöhri, H K; Zagozdzinska, A; Zeuner, W D; Bertl, W; Deiters, K; Erdmann, W; Horisberger, R; Ingram, Q; Kaestli, H C; Kotlinski, D; Langenegger, U; Renker, D; Rohe, T; Bachmair, F; Bäni, L; Bianchini, L; Casal, B; Dissertori, G; Dittmar, M; Donegà, M; Eller, P; Grab, C; Heidegger, C; Hits, D; Hoss, J; Kasieczka, G; Lustermann, W; Mangano, B; Marionneau, M; Martinez Ruiz Del Arbol, P; Masciovecchio, M; Meister, D; Micheli, F; Musella, P; Nessi-Tedaldi, F; Pandolfi, F; Pata, J; Pauss, F; Perrozzi, L; Quittnat, M; Rossini, M; Starodumov, A; Takahashi, M; Tavolaro, V R; Theofilatos, K; Wallny, R; Aarrestad, T K; Amsler, C; Caminada, L; Canelli, M F; Chiochia, V; De Cosa, A; Galloni, C; Hinzmann, A; Hreus, T; Kilminster, B; Lange, C; Ngadiuba, J; Pinna, D; Robmann, P; Ronga, F J; Salerno, D; Yang, Y; Cardaci, M; Chen, K H; Doan, T H; Jain, Sh; Khurana, R; Konyushikhin, M; Kuo, C M; Lin, W; Lu, Y J; Yu, S S; Kumar, Arun; Bartek, R; Chang, P; Chang, Y H; Chao, Y; Chen, K F; Chen, P H; Dietz, C; 
Fiori, F; Grundler, U; Hou, W-S; Hsiung, Y; Liu, Y F; Lu, R-S; Miñano Moya, M; Petrakou, E; Tsai, J F; Tzeng, Y M; Asavapibhop, B; Kovitanggoon, K; Singh, G; Srimanobhas, N; Suwonjandee, N; Adiguzel, A; Bakirci, M N; Cerci, S; Demiroglu, Z S; Dozen, C; Eskut, E; Gecit, F H; Girgis, S; Gokbulut, G; Guler, Y; Guler, Y; Gurpinar, E; Hos, I; Kangal, E E; Onengut, G; Ozcan, M; Ozdemir, K; Polatoz, A; Sunar Cerci, D; Topakli, H; Vergili, M; Zorbilmez, C; Akin, I V; Bilin, B; Bilmis, S; Isildak, B; Karapinar, G; Yalvac, M; Zeyrek, M; Gülmez, E; Kaya, M; Kaya, O; Yetkin, E A; Yetkin, T; Cakir, A; Cankocak, K; Sen, S; Vardarlı, F I; Grynyov, B; Levchuk, L; Sorokin, P; Aggleton, R; Ball, F; Beck, L; Brooke, J J; Clement, E; Cussans, D; Flacher, H; Goldstein, J; Grimes, M; Heath, G P; Heath, H F; Jacob, J; Kreczko, L; Lucas, C; Meng, Z; Newbold, D M; Paramesvaran, S; Poll, A; Sakuma, T; Seif El Nasr-Storey, S; Senkin, S; Smith, D; Smith, V J; Bell, K W; Belyaev, A; Brew, C; Brown, R M; Calligaris, L; Cieri, D; Cockerill, D J A; Coughlan, J A; Harder, K; Harper, S; Olaiya, E; Petyt, D; Shepherd-Themistocleous, C H; Thea, A; Tomalin, I R; Williams, T; Worm, S D; Baber, M; Bainbridge, R; Buchmuller, O; Bundock, A; Burton, D; Casasso, S; Citron, M; Colling, D; Corpe, L; Cripps, N; Dauncey, P; Davies, G; De Wit, A; Della Negra, M; Dunne, P; Elwood, A; Elwood, A; Ferguson, W; Futyan, D; Hall, G; Iles, G; Kenzie, M; Lane, R; Lucas, R; Lyons, L; Magnan, A-M; Malik, S; Nash, J; Nikitenko, A; Pela, J; Pesaresi, M; Petridis, K; Raymond, D M; Richards, A; Rose, A; Seez, C; Tapper, A; Uchida, K; Vazquez Acosta, M; Virdee, T; Zenz, S C; Cole, J E; Hobson, P R; Khan, A; Kyberd, P; Leggat, D; Leslie, D; Reid, I D; Symonds, P; Teodorescu, L; Turner, M; Borzou, A; Call, K; Dittmann, J; Hatakeyama, K; Liu, H; Pastika, N; Scarborough, T; Wu, Z; Charaf, O; Cooper, S I; Henderson, C; Rumerio, P; Arcaro, D; Avetisyan, A; Bose, T; Fantasia, C; Gastler, D; Lawson, P; Rankin, D; Richardson, C; Rohlf, 
J; St John, J; Sulak, L; Zou, D; Alimena, J; Berry, E; Bhattacharya, S; Cutts, D; Dhingra, N; Ferapontov, A; Garabedian, A; Hakala, J; Heintz, U; Laird, E; Landsberg, G; Mao, Z; Narain, M; Piperov, S; Sagir, S; Syarif, R; Breedon, R; Breto, G; De La Barca Sanchez, M Calderon; Chauhan, S; Chertok, M; Conway, J; Conway, R; Cox, P T; Erbacher, R; Funk, G; Gardner, M; Ko, W; Lander, R; Mulhearn, M; Pellett, D; Pilot, J; Ricci-Tam, F; Shalhout, S; Smith, J; Squires, M; Stolp, D; Tripathi, M; Wilbur, S; Yohay, R; Bravo, C; Cousins, R; Everaerts, P; Farrell, C; Florent, A; Hauser, J; Ignatenko, M; Saltzberg, D; Schnaible, C; Valuev, V; Weber, M; Burt, K; Clare, R; Ellison, J; Gary, J W; Hanson, G; Heilman, J; Ivova Paneva, M; Jandir, P; Kennedy, E; Lacroix, F; Long, O R; Luthra, A; Malberti, M; Negrete, M Olmedo; Shrinivas, A; Wei, H; Wimpenny, S; Yates, B R; Branson, J G; Cerati, G B; Cittolin, S; D'Agnolo, R T; Derdzinski, M; Holzner, A; Kelley, R; Klein, D; Letts, J; Macneill, I; Olivito, D; Padhi, S; Pieri, M; Sani, M; Sharma, V; Simon, S; Tadel, M; Tu, Y; Vartak, A; Wasserbaech, S; Welke, C; Würthwein, F; Yagil, A; Zevi Della Porta, G; Bradmiller-Feld, J; Campagnari, C; Dishaw, A; Dutta, V; Flowers, K; Franco Sevilla, M; Geffert, P; George, C; Golf, F; Gouskos, L; Gran, J; Incandela, J; Mccoll, N; Mullin, S D; Mullin, S D; Richman, J; Stuart, D; Suarez, I; West, C; Yoo, J; Anderson, D; Apresyan, A; Bornheim, A; Bunn, J; Chen, Y; Duarte, J; Mott, A; Newman, H B; Pena, C; Pierini, M; Spiropulu, M; Vlimant, J R; Xie, S; Zhu, R Y; Andrews, M B; Azzolini, V; Calamba, A; Carlson, B; Ferguson, T; Paulini, M; Russ, J; Sun, M; Vogel, H; Vorobiev, I; Cumalat, J P; Ford, W T; Gaz, A; Jensen, F; Johnson, A; Krohn, M; Mulholland, T; Nauenberg, U; Stenson, K; Wagner, S R; Alexander, J; Chatterjee, A; Chaves, J; Chu, J; Dittmer, S; Eggert, N; Mirman, N; Nicolas Kaufman, G; Patterson, J R; Rinkevicius, A; Ryd, A; Skinnari, L; Soffi, L; Sun, W; Tan, S M; Teo, W D; Thom, J; Thompson, 
J; Tucker, J; Weng, Y; Wittich, P; Abdullin, S; Albrow, M; Apollinari, G; Banerjee, S; Bauerdick, L A T; Beretvas, A; Berryhill, J; Bhat, P C; Bolla, G; Burkett, K; Butler, J N; Cheung, H W K; Chlebana, F; Cihangir, S; Elvira, V D; Fisk, I; Freeman, J; Gottschalk, E; Gray, L; Green, D; Grünendahl, S; Gutsche, O; Hanlon, J; Hare, D; Harris, R M; Hasegawa, S; Hirschauer, J; Hu, Z; Jayatilaka, B; Jindariani, S; Johnson, M; Joshi, U; Jung, A W; Klima, B; Kreis, B; Lammel, S; Linacre, J; Lincoln, D; Lipton, R; Liu, T; Lopes De Sá, R; Lykken, J; Maeshima, K; Marraffino, J M; Martinez Outschoorn, V I; Maruyama, S; Mason, D; McBride, P; Merkel, P; Mishra, K; Mrenna, S; Nahn, S; Newman-Holmes, C; O'Dell, V; Pedro, K; Prokofyev, O; Rakness, G; Sexton-Kennedy, E; Soha, A; Spalding, W J; Spiegel, L; Strobbe, N; Taylor, L; Tkaczyk, S; Tran, N V; Uplegger, L; Vaandering, E W; Vernieri, C; Verzocchi, M; Vidal, R; Weber, H A; Whitbeck, A; Acosta, D; Avery, P; Bortignon, P; Bourilkov, D; Carnes, A; Carver, M; Curry, D; Das, S; Field, R D; Furic, I K; Gleyzer, S V; Hugon, J; Konigsberg, J; Korytov, A; Kotov, K; Low, J F; Ma, P; Matchev, K; Mei, H; Milenovic, P; Mitselmakher, G; Rank, D; Rossin, R; Shchutska, L; Snowball, M; Sperka, D; Terentyev, N; Thomas, L; Wang, J; Wang, S; Yelton, J; Hewamanage, S; Linn, S; Markowitz, P; Martinez, G; Rodriguez, J L; Adams, J R; Ackert, A; Adams, T; Askew, A; Bein, S; Bochenek, J; Diamond, B; Haas, J; Hagopian, S; Hagopian, V; Johnson, K F; Khatiwada, A; Prosper, H; Weinberg, M; Baarmand, M M; Bhopatkar, V; Colafranceschi, S; Hohlmann, M; Kalakhety, H; Noonan, D; Roy, T; Yumiceva, F; Adams, M R; Apanasevich, L; Berry, D; Betts, R R; Bucinskaite, I; Cavanaugh, R; Evdokimov, O; Gauthier, L; Gerber, C E; Hofman, D J; Kurt, P; O'Brien, C; Sandoval Gonzalez, L D; Silkworth, C; Turner, P; Varelas, N; Wu, Z; Zakaria, M; Bilki, B; Clarida, W; Dilsiz, K; Durgut, S; Gandrajula, R P; Haytmyradov, M; Khristenko, V; Merlo, J-P; Mermerkaya, H; Mestvirishvili, 
A; Moeller, A; Nachtman, J; Ogul, H; Onel, Y; Ozok, F; Penzo, A; Snyder, C; Tiras, E; Wetzel, J; Yi, K; Anderson, I; Anderson, I; Barnett, B A; Blumenfeld, B; Eminizer, N; Fehling, D; Feng, L; Gritsan, A V; Maksimovic, P; Martin, C; Osherson, M; Roskes, J; Sady, A; Sarica, U; Swartz, M; Xiao, M; Xin, Y; You, C; Xiao, M; Baringer, P; Bean, A; Benelli, G; Bruner, C; Kenny, R P; Majumder, D; Majumder, D; Malek, M; Murray, M; Sanders, S; Stringer, R; Wang, Q; Ivanov, A; Kaadze, K; Khalil, S; Makouski, M; Maravin, Y; Mohammadi, A; Saini, L K; Skhirtladze, N; Toda, S; Lange, D; Rebassoo, F; Wright, D; Anelli, C; Baden, A; Baron, O; Belloni, A; Calvert, B; Eno, S C; Ferraioli, C; Gomez, J A; Hadley, N J; Jabeen, S; Jabeen, S; Kellogg, R G; Kolberg, T; Kunkle, J; Lu, Y; Mignerey, A C; Shin, Y H; Skuja, A; Tonjes, M B; Tonwar, S C; Apyan, A; Barbieri, R; Baty, A; Bierwagen, K; Brandt, S; Bierwagen, K; Busza, W; Cali, I A; Demiragli, Z; Di Matteo, L; Gomez Ceballos, G; Goncharov, M; Gulhan, D; Iiyama, Y; Innocenti, G M; Klute, M; Kovalskyi, D; Lai, Y S; Lee, Y-J; Levin, A; Luckey, P D; Marini, A C; Mcginn, C; Mironov, C; Narayanan, S; Niu, X; Paus, C; Ralph, D; Roland, C; Roland, G; Salfeld-Nebgen, J; Stephans, G S F; Sumorok, K; Varma, M; Velicanu, D; Veverka, J; Wang, J; Wang, T W; Wyslouch, B; Yang, M; Zhukova, V; Dahmes, B; Evans, A; Finkel, A; Gude, A; Hansen, P; Kalafut, S; Kao, S C; Klapoetke, K; Kubota, Y; Lesko, Z; Mans, J; Nourbakhsh, S; Ruckstuhl, N; Rusack, R; Tambe, N; Turkewitz, J; Acosta, J G; Oliveros, S; Avdeeva, E; Bloom, K; Bose, S; Claes, D R; Dominguez, A; Fangmeier, C; Gonzalez Suarez, R; Kamalieddin, R; Keller, J; Knowlton, D; Kravchenko, I; Meier, F; Monroy, J; Ratnikov, F; Siado, J E; Snow, G R; Alyari, M; Dolen, J; George, J; Godshalk, A; Harrington, C; Iashvili, I; Kaisen, J; Kharchilava, A; Kumar, A; Rappoccio, S; Roozbahani, B; Alverson, G; Barberis, E; Baumgartel, D; Chasco, M; Hortiangtham, A; Massironi, A; Morse, D M; Nash, D; Orimoto, T; 
Teixeira De Lima, R; Trocino, D; Wang, R-J; Wood, D; Zhang, J; Hahn, K A; Kubik, A; Mucia, N; Odell, N; Pollack, B; Pozdnyakov, A; Schmitt, M; Stoynev, S; Sung, K; Trovato, M; Velasco, M; Brinkerhoff, A; Dev, N; Hildreth, M; Jessop, C; Karmgard, D J; Kellams, N; Lannon, K; Marinelli, N; Meng, F; Mueller, C; Musienko, Y; Planer, M; Reinsvold, A; Ruchti, R; Smith, G; Taroni, S; Valls, N; Wayne, M; Wolf, M; Woodard, A; Antonelli, L; Brinson, J; Bylsma, B; Durkin, L S; Flowers, S; Hart, A; Hill, C; Hughes, R; Ji, W; Ling, T Y; Liu, B; Luo, W; Puigh, D; Rodenburg, M; Winer, B L; Wulsin, H W; Driga, O; Elmer, P; Hardenbrook, J; Hebda, P; Koay, S A; Lujan, P; Marlow, D; Medvedeva, T; Mooney, M; Olsen, J; Palmer, C; Piroué, P; Saka, H; Stickland, D; Tully, C; Zuranski, A; Malik, S; Barnes, V E; Benedetti, D; Bortoletto, D; Gutay, L; Jha, M K; Jones, M; Jung, K; Miller, D H; Neumeister, N; Primavera, F; Radburn-Smith, B C; Shi, X; Shipsey, I; Silvers, D; Sun, J; Svyatkovskiy, A; Wang, F; Xie, W; Xu, L; Parashar, N; Stupak, J; Adair, A; Akgun, B; Chen, Z; Ecklund, K M; Geurts, F J M; Guilbaud, M; Li, W; Michlin, B; Northup, M; Padley, B P; Redjimi, R; Roberts, J; Rorie, J; Tu, Z; Zabel, J; Betchart, B; Bodek, A; de Barbaro, P; Demina, R; Eshaq, Y; Ferbel, T; Galanti, M; Galanti, M; Garcia-Bellido, A; Han, J; Harel, A; Hindrichs, O; Hindrichs, O; Khukhunaishvili, A; Petrillo, G; Tan, P; Verzetti, M; Arora, S; Barker, A; Chou, J P; Contreras-Campana, C; Contreras-Campana, E; Ferencek, D; Gershtein, Y; Gray, R; Halkiadakis, E; Hidas, D; Hughes, E; Kaplan, S; Kunnawalkam Elayavalli, R; Lath, A; Nash, K; Panwalkar, S; Park, M; Salur, S; Schnetzer, S; Sheffield, D; Somalwar, S; Stone, R; Thomas, S; Thomassen, P; Walker, M; Foerster, M; Riley, G; Rose, K; Spanier, S; York, A; Bouhali, O; Castaneda Hernandez, A; Celik, A; Dalchenko, M; De Mattia, M; Delgado, A; Dildick, S; Dildick, S; Eusebi, R; Gilmore, J; Huang, T; Kamon, T; Krutelyov, V; Krutelyov, V; Mueller, R; Osipenkov, I; 
Pakhotin, Y; Patel, R; Patel, R; Perloff, A; Rose, A; Safonov, A; Tatarinov, A; Ulmer, K A; Akchurin, N; Cowden, C; Damgov, J; Dragoiu, C; Dudero, P R; Faulkner, J; Kunori, S; Lamichhane, K; Lee, S W; Libeiro, T; Undleeb, S; Volobouev, I; Appelt, E; Delannoy, A G; Greene, S; Gurrola, A; Janjam, R; Johns, W; Maguire, C; Mao, Y; Melo, A; Ni, H; Sheldon, P; Snook, B; Tuo, S; Velkovska, J; Xu, Q; Arenton, M W; Cox, B; Francis, B; Goodell, J; Hirosky, R; Ledovskoy, A; Li, H; Lin, C; Neu, C; Sinthuprasith, T; Sun, X; Wang, Y; Wolfe, E; Wood, J; Xia, F; Clarke, C; Harr, R; Karchin, P E; Kottachchi Kankanamge Don, C; Lamichhane, P; Sturdy, J; Belknap, D A; Carlsmith, D; Cepeda, M; Dasu, S; Dodd, L; Duric, S; Gomber, B; Grothe, M; Hall-Wilton, R; Herndon, M; Hervé, A; Klabbers, P; Lanaro, A; Levine, A; Long, K; Loveless, R; Mohapatra, A; Ojalvo, I; Perry, T; Pierro, G A; Polese, G; Ruggles, T; Sarangi, T; Savin, A; Sharma, A; Smith, N; Smith, W H; Taylor, D; Woods, N
New sets of parameters ("tunes") for the underlying-event (UE) modelling of the pythia8, pythia6 and herwig++ Monte Carlo event generators are constructed using different parton distribution functions. Combined fits to CMS UE proton-proton (pp) data at √s = 7 TeV and to UE proton-antiproton (pp̄) data from the CDF experiment at lower √s are used to study the UE models and constrain their parameters, thereby providing improved predictions for proton-proton collisions at 13 TeV. In addition, it is investigated whether the values of the parameters obtained from fits to UE observables are consistent with the values determined from fitting observables sensitive to double-parton scattering processes. Finally, comparisons are presented of the UE tunes to "minimum bias" (MB), multijet, and Drell-Yan (qq̄ → Z/γ* → lepton-antilepton) + jets observables at 7 and 8 TeV, as well as predictions for MB and UE observables at 13 TeV.
Rapid Airplane Parametric Input Design (RAPID)
NASA Technical Reports Server (NTRS)
Smith, Robert E.; Bloor, Malcolm I. G.; Wilson, Michael J.; Thomas, Almuttil M.
2004-01-01
An efficient methodology is presented for defining a class of airplane configurations. Included in this definition are surface grids, volume grids, and grid sensitivity. A small set of design parameters and grid control parameters governs the process. The general airplane configuration has wing, fuselage, vertical tail, horizontal tail, and canard components. The wing, tail, and canard surfaces are generated by solving a fourth-order partial differential equation subject to Dirichlet and Neumann boundary conditions. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has a circular cross section, and its radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Grid sensitivity is obtained by applying the automatic differentiation precompiler ADIFOR to the grid-generation software. The computed surface grids, volume grids, and sensitivity derivatives are suitable for a wide range of Computational Fluid Dynamics simulations and configuration optimization.
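The PDE-based surface machinery is beyond a short sketch, but the fuselage construction, circular cross sections whose radius is an algebraic function of a few design parameters, can be illustrated. The radius law and the parameter names below (nose_len, max_radius, body_len, tail_len) are hypothetical stand-ins, not RAPID's actual functions:

```python
import math

def fuselage_radius(x, nose_len, max_radius, body_len, tail_len):
    """Toy algebraic radius law (hypothetical, not RAPID's actual function):
    quarter-sine nose growth, constant midbody, cosine tail closure."""
    if x < nose_len:                        # nose: radius grows from zero
        return max_radius * math.sin(0.5 * math.pi * x / nose_len)
    if x < nose_len + body_len:             # midbody: constant radius
        return max_radius
    t = (x - nose_len - body_len) / tail_len
    return max_radius * 0.5 * (1 + math.cos(math.pi * min(t, 1.0)))  # tail taper

def fuselage_surface_grid(n_axial=5, n_circ=8, nose_len=2.0, max_radius=1.0,
                          body_len=6.0, tail_len=3.0):
    """Surface grid of circular cross sections: a list of rings of (x, y, z)."""
    total = nose_len + body_len + tail_len
    grid = []
    for i in range(n_axial):
        x = total * i / (n_axial - 1)
        r = fuselage_radius(x, nose_len, max_radius, body_len, tail_len)
        ring = [(x, r * math.cos(2 * math.pi * j / n_circ),
                    r * math.sin(2 * math.pi * j / n_circ))
                for j in range(n_circ)]
        grid.append(ring)
    return grid

grid = fuselage_surface_grid()
```

A real implementation would use far denser grids and feed them to the volume-grid and sensitivity stages; this only shows how a few design parameters drive a structured surface grid.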
Automatic Differentiation in Quantum Chemistry with Applications to Fully Variational Hartree–Fock
2018-01-01
Automatic differentiation (AD) is a powerful tool that allows calculating derivatives of implemented algorithms with respect to all of their parameters up to machine precision, without the need to explicitly add any additional functions. Thus, AD has great potential in quantum chemistry, where gradients are omnipresent but also difficult to obtain, and researchers typically spend a considerable amount of time finding suitable analytical forms when implementing derivatives. Here, we demonstrate that AD can be used to compute gradients with respect to any parameter throughout a complete quantum chemistry method. We present DiffiQult, a Hartree–Fock implementation, entirely differentiated with the use of AD tools. DiffiQult is a software package written in plain Python with minimal deviation from standard code which illustrates the capability of AD to save human effort and time in implementations of exact gradients in quantum chemistry. We leverage the obtained gradients to optimize the parameters of one-particle basis sets in the context of the floating Gaussian framework.
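The forward-mode idea behind AD can be sketched with dual numbers: each value carries its derivative alongside it, and the chain rule is applied operation by operation. The toy function below is a hypothetical stand-in for an energy expression depending on a basis-set exponent; it is not DiffiQult's actual code:

```python
import math

class Dual:
    """Forward-mode AD number: val carries f(x), eps carries df/dx."""
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.eps + other.eps)
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.eps + self.eps * other.val)  # product rule
    def __neg__(self):
        return Dual(-self.val, -self.eps)
    def exp(self):
        e = math.exp(self.val)
        return Dual(e, e * self.eps)                              # chain rule

def toy_energy(alpha):
    """Hypothetical stand-in for an energy term: E(alpha) = alpha * exp(-alpha)."""
    return alpha * (-alpha).exp()

# Seeding eps = 1 differentiates with respect to alpha: E.eps holds dE/dalpha.
E = toy_energy(Dual(0.5, 1.0))
```

The derivative emerges to machine precision without any hand-coded analytical gradient, which is the property the abstract highlights.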
Quantitative Determination of Spring Water Quality Parameters via Electronic Tongue.
Carbó, Noèlia; López Carrero, Javier; Garcia-Castillo, F Javier; Tormos, Isabel; Olivas, Estela; Folch, Elisa; Alcañiz Fillol, Miguel; Soto, Juan; Martínez-Máñez, Ramón; Martínez-Bisbal, M Carmen
2017-12-25
The use of a voltammetric electronic tongue for the quantitative analysis of quality parameters in spring water is proposed here. The voltammetric electronic tongue consisted of a set of four noble electrodes (iridium, rhodium, platinum, and gold) housed inside a stainless steel cylinder. These noble metals are highly durable and require little maintenance, features needed for the development of future automated equipment. A pulse voltammetry study was conducted in 83 spring water samples to determine concentrations of nitrate (range: 6.9-115 mg/L), sulfate (32-472 mg/L), fluoride (0.08-0.26 mg/L), chloride (17-190 mg/L), and sodium (11-94 mg/L) as well as pH (7.3-7.8). These parameters were also determined by routine analytical methods in the spring water samples. A partial least squares (PLS) analysis was run to obtain a model to predict these parameters. Orthogonal signal correction (OSC) was applied in the preprocessing step. Calibration (67%) and validation (33%) sets were selected randomly. The electronic tongue showed good predictive power for the concentrations of nitrate, sulfate, chloride, and sodium as well as pH, but displayed a lower R² and slope in the validation set for fluoride. Nitrate and fluoride concentrations were estimated with errors lower than 15%, whereas chloride, sulfate, and sodium concentrations as well as pH were estimated with errors below 10%.
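The regression step can be sketched with a one-component PLS1 calibration in NIPALS form; a real model for this data would use several components plus OSC preprocessing, so this is only an illustration of the idea, with made-up numbers:

```python
def pls1_fit(X, y):
    """One-component PLS1 (NIPALS) on mean-centered data.
    Returns (w, b, x_mean, y_mean) for later prediction."""
    n, p = len(X), len(X[0])
    x_mean = [sum(row[j] for row in X) / n for j in range(p)]
    y_mean = sum(y) / n
    Xc = [[row[j] - x_mean[j] for j in range(p)] for row in X]
    yc = [v - y_mean for v in y]
    # weight vector w proportional to X'y, normalized
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # scores t = Xw and inner regression coefficient b = t'y / t't
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]
    tt = sum(v * v for v in t)
    b = sum(t[i] * yc[i] for i in range(n)) / tt
    return w, b, x_mean, y_mean

def pls1_predict(model, x):
    w, b, x_mean, y_mean = model
    t = sum((x[j] - x_mean[j]) * w[j] for j in range(len(x)))
    return y_mean + b * t

# Toy calibration set: two collinear "voltammetric" features, one analyte.
model = pls1_fit([[1, 2], [2, 4], [3, 6], [4, 8]], [1, 2, 3, 4])
```

In practice one would use a library implementation (e.g. scikit-learn's PLSRegression) with multiple latent variables and the random 67/33 calibration/validation split described above.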
The added value of remote sensing products in constraining hydrological models
NASA Astrophysics Data System (ADS)
Nijzink, Remko C.; Almeida, Susana; Pechlivanidis, Ilias; Capell, René; Gustafsson, David; Arheimer, Berit; Freer, Jim; Han, Dawei; Wagener, Thorsten; Sleziak, Patrik; Parajka, Juraj; Savenije, Hubert; Hrachowitz, Markus
2017-04-01
The calibration of a hydrological model still depends on the availability of streamflow data, even though additional sources of information (e.g. remotely sensed data products) have become more widely available. In this research, the model parameters of four different conceptual hydrological models (HYPE, HYMOD, TUW, FLEX) were constrained with remotely sensed products. The models were applied over 27 catchments across Europe to cover a wide range of climates, vegetation and landscapes. The fluxes and states of the models were correlated with the relevant products (e.g. MOD10A snow cover with modelled snow states), after which new a posteriori parameter distributions were determined based on a weighting procedure using conditional probabilities. Briefly, each parameter was weighted with the coefficient of determination of the relevant regression between modelled states/fluxes and products. In this way, final feasible parameter sets were derived without the use of discharge time series. Initial results show that improvements in model performance, with regard to streamflow simulations, are obtained when the models are constrained with a set of remotely sensed products simultaneously. In addition, we present a more extensive analysis assessing each model's ability to reproduce a set of hydrological signatures, such as rising limb density or peak distribution. Eventually, this research will enhance our understanding of, and recommendations on, the use of remotely sensed products for constraining conceptual hydrological models and improving predictive capability, especially for data-sparse regions.
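The weighting step can be sketched as follows, assuming for illustration that each candidate parameter set is scored by the R² of a linear regression between its simulated series and the remote-sensing product, with the scores normalized so they behave like conditional probabilities (the paper's exact weighting scheme may differ in detail):

```python
def r_squared(sim, obs):
    """Coefficient of determination of the linear regression sim ~ obs."""
    n = len(sim)
    ms, mo = sum(sim) / n, sum(obs) / n
    cov = sum((s - ms) * (o - mo) for s, o in zip(sim, obs))
    vs = sum((s - ms) ** 2 for s in sim)
    vo = sum((o - mo) ** 2 for o in obs)
    return cov * cov / (vs * vo)

def posterior_weights(simulations, product):
    """Weight each candidate parameter set's simulated series by its R^2
    against the remote-sensing product, then normalize the weights."""
    raw = [r_squared(sim, product) for sim in simulations]
    total = sum(raw)
    return [w / total for w in raw]
```

Candidate sets whose simulated snow (or evaporation, soil moisture, ...) tracks the product closely receive larger posterior weight, which is how feasible parameter sets emerge without discharge data.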
NASA Astrophysics Data System (ADS)
Ghikas, Demetris P. K.; Oikonomou, Fotios D.
2018-04-01
Using the generalized entropies which depend on two parameters we propose a set of quantitative characteristics derived from the Information Geometry based on these entropies. Our aim, at this stage, is to construct first some fundamental geometric objects which will be used in the development of our geometrical framework. We first establish the existence of a two-parameter family of probability distributions. Then using this family we derive the associated metric and we state a generalized Cramer-Rao Inequality. This gives a first two-parameter classification of complex systems. Finally computing the scalar curvature of the information manifold we obtain a further discrimination of the corresponding classes. Our analysis is based on the two-parameter family of generalized entropies of Hanel and Thurner (2011).
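As a concrete, much simpler example of deriving a metric from a parametric family, the sketch below estimates the ordinary Fisher information matrix of the two-parameter Gaussian (mu, sigma) family by Monte Carlo. The paper's metric is instead built from the two-parameter generalized entropies of Hanel and Thurner, so this only illustrates the kind of construction involved:

```python
import random

def score_gaussian(x, mu, sigma):
    """Score vector d/d(mu, sigma) of the Gaussian log-density."""
    d = (x - mu) / sigma
    return (d / sigma, (d * d - 1.0) / sigma)

def fisher_metric(mu, sigma, n=100000, seed=0):
    """Monte Carlo estimate of the Fisher information matrix
    g_ij = E[score_i * score_j] for the (mu, sigma) family."""
    rng = random.Random(seed)
    g = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(n):
        s = score_gaussian(rng.gauss(mu, sigma), mu, sigma)
        for i in range(2):
            for j in range(2):
                g[i][j] += s[i] * s[j] / n
    return g

# Analytic value for comparison: [[1/sigma^2, 0], [0, 2/sigma^2]].
g = fisher_metric(0.0, 2.0)
```

Replacing the log-density with a generalized-entropy analogue changes the metric, and with it curvature-based quantities like the scalar curvature used for classification.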
Welding of Al6061 and Al6082-Cu composite by friction stir processing
NASA Astrophysics Data System (ADS)
Iyer, R. B.; Dhabale, R. B.; Jatti, V. S.
2016-09-01
The present study investigates the influence of process parameters on the microstructure and mechanical properties, such as tensile strength and hardness, of dissimilar-metal joints made with and without copper powder. Before conducting the copper powder experiments, optimum process parameters were obtained from experiments without copper powder. Taguchi's L9 orthogonal design was used to lay out the experiments without copper powder, using a threaded-pin tool geometry. Based on the experimental results and the Taguchi analysis, a maximum tensile strength of 66.06 MPa was obtained at a spindle speed of 1400 rpm and a weld speed of 20 mm/min, and a maximum microhardness of 92 HV at 1400 rpm and 16 mm/min. At these optimal process-parameter settings, the aluminium alloys were welded with the copper powder. The experimental results demonstrate that the tensile strength (96.54 MPa) and microhardness (105 HV) of the FSW joint were notably improved by the addition of copper powder compared with the FSW joint without copper powder. The tensile failure specimen was analysed using Scanning Electron Microscopy to study the failure mechanism.
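The Taguchi analysis step can be sketched as follows. The larger-the-better signal-to-noise (S/N) ratio is the standard formula; the factor assignment and the response values below are hypothetical:

```python
import math

# First two columns of the standard L9(3^4) orthogonal array (levels 0..2),
# used here for two hypothetical factors: spindle speed and weld speed.
L9 = [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2), (2, 0), (2, 1), (2, 2)]

def sn_larger_is_better(values):
    """Taguchi S/N ratio in dB for a larger-the-better response."""
    n = len(values)
    return -10.0 * math.log10(sum(1.0 / v ** 2 for v in values) / n)

def mean_sn_per_level(responses, column):
    """Average S/N of each level of one factor across the L9 runs."""
    sums, counts = [0.0] * 3, [0] * 3
    for run, resp in zip(L9, responses):
        lvl = run[column]
        sums[lvl] += sn_larger_is_better(resp)
        counts[lvl] += 1
    return [s / c for s, c in zip(sums, counts)]

# Hypothetical tensile-strength responses (MPa), one measurement per run.
responses = [[50], [52], [51], [60], [62], [61], [70], [72], [71]]
level_sn = mean_sn_per_level(responses, 0)
```

The factor level with the highest mean S/N is taken as optimal; repeating the analysis per factor yields the optimal parameter combination.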
Ponz, Ezequiel; Ladaga, Juan Luis; Bonetto, Rita Dominga
2006-04-01
Scanning electron microscopy (SEM) is widely used in materials science, and various parameters have been developed to characterize surface roughness. In a previous work, we studied surface topography using the fractal dimension at low scale and two parameters at high scale, obtained from the variogram (the variance vs. step log-log graph) of an SEM image. Those studies were carried out with the FERImage program, previously developed by us. To verify the hypothesis previously accepted when working with a single image, reliable three-dimensional (3D) surface data are indispensable. In this work, a new program (EZEImage) to characterize 3D surface topography in SEM has been developed. It uses fast cross correlation and dynamic programming to obtain reliable dense height maps in a few seconds, which can be displayed as an image where each gray level represents a height value. This image can be used by the FERImage program, or any other software, to obtain surface topography characteristics. EZEImage also generates anaglyph images and characterizes 3D surface topography by means of a parameter set describing amplitude properties and three functional indices characterizing bearing and fluid properties.
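A variogram of the kind FERImage works from can be computed directly from a dense height map. This is a minimal sketch, not the programs' actual code:

```python
def variogram(height_map, max_step):
    """Variance of height differences vs. step (in pixels), averaged over the
    rows of a 2D height map. On a log-log plot, the small-step slope relates
    to the fractal dimension of the surface."""
    rows, cols = len(height_map), len(height_map[0])
    out = []
    for step in range(1, max_step + 1):
        diffs = [(row[j + step] - row[j]) ** 2
                 for row in height_map for j in range(cols - step)]
        out.append(sum(diffs) / len(diffs))
    return out
```

For a linear ramp the squared differences grow exactly as step squared, which makes a convenient sanity check.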
NASA Astrophysics Data System (ADS)
Ribes, S.; Voicu, I.; Girault, J. M.; Fournier, M.; Perrotin, F.; Tranquart, F.; Kouamé, D.
2011-03-01
Electronic fetal monitoring may be required throughout pregnancy to closely monitor specific fetal and maternal disorders. Currently used methods suffer from many limitations and are not sufficient to evaluate fetal asphyxia. Fetal activity parameters such as movements, heart rate and associated parameters are essential indicators of fetal well-being, and no current device gives a simultaneous and sufficient estimation of all these parameters. For this purpose, we built a multi-transducer, multi-gate Doppler system and developed dedicated signal processing techniques for extracting fetal activity parameters, in order to investigate the fetus's asphyxia or well-being through these parameters. To reach this goal, this paper shows the preliminary feasibility of separating normal and compromised fetuses using our system. To do so, a data set consisting of two groups of fetal signals (normal and compromised) was established and provided by physicians. From the estimated parameters, an instantaneous Manning-like score, referred to as the ultrasonic score, was introduced and used together with movements, heart rate and associated parameters in a classification process based on the Support Vector Machine (SVM) method. The influence of the fetal activity parameters and the performance of the SVM were evaluated by computing the sensitivity, specificity, percentage of support vectors and total classification accuracy. We were able to separate the data into two sets, normal and compromised fetuses, and obtained an excellent match with the clinical classification performed by physicians.
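The evaluation metrics mentioned (sensitivity, specificity, total accuracy) follow directly from the true and predicted labels; a minimal sketch with hypothetical label values:

```python
def classification_metrics(y_true, y_pred, positive="compromised"):
    """Sensitivity, specificity and accuracy from two label lists."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return {"sensitivity": tp / (tp + fn),     # compromised correctly flagged
            "specificity": tn / (tn + fp),     # normal correctly cleared
            "accuracy": (tp + tn) / len(y_true)}
```

In the study these quantities were computed on SVM predictions against the physicians' clinical classification; the labels here are invented for illustration.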
SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Z; Folkert, M; Wang, J
2016-06-15
Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for the multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
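The selection step can be sketched as follows, with a plain weighted-sum utility standing in for the evidential reasoning calculus (SMOLER's actual utility computation is rule-based and more involved):

```python
def pareto_front(solutions):
    """Keep the non-dominated (sensitivity, specificity) pairs: a solution is
    dominated if another is at least as good in both objectives and strictly
    better in one."""
    def dominated(s):
        return any(o[0] >= s[0] and o[1] >= s[1] and
                   (o[0] > s[0] or o[1] > s[1]) for o in solutions)
    return [s for s in solutions if not dominated(s)]

def select_optimal(solutions, w_sens=0.5):
    """Score each Pareto solution with a weighted utility and keep the best;
    the weight encodes the pre-set selection preference."""
    return max(pareto_front(solutions),
               key=lambda s: w_sens * s[0] + (1.0 - w_sens) * s[1])
```

With equal weights the balanced solution on the front wins; shifting w_sens trades specificity for sensitivity, mimicking a selection rule.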
Fidelity under isospectral perturbations: a random matrix study
NASA Astrophysics Data System (ADS)
Leyvraz, F.; García, A.; Kohler, H.; Seligman, T. H.
2013-07-01
The set of Hamiltonians generated by all unitary transformations from a single Hamiltonian is the largest set of isospectral Hamiltonians we can form. Taking advantage of the fact that the unitary group is generated by Hermitian matrices, we can take the unitaries generated by the Gaussian unitary ensemble with a small parameter as small perturbations. Similarly, orthogonal transformations, generated by antisymmetric matrices, form isospectral transformations among symmetric matrices. Based on this concept we can obtain the fidelity decay of a system that decays under a random isospectral perturbation with well-defined properties regarding time-reversal invariance. If we choose the Hamiltonian itself from a classical random matrix ensemble as well, we obtain solutions in terms of form factors in the limit of large matrices.
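The core construction, conjugating a symmetric matrix by an orthogonal transformation, changes the matrix while leaving its spectrum untouched. This can be checked directly in the 2x2 case, where trace and determinant fix the spectrum:

```python
import math

def rotate(H, theta):
    """Isospectral map H -> R H R^T with a 2x2 rotation R (orthogonal)."""
    c, s = math.cos(theta), math.sin(theta)
    R = [[c, -s], [s, c]]
    RH = [[sum(R[i][k] * H[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    # multiply by R^T on the right: (RH R^T)[i][j] = sum_k RH[i][k] R[j][k]
    return [[sum(RH[i][k] * R[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]

def spectrum_invariants(H):
    """Trace and determinant fix the spectrum of a 2x2 matrix."""
    return (H[0][0] + H[1][1], H[0][0] * H[1][1] - H[0][1] * H[1][0])

H = [[1.0, 0.3], [0.3, 2.0]]     # toy symmetric "Hamiltonian"
Hp = rotate(H, 0.7)              # isospectral perturbation of H
```

In the paper the rotation angle is drawn from a random ensemble with a small parameter, turning this map into a random isospectral perturbation.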
Neuro-genetic system for optimization of GMI samples sensitivity.
Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E
2016-03-01
Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices with huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, and the DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well modeled in quantitative terms, so the search for the set of parameters that optimizes sample sensitivity is usually empirical and very time consuming. This paper addresses this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase, and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing phase sensitivity.
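The optimization loop can be sketched with a minimal real-coded genetic algorithm. The fitness function below is a smooth hypothetical stand-in for the trained MLP sensitivity model, and the parameter ranges are invented:

```python
import random

def surrogate_sensitivity(length, field, level, freq):
    """Hypothetical smooth stand-in for the MLP phase-sensitivity model,
    peaked at length=3, field=0.5, level=80, freq=100 (arbitrary units)."""
    return (1.0 / (1 + (length - 3) ** 2)
            * 1.0 / (1 + (field - 0.5) ** 2)
            * 1.0 / (1 + ((level - 80) / 20) ** 2)
            * 1.0 / (1 + ((freq - 100) / 50) ** 2))

def genetic_search(fitness, bounds, pop=40, gens=60, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, clipping to the search bounds."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop:
            a, b = (max(rng.sample(P, 3), key=lambda x: fitness(*x))
                    for _ in range(2))
            child = [(x + y) / 2 + rng.gauss(0, 0.02 * (hi - lo))
                     for x, y, (lo, hi) in zip(a, b, bounds)]
            nxt.append([min(max(v, lo), hi)
                        for v, (lo, hi) in zip(child, bounds)])
        P = nxt
    return max(P, key=lambda x: fitness(*x))

bounds = [(1, 15), (0, 2), (0, 100), (10, 1000)]   # invented ranges
best = genetic_search(surrogate_sensitivity, bounds)
```

The GA only queries the surrogate model, exactly as the paper's GA queries the trained MLP, so no new laboratory measurement is needed per candidate.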
NASA Astrophysics Data System (ADS)
Czaplik, Michael; Biener, Ingeborg; Leonhardt, Steffen; Rossaint, Rolf
2014-03-01
Since mechanical ventilation can harm lung tissue, it should be as protective as possible. Whereas numerous options exist for setting ventilator parameters, adequate monitoring is lacking to date. Electrical Impedance Tomography (EIT) provides a non-invasive visualization of ventilation that is relatively easy to apply and commercially available. Although a number of measures and parameters derived from EIT have been published, it is not clear how to use EIT to improve the clinical outcome of, e.g., patients suffering from acute respiratory distress syndrome (ARDS), a severe disease with a high mortality rate. On the one hand, parameters should be easy to obtain; on the other hand, clinical algorithms should use them to optimize ventilator settings. The so-called Global Inhomogeneity (GI) index is based on the fact that ARDS is characterized by an inhomogeneous injury pattern. By applying positive end-expiratory pressure (PEEP), homogeneity should be attained. In this study, ARDS was induced by a double-hit procedure in six pigs, which were randomly assigned to either the EIT or the control group. Whereas in the control group the ARDS network table was used to set the PEEP according to the current inspiratory oxygen fraction, in the EIT group the GI index was calculated during a decremental PEEP trial and the PEEP with the lowest GI index was kept. Interestingly, PEEP was significantly higher in the EIT group. Additionally, two of these animals died ahead of schedule. Obviously, not only the homogeneity of the ventilation distribution matters but also the limitation of over-distension.
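The GI index itself is straightforward to compute from a tidal EIT image: the sum of absolute deviations from the median tidal impedance within the lung region, normalized by the total tidal impedance in that region (following the published GI definition). A minimal sketch:

```python
def gi_index(tidal_image, lung_mask):
    """GI index of a tidal EIT image. tidal_image is a flat list of tidal
    impedance values; lung_mask lists the indices belonging to lung tissue."""
    vals = sorted(tidal_image[i] for i in lung_mask)
    n = len(vals)
    median = vals[n // 2] if n % 2 else 0.5 * (vals[n // 2 - 1] + vals[n // 2])
    return (sum(abs(tidal_image[i] - median) for i in lung_mask)
            / sum(tidal_image[i] for i in lung_mask))
```

A perfectly homogeneous tidal distribution gives GI = 0; the decremental PEEP trial then keeps the PEEP at which this value is lowest.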
Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set
NASA Astrophysics Data System (ADS)
Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.
2017-12-01
In recent years, Full Waveform Inversion (FWI) has been one of the most researched techniques in seismic data processing. It uses the residuals between observed and modeled data as an objective function; the final subsurface velocity model is then generated through a series of iterations meant to minimize those residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by P-velocity; in elastic media, however, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. The elastic media can also be defined by the Lamé constants and density, or by the impedances (PI, SI); consequently, research is being carried out to ascertain the optimal parameters. With advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, a staggered-grid finite difference method was applied to simulate the OBS survey. For the inversion, the L2 norm was set as the objective function. Further, the gradient direction was computed accurately using the back-propagation technique and scaled using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define the inversion in elastic media.
Therefore, it is important to ascertain the parameter that gives the most accurate result for inversion with OBS data set.In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the final two FWI results.This research was supported by the Basic Research Project(17-3312) of the Korea Institute of Geoscience and Mineral Resources(KIGAM) funded by the Ministry of Science, ICT and Future Planning of Korea.
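The iterative residual-minimization scheme at the core of FWI can be illustrated with a deliberately tiny stand-in: a one-parameter travel-time forward model replaces the wave-equation simulation, and plain gradient descent replaces the pseudo-Hessian-scaled back-propagation gradient. All names and values here are a hypothetical sketch, not the authors' elastic implementation.

```python
import numpy as np

def forward(v, depths):
    """Toy forward model: travel times through a constant-velocity medium."""
    return depths / v

def fwi_toy(d_obs, depths, v0, lr=2e-5, n_iter=300):
    """Minimize J(v) = 0.5 * ||forward(v) - d_obs||^2 by gradient descent,
    mimicking FWI's iterative residual minimization for a single parameter."""
    v = v0
    for _ in range(n_iter):
        r = forward(v, depths) - d_obs          # data residual
        grad = np.sum(r * (-depths / v ** 2))   # dJ/dv
        v -= lr * grad                          # descent update
    return v
```

Real FWI replaces the toy forward model with a wave-equation solver and the scalar velocity with a gridded (Vp, Vs, ρ) model; the loop structure is the same.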
Spectral Analysis and Experimental Modeling of Ice Accretion Roughness
NASA Technical Reports Server (NTRS)
Orr, D. J.; Breuer, K. S.; Torres, B. E.; Hansman, R. J., Jr.
1996-01-01
A self-consistent scheme for relating wind tunnel ice accretion roughness to the resulting enhancement of heat transfer is described. First, a spectral technique for quantitative analysis of early ice roughness images is reviewed. The image processing scheme uses a spectral estimation technique (SET) which extracts physically descriptive parameters by comparing scan lines from the experimentally obtained accretion images to a prescribed test function. Analysis using this technique for both streamwise and spanwise directions of data from the NASA Lewis Icing Research Tunnel (IRT) is presented. An experimental technique is then presented for constructing physical roughness models suitable for wind tunnel testing that match the SET parameters extracted from the IRT images. The icing castings and modeled roughness are tested for enhancement of boundary layer heat transfer using infrared techniques in a "dry" wind tunnel.
An Image Encryption Algorithm Utilizing Julia Sets and Hilbert Curves
Sun, Yuanyuan; Chen, Lina; Xu, Rudan; Kong, Ruiqing
2014-01-01
Image encryption is an important and effective technique to protect image security. In this paper, a novel image encryption algorithm combining Julia sets and Hilbert curves is proposed. The algorithm utilizes the Julia sets' parameters to generate a random sequence as the initial keys and obtains the final encryption keys by scrambling the initial keys through the Hilbert curve. The final cipher image is obtained by modulo arithmetic and a diffusion operation. The method needs only a few parameters for key generation, which greatly reduces the storage space. Moreover, because of the Julia sets' properties, such as infiniteness and chaotic characteristics, the keys are highly sensitive even to a tiny perturbation. The experimental results indicate that the algorithm has a large key space, good statistical properties, high key sensitivity, and effective resistance to the chosen-plaintext attack. PMID:24404181
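The key-generation step can be sketched as follows: iterate the Julia map z -> z^2 + c and quantize the orbit into bytes, then use the bytes in a modulo diffusion. This is an illustrative sketch only; the paper's Hilbert-curve scrambling stage is omitted, and the bound-keeping reset for escaping orbits is an assumption of this sketch, not part of the published algorithm.

```python
def julia_keys(c, z0, n):
    """Derive a byte sequence from the orbit of z -> z**2 + c.
    Small changes in c or z0 (the secret parameters) yield a
    completely different sequence, thanks to chaotic sensitivity."""
    keys, z = [], z0
    for _ in range(n):
        z = z * z + c
        if abs(z) > 2.0:      # illustrative reset to keep the orbit bounded
            z = 1.0 / z
        keys.append(int(abs(z.real) * 1e6) % 256)
    return keys

def encrypt(pixels, keys):
    """Modulo-arithmetic diffusion of a pixel stream with the key stream."""
    return [(p + k) % 256 for p, k in zip(pixels, keys)]

def decrypt(cipher, keys):
    return [(p - k) % 256 for p, k in zip(cipher, keys)]
```

Only c and z0 need to be stored, which mirrors the paper's point about the small key-storage footprint.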
EK Draconis. Magnetic activity in the photosphere and chromosphere
NASA Astrophysics Data System (ADS)
Järvinen, S. P.; Berdyugina, S. V.; Korhonen, H.; Ilyin, I.; Tuominen, I.
2007-09-01
Context: As a young solar analogue, EK Draconis provides an opportunity to study the magnetic activity of the infant Sun. Aims: We present three new surface temperature maps of EK Draconis and compare them with previous results obtained from long-term photometry. Furthermore, we determined a set of stellar parameters and compared them with the corresponding solar values. Methods: Atmospheric parameters were determined by comparing observed and synthetic spectra calculated with stellar atmosphere models. Surface temperature maps were obtained using the Occamian approach inversion technique. The differential rotation of EK Dra was estimated using two different methods. Results: A detailed model atmosphere analysis of high resolution spectra of EK Dra has yielded a self-consistent set of atmospheric parameters: T_eff = 5750 K, log g = 4.5, [M/H] = 0.0, ξt = 1.6 km s-1. The evolutionary models imply that the star is slightly more massive than the Sun and has an age between 30 and 50 Myr, which agrees with the determined lithium abundance of log N(Li) = 3.02. Moreover, the atmospheric parameters, as well as the wings of the Ca II 8662 Å line, indicate that the photosphere of EK Dra is very similar to that of the present Sun, while their chromospheres differ. There also seems to be a correlation between magnetic features seen in the photosphere and chromosphere. The temperature images reveal spots only about 500 K cooler than the quiet photosphere. The mean spot latitude varies with time. The obtained differential rotation is very small, but its sign supports solar-type differential rotation on EK Dra. Based on observations made with the Nordic Optical Telescope, operated on the island of La Palma jointly by Denmark, Finland, Iceland, Norway, and Sweden, in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. The table and figures are only available in electronic form at http://www.aanda.org
Chase, J Geoffrey; Lambermont, Bernard; Starfinger, Christina; Hann, Christopher E; Shaw, Geoffrey M; Ghuysen, Alexandre; Kolh, Philippe; Dauby, Pierre C; Desaive, Thomas
2011-01-01
A cardiovascular system (CVS) model and parameter identification method have previously been validated for identifying different cardiac and circulatory dysfunctions, both in simulation and using porcine models of pulmonary embolism, hypovolemia with PEEP titrations, and induced endotoxic shock. However, these studies required both left and right heart catheters to collect the data required for subject-specific monitoring and diagnosis, a maximally invasive data set for a critical care setting, although one that does occur in practice. Hence, use of this model-based diagnostic would require significant additional invasive sensors for some subjects, which is unacceptable in some, if not all, cases. The main goal of this study is to prove the concept of using only measurements from one side of the heart (the right) in a 'minimal' data set to identify an effective patient-specific model that can capture key clinical trends in endotoxic shock. This research extends existing methods to a reduced and minimal data set requiring only a single catheter, reducing the risk of infection and other complications, a very common, typical situation in critical care patients, particularly after cardiac surgery. The extended methods and the assumptions that underpin them are developed and presented in a case study on the identification of pig-specific parameters in an animal model of induced endotoxic shock. This case study is used to define the impact of this minimal data set on the quality and accuracy of the model application for monitoring, detecting and diagnosing septic shock. Six anesthetized healthy pigs weighing 20-30 kg received a 0.5 mg kg(-1) endotoxin infusion over a period of 30 min from T0 to T30. For this research, only right heart measurements were obtained.
Errors for the identified model are within 8% when the model is identified from data, re-simulated and then compared to the experimentally measured data, including measurements not used in the identification process that were reserved for validation. Importantly, all identified parameter trends match physiologically, clinically and experimentally expected changes, indicating that no diagnostic power is lost. This work represents a further step toward validating this model-based approach to cardiovascular diagnosis and therapy guidance in monitoring endotoxic disease states in human subjects. The results and methods obtained can be readily extended from this case study to the other animal model results presented previously. Overall, these results provide further support for prospective, proof-of-concept clinical testing in humans.
NASA Technical Reports Server (NTRS)
Sagdeev, Roald
1995-01-01
The main scientific objectives of the project were: (1) calculation of the average time history for different subsets of BATSE gamma-ray bursts; (2) comparison of averaged parameters and averaged time histories for different Burst And Transient Source Experiment (BATSE) gamma-ray burst (GRB) sets; (3) comparison of results obtained with BATSE data with those obtained with the APEX experiment on the PHOBOS mission; and (4) use of the results of (1)-(3) to compare current models of gamma-ray burst sources.
Fused Deposition Technique for Continuous Fiber Reinforced Thermoplastic
NASA Astrophysics Data System (ADS)
Bettini, Paolo; Alitta, Gianluca; Sala, Giuseppe; Di Landro, Luca
2017-02-01
A simple technique for the production of continuous fiber reinforced thermoplastics by fused deposition modeling, involving a common 3D printer with quite limited modifications, is presented. An adequate setting of processing parameters and deposition path makes it possible to obtain components with markedly enhanced mechanical characteristics compared to conventional 3D printed items. The most relevant problems related to the simultaneous feeding of fibers and polymer are discussed. The properties of the obtained aramid fiber reinforced polylactic acid (PLA), in terms of impregnation quality and mechanical response, are measured.
Cellular signaling identifiability analysis: a case study.
Roper, Ryan T; Pia Saccomani, Maria; Vicini, Paolo
2010-05-21
Two primary purposes for mathematical modeling in cell biology are (1) simulation for making predictions of experimental outcomes and (2) parameter estimation for drawing inferences from experimental data about unobserved aspects of biological systems. While the former purpose has become common in the biological sciences, the latter is less common, particularly when studying cellular and subcellular phenomena such as signaling, the focus of the current study. Data are difficult to obtain at this level. Therefore, even models of only modest complexity can contain parameters for which the available data are insufficient for estimation. In the present study, we use a set of published cellular signaling models to address issues related to global parameter identifiability. That is, we address the following question: assuming known time courses for some model variables, which parameters is it theoretically impossible to estimate, even with continuous, noise-free data? Following an introduction to this problem and its relevance, we perform a full identifiability analysis on a set of cellular signaling models using DAISY (Differential Algebra for the Identifiability of SYstems). We use our analysis to bring to light important issues related to parameter identifiability in ordinary differential equation (ODE) models. We contend that this is, as yet, an under-appreciated issue in biological modeling and, more particularly, in cell biology. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zábranová, Eliška; Matyska, Ctirad
2014-10-01
After the 2010 Maule and 2011 Tohoku earthquakes, the spheroidal modes up to 1 mHz were clearly registered by the Global Geodynamic Project (GGP) network of superconducting gravimeters (SG). Fundamental parameters in synthetic calculations of the signals are the quality factors of the modes. We study the role of their uncertainties in centroid-moment-tensor (CMT) inversions. First, we have inverted the SG data from selected GGP stations to jointly determine the quality factors of these normal modes and the three low-frequency CMT components, Mrr, (Mϑϑ-Mφφ)/2 and Mϑφ, that generate the observed SG signal. We have used several-day-long records to minimize the trade-off between the quality factors and the CMT, but it was not eliminated completely. We have also inverted each record separately to obtain error estimates of the resulting parameters. Consequently, we have employed 60-h-long GGP records for several published modal-quality-factor sets and inverted only the same three CMT components. The obtained CMT tensors are close to the solution from the joint Q-CMT inversion of longer records, and the resulting variability of the CMT components is smaller than the differences among routine agency solutions. Reliable low-frequency CMT components can thus be obtained for any quality factors from the studied sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Masood; Malik, Rabia, E-mail: rabiamalik.qau@gmail.com; Munir, Asif
In this article, the mixed convective heat transfer to a Sisko fluid over a radially stretching surface in the presence of convective boundary conditions is investigated. The viscous dissipation and thermal radiation effects are also taken into account. Suitable transformations are applied to convert the governing partial differential equations into a set of nonlinear coupled ordinary differential equations. The analytical solution of the governing problem is obtained by using the homotopy analysis method (HAM). Additionally, these analytical results are compared with the numerical results obtained by the shooting technique. The obtained results for the velocity and temperature are analyzed graphically for several physical parameters for the assisting and opposing flows. It is found that the effect of the buoyancy parameter is more prominent in the case of the assisting flow than in the opposing flow. Further, numerical values are given in tabular form for the local skin friction coefficient and local Nusselt number. A remarkable agreement is noticed when comparing the present results with the results reported in the literature as a special case.
Laser induced periodic surface structures on pyrolytic carbon prosthetic heart valve
NASA Astrophysics Data System (ADS)
Stepak, Bogusz D.; Łecka, Katarzyna M.; Płonek, Tomasz; Antończak, Arkadiusz J.
2016-12-01
Laser-induced periodic surface structures (LIPSS) can appear in different forms, such as ripples, grooves or cones. These highly periodic wavy surface features, which are frequently smaller than the incident light wavelength, make it possible to nanostructure many different materials. Furthermore, by changing the laser parameters one can obtain a wide spectrum of periodicities and geometries. The aim of this research was to determine the possibility of nanostructuring pyrolytic carbon (PyC) heart valve leaflets using different irradiation conditions. The study was performed using two laser sources with different pulse durations (15 ps, 450 fs) as well as different wavelengths (1064, 532, 355 nm). Both low and high spatial frequency LIPSS were observed for each set of irradiation parameters. In the case of femtosecond laser pulses, we obtained deep subwavelength ripple periods, even ten times smaller than the applied wavelength. The obtained ripple periods ranged from 90 up to 860 nm. Raman spectra revealed an increase of disorder after laser irradiation that was comparable for both pico- and femtosecond pulses.
NASA Astrophysics Data System (ADS)
Cipriano, F. R.; Lagmay, A. M. A.; Horritt, M.; Mendoza, J.; Sabio, G.; Punay, K. N.; Taniza, H. J.; Uichanco, C.
2015-12-01
Widespread flooding is a major problem in the Philippines. The country experiences heavy rainfall throughout the year, and several areas are prone to flood hazards because of its unique topography. Human casualties and destruction of infrastructure are just some of the damages caused by flooding, and the Philippine government has undertaken various efforts to mitigate these hazards. One of the solutions was to create flood hazard maps of different floodplains and use them to predict the possible catastrophic results of different rain scenarios. Producing these maps with accurate output requires different input parameters, one of which is the calculation of hydrological components from topographical data. This paper presents how a calibrated lag time (TL) equation was obtained using measurable catchment parameters. Lag time is an essential input in flood mapping and is defined as the duration between the peak rainfall and the peak discharge of the watershed. The lag time equation involves three measurable parameters, namely, watershed length (L), maximum potential retention (S) derived from the curve number, and watershed slope (Y), all of which were available from RADARSAT Digital Elevation Models (DEMs). This approach was based on a similar method developed by CH2M Hill and Horritt for Taiwan, which has a set of meteorological and hydrological parameters similar to that of the Philippines. Rainfall data from fourteen water level sensors covering 67 storms from all the regions in the country were used to estimate the actual lag time. These sensors were chosen using a screening process that considers the distance of the sensors from the sea, the availability of recorded data, and the catchment size. The actual lag time values were plotted against the values obtained from the Natural Resources Conservation Service handbook lag time equation.
Regression analysis was used to obtain the final calibrated equation that would be used to calculate the lag time specifically for rivers in the Philippine setting. The calculated lag time values could then be used as a parameter for modeling different flood scenarios in the country.
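The uncalibrated form being fitted is the standard NRCS lag equation. Below is a direct transcription, assuming the customary units (L in feet, slope Y in percent, lag in hours); the calibrated Philippine coefficients produced by the regression are not reproduced here.

```python
def nrcs_lag_time(L_ft, CN, Y_pct):
    """NRCS lag equation: TL = L^0.8 * (S + 1)^0.7 / (1900 * Y^0.5),
    where S = 1000/CN - 10 is the maximum potential retention
    derived from the curve number CN. Units: L in feet, Y in
    percent, TL in hours."""
    S = 1000.0 / CN - 10.0
    return L_ft ** 0.8 * (S + 1.0) ** 0.7 / (1900.0 * Y_pct ** 0.5)
```

The equation behaves as expected physically: lag grows with watershed length and retention, and shrinks with slope.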
NASA Astrophysics Data System (ADS)
Ito, Shin-ichi; Yoshie, Naoki; Okunishi, Takeshi; Ono, Tsuneo; Okazaki, Yuji; Kuwata, Akira; Hashioka, Taketo; Rose, Kenneth A.; Megrey, Bernard A.; Kishi, Michio J.; Nakamachi, Miwa; Shimizu, Yugo; Kakehi, Shigeho; Saito, Hiroaki; Takahashi, Kazutaka; Tadokoro, Kazuaki; Kusaka, Akira; Kasai, Hiromi
2010-10-01
The Oyashio region in the western North Pacific supports high biological productivity and has been well monitored. We applied the NEMURO (North Pacific Ecosystem Model for Understanding Regional Oceanography) model to simulate the nutrients, phytoplankton, and zooplankton dynamics. Determination of parameter values is very important, yet ad hoc calibration methods are often used. We used the automatic calibration software PEST (model-independent Parameter ESTimation), which has been used previously with NEMURO but in a system without ontogenetic vertical migration of the large zooplankton functional group. Determining the performance of PEST with vertical migration, and obtaining a set of realistic parameter values for the Oyashio, will likely be useful in future applications of NEMURO. Five identical twin simulation experiments were performed with the one-box version of NEMURO. The experiments differed in whether monthly snapshot or averaged state variables were used, in whether state variables were model functional groups or were aggregated (total phytoplankton, small plus large zooplankton), and in whether vertical migration of large zooplankton was included or not. We then applied NEMURO to monthly climatological field data covering 1 year for the Oyashio, and compared model fits and parameter values between PEST-determined estimates and values used in previous applications to the Oyashio region that relied on ad hoc calibration. We substituted the PEST and ad hoc calibrated parameter values into a 3-D version of NEMURO for the western North Pacific, and compared the two sets of spatial maps of chlorophyll-a with satellite-derived data. The identical twin experiments demonstrated that PEST could recover the known model parameter values when vertical migration was included, and that over-fitting can occur as a result of slight differences in the values of the state variables.
PEST recovered known parameter values when using monthly snapshots of aggregated state variables, but estimated a different set of parameters with monthly averaged values. Both sets of parameters resulted in good fits of the model to the simulated data. Disaggregating the variables provided to PEST into functional groups did not solve the over-fitting problem, and including vertical migration seemed to amplify the problem. When we used the climatological field data, simulated values with PEST-estimated parameters were closer to these field data than with the previously determined ad hoc set of parameter values. When these same PEST and ad hoc sets of parameter values were substituted into 3-D-NEMURO (without vertical migration), the PEST-estimated parameter values generated spatial maps that were similar to the satellite data for the Kuroshio Extension during January and March and for the subarctic ocean from May to November. With non-linear problems, such as vertical migration, PEST should be used with caution because parameter estimates can be sensitive to how the data are prepared and to the values used for the searching parameters of PEST. We recommend the usage of PEST, or other parameter optimization methods, to generate first-order parameter estimates for simulating specific systems and for insertion into 2-D and 3-D models. The parameter estimates that are generated are useful, and the inconsistencies between simulated values and the available field data provide valuable information on model behavior and the dynamics of the ecosystem.
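The identical-twin idea is easy to reproduce in miniature: generate "observations" from a model with known parameters, then check whether calibration recovers them. The sketch below uses a trivial exponential stand-in for NEMURO and a grid search in place of PEST's gradient-based estimator; everything here is illustrative, not the study's setup.

```python
import numpy as np

def model(r, t):
    """Trivial stand-in for ecosystem dynamics: exponential growth."""
    return np.exp(r * t)

def calibrate(obs, t, candidates):
    """Identical-twin calibration: choose the candidate parameter that
    minimizes the sum-of-squares misfit to the synthetic observations."""
    return min(candidates, key=lambda r: np.sum((model(r, t) - obs) ** 2))

t = np.linspace(0.0, 5.0, 20)
obs = model(0.4, t)                          # "truth" generated with r = 0.4
r_hat = calibrate(obs, t, np.arange(0.1, 0.9, 0.01))
```

With noise-free snapshots the known parameter is recovered; the over-fitting the authors observed arises when the calibration data are averaged or otherwise transformed relative to the model output.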
Zyout, Imad; Czajkowska, Joanna; Grzegorzek, Marcin
2015-12-01
The high number of false positives and the resulting number of avoidable breast biopsies are the major problems faced by current mammography Computer Aided Detection (CAD) systems. False positive reduction is a requirement not only for mass but also for calcification CAD systems currently deployed for clinical use. This paper tackles two problems related to reducing the number of false positives in the detection of all lesions and of masses, respectively. Firstly, textural patterns of breast tissue are analyzed using several multi-scale textural descriptors based on the wavelet transform and the gray level co-occurrence matrix. The second problem addressed in this paper is parameter selection and performance optimization. For this, we adopt a model selection procedure based on Particle Swarm Optimization (PSO) for selecting the most discriminative textural features and for strengthening the generalization capacity of the supervised learning stage, which is based on a Support Vector Machine (SVM) classifier. For evaluating the proposed methods, two sets of suspicious mammogram regions have been used. The first one, obtained from the Digital Database for Screening Mammography (DDSM), contains 1494 regions (1000 normal and 494 abnormal samples). The second set of suspicious regions was obtained from the database of the Mammographic Image Analysis Society (mini-MIAS) and contains 315 (207 normal and 108 abnormal) samples. Results from both datasets demonstrate the efficiency of using PSO-based model selection for optimizing both the classifier hyper-parameters and the selected features. Furthermore, the obtained results indicate the promising performance of the proposed textural features, more specifically those based on the co-occurrence matrix of the wavelet image representation. Copyright © 2015 Elsevier Ltd. All rights reserved.
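The PSO machinery behind the model selection can be sketched in a few lines. In the paper's setting the objective would be a cross-validated SVM error as a function of a hyper-parameter; here any scalar function stands in, and all names and constants are assumptions of this sketch.

```python
import random

def pso(f, lo, hi, n_particles=20, n_iter=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal 1-D particle swarm optimizer (minimization).
    w is inertia; c1/c2 weight the pulls toward each particle's
    personal best and the swarm's global best."""
    random.seed(0)
    x = [random.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = x[:]                       # personal bests
    gbest = min(x, key=f)              # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] = min(hi, max(lo, x[i] + v[i]))   # clamp to the search range
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i]
        gbest = min(pbest, key=f)
    return gbest

# Hypothetical surrogate "validation error" with its minimum at C = 3
best = pso(lambda c: (c - 3.0) ** 2, 0.0, 10.0)
```

For real SVM model selection, f would wrap a cross-validation loop and the search would run over several dimensions (e.g. C, kernel width, feature-subset encoding).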
The estimation of material and patch parameters in a PDE-based circular plate model
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.; Brown, D. E.; Metcalf, Vern L.; Silcox, R. J.
1995-01-01
The estimation of material and patch parameters for a system involving a circular plate, to which piezoceramic patches are bonded, is considered. A partial differential equation (PDE) model for the thin circular plate is used, with the passive and active contributions from the patches included in the internal and external bending moments. This model contains piecewise constant parameters describing the density, flexural rigidity, Poisson ratio, and Kelvin-Voigt damping for the system, as well as patch constants and a coefficient for viscous air damping. Examples demonstrating the estimation of these parameters with experimental acceleration data and a variety of inputs to the experimental plate are presented. By using a physically derived PDE model to describe the system, parameter sets consistent across experiments are obtained, even when phenomena such as damping due to electric circuits affect the system dynamics.
Sensitivity and specificity of univariate MRI analysis of experimentally degraded cartilage
Lin, Ping-Chang; Reiter, David A.; Spencer, Richard G.
2010-01-01
MRI is increasingly used to evaluate cartilage in tissue constructs, explants, and animal and patient studies. However, while mean values of MR parameters, including T1, T2, magnetization transfer rate km, apparent diffusion coefficient ADC, and the dGEMRIC-derived fixed charge density, correlate with tissue status, the ability to classify tissue according to these parameters has not been explored. Therefore, the sensitivity and specificity with which each of these parameters was able to distinguish between normal and trypsin-degraded, and between normal and collagenase-degraded, cartilage explants were determined. Initial analysis was performed using a training set to determine simple group means, to which parameters obtained from a validation set were compared. T1 and ADC showed the greatest ability to discriminate between normal and degraded cartilage. Further analysis with k-means clustering, which eliminates the need for a priori identification of sample status, generally performed comparably. The use of fuzzy c-means (FCM) clustering to define centroids likewise did not result in improved discrimination. Finally, an FCM clustering approach in which validation samples were assigned in a probabilistic fashion to control and degraded groups was implemented, reflecting the range of tissue characteristics seen with cartilage degradation. PMID:19705467
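The simplest of the analyses above, classification against training-set group means, amounts to a nearest-mean rule, from which sensitivity and specificity follow directly. A sketch with hypothetical parameter values (the real work used measured T1, T2, km, ADC, and fixed charge density):

```python
def nearest_mean_classify(x, mean_normal, mean_degraded):
    """Assign a sample to whichever training-set group mean is closer."""
    return "degraded" if abs(x - mean_degraded) < abs(x - mean_normal) else "normal"

def sens_spec(values, labels, mean_n, mean_d):
    """Sensitivity (true-positive rate for 'degraded') and specificity
    (true-negative rate for 'normal') of the nearest-mean rule."""
    pred = [nearest_mean_classify(v, mean_n, mean_d) for v in values]
    tp = sum(p == "degraded" and l == "degraded" for p, l in zip(pred, labels))
    tn = sum(p == "normal" and l == "normal" for p, l in zip(pred, labels))
    return tp / labels.count("degraded"), tn / labels.count("normal")
```

k-means and fuzzy c-means replace the fixed training-set means with centroids learned without a priori labels, but the assignment step is the same distance comparison.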
Gupta, Diksha; Singh, Bani
2014-01-01
The objective of this investigation is to analyze the effect of unsteadiness on the mixed convection boundary layer flow of a micropolar fluid over a permeable shrinking sheet in the presence of viscous dissipation. A variable distribution of suction is assumed at the sheet. The unsteadiness in the flow and temperature fields is caused by the time dependence of the shrinking velocity and surface temperature. With the aid of similarity transformations, the governing partial differential equations are transformed into a set of nonlinear ordinary differential equations, which are solved numerically using the variational finite element method. The influence of important physical parameters, namely, the suction parameter, unsteadiness parameter, buoyancy parameter and Eckert number, on the velocity, microrotation, and temperature functions is investigated and analyzed with the help of their graphical representations. Additionally, the skin friction and the rate of heat transfer have been computed. Under special conditions, an exact solution for the flow velocity is compared with the numerical results obtained by the finite element method. An excellent agreement is observed for the two sets of solutions. Furthermore, to verify the convergence of the numerical results, calculations are conducted with an increasing number of elements. PMID:24672310
A Taguchi approach on optimal process control parameters for HDPE pipe extrusion process
NASA Astrophysics Data System (ADS)
Sharma, G. V. S. S.; Rao, R. Umamaheswara; Rao, P. Srinivasa
2017-06-01
High-density polyethylene (HDPE) pipes find versatile applicability for the transportation of water, sewage and slurry from one place to another. These pipes must therefore withstand considerable pressure from the fluid carried. The present work entails the optimization of the withstanding pressure of HDPE pipes using the Taguchi technique. The traditional heuristic methodology stresses a trial-and-error approach and relies heavily upon the accumulated experience of the process engineers for determining the optimal process control parameters, which results in the setting of less-than-optimal values. Hence, there arises a need to determine optimal process control parameters for the pipe extrusion process which can ensure robust pipe quality and process reliability. In the proposed optimization strategy, design of experiments (DoE) is conducted wherein different control parameter combinations are analyzed by considering multiple setting levels of each control parameter. The concept of the signal-to-noise ratio (S/N ratio) is applied, and ultimately optimum values of the process control parameters are obtained: a pushing zone temperature of 166 °C, a dimmer speed of 8 rpm, and a die head temperature of 192 °C. A confirmation experimental run was also conducted to verify the analysis, and the resulting values proved to be consistent with the main experimental findings; the withstanding pressure showed a significant improvement from 0.60 to 1.004 MPa.
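Since the withstanding pressure is to be maximized, the relevant Taguchi criterion is the larger-the-better S/N ratio. The standard formula is transcribed below (the experimental data themselves are not reproduced here):

```python
import math

def sn_larger_the_better(responses):
    """Taguchi larger-the-better S/N ratio in dB:
    S/N = -10 * log10(mean(1 / y^2)).
    The control-parameter level with the highest S/N is preferred."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in responses) / len(responses))
```

In a DoE table, this ratio is computed per experimental run, and the mean S/N at each level of each factor identifies the optimum setting combination.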
Ronald E. McRoberts
2005-01-01
Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...
Nijran, Kuldip S; Houston, Alex S; Fleming, John S; Jarritt, Peter H; Heikkinen, Jari O; Skrypniuk, John V
2014-07-01
In this second UK audit of quantitative parameters obtained from renography, phantom simulations were used in cases in which the 'true' values could be estimated, allowing the accuracy of the measured parameters to be assessed. A renal physical phantom was used to generate a set of three phantom simulations (six kidney functions) acquired on three different gamma camera systems. A total of nine phantom simulations and three real patient studies were distributed to UK hospitals participating in the audit. Centres were asked to provide results for the following parameters: relative function and time-to-peak (whole kidney and cortical region). As with previous audits, a questionnaire collated information on methodology. Errors were assessed as the root mean square deviation from the true value. Sixty-one centres responded to the audit, with some hospitals providing multiple sets of results. Twenty-one centres provided a complete set of parameter measurements. Relative function and time-to-peak showed a reasonable degree of accuracy and precision in most UK centres. The overall average root mean square deviations from the true value of (i) the time-to-peak measurement for the whole kidney and (ii) the relative function measurement were 7.7% and 4.5%, respectively. These results showed a measure of consistency in the relative function and time-to-peak that was similar to the results reported in a previous renogram audit by our group. Analysis of the audit data suggests a reasonable degree of accuracy in the quantification of renal function using relative function and time-to-peak measurements. However, it is reasonable to conclude that the objectives of the audit could not be fully realized because of the limitations of the mechanical phantom in providing true values for renal parameters.
NASA Astrophysics Data System (ADS)
Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng
2016-09-01
This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach, with the aim of improving sampling efficiency for multiple-metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII-based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple-metrics performance, parameter uncertainty and flood forecasting uncertainty in a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for the Qing River reservoir, China. The results obtained demonstrate the following advantages of the ɛ-NSGAII-based sampling approach in comparison to LHS: (1) the former performs more effectively and efficiently than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is nine times shorter; (2) the Pareto tradeoffs between metrics are demonstrated clearly with the solutions from ɛ-NSGAII-based sampling, and their Pareto optimal values are better than those of LHS, which means better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) the parameter posterior distributions from ɛ-NSGAII-based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) the forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also considerably reduced with ɛ-NSGAII-based sampling. This study provides a new sampling approach to improve multiple-metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
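For reference, the baseline sampler the paper compares against can be written in a few lines. A minimal Latin hypercube sampler on the unit hypercube is sketched below (parameter ranges would then be rescaled as GLUE requires); this is an illustration, not the study's implementation.

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng=None):
    """Plain Latin hypercube sample on [0, 1)^d: each parameter's range
    is split into n equal strata and each stratum is sampled exactly once."""
    rng = np.random.default_rng(rng)
    # one point per stratum, per parameter
    u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
    # independently shuffle the strata of each parameter column
    for j in range(n_params):
        rng.shuffle(u[:, j])
    return u
```

Unlike ɛ-NSGAII, this sampler is blind to model performance: every parameter set is equally likely, which is exactly why so many LHS samples fall outside the behavioral region.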
Data Summarization in the Node by Parameters (DSNP): Local Data Fusion in an IoT Environment.
Maschi, Luis F C; Pinto, Alex S R; Meneguette, Rodolfo I; Baldassin, Alexandro
2018-03-07
With the advent of the Internet of Things, billions of objects or devices are inserted into the global computer network, generating and processing data at a volume never imagined before. This paper proposes a way to collect and process local data through a data fusion technique called summarization. The main feature of the proposal is local data fusion, through parameters provided by the application, ensuring the quality of the data collected by the sensor node. In the evaluation, a sensor node performing data summarization was compared with one continuously recording the collected data. Two sets of nodes were created: one with a sensor node that analyzed the luminosity of the room, which obtained a 97% reduction in the volume of data generated, and another that analyzed the temperature of the room, obtaining an 80% reduction in data volume. These tests demonstrate that local data fusion at the node can be used to reduce the volume of data generated, consequently decreasing the volume of messages generated by IoT environments.
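A minimal sketch of this kind of threshold-based in-node summarization: transmit a reading only when it differs from the last transmitted value by more than an application-supplied tolerance. The tolerance, the synthetic readings and the resulting reduction figure are illustrative, not the paper's DSNP algorithm.

```python
def summarize(readings, tolerance):
    # keep a reading only when it deviates from the last kept value
    # by more than the application-defined tolerance
    kept = [readings[0]]
    for r in readings[1:]:
        if abs(r - kept[-1]) > tolerance:
            kept.append(r)
    return kept

# slowly drifting synthetic "temperature" samples
readings = [20.0 + 0.01 * i for i in range(1000)]
kept = summarize(readings, tolerance=0.5)
reduction = 1.0 - len(kept) / len(readings)
print(f"kept {len(kept)} of {len(readings)} readings ({reduction:.0%} reduction)")
```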
Wang, Xiaodong; Bi, Xuejun; Hem, Lars John; Ratnaweera, Harsha
2018-07-15
Microbial community diversity determines the function of each chamber of multi-stage moving bed biofilm reactor (MBBR) systems. How microbial community data can be further used to serve wastewater treatment process modelling and optimization has rarely been studied. In this study, a MBBR system was set up to investigate the microbial community diversity of the biofilm in each functional chamber. The composition of the microbial community of biofilm from different chambers of the MBBR was quantified by high-throughput sequencing. A significantly higher proportion of autotrophs was found in the second aerobic chamber (15.4%) than in the first aerobic chamber (4.3%), while autotrophs in the anoxic chamber were negligible. Moreover, ratios of active heterotrophic to autotrophic biomass (X_H/X_A) were obtained by performing respiration tests. By setting the heterotroph/autotroph ratios obtained from sequencing analysis equal to X_H/X_A, a novel approach for kinetic model parameter estimation was developed. This work not only investigated the microbial community of a MBBR system, but also provided an approach to make further use of molecular microbiology analysis results. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Protim Das, Partha; Gupta, P.; Das, S.; Pradhan, B. B.; Chakraborty, S.
2018-01-01
Maraging steel (MDN 300) finds application in many industries because of its high hardness, which makes it very difficult to machine. Electro-discharge machining (EDM) is an extensively popular machining process that can be used for such materials, and optimization of the response parameters is essential for effective machining. Past researchers have already used the Taguchi method to obtain the optimal responses of the EDM process for this material, with responses such as material removal rate (MRR), tool wear rate (TWR), relative wear ratio (RWR) and surface roughness (SR), considering discharge current, pulse-on time, pulse-off time, arc gap and duty cycle as process parameters. In this paper, grey relational analysis (GRA) combined with fuzzy logic is applied to this multi-objective optimization problem, and the responses are checked by implementing the derived parametric setting. The parametric setting derived by the proposed method was found to yield better responses than those reported by past researchers. The obtained results are also verified using the technique for order of preference by similarity to ideal solution (TOPSIS). The predicted results likewise show a significant improvement over those of past researchers.
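The grey relational analysis step can be illustrated with a small sketch: normalize each response, compute grey relational coefficients against the ideal normalized value, and average them into a grade per experiment. The EDM response values below are invented for illustration, and the coefficient formula assumes the usual distinguishing coefficient ζ = 0.5 with minimum and maximum deviations of 0 and 1 after normalization.

```python
def gra_grades(rows, larger_better, zeta=0.5):
    # normalize each response column to [0, 1] (1 = ideal)
    cols = list(zip(*rows))
    norm_cols = []
    for c, lb in zip(cols, larger_better):
        lo, hi = min(c), max(c)
        norm_cols.append([(v - lo) / (hi - lo) if lb else (hi - v) / (hi - lo)
                          for v in c])
    grades = []
    for row in zip(*norm_cols):
        # grey relational coefficient: zeta / (deviation + zeta),
        # since min/max deviations are 0 and 1 after normalization
        coeffs = [zeta / ((1.0 - v) + zeta) for v in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# columns: MRR (larger is better), TWR and SR (smaller is better); invented data
experiments = [(12.1, 0.8, 3.2), (15.4, 1.1, 2.9), (10.2, 0.6, 3.8)]
grades = gra_grades(experiments, larger_better=[True, False, False])
best = max(range(len(grades)), key=grades.__getitem__)
print("grey relational grades:", [round(g, 3) for g in grades], "best run:", best)
```

The fuzzy-logic variant used in the paper replaces this crisp averaging with fuzzy membership functions over the coefficients; the ranking idea is the same.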
Anelastic characterization of soft poroelastic materials by anelastography
NASA Astrophysics Data System (ADS)
Flores B, Carolina; Ammann, Jean Jacques; Rivera, Ricardo
2008-11-01
This paper presents the 1D characterization of the local anelastic strain determined in soft poroelastic materials through acoustic scattering in a creep test configuration. Backscattering signals are obtained at successive times in a specimen submitted to a constant stress, applied coaxially to the acoustic beam of a 5 MHz ultrasonic transducer operated in pulse-echo mode. The local displacement is measured by determining the local shift between the RF traces, using a running cross-correlation between equivalent segments extracted from pairs of RF traces. The local strain in the specimen is obtained as the displacement gradient. The method has been implemented on biphasic porous materials that exhibit poroelastic behavior, such as synthetic latex sponges impregnated with viscous liquids. The strain/time curves have been interpreted through a continuous bimodal anelastic (CBA) model, composed of an infinite set of Kelvin-Voigt cells connected in series with an elastic spring. Fitting an experimental strain/time curve at a specific depth with the CBA model characterizes the local anelastic behavior through a set of seven parameters at that location: three short-term and three long-term anelastic parameters and one elastic constant.
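The displacement estimation described above, a cross-correlation between windowed segments of two RF traces, can be sketched with synthetic traces; the pulse shape, window and shift below are invented for illustration.

```python
import math

def best_lag(seg, trace, start, max_lag):
    # find the integer shift of `seg` within `trace` (around `start`)
    # that maximizes the cross-correlation
    def xcorr(lag):
        return sum(s * trace[start + lag + i] for i, s in enumerate(seg))
    return max(range(-max_lag, max_lag + 1), key=xcorr)

# synthetic RF trace: windowed sinusoid standing in for a backscattered echo
n = 400
trace1 = [math.sin(0.3 * i) * math.exp(-((i - 200) ** 2) / 5000.0) for i in range(n)]
true_shift = 7
trace2 = [0.0] * true_shift + trace1[:-true_shift]   # same trace delayed 7 samples

seg = trace1[180:220]                                # window around the echo
lag = best_lag(seg, trace2, start=180, max_lag=10)
print("estimated local displacement (samples):", lag)
```

Running this window along depth yields a displacement profile, whose gradient gives the local strain.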
NASA Astrophysics Data System (ADS)
Kieseler, Jan
2017-11-01
A method is discussed that allows combining sets of differential or inclusive measurements. It is assumed that at least one measurement was obtained by simultaneously fitting a set of nuisance parameters representing sources of systematic uncertainties. As a result of beneficial constraints from the data, all such fitted parameters are correlated with each other. The best approach for a combination of these measurements would be the maximization of a combined likelihood, for which the full fit model of each measurement and the original data are required; however, this information is only rarely publicly available. In its absence, the most commonly used combination methods are unable to account for these correlations between uncertainties, which can lead to severe biases, as shown in this article. The method discussed here provides a solution to this problem. It relies only on the public result and its covariance or Hessian, and is validated against the combined-likelihood approach. A dedicated software package implementing this method is also presented. It provides a text-based user interface alongside a C++ interface; the latter also interfaces to ROOT classes for simple combination of binned measurements such as differential cross sections.
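For the simplest case of two correlated estimates of one quantity, a covariance-based combination of the kind relied on here reduces to the standard BLUE (best linear unbiased estimator) weights; the numbers below are illustrative.

```python
def blue_combine(x1, x2, v11, v22, v12):
    # minimum-variance unbiased weights for two correlated estimates
    # of the same quantity, given their covariance matrix entries
    w1 = (v22 - v12) / (v11 + v22 - 2.0 * v12)
    w2 = 1.0 - w1
    combined = w1 * x1 + w2 * x2
    variance = (w1 ** 2) * v11 + (w2 ** 2) * v22 + 2.0 * w1 * w2 * v12
    return combined, variance

x, var = blue_combine(10.0, 12.0, v11=1.0, v22=4.0, v12=0.5)
print(f"combined = {x:.3f} +/- {var ** 0.5:.3f}")
```

Ignoring the off-diagonal term v12 (as naive combinations do) changes both the central value and the quoted uncertainty, which is the bias the article warns about.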
NASA Astrophysics Data System (ADS)
Khalaf, A. M.; Khalifa, M. M.; Solieman, A. H. M.; Comsan, M. N. H.
2018-01-01
Owing to its doubly magic nature, with equal numbers of protons and neutrons, scattering on 40Ca can be successfully described by the optical model, which assumes a spherical nuclear potential. Optical model analysis was therefore employed to calculate the elastic scattering cross section for the p + 40Ca interaction at energies from 9 to 22 MeV, as well as the polarization at energies from 10 to 18.2 MeV. New optical model parameters (OMPs) were proposed based on the best fit to experimental data. The best-fit OMPs are found to depend smoothly on energy. The results were compared with other OMP sets on the basis of their chi-square (χ2) values. The obtained OMP set was used to calculate the volume integral of the potentials and the root-mean-square (rms) radius of the nuclear matter distribution of 40Ca. In addition, bulk nuclear matter properties of 40Ca were discussed using both the obtained rms radius and the Thomas-Fermi rms radius calculated with a spherical Hartree-Fock formalism employing a Skyrme-type nucleon-nucleon force. The SCAT2000 FORTRAN code was used for the optical model analysis.
Measuring opto-thermal parameters of basalt fibers using digital holographic microscopy.
Yassien, Khaled M; Agour, Mostafa
2017-02-01
A method for studying the effect of temperature on the optical properties of basalt fiber is presented. It is based on recording a set of phase-shifted digital holograms of the sample under test. The holograms are obtained with a system based on a Mach-Zehnder interferometer, in which the fiber sample, inserted in an immersion liquid, is placed within a temperature-controlled chamber. From the recorded digital holograms, the optical path differences used to calculate the refractive indices are determined. The accuracy of the refractive index measurement is on the order of 4 × 10⁻⁴. The influence of temperature on the dispersion parameters, polarizability per unit volume and dielectric susceptibility is also obtained. Moreover, the values of the dispersion and oscillation energies and Cauchy's constants are provided at different temperatures. © 2016 Wiley Periodicals, Inc.
Completion of the universal I-Love-Q relations in compact stars including the mass
NASA Astrophysics Data System (ADS)
Reina, Borja; Sanchis-Gual, Nicolas; Vera, Raül; Font, José A.
2017-09-01
In a recent paper, we applied a rigorous perturbed matching framework to show the amendment of the mass of rotating stars in Hartle's model. Here, we apply this framework to the tidal problem in binary systems. Our approach fully accounts for the correction to the Love numbers needed to obtain the universal I-Love-Q relations. We compute the corrected mass versus radius configurations of rotating quark stars, revisiting a classical paper on the subject. These corrections allow us to find a universal relation involving the second-order contribution to the mass δM. We thus complete the set of universal relations for the tidal problem in binary systems, involving four perturbation parameters, namely I, Love, Q and δM. These relations can be used to obtain the perturbation parameters directly from observational data.
Alghanem, Bandar; Nikitin, Frédéric; Stricker, Thomas; Duchoslav, Eva; Luban, Jeremy; Strambio-De-Castillia, Caterina; Muller, Markus; Lisacek, Frédérique; Varesio, Emmanuel; Hopfgartner, Gérard
2017-05-15
In peptide quantification by liquid chromatography/mass spectrometry (LC/MS), the optimization of multiple reaction monitoring (MRM) parameters is essential for sensitive detection. We have compared different approaches to build MRM assays, based either on flow injection analysis (FIA) of isotopically labelled peptides, or on the knowledge and the prediction of the best settings for MRM transitions and collision energies (CE). In this context, we introduce MRMOptimizer, an open-source software tool that processes spectra and assists the user in selecting transitions in the FIA workflow. MS/MS spectral libraries with CE voltages from 10 to 70 V are automatically acquired in FIA mode for isotopically labelled peptides. Then MRMOptimizer determines the optimal MRM settings for each peptide. To assess the quantitative performance of our approach, 155 peptides, representing 84 proteins, were analysed by LC/MRM-MS and the peak areas were compared between: (A) the MRMOptimizer-based workflow, (B1) the SRMAtlas transitions set used 'as-is'; (B2) the same SRMAtlas set with CE parameters optimized by Skyline. 51% of the three most intense transitions per peptide were shown to be common to both A and B1/B2 methods, and displayed similar sensitivity and peak area distributions. The peak areas obtained with MRMOptimizer for transitions sharing either the precursor ion charge state or the fragment ions with the SRMAtlas set at unique transitions were increased 1.8- to 2.3-fold. The gain in sensitivity using MRMOptimizer for transitions with different precursor ion charge state and fragment ions (8% of the total), reaches a ~ 11-fold increase. Isotopically labelled peptides can be used to optimize MRM transitions more efficiently in FIA than by searching databases. The MRMOptimizer software is MS independent and enables the post-acquisition selection of MRM parameters. 
Coefficients of variation for optimal CE values are lower than those obtained with the SRMAtlas approach (B2), and one additional peptide was detected. Copyright © 2017 John Wiley & Sons, Ltd.
Development and validation of the AFIT scene and sensor emulator for testing (ASSET)
NASA Astrophysics Data System (ADS)
Young, Shannon R.; Steward, Bryan J.; Gross, Kevin C.
2017-05-01
ASSET is a physics-based model used to generate synthetic data sets of wide field of view (WFOV) electro-optical and infrared (EO/IR) sensors with realistic radiometric properties, noise characteristics, and sensor artifacts. It was developed to meet the need for applications where precise knowledge of the underlying truth is required but is impractical to obtain for real sensors. For example, due to accelerating advances in imaging technology, the volume of data available from WFOV EO/IR sensors has drastically increased over the past several decades, and as a result, there is a need for fast, robust, automatic detection and tracking algorithms. Evaluation of these algorithms is difficult for objects that traverse a wide area (100-10,000 km) because obtaining accurate truth for the full object trajectory often requires costly instrumentation. Additionally, tracking and detection algorithms perform differently depending on factors such as the object kinematics, environment, and sensor configuration. A variety of truth data sets spanning these parameters are needed for thorough testing, which is often cost prohibitive. The use of synthetic data sets for algorithm development allows for full control of scene parameters with full knowledge of truth. However, in order for analysis using synthetic data to be meaningful, the data must be truly representative of real sensor collections. ASSET aims to provide a means of generating such representative data sets for WFOV sensors operating in the visible through thermal infrared. The work reported here describes the ASSET model, as well as provides validation results from comparisons to laboratory imagers and satellite data (e.g. Landsat-8).
NASA Astrophysics Data System (ADS)
Krishna, M. Veera; Swarnalathamma, B. V.
2017-07-01
We considered the transient MHD flow of a reactive second-grade fluid through a porous medium between two infinitely long horizontal parallel plates, when one of the plates is set into uniformly accelerated motion in the presence of a uniform transverse magnetic field under an Arrhenius reaction rate. The governing equations are solved by the Laplace transform technique. The effects of the pertinent parameters on the velocity and temperature are discussed in detail. The shear stress and Nusselt number at the plates are also obtained analytically and discussed computationally with reference to the governing parameters.
Inferential Framework for Autonomous Cryogenic Loading Operations
NASA Technical Reports Server (NTRS)
Luchinsky, Dmitry G.; Khasin, Michael; Timucin, Dogan; Sass, Jared; Perotti, Jose; Brown, Barbara
2017-01-01
We address the problem of autonomous cryogenic management of loading operations on the ground and in space. As a step toward solving this problem, we develop a probabilistic framework for inferring correlation parameters of two-fluid cryogenic flow. The simulation of two-phase cryogenic flow is performed using a nearly implicit scheme. A concise set of cryogenic correlations is introduced. The proposed approach is applied to an analysis of the cryogenic flow in the experimental Propellant Loading System built at NASA KSC. An efficient simultaneous optimization of a large number of model parameters is demonstrated, and good agreement with the experimental data is obtained.
NASA Astrophysics Data System (ADS)
Miyazaki, Eiji; Shimazaki, Kazunori; Numata, Osamu; Waki, Miyuki; Yamanaka, Riyo; Kimoto, Yugo
2016-09-01
Outgassing rate measurement, or the dynamic outgassing test, is used to obtain the outgassing properties of materials, i.e., Total Mass Loss (TML) and Collected Volatile Condensed Mass (CVCM). These properties are used as input parameters for contamination analysis, e.g., predicting the mass deposited on a spacecraft surface by substances outgassed from contaminant sources onboard. The results of such calculations are likely affected by the input parameters, so it is important to obtain a sufficient experimental data set of outgassing rate measurements in order to extract good outgassing parameters for calculation. As specified in the standard ASTM E 1559, TML is measured by a QCM sensor kept at cryogenic temperature, while CVCMs are measured at certain temperatures. In the present work, the authors propose a new experimental procedure to obtain more precise CVCMs from one run of the current test time with the present equipment: two of the four CQCMs in the equipment are temperature-controlled to cool step by step during the test run. It is expected that the deposition rate, that is, the sticking coefficient, can thereby be determined as a function of temperature. As a result, the sticking coefficient can be obtained directly between -50 and 50 degrees C in 5 degree C steps. The method appears suitable as an improved procedure for outgassing rate measurement. The present experiment also identified some issues with the new procedure, which will be considered in future work.
Liu, Hui; Li, Yingzi; Zhang, Yingxu; Chen, Yifu; Song, Zihang; Wang, Zhenyu; Zhang, Suoxin; Qian, Jianqiang
2018-01-01
Proportional-integral-derivative (PID) parameters play a vital role in the imaging process of an atomic force microscope (AFM). Traditional parameter tuning methods require considerable manpower, and it is difficult to set PID parameters in unattended working environments. In this manuscript, an intelligent tuning method of PID parameters based on iterative learning control is proposed to self-adjust the PID parameters of the AFM according to the sample topography. The method gathers sufficient information about the PID controller output signals and the tracking error, used to calculate the proper PID parameters, by repeated line scanning until convergence before normal scanning, in order to learn the topography. Subsequently, the appropriate PID parameters are obtained by a fitting method and then applied to the normal scanning process. The feasibility of the method is demonstrated by a convergence analysis. Simulations and experimental results indicate that the proposed method can intelligently tune the PID parameters of the AFM for imaging different topographies and thus achieve good tracking performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
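A toy stand-in for the repeated-scan tuning idea (not the paper's iterative learning law): simulate a "line scan" of a first-order plant under PI control, then enlarge the gains and keep the change only while the RMS tracking error improves. The plant, the step reference and the gain-growth factor are all invented for illustration.

```python
def scan_rms(kp, ki, steps=200, dt=0.05):
    # one simulated "line scan": PI control of a first-order plant z' = -z + u
    # tracking a unit step; returns the RMS tracking error over the scan
    z, integ, err_sq = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - z
        integ += e * dt
        u = kp * e + ki * integ
        z += dt * (-z + u)
        err_sq += e * e
    return (err_sq / steps) ** 0.5

kp, ki = 0.5, 0.5
rms = scan_rms(kp, ki)
for _ in range(20):                    # repeated "line scans" until convergence
    trial = scan_rms(kp * 1.3, ki * 1.3)
    if trial >= rms:                   # no further improvement: stop tuning
        break
    kp, ki, rms = kp * 1.3, ki * 1.3, trial
print(f"tuned kp={kp:.2f} ki={ki:.2f} rms error={rms:.4f}")
```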
NASA Technical Reports Server (NTRS)
Drago, Raymond J.; Lenski, Joseph W., Jr.; Spencer, Robert H.; Valco, Mark; Oswald, Fred B.
1993-01-01
The real noise reduction benefit that may be obtained through the use of one gear tooth form as compared to another is an important design parameter for any geared system, especially for helicopters, in which both weight and reliability are very important factors. This paper describes the design and testing of nine sets of gears that are as identical as possible except for their basic tooth geometry. Noise measurements were made at various combinations of load and speed for each gear set so that direct comparisons could be made. The resultant data were analyzed so that valid conclusions could be drawn and interpreted for design use.
NASA Astrophysics Data System (ADS)
Lei, Meizhen; Wang, Liqiang
2018-01-01
To reduce manufacturing difficulty and increase the magnetic thrust density, a moving-magnet linear oscillatory motor (MMLOM) without inner stators is proposed. To obtain a design that maximizes electromagnetic thrust with minimal permanent-magnet material, a 3D finite element analysis (FEA) model of the MMLOM was first built and verified by comparison with prototype experiment results, and the influence of the permanent magnet (PM) design parameters on the electromagnetic thrust was systematically analyzed by 3D FEA. Next, response surface methodology (RSM) was employed to build a response surface model of the new MMLOM, yielding an analytical model of the PM volume and thrust. A multi-objective optimization method for the PM design parameters, using RSM with a quantum-behaved particle swarm optimization (QPSO) operator, was then proposed, along with a way to choose the best PM design parameters from the multi-objective solution sets. Finally, 3D FEA of the optimal design candidates was compared. The comparison showed that the proposed method can obtain the best combination of geometric parameters for reducing the PM volume and increasing the thrust.
Hydrogen peroxide clusters: the role of open book motif in cage and helical structures.
Elango, M; Parthasarathi, R; Subramanian, V; Ramachandran, C N; Sathyamurthy, N
2006-05-18
Hartree-Fock (HF) calculations using 6-31G*, 6-311++G(d,p), aug-cc-pVDZ, and aug-cc-pVTZ basis sets show that hydrogen peroxide molecular clusters tend to form hydrogen-bonded cyclic and cage structures along the lines expected of a molecule which can act as a proton donor as well as an acceptor. These results are reiterated by density functional theoretic (DFT) calculations with B3LYP parametrization and also by second-order Møller-Plesset perturbation (MP2) theory using 6-31G* and 6-311++G(d,p) basis sets. Trends in stabilization energies and geometrical parameters obtained at the HF level using 6-311++G(d,p), aug-cc-pVDZ, and aug-cc-pVTZ basis sets are similar to those obtained from HF/6-31G* calculation. In addition, the HF calculations suggest the formation of stable helical structures for larger clusters, provided the neighbors form an open book structure.
Coordinated Platoon Routing in a Metropolitan Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Jeffrey; Munson, Todd; Sokolov, Vadim
2016-10-10
Platooning vehicles (connected and automated vehicles traveling with small intervehicle distances) use less fuel because of reduced aerodynamic drag. Given a network defined by vertex and edge sets and a set of vehicles with origin/destination nodes/times, we model and solve the combinatorial optimization problem of coordinated routing of vehicles in a manner that routes them to their destination on time while using the least amount of fuel. Common approaches decompose the platoon coordination and vehicle routing into separate problems. Our model addresses both problems simultaneously to obtain the best solution. We use modern modeling techniques and constraints implied from analyzing the platoon routing problem to address larger numbers of vehicles and larger networks than previously considered. While the numerical method used is unable to certify optimality for candidate solutions to all networks and parameters considered, we obtain excellent solutions in approximately one minute for much larger networks and vehicle sets than previously considered in the literature.
Fine-tuned Remote Laser Welding of Aluminum to Copper with Local Beam Oscillation
NASA Astrophysics Data System (ADS)
Fetzer, Florian; Jarwitz, Michael; Stritt, Peter; Weber, Rudolf; Graf, Thomas
Local beam oscillation in remote laser welding of aluminum to copper was investigated. Sheets of 1 mm thickness were welded in overlap configuration with aluminum as the top material. The laser beam was scanned sinusoidally perpendicular to the direction of feed, and the influence of the oscillation frequency and amplitude on the weld geometry was investigated. Scanning frequencies up to 1 kHz and oscillation amplitudes in the range from 0.25 mm to 1 mm were examined. Throughout the experiments the laser power and the feed rate were kept constant. A decrease of welding depth with amplitude and frequency is found. The scanning amplitude had a strong influence and allowed coarse setting of the welding depth into the lower material, while the frequency allowed fine tuning on the order of 10% of the obtained depth. The oscillation parameters were found to act differently on the aluminum sheet than on the copper sheet regarding the amount of fused material, making it possible to influence the geometry of the fused zones separately for the two sheets. The average composition in the weld can therefore be set with high precision via the oscillation parameters, and the intermetallics generated in the weld zone can be controlled without adjusting the laser power or feed rate.
NASA Astrophysics Data System (ADS)
Luna, Aderval S.; da Silva, Arnaldo P.; Ferré, Joan; Boqué, Ricard
This research work describes two studies on the classification and characterization of edible oils and their quality parameters through Fourier transform mid-infrared spectroscopy (FT-mid-IR) together with chemometric methods. The discrimination of canola, sunflower, corn and soybean oils was investigated using SVM-DA, SIMCA and PLS-DA. Using FT-mid-IR, PLS-DA was able to classify 100% of the samples from the validation set, but SIMCA and SVM-DA were not. The quality parameters, refractive index and relative density of the edible oils, were obtained from reference methods. Prediction models from the FT-mid-IR spectra were calculated for these quality parameters using partial least squares (PLS) and support vector machines (SVM). Several preprocessing alternatives (first derivative, multiplicative scatter correction, mean centering, and standard normal variate) were investigated. The best results for the refractive index and for the relative density were achieved with SVM, except when the preprocessing combination of mean centering and first derivative was used. For both quality parameters, the best figures of merit, expressed as the root mean square error of cross-validation (RMSECV) and of prediction (RMSEP), were equal to 0.0001.
Near infrared spectroscopy (NIRS) for on-line determination of quality parameters in intact olives.
Salguero-Chaparro, Lourdes; Baeten, Vincent; Fernández-Pierna, Juan A; Peña-Rodríguez, Francisco
2013-08-15
The acidity, moisture and fat content of intact olive fruits were determined on-line using a NIR diode array instrument operating over a conveyor belt. Four sets of calibration models were obtained from different combinations of samples collected during 2009-2010 and 2010-2011, using full cross-validation and external validation. Several preprocessing treatments, such as derivatives and scatter correction, were investigated using the root mean square error of cross-validation (RMSECV) and of prediction (RMSEP) as control parameters. The results showed RMSECV values of 2.54-3.26 for moisture, 2.35-2.71 for fat content and 2.50-3.26 for acidity, depending on the calibration model developed. Calibrations for moisture, fat content and acidity gave residual predictive deviation (RPD) values of 2.76, 2.37 and 1.60, respectively. Although the on-line NIRS prediction results were acceptable for the three parameters measured on intact olive samples in movement, the models must be improved to increase their accuracy before final NIRS implementation at mills. Copyright © 2013 Elsevier Ltd. All rights reserved.
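The validation statistics quoted in studies like this (RMSEP and RPD) are straightforward to compute from reference and predicted values; the moisture numbers below are invented for illustration.

```python
import math

def rmsep(y_ref, y_pred):
    # root mean square error of prediction
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(y_ref, y_pred)) / len(y_ref))

def rpd(y_ref, y_pred):
    # residual predictive deviation: SD of the reference values over RMSEP
    m = sum(y_ref) / len(y_ref)
    sd = math.sqrt(sum((r - m) ** 2 for r in y_ref) / (len(y_ref) - 1))
    return sd / rmsep(y_ref, y_pred)

# hypothetical moisture (%) reference values and NIRS predictions
moisture_ref = [52.1, 55.3, 49.8, 60.2, 57.7, 51.0]
moisture_pred = [53.0, 54.1, 51.2, 58.8, 58.9, 49.5]
print(f"RMSEP = {rmsep(moisture_ref, moisture_pred):.2f}, "
      f"RPD = {rpd(moisture_ref, moisture_pred):.2f}")
```

RPD values near 1 indicate a model no better than predicting the mean, which is why the acidity calibration (RPD 1.60) is flagged as needing improvement.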
Solar system expansion and strong equivalence principle as seen by the NASA MESSENGER mission
NASA Astrophysics Data System (ADS)
Genova, Antonio; Mazarico, Erwan; Goossens, Sander; Lemoine, Frank G.; Neumann, Gregory A.; Smith, David E.; Zuber, Maria T.
2018-01-01
The NASA MESSENGER mission explored the innermost planet of the solar system and obtained a rich data set of range measurements for the determination of Mercury's ephemeris. Here we use these precise data collected over 7 years to estimate parameters related to general relativity and the evolution of the Sun. These results confirm the validity of the strong equivalence principle with a significantly refined uncertainty of the Nordtvedt parameter, η = (-6.6 ± 7.2) × 10⁻⁵. By assuming a metric theory of gravitation, we retrieved the post-Newtonian parameter β = 1 + (-1.6 ± 1.8) × 10⁻⁵ and the Sun's gravitational oblateness, J₂⊙ = (2.246 ± 0.022) × 10⁻⁷. Finally, we obtain an estimate of the time variation of the Sun's gravitational parameter, ĠM⊙/GM⊙ = (-6.13 ± 1.47) × 10⁻¹⁴, which is consistent with the expected solar mass loss due to the solar wind and interior processes. This measurement allows us to constrain |Ġ|/G to be < 4 × 10⁻¹⁴ per year.
Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution
NASA Astrophysics Data System (ADS)
Rajulapati, C. R.; Mujumdar, P. P.
2017-12-01
Precipitation extremes are often modelled with data from annual maximum series or peaks-over-threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit peaks-over-threshold series. Scaling of precipitation extremes from larger to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in the scaling exponent is quantified. A quantile-based modification of the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin, Germany and Bangalore, India. In both applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on the uncertainty in the scaled parameters and return levels of shorter durations.
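As a minimal illustration of fitting a GPD to peaks over a threshold, here is a method-of-moments estimate (a simple stand-in for the study's Bayesian estimation) on synthetic exponential excesses, for which the fitted shape parameter should come out near zero.

```python
import random

random.seed(1)
# synthetic threshold excesses: exponential with mean 5 is a GPD with shape 0
excesses = [random.expovariate(1.0 / 5.0) for _ in range(20000)]

m = sum(excesses) / len(excesses)
v = sum((x - m) ** 2 for x in excesses) / (len(excesses) - 1)

# GPD method-of-moments estimators: from mean = sigma/(1-xi) and
# variance = sigma^2 / ((1-xi)^2 (1-2xi)), so m^2/v = 1 - 2*xi
xi = 0.5 * (1.0 - m * m / v)
sigma = 0.5 * m * (m * m / v + 1.0)

print(f"fitted shape xi = {xi:.3f}, scale sigma = {sigma:.3f}")
```

In the disaggregation setting, such parameters would be estimated per duration and linked through the scaling relationship; the Bayesian approach additionally yields their posterior uncertainty.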
Plenis, Alina; Rekowska, Natalia; Bączek, Tomasz
2016-01-01
This article focuses on correlating the column classification obtained from the method created at the Katholieke Universiteit Leuven (KUL), with the chromatographic resolution attained in biomedical separation. In the KUL system, each column is described with four parameters, which enables estimation of the FKUL value characterising similarity of those parameters to the selected reference stationary phase. Thus, a ranking list based on the FKUL value can be calculated for the chosen reference column, then correlated with the results of the column performance test. In this study, the column performance test was based on analysis of moclobemide and its two metabolites in human plasma by liquid chromatography (LC), using 18 columns. The comparative study was performed using traditional correlation of the FKUL values with the retention parameters of the analytes describing the column performance test. In order to deepen the comparative assessment of both data sets, factor analysis (FA) was also used. The obtained results indicated that the stationary phase classes, closely related according to the KUL method, yielded comparable separation for the target substances. Therefore, the column ranking system based on the FKUL-values could be considered supportive in the choice of the appropriate column for biomedical analysis. PMID:26805819
NASA Astrophysics Data System (ADS)
Wang, Jing; Shen, Huoming; Zhang, Bo; Liu, Juan
2018-06-01
In this paper, we study the parametric resonance of an axially moving viscoelastic nanobeam with time-varying velocity. Based on the nonlocal strain gradient theory, we establish the transverse vibration equation of the axially moving nanobeam and the corresponding boundary conditions. By applying the averaging method, we obtain a set of autonomous ordinary differential equations when the excitation frequency of the moving parameters is twice an intrinsic frequency or near the sum of certain intrinsic frequencies. On the plane of parametric excitation frequency versus excitation amplitude, we obtain the instability region generated by the resonance, and through numerical simulation we analyze the influence of the scale effect and system parameters on this region. The results indicate that viscoelastic damping shrinks the resonance instability region, while the average velocity and stiffness shift the instability region to the left and right, respectively. Meanwhile, the scale effect of the system is pronounced: the nonlocal parameter exhibits not only a stiffness-softening effect but also a damping-weakening effect, while the material characteristic length parameter exhibits stiffness-hardening and damping-reinforcing effects.
NASA Astrophysics Data System (ADS)
Ur Rehman, Khali; Ali Khan, Abid; Malik, M. Y.; Hussain, Arif
2017-09-01
The effects of temperature stratification on a tangent hyperbolic fluid flow over a stretching cylindrical surface are studied. The fluid flow is achieved by taking the no-slip condition into account. The mathematical modelling of the physical problem yields a nonlinear set of partial differential equations. These partial differential equations are converted into ordinary differential equations. A numerical investigation is performed to identify the effects of the involved physical parameters on the dimensionless velocity and temperature profiles. In the presence of temperature stratification, it is noticed that the curvature parameter increases both the fluid velocity and the fluid temperature. In addition, positive variations in the thermal stratification parameter retard the fluid flow; as a result, the fluid temperature drops. The skin friction coefficient decreases for increasing values of both the power-law index and the Weissenberg number, whereas the local Nusselt number is an increasing function of the Prandtl number; the opposite trend is found with respect to the thermal stratification parameter. The obtained results are validated by comparison with the existing literature, which supports the presently developed model.
NASA Astrophysics Data System (ADS)
Astarita, Antonello; Boccarusso, Luca; Carrino, Luigi; Durante, Massimo; Minutolo, Fabrizio Memola Capece; Squillace, Antonino
2018-05-01
Polycarbonate sheets, 3 mm thick, were successfully friction stir welded in a butt joint configuration. To study the feasibility of the process and the influence of the process parameters, joints were produced under different processing conditions obtained by varying the tool rotational speed and the tool travel speed. Tensile tests were carried out to characterize the joints. Moreover, the forces arising during the process were recorded and carefully studied. The experimental outcomes proved the feasibility of the process when the process parameters are properly set: joints retaining more than 70% of the UTS of the base material were produced. The trend of the forces was described and explained, and the influence of the process parameters on them was also discussed.
NASA Astrophysics Data System (ADS)
Přibil, Jiří; Přibilová, Anna; Ďuračková, Daniela
2014-01-01
The paper describes our experiment with using Gaussian mixture models (GMM) for classification of speech uttered by a person wearing orthodontic appliances. For the GMM classification, the input feature vectors comprise the basic and the complementary spectral properties as well as the supra-segmental parameters. The dependence of classification correctness on the number of parameters in the input feature vector and on the computational complexity is also evaluated. In addition, the influence of the initial setting of the parameters for the GMM training process was analyzed. The obtained recognition results are compared visually in the form of graphs as well as numerically in the form of tables and confusion matrices for the tested sentences uttered using three configurations of orthodontic appliances.
Characterizing English Poetic Style Using Complex Networks
NASA Astrophysics Data System (ADS)
Roxas-Villanueva, Ranzivelle Marianne; Nambatac, Maelori Krista; Tapang, Giovanni
Complex networks have proven useful in characterizing written texts. Here, we use networks to probe whether there exists a similarity within, and a difference across, eras as reflected in the poems' structure. In literary history, boundary lines are set to distinguish changes in writing style through time. We obtain the network parameters and motif frequencies of 845 poems published from 1522 to 1931 and relate these to the writing of the Elizabethan, 17th Century, Augustan, Romantic and Victorian eras. Analysis of the different network parameters shows a significant difference between the Augustan era (1667-1780) and the rest. The network parameters and the convex hull and centroids of the motif frequencies reflect the adjectival sequence pattern of the poems of the Augustan era.
NASA Astrophysics Data System (ADS)
Sarikaya, Ebru Karakaş; Dereli, Ömer
2017-02-01
To obtain the liquid-phase molecular structure, a conformational analysis of orotic acid was performed and six conformers were determined. For these conformations, eight possible radicals were modelled using Density Functional Theory computations with respect to the molecular structure. Electron Paramagnetic Resonance parameters of these model radicals were calculated and then compared with the experimental ones. Geometry optimizations of the molecule and the modelled radicals were performed using Becke's three-parameter hybrid-exchange functional combined with the Lee-Yang-Parr correlation functional of Density Functional Theory and the 6-311++G(d,p) basis set in p-dioxane solution. Orotic acid was studied because it can be mutagenic in mammalian somatic cells as well as in bacteria and yeast.
Experimental study of ERT monitoring ability to measure solute dispersion.
Lekmine, Grégory; Pessel, Marc; Auradou, Harold
2012-01-01
This paper reports experimental measurements performed to test the ability of electrical resistivity tomography (ERT) imaging to provide quantitative information about transport parameters in porous media, such as the dispersivity α, the mixing front velocity u, and the retardation factor R(f) associated with the sorption or trapping of the tracers in the pore structure. The flow experiments are performed in a homogeneous porous column placed between two vertical sets of electrodes. Ionic and dyed tracers are injected from the bottom of the porous medium over its full width. Under this condition, the mixing front is homogeneous in the transverse direction and shows an S-shaped variation in the flow direction. The transport parameters are inferred from the variation of the concentration curves and are compared with data obtained from video analysis of the dyed tracer front. The variations of the transport parameters obtained from an inversion performed with the Gauss-Newton method applied to smoothness-constrained least squares are studied in detail. While u and R(f) show a relatively small dependence on the inversion procedure, α is strongly dependent on the choice of the inversion parameters. Comparison with the video observations allows for the optimization of the parameters; these parameters are found to be robust with respect to changes in the flow condition and conductivity contrast. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
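The S-shaped front described above is commonly modelled with the standard one-dimensional advection-dispersion solution for continuous injection. A minimal sketch, assuming a semi-infinite column and a dispersion coefficient D = αu (symbols follow the abstract; the numbers are invented):

```python
import math

def front_profile(x, t, u, alpha):
    """Normalised concentration C/C0 of a 1-D advection-dispersion front:
    C/C0 = 0.5 * erfc((x - u*t) / (2*sqrt(D*t))), with D = alpha * u."""
    d = alpha * u
    return 0.5 * math.erfc((x - u * t) / (2.0 * math.sqrt(d * t)))

# at the mean front position x = u*t, the normalised concentration is 0.5
c_mid = front_profile(5.0, 10.0, 0.5, 0.1)
```

Fitting α and u to measured profiles of this shape is the kind of inversion the paper performs on the ERT-derived concentration curves.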
An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model
NASA Astrophysics Data System (ADS)
Tiernan, E. D.; Hodges, B. R.
2017-12-01
The stormwater management model (SWMM) is a clustered model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine uses a preliminary sensitivity analysis to reduce the dimensions of the parameter space, at which point a multi-objective genetic algorithm (a modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front of the objective functions within the parameter space. The solutions on this Pareto front represent optimized parameter value sets for the catchment behavior that could not have been reasonably obtained through manual calibration.
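The Pareto-front step of such a routine can be sketched independently of SWMM. Assuming two hypothetical calibration objectives to be minimised (say, a volume error and a peak-flow error), non-dominated sorting reduces to:

```python
def dominates(a, b):
    """True if objective vector a dominates b (minimisation: no worse in
    every objective and strictly better in at least one)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# invented (volume error, peak-flow error) pairs for five candidate parameter sets
candidates = [(0.10, 0.80), (0.20, 0.30), (0.50, 0.10), (0.40, 0.40), (0.15, 0.90)]
front = pareto_front(candidates)
```

NSGA-II repeatedly applies this sorting (together with crowding-distance selection) while evolving the candidate parameter population.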
Precision and Accuracy Parameters in Structured Light 3-D Scanning
NASA Astrophysics Data System (ADS)
Eiríksson, E. R.; Wilm, J.; Pedersen, D. B.; Aanæs, H.
2016-04-01
Structured light systems are popular in part because they can be constructed from off-the-shelf low-cost components. In this paper we quantitatively show how common design parameters affect precision and accuracy in such systems, supplying a much-needed guide for practitioners. Our quantitative measure is the established VDI/VDE 2634 (Part 2) guideline using precision-made calibration artifacts. Experiments are performed on our own structured light setup, consisting of two cameras and a projector. We focus on the influence of the calibration design parameters, the calibration procedure and the encoding strategy, and present our findings. Finally, we compare our setup to a state-of-the-art metrology-grade commercial scanner. Our results show that comparable, and in some cases better, results can be obtained using the parameter settings determined in this study.
Benoit, Gaëlle; Heinkélé, Christophe; Gourdon, Emmanuel
2013-12-01
This paper deals with a numerical procedure to identify the acoustical parameters of road pavement from surface impedance measurements. This procedure comprises three steps. First, a suitable equivalent fluid model for the acoustical properties of porous media is chosen, the variation ranges for the model parameters are set, and a sensitivity analysis of this model is performed. Second, this model is used in the parameter inversion process, which is performed with simulated annealing in a selected frequency range. Third, the sensitivity analysis and inversion process are repeated to estimate each parameter in turn. This approach is tested on data obtained for porous bituminous concrete using the Zwikker and Kosten equivalent fluid model. This work provides a good foundation for the development of non-destructive in situ methods for the acoustical characterization of road pavements.
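The simulated-annealing inversion step can be sketched generically. The loop below minimises a toy one-parameter misfit, whereas the paper couples the same loop to an equivalent-fluid impedance model; the cooling rate, step scale and seed are illustrative choices.

```python
import math
import random

def anneal(misfit, x0, t0=1.0, cooling=0.995, steps=3000, seed=2):
    """Generic simulated annealing: accept uphill moves with probability
    exp(-dE/T) while the temperature T is cooled geometrically."""
    rng = random.Random(seed)
    x, fx, t = x0, misfit(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5) * t        # proposal scale shrinks with T
        fc = misfit(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best

# toy misfit with its minimum at x = 3
best = anneal(lambda x: (x - 3.0) ** 2, x0=0.0)
```

In the paper's setting, `misfit` would compare measured and modelled surface impedance over the selected frequency range.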
Chaĭkovskiĭ, I A; Baum, O V; Popov, L A; Voloshin, V I; Budnik, N N; Frolov, Iu A; Kovalenko, A S
2014-01-01
While discussing the diagnostic value of the single-channel electrocardiogram, a set of theoretical considerations inevitably emerges; one of the most important among them is the question of the dependence of the electrocardiogram parameters on the direction of the electrical axis of the heart. In other words, changes in which electrocardiogram parameters actually reflect pathological processes in the myocardium, and which are determined by extracardiac factors, primarily by the anatomic characteristics of patients? It is arguable that, when analyzing the electrocardiogram, it is necessary to rely on such physiologically based informative indexes as the ST segment displacement. Also, the symmetry of the T wave shape is an important parameter that is independent of a patient's anatomic features. The results obtained are of interest for theoretical and applied aspects of the biophysics of the cardiac electric field.
Nonlinear ARMA models for the D(st) index and their physical interpretation
NASA Technical Reports Server (NTRS)
Vassiliadis, D.; Klimas, A. J.; Baker, D. N.
1996-01-01
Time series models successfully reproduce or predict geomagnetic activity indices from solar wind parameters. A method is presented that converts a type of nonlinear filter, the nonlinear Autoregressive Moving Average (ARMA) model, to the nonlinear damped oscillator physical model. The oscillator parameters, namely the growth and decay rates, the oscillation frequencies and the coupling strength to the input, are derived from the filter coefficients. Mathematical methods are derived to obtain unique and consistent filter coefficients while keeping the prediction error low. These methods are applied to an oscillator model for the Dst geomagnetic index driven by the solar wind input. A data set is examined in two ways: the model parameters are calculated as averages over short time intervals, and a nonlinear ARMA model is calculated and the model parameters are derived as a function of the phase space.
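For the second-order case, the filter-to-oscillator conversion has a standard form: the complex roots of the AR(2) characteristic polynomial yield a decay rate and an oscillation frequency. The sketch below shows that mapping under the usual discretisation assumptions; it is not the authors' full nonlinear ARMA derivation.

```python
import cmath
import math

def ar2_to_oscillator(a1, a2, dt=1.0):
    """Map AR(2) coefficients (x_t = a1*x_{t-1} + a2*x_{t-2}) to the decay
    rate and angular frequency of an equivalent damped oscillator."""
    root = (a1 + cmath.sqrt(a1 * a1 + 4.0 * a2)) / 2.0   # one root of z^2 - a1*z - a2
    radius, angle = abs(root), cmath.phase(root)
    decay = -math.log(radius) / dt    # positive decay => damped motion
    omega = abs(angle) / dt           # oscillation frequency in rad per step
    return decay, omega

# coefficients built from root radius 0.9 and angle pi/6, so the mapping is exact
decay, omega = ar2_to_oscillator(1.5588457268119895, -0.81)
```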
NASA Astrophysics Data System (ADS)
Sykes, J. F.; Kang, M.; Thomson, N. R.
2007-12-01
The TCE release from The Lockformer Company in Lisle, Illinois resulted in a plume in a confined aquifer that is more than 4 km long and impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers, with payments being based on cancer type, estimated TCE concentration in the well, and the duration of exposure to TCE. The estimation of early arrival times, and hence low-likelihood events, is critical in determining the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, which include the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and observational data due to errors, biases, and limitations. The concept of equifinality was adopted and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE-impacted areas. Monte Carlo sampling was found to be inadequate for the uncertainty analysis of this case study due to its inability to find parameter sets that meet the predefined physical criteria.
Successful results are achieved using a Dynamically-Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy's M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria perform the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in the work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.
Padró, Juan M; Pellegrino Vidal, Rocío B; Reta, Mario
2014-12-01
The partition coefficients, P(IL/w), of several compounds, some of them of biological and pharmacological interest, between water and room-temperature ionic liquids based on the imidazolium, pyridinium, and phosphonium cations, namely 1-octyl-3-methylimidazolium hexafluorophosphate, N-octylpyridinium tetrafluorophosphate, trihexyl(tetradecyl)phosphonium chloride, trihexyl(tetradecyl)phosphonium bromide, trihexyl(tetradecyl)phosphonium bis(trifluoromethylsulfonyl)imide, and trihexyl(tetradecyl)phosphonium dicyanamide, were accurately measured. In this way, we extended our previously reported database of partition coefficients in room-temperature ionic liquids. We employed the solvation parameter model with different probe molecules (the training set) to elucidate the chemical interactions involved in the partition process and discussed the most relevant differences among the three types of ionic liquids. The multiparametric equations obtained with the aforementioned model were used to predict the partition coefficients for compounds (the test set) not present in the training set, most being of biological and pharmacological interest. An excellent agreement between calculated and experimental log P(IL/w) values was obtained. Thus, the obtained equations can be used to predict, a priori, the extraction efficiency for any compound using these ionic liquids as extraction solvents in liquid-liquid extractions.
Sánchez, Ariel G.; Grieb, Jan Niklas; Salazar-Albornoz, Salvador; ...
2016-09-30
The cosmological information contained in anisotropic galaxy clustering measurements can often be compressed into a small number of parameters whose posterior distribution is well described by a Gaussian. Here, we present a general methodology to combine these estimates into a single set of consensus constraints that encode the total information of the individual measurements, taking into account the full covariance between the different methods. We also illustrate this technique by applying it to combine the results obtained from different clustering analyses, including measurements of the signature of baryon acoustic oscillations and redshift-space distortions, based on a set of mock catalogues of the final SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS). Our results show that the region of the parameter space allowed by the consensus constraints is smaller than that of the individual methods, highlighting the importance of performing multiple analyses on galaxy surveys even when the measurements are highly correlated. Our paper is part of a set that analyses the final galaxy clustering data set from BOSS. The methodology presented here is used in Alam et al. to produce the final cosmological constraints from BOSS.
Automated crystallographic system for high-throughput protein structure determination.
Brunzelle, Joseph S; Shafaee, Padram; Yang, Xiaojing; Weigand, Steve; Ren, Zhong; Anderson, Wayne F
2003-07-01
High-throughput structural genomic efforts require software that is highly automated, distributive and requires minimal user intervention to determine protein structures. Preliminary experiments were set up to test whether automated scripts could utilize a minimum set of input parameters and produce a set of initial protein coordinates. From this starting point, a highly distributive system was developed that could determine macromolecular structures at a high throughput rate, warehouse and harvest the associated data. The system uses a web interface to obtain input data and display results. It utilizes a relational database to store the initial data needed to start the structure-determination process as well as generated data. A distributive program interface administers the crystallographic programs which determine protein structures. Using a test set of 19 protein targets, 79% were determined automatically.
PCAN: Probabilistic Correlation Analysis of Two Non-normal Data Sets
Zoh, Roger S.; Mallick, Bani; Ivanov, Ivan; Baladandayuthapani, Veera; Manyam, Ganiraju; Chapkin, Robert S.; Lampe, Johanna W.; Carroll, Raymond J.
2016-01-01
Most cancer research now involves one or more assays profiling various biological molecules, e.g., messenger RNA and micro RNA, in samples collected on the same individuals. The main interest with these genomic data sets lies in the identification of a subset of features that are active in explaining the dependence between platforms. To quantify the strength of the dependency between two variables, correlation is often preferred. However, expression data obtained from next-generation sequencing platforms are integer with very low counts for some important features. In this case, the sample Pearson correlation is not a valid estimate of the true correlation matrix, because the sample correlation estimate between two features/variables with low counts will often be close to zero, even when the natural parameters of the Poisson distribution are, in actuality, highly correlated. We propose a model-based approach to correlation estimation between two non-normal data sets, via a method we call Probabilistic Correlations ANalysis, or PCAN. PCAN takes into consideration the distributional assumption about both data sets and suggests that correlations estimated at the model natural parameter level are more appropriate than correlations estimated directly on the observed data. We demonstrate through a simulation study that PCAN outperforms other standard approaches in estimating the true correlation between the natural parameters. We then apply PCAN to the joint analysis of a microRNA (miRNA) and a messenger RNA (mRNA) expression data set from a squamous cell lung cancer study, finding a large number of negative correlation pairs when compared to the standard approaches. PMID:27037601
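The attenuation that motivates PCAN is easy to reproduce: give two count variables identical (hence perfectly correlated) latent Poisson rates, and the sample Pearson correlation of the resulting low counts still comes out far below 1. A stdlib-only sketch using Knuth's Poisson sampler; the lognormal rate distribution and seed are invented for illustration:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for small rates)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

rng = random.Random(0)
# identical latent log-rates => natural-parameter correlation is exactly 1,
# but the observed low counts are heavily attenuated
rates = [math.exp(rng.gauss(-1.5, 1.0)) for _ in range(500)]
x = [poisson(lam, rng) for lam in rates]
y = [poisson(lam, rng) for lam in rates]
r = pearson(x, y)
```

Here `r` lands well below the latent correlation of 1, which is exactly the bias PCAN corrects by estimating correlation at the natural-parameter level.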
Ginovart, Marta; Carbó, Rosa; Blanco, Mónica; Portell, Xavier
2017-01-01
Nowadays control of the growth of Saccharomyces to obtain biomass or cellular wall components is crucial for specific industrial applications. The general aim of this contribution is to deal with experimental data obtained from yeast cells and from yeast cultures, to attempt the integration of the two levels of information, individual and population, to progress in the control of yeast biotechnological processes by means of the overall analysis of this set of experimental data, and to assist in the improvement of an individual-based model, namely INDISIM-Saccha. Populations of S. cerevisiae growing in liquid batch culture, in aerobic and microaerophilic conditions, were studied. A set of digital images was taken during the population growth, and a protocol for the treatment and analyses of the images obtained was established. The piecewise linear model of Buchanan was adjusted to the temporal evolutions of the yeast populations to determine the kinetic parameters and changes of growth phases. In parallel, for all the yeast cells analyzed, values of direct morphological parameters, such as area, perimeter, major diameter, minor diameter, and derived ones, such as circularity and elongation, were obtained. Graphical and numerical methods from descriptive statistics were applied to these data to characterize the growth phases and the budding state of the yeast cells in both experimental conditions, and inferential statistical methods were used to compare the diverse groups of data achieved. Oxidative metabolism of yeast in a medium with oxygen available and low initial sugar concentration can be taken into account in order to obtain a greater number of cells or larger cells. Morphological parameters were analyzed statistically to identify which were the most useful for the discrimination of the different states, according to budding and/or growth phase, in aerobic and microaerophilic conditions. The use of the experimental data for subsequent modeling work was then discussed and compared to simulation results generated with INDISIM-Saccha, which allowed us to advance in the development of this yeast model, and illustrated the utility of data at different levels of observation and the needs and logic behind the development of a microbial individual-based model.
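The Buchanan model mentioned above is a three-phase linear curve in log counts: a lag phase, log-linear growth at a specific rate, and a stationary plateau. A direct sketch (the parameter values used below are invented):

```python
def buchanan(t, log_n0, mu, lag, log_nmax):
    """Buchanan three-phase linear growth model: constant at log_n0 during
    the lag, linear growth at rate mu, then constant at log_nmax."""
    if t <= lag:
        return log_n0
    t_stationary = lag + (log_nmax - log_n0) / mu   # time the plateau is reached
    if t <= t_stationary:
        return log_n0 + mu * (t - lag)
    return log_nmax
```

Fitting (log_n0, mu, lag, log_nmax) to the image-derived counts yields the kinetic parameters and phase-change times referred to above.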
Precise determination of time to reach viral load set point after acute HIV-1 infection.
Huang, Xiaojie; Chen, Hui; Li, Wei; Li, Haiying; Jin, Xia; Perelson, Alan S; Fox, Zoe; Zhang, Tong; Xu, Xiaoning; Wu, Hao
2012-12-01
The HIV viral load set point has long been used as a prognostic marker of disease progression and more recently as an end-point parameter in HIV vaccine clinical trials. The definition of set point, however, is variable. Moreover, the earliest time at which the set point is reached after the onset of infection has never been clearly defined. In this study, we obtained sequential plasma viral load data from 60 acutely HIV-infected Chinese patients among a cohort of men who have sex with men, mathematically determined viral load set point levels, and estimated the time to attain the set point after infection. We also compared the results derived from our models with those obtained from an empirical method. With a novel, uncomplicated mathematical model, we discovered that the time to reach the set point may vary from 21 to 119 days depending on the patient's initial viral load trajectory. The viral load set points were 4.28 ± 0.86 and 4.25 ± 0.87 log10 copies per milliliter (P = 0.08), respectively, as determined by our model and an empirical method, suggesting excellent agreement between the old and new methods. We provide a novel method to estimate the viral load set point at the very early stage of HIV infection. Application of this model can accurately and reliably determine the set point, thus providing a new tool for physicians to better monitor early intervention strategies in acutely infected patients and for scientists to rationally design preventative vaccine studies.
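An empirical estimate of the kind the authors compare against can be sketched in a few lines: take the set point as the mean of the last few log10 viral load measurements and report the first time from which the trajectory stays within a tolerance of it. All numbers are invented, and this is not the paper's mathematical model.

```python
def set_point(times, log_vl, tail=3, tol=0.2):
    """Return (set-point level, first time from which every later
    measurement stays within tol of that level)."""
    sp = sum(log_vl[-tail:]) / tail          # plateau = mean of last `tail` points
    for i, t in enumerate(times):
        if all(abs(v - sp) <= tol for v in log_vl[i:]):
            return sp, t
    return sp, times[-1]

# days post-infection and log10 viral loads for a hypothetical patient
sp, t_reach = set_point([10, 20, 30, 50, 80, 110], [6.5, 5.8, 5.0, 4.5, 4.3, 4.35])
```

The paper's contribution is replacing this kind of ad hoc rule with a model fitted to each patient's initial trajectory.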
Critical elements on fitting the Bayesian multivariate Poisson Lognormal model
NASA Astrophysics Data System (ADS)
Zamzuri, Zamira Hasanah binti
2015-10-01
Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we have shown that when using the Univariate Poisson Model (UPM) estimates as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.
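The link between tuning parameters and acceptance rate can be illustrated with a generic random-walk Metropolis sampler that adapts its proposal step toward a target acceptance rate. This is a textbook sketch on a standard normal target, not the MPL fitting algorithm; the target rate, gain constant and seed are illustrative.

```python
import math
import random

def tune_metropolis(target=0.3, iters=4000, seed=1):
    """Random-walk Metropolis on a standard normal target, multiplicatively
    adapting the step size toward a target acceptance rate."""
    rng = random.Random(seed)
    x, step, accepted = 0.0, 1.0, 0
    rate = 0.0
    for i in range(1, iters + 1):
        prop = x + rng.gauss(0.0, step)
        # acceptance probability for the N(0,1) target, capped at 1
        if rng.random() < math.exp(min(0.0, 0.5 * (x * x - prop * prop))):
            x, accepted = prop, accepted + 1
        rate = accepted / i
        step *= math.exp(0.05 * (rate - target))  # accepting too often => widen step
    return step, rate

step, rate = tune_metropolis()
```

Too large a step gives a vanishing acceptance rate and too small a step explores slowly, which is why MCMC-based fits such as the MPL model need the tuning the abstract describes.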
Research on classified real-time flood forecasting framework based on K-means cluster and rough set.
Xu, Wei; Peng, Yong
2015-01-01
This research presents a new classified real-time flood forecasting framework. In this framework, historical floods are classified by K-means clustering according to the spatial and temporal distribution of precipitation, the time variance of precipitation intensity and other hydrological factors. Based on the classification results, a rough set is used to extract the identification rules for real-time flood forecasting. Then, the parameters of the different categories within the conceptual hydrological model are calibrated using a genetic algorithm. In real-time forecasting, the corresponding category of parameters is selected for flood forecasting according to the obtained flood information. This research tests the new classified framework on the Guanyinge Reservoir and compares it with the traditional flood forecasting method, finding that the performance of the new classified framework is significantly better in terms of accuracy. Furthermore, the framework can be considered in catchments with fewer historical floods.
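The classification step can be sketched with plain k-means on two hypothetical flood descriptors (say, precipitation depth and mean intensity). Initialising from the first k points and the toy numbers below are illustrative choices only:

```python
def kmeans(points, k, iters=20):
    """Plain k-means on 2-D points; centroids start at the first k points."""
    cent = [list(p) for p in points[:k]]
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - cent[c][0]) ** 2 + (p[1] - cent[c][1]) ** 2)
            groups[j].append(p)
        for j, g in enumerate(groups):
            if g:   # keep the old centroid if a cluster empties out
                cent[j] = [sum(v[0] for v in g) / len(g), sum(v[1] for v in g) / len(g)]
    return cent, groups

# invented (depth, intensity) pairs: three small floods and three large ones
floods = [(10, 1.0), (12, 1.5), (11, 0.8), (40, 6.0), (42, 5.5), (39, 6.2)]
cent, groups = kmeans(floods, 2)
```

In the framework described above, each cluster would then receive its own calibrated parameter set, and an incoming flood is routed to the parameters of the nearest cluster.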
OpenMC In Situ Source Convergence Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldrich, Garrett Allen; Dutta, Soumya; Woodring, Jonathan Lee
2016-05-07
We designed and implemented an in situ version of particle source convergence detection for the OpenMC particle transport simulator. OpenMC is a Monte Carlo-based particle simulator for neutron criticality calculations. For the transport simulation to be accurate, source particles must converge on a spatial distribution. Typically, convergence is obtained by iterating the simulation for a user-settable, fixed number of steps, after which it is assumed that convergence has been achieved. We instead implement a method to detect convergence, using a stochastic oscillator to identify convergence of the source particles based on their accumulated Shannon entropy. Using our in situ convergence detection, we are able to detect and begin tallying results for the full simulation once the proper source distribution has been confirmed. Our method ensures that the simulation is not started too early, by a user setting too optimistic parameters, or too late, by setting too conservative a parameter.
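A simplified version of entropy-based convergence monitoring: bin the source sites each batch, track the Shannon entropy of the binned distribution, and declare convergence when the trace plateaus. The windowed-range plateau test below is a stand-in for the authors' stochastic-oscillator criterion, and the trace values are invented.

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (bits) of a binned particle-source distribution."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def converged(entropies, window=5, tol=0.02):
    """Plateau test: the last `window` entropy values vary by less than tol."""
    if len(entropies) < window:
        return False
    recent = entropies[-window:]
    return max(recent) - min(recent) < tol

# hypothetical per-batch entropy trace that rises and then flattens
trace = [1.2, 1.6, 1.9, 1.95, 1.96, 1.955, 1.958, 1.957]
```

Tallying would begin at the first batch for which `converged` holds, instead of after a fixed user-chosen number of inactive batches.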
NASA Astrophysics Data System (ADS)
Kochanov, R. V.; Gordon, I. E.; Rothman, L. S.; Wcisło, P.; Hill, C.; Wilzewski, J. S.
2016-07-01
The HITRAN Application Programming Interface (HAPI) is presented. HAPI is a free Python library, which extends the capabilities of the HITRANonline interface (www.hitran.org) and can be used to filter and process the structured spectroscopic data. HAPI incorporates a set of tools for spectra simulation accounting for the temperature, pressure, optical path length, and instrument properties. HAPI aims to facilitate spectroscopic data analysis and spectra simulation based on line-by-line data, such as from the HITRAN database [JQSRT (2013) 130, 4-50], allowing the use of non-Voigt line profile parameters, custom temperature and pressure dependences, and partition sums. The HAPI functions allow the user to control the spectra simulation and data filtering process via a set of function parameters. HAPI can be obtained at its homepage www.hitran.org/hapi.
NASA Astrophysics Data System (ADS)
Yakub, Eugene; Ronchi, Claudio; Staicu, Dragos
2007-09-01
Results of molecular dynamics (MD) simulation of UO2 over a wide temperature range are presented and discussed. A new approach to the calibration of a partly ionic Busing-Ida-type model is proposed. A potential parameter set is obtained that reproduces the experimental density of solid UO2 over a wide range of temperatures. A conventional simulation of high-temperature stoichiometric UO2 on large MD cells, based on a novel fast method for computing Coulomb forces, reveals characteristic features of a premelting λ transition at a temperature near that observed experimentally (Tλ = 2670 K). A strong deviation from Arrhenius behavior of the oxygen self-diffusion coefficient was found in the vicinity of the transition point. Predictions for liquid UO2, based on the same potential parameter set, are in good agreement with existing experimental data and theoretical calculations.
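The partly ionic Busing-Ida form combines a partial-charge Coulomb term, a Born repulsion, and a van der Waals attraction. A minimal sketch follows; the charges, radii, and softness values below are invented for the demo and are not the fitted UO2 parameter set of the paper.

```python
import numpy as np

KE = 14.3996   # e^2 / (4*pi*eps0) in eV*Angstrom
F0 = 0.04336   # standard Busing-Ida repulsion constant in eV/Angstrom

def busing_ida(r, zi, zj, ai, aj, bi, bj, ci=0.0, cj=0.0):
    """Partly ionic Busing-Ida pair potential (eV, r in Angstrom):
    Coulomb + Born repulsion + van der Waals attraction."""
    coulomb = KE * zi * zj / r
    born = F0 * (bi + bj) * np.exp((ai + aj - r) / (bi + bj))
    vdw = -ci * cj / r**6
    return coulomb + born + vdw

# Illustrative cation-anion pair with ~60% ionicity (partial charges)
r = np.linspace(1.2, 6.0, 500)
u = busing_ida(r, zi=+2.4, zj=-1.2, ai=1.8, aj=2.2, bi=0.16, bj=0.17)
r_min = r[np.argmin(u)]   # location of the potential well
```

The "partly ionic" character enters through the reduced (non-formal) charges, which scale the Coulomb term relative to a fully ionic model.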
Prediction of kinase-inhibitor binding affinity using energetic parameters
Usha, Singaravelu; Selvaraj, Samuel
2016-01-01
The combination of physicochemical properties and energetic parameters derived from protein-ligand complexes plays a vital role in determining the biological activity of a molecule. In the present work, protein-ligand interaction energy together with logP values was used to predict the experimental log(IC50) values of 25 different kinase inhibitors using multiple regression, which gave a correlation coefficient of 0.93. The regression equation obtained was tested on 93 kinase-inhibitor complexes and showed an average deviation of 0.92 from the experimental log(IC50) values. The same set of descriptors was used to predict binding affinities for a test set of five individual kinase families, with correlation values > 0.9. We show that the protein-ligand interaction energies and partition coefficient values form the major deterministic factors for the binding affinity of a ligand for its receptor. PMID:28149052
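The regression step described here is ordinary multiple linear regression of log(IC50) on interaction energy and logP. A sketch with synthetic data; the coefficients, value ranges, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: interaction energy (kcal/mol), logP, log(IC50)
n = 25
e_int = rng.uniform(-60.0, -20.0, n)
logp = rng.uniform(0.0, 5.0, n)
log_ic50 = 0.05 * e_int - 0.3 * logp + 1.0 + rng.normal(0.0, 0.1, n)

# Multiple linear regression: log(IC50) ~ a*E_int + b*logP + c
X = np.column_stack([e_int, logp, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, log_ic50, rcond=None)
pred = X @ coef
r = np.corrcoef(pred, log_ic50)[0, 1]   # correlation of fitted vs. observed
```

The fitted equation can then be applied to held-out complexes exactly as the abstract describes, by computing `X_test @ coef`.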
NASA Astrophysics Data System (ADS)
Flores, Jorge L.; García-Torales, G.; Ponce Ávila, Cristina
2006-08-01
This paper describes an in situ image recognition system designed to inspect the quality of chocolate pops during their production. The essence of the recognition system is the localization of events (i.e., defects) in the input images that affect the quality standards of the pops. To this end, processing modules based on correlation filtering and image segmentation are employed to measure the quality standards. We therefore designed the correlation filter and defined a set of features from the correlation plane. The desired values of these parameters are obtained by exploiting information about the objects to be rejected, in order to achieve the optimal discrimination capability of the system. Based on this set of features, each pop can be correctly classified. The efficacy of the system has been tested thoroughly under laboratory conditions using at least 50 images containing three different types of possible defects.
NASA Astrophysics Data System (ADS)
Bray, Cédric; Cuisset, Arnaud; Hindle, Francis; Bocquet, Robin; Mouret, Gaël; Drouin, Brian J.
2017-03-01
Several previously unmeasured transitions of 12CH3D have been recorded by a terahertz photomixing continuous-wave spectrometer, up to the QR(10) branch at 2.5 THz. An improved set of rotational constants has been obtained using THz frequency metrology based on a frequency comb, which achieved an average frequency position better than 150 kHz on more than fifty ground-state transitions. A detailed analysis of the measured line intensities was undertaken using a multispectrum fitting program and has resulted in the determination of new dipole moment parameters. Measurements of the QR(7) transitions at different pressures provide the first determination of self-broadening coefficients from pure rotational CH3D lines. The THz rotational measurements are consistent with IR rovibrational data, but no significant vibrational dependence of the self-broadening coefficients is observed in the comparison.
Latent log-linear models for handwritten digit classification.
Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann
2012-06-01
We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry A; Boemer, Jens C.; Vittal, Eknath
The response of low voltage networks with high penetration of PV systems to transmission network faults will, in the future, determine the overall power system performance during certain hours of the year. The WECC distributed PV system model (PVD1) is designed to represent small-scale distribution-connected systems. Although default values are provided by WECC for the model parameters, tuning of those parameters appears to be important in order to accurately estimate the partial loss of distributed PV systems for bulk system studies. The objective of this paper is to describe a new methodology to determine the WECC distributed PV system (PVD1) model parameters and to present parameter sets obtained for six distribution circuits of a Californian investor-owned utility with large amounts of distributed PV systems. The results indicate that the parameters for the partial loss of distributed PV systems may differ significantly from the default values provided by WECC.
NASA Astrophysics Data System (ADS)
Jamróz, Dariusz; Niedoba, Tomasz; Surowiak, Agnieszka; Tumidajski, Tadeusz; Szostek, Roman; Gajer, Mirosław
2017-09-01
The application of methods drawing upon multi-parameter visualization of data, by transformation of a multidimensional space into a two-dimensional one, allows multi-parameter data to be shown on a computer screen. Thanks to that, it is possible to conduct a qualitative analysis of the data in the way most natural for a human being, i.e., by sight. One example of such a multi-parameter visualization method is multidimensional scaling. This method was used in this paper to present and analyze a set of seven-dimensional data obtained from the Janina Mining Plant and the Wieczorek Coal Mine. It was examined whether this method of multi-parameter data visualization allows the sample space to be divided into areas of varying applicability to the fluidal gasification process. The "Technological applicability card for coals" was used for this purpose [Sobolewski et al., 2012; 2017], which describes the key, important, and additional parameters affecting the gasification process.
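Classical (Torgerson) multidimensional scaling, one standard variant of the method named here, can be sketched in a few lines. The seven-dimensional samples below are simulated with an intrinsically two-dimensional structure, not the actual mine data.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Torgerson's classical multidimensional scaling: embed points known
    only through their pairwise distance matrix D into `dim` dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Demo: seven-dimensional samples with an intrinsically 2-D structure
rng = np.random.default_rng(2)
latent = rng.normal(size=(40, 2))
X = latent @ rng.normal(size=(2, 7))         # 7-D measurements
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = classical_mds(D, dim=2)
D2 = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
```

Because the simulated data are exactly rank two, the 2-D embedding reproduces the original pairwise distances; for real data the reproduction is only approximate.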
Havla, Lukas; Schneider, Moritz J; Thierfelder, Kolja M; Beyer, Sebastian E; Ertl-Wagner, Birgit; Reiser, Maximilian F; Sommer, Wieland H; Dietrich, Olaf
2016-02-01
The purpose of this study was to propose and evaluate a new wavelet-based technique for the classification of arterial and venous vessels using time-resolved cerebral CT perfusion data sets. Fourteen consecutive patients (mean age 73 yr, range 17-97) with suspected stroke but no pathology in follow-up MRI were included. A CT perfusion scan with 32 dynamic phases was performed during intravenous bolus contrast-agent application. After rigid-body motion correction, a Paul wavelet (order 1) was used to calculate, voxelwise, the wavelet power spectrum (WPS) of each attenuation-time course. The angiographic intensity A was defined as the maximum of the WPS, located at the coordinates T (time axis) and W (scale/width axis) within the WPS. Using these three parameters (A, T, W) separately as well as combined by (1) Fisher's linear discriminant analysis (FLDA), (2) logistic regression (LogR) analysis, or (3) support vector machine (SVM) analysis, their potential to classify 18 different arterial and venous vessel segments per subject was evaluated. The best vessel classification was obtained using all three parameters A, T, and W together [area under the curve (AUC): 0.953 with FLDA and 0.957 with LogR or SVM]. In direct comparison, the wavelet-derived parameters provided performance at least equal to conventional attenuation-time-course parameters. The maximum AUC obtained from the proposed wavelet parameters was slightly (although not statistically significantly) higher than the maximum AUC (0.945) obtained from the conventional parameters. A new method to classify arterial and venous cerebral vessels with high statistical accuracy was introduced, based on the time-domain wavelet transform of dynamic CT perfusion data in combination with linear or nonlinear multidimensional classification techniques.
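Fisher's linear discriminant analysis on the three wavelet features (A, T, W) can be sketched as follows. The feature values are synthetic stand-ins (arteries enhancing earlier and with a narrower bolus than veins), not the patients' data.

```python
import numpy as np

def fisher_lda(Xa, Xv):
    """Fisher's linear discriminant for two classes: direction maximizing
    between-class separation relative to within-class scatter."""
    mu_a, mu_v = Xa.mean(axis=0), Xv.mean(axis=0)
    Sw = np.cov(Xa.T) * (len(Xa) - 1) + np.cov(Xv.T) * (len(Xv) - 1)
    return np.linalg.solve(Sw, mu_a - mu_v)

# Synthetic (A, T, W) features for two vessel classes
rng = np.random.default_rng(3)
arteries = rng.normal([500.0, 10.0, 4.0], [80.0, 2.0, 1.0], size=(90, 3))
veins = rng.normal([420.0, 16.0, 6.0], [80.0, 2.0, 1.0], size=(90, 3))

w = fisher_lda(arteries, veins)
thr = 0.5 * ((arteries @ w).mean() + (veins @ w).mean())
acc = 0.5 * ((arteries @ w > thr).mean() + (veins @ w < thr).mean())
```

A ROC curve (and hence the AUC reported in the abstract) follows from sweeping the threshold `thr` over the projected scores instead of fixing it at the midpoint.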
NASA Technical Reports Server (NTRS)
Bebis, George (Inventor); Amayeh, Gholamreza (Inventor)
2015-01-01
Hand-based biometric analysis systems and techniques are described which provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.
NASA Technical Reports Server (NTRS)
Bebis, George
2013-01-01
Hand-based biometric analysis systems and techniques provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.
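Zernike moments can be computed directly from their definition, and the rotation invariance of their magnitudes is easy to demonstrate. A minimal sketch; the image is a synthetic half-disk, not a hand segment, and no attempt is made here at the term re-use optimization the patent describes.

```python
import numpy as np
from math import factorial

def zernike_radial(n, m, rho):
    """Radial polynomial R_n^m(rho) of the Zernike basis (n - |m| even)."""
    m = abs(m)
    R = np.zeros_like(rho)
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k) * factorial((n + m) // 2 - k)
                * factorial((n - m) // 2 - k)))
        R = R + c * rho ** (n - 2 * k)
    return R

def zernike_moment(img, n, m):
    """Zernike moment A_nm of a square image mapped onto the unit disk."""
    h, w = img.shape
    y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    basis = zernike_radial(n, m, rho) * np.exp(1j * m * theta)
    return (n + 1) / np.pi * np.sum(img[mask] * np.conj(basis[mask]))

# Rotation invariance: |A_nm| is unchanged when the image is rotated
yy, xx = np.mgrid[-1:1:64j, -1:1:64j]
img = ((np.hypot(xx, yy) < 0.9) & (xx > 0)).astype(float)   # half-disk
a = abs(zernike_moment(img, 1, 1))
b = abs(zernike_moment(np.rot90(img), 1, 1))
```

Rotating the image only multiplies each moment by a unit-modulus phase, so the magnitudes `|A_nm|` used as descriptors are unchanged.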
NASA Astrophysics Data System (ADS)
Zaborowicz, M.; Przybył, J.; Koszela, K.; Boniecki, P.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.
2014-04-01
The aim of the project was to develop software that extracts the characteristics of a greenhouse tomato from its image. Data gathered during image analysis and processing were used to build learning sets for artificial neural networks. The program can process pictures in JPEG format, acquire statistical information from a picture, and export it to an external file. The software is intended for batch analysis of the collected research material, with the obtained information saved as a CSV file. The program analyzes 33 independent parameters describing the tested image. The application is dedicated to the processing and image analysis of greenhouse tomatoes, but it can also be used to analyze other fruits and vegetables of spherical shape.
Anti AIDS drug design with the help of neural networks
NASA Astrophysics Data System (ADS)
Tetko, I. V.; Tanchuk, V. Yu.; Luik, A. I.
1995-04-01
Artificial neural networks were used to analyze and predict inhibitors of human immunodeficiency virus type 1 reverse transcriptase. The training and control sets included 44 molecules (most of them well-known substances such as AZT, TIBO, dde, etc.). The biological activities of the molecules were taken from the literature and rated in two classes, active and inactive compounds, according to their values. We used topological indices as molecular parameters. The four most informative parameters (out of 46) were chosen using cluster analysis and an original input-parameter estimation procedure, and were used to predict the activities of both control and new (synthesized in our institute) molecules. We applied a network pruning algorithm and network ensembles to obtain the final classifier and avoid chance correlation. Improved neural network generalization on the control set was observed when using the aforementioned methods. The prognosis of the new molecules revealed one molecule as possibly active. This was confirmed by further biological tests. The compound was as active as AZT and an order of magnitude less toxic. The active compound is currently being evaluated in preclinical trials as a possible drug for anti-AIDS therapy.
Galka, Andreas; Siniatchkin, Michael; Stephani, Ulrich; Groening, Kristina; Wolff, Stephan; Bosch-Bayard, Jorge; Ozaki, Tohru
2010-12-01
The analysis of time series obtained by functional magnetic resonance imaging (fMRI) may be approached by fitting predictive parametric models, such as nearest-neighbor autoregressive models with exogenous input (NNARX). As a part of the modeling procedure, it is possible to apply instantaneous linear transformations to the data. Spatial smoothing, a common preprocessing step, may be interpreted as such a transformation. The autoregressive parameters may be constrained such that they provide a response behavior that corresponds to the canonical haemodynamic response function (HRF). We present an algorithm for estimating the parameters of the linear transformations and of the HRF within a rigorous maximum-likelihood framework. Using this approach, an optimal amount of both the spatial smoothing and the HRF can be estimated simultaneously for a given fMRI data set. An example from a motor-task experiment is discussed. It is found that, for this data set, weak, but non-zero, spatial smoothing is optimal. Furthermore, it is demonstrated that activated regions can be estimated within the maximum-likelihood framework.
Profound Interfacial Effects in CoFe2O4/Fe3O4 and Fe3O4/CoFe2O4 Core/Shell Nanoparticles
NASA Astrophysics Data System (ADS)
Polishchuk, Dmytro; Nedelko, Natalia; Solopan, Sergii; Ślawska-Waniewska, Anna; Zamorskyi, Vladyslav; Tovstolytkin, Alexandr; Belous, Anatolii
2018-03-01
Two sets of core/shell magnetic nanoparticles, CoFe2O4/Fe3O4 and Fe3O4/CoFe2O4, with a fixed core diameter (4.1 and 6.3 nm for the former and latter sets, respectively) and shell thicknesses up to 2.5 nm were synthesized from metal chlorides in a diethylene glycol solution. The nanoparticles were characterized by X-ray diffraction, transmission electron microscopy, and magnetic measurements. The analysis of the results of magnetic measurements shows that coating of magnetic nanoparticles with the shells results in two simultaneous effects: first, it modifies the parameters of the core-shell interface, and second, it makes the particles acquire combined features of the core and the shell. The first effect becomes especially prominent when the parameters of core and shell strongly differ from each other. The results obtained are useful for optimizing and tailoring the parameters of core/shell spinel ferrite magnetic nanoparticles for their use in various technological and biomedical applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eifler, Tim; Krause, Elisabeth; Dodelson, Scott
2014-05-28
Systematic uncertainties that have been subdominant in past large-scale structure (LSS) surveys are likely to exceed statistical uncertainties of current and future LSS data sets, potentially limiting the extraction of cosmological information. Here we present a general framework (PCA marginalization) to consistently incorporate systematic effects into a likelihood analysis. This technique naturally accounts for degeneracies between nuisance parameters and can substantially reduce the dimension of the parameter space that needs to be sampled. As a practical application, we apply PCA marginalization to account for baryonic physics as an uncertainty in cosmic shear tomography. Specifically, we use CosmoLike to run simulated likelihood analyses on three independent sets of numerical simulations, each covering a wide range of baryonic scenarios differing in cooling, star formation, and feedback mechanisms. We simulate a Stage III (Dark Energy Survey) and Stage IV (Large Synoptic Survey Telescope/Euclid) survey and find a substantial bias in cosmological constraints if baryonic physics is not accounted for. We then show that PCA marginalization (employing at most 3 to 4 nuisance parameters) removes this bias. Our study demonstrates that it is possible to obtain robust, precise constraints on the dark energy equation of state even in the presence of large levels of systematic uncertainty in astrophysical processes. We conclude that the PCA marginalization technique is a powerful, general tool for addressing many of the challenges facing the precision cosmology program.
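The core of the PCA-marginalization idea, finding the principal directions spanned by the systematic (baryonic) deviations and removing their contribution from the data vector, can be sketched with synthetic vectors. The dimensions, amplitudes, and deviation model below are invented; the actual analysis marginalizes over PC amplitudes in the likelihood rather than projecting them out directly.

```python
import numpy as np

rng = np.random.default_rng(4)
ndata, nscen = 50, 8

# Hypothetical deviations of each baryonic scenario's data vector from the
# baseline prediction (larger on small scales, tapering off)
dev = rng.normal(size=(nscen, ndata)) * np.linspace(2.0, 0.1, ndata)

# Principal components of the systematic deviations
U, S, Vt = np.linalg.svd(dev - dev.mean(axis=0), full_matrices=False)
pcs = Vt[:3]                       # top 3 directions to marginalize over

# Remove the marginalized directions from a contaminated data vector
data = rng.normal(size=ndata) + 1.5 * pcs[0]      # signal + systematic
cleaned = data - pcs.T @ (pcs @ data)             # project out PC span
residual = pcs @ cleaned                          # contribution left in PCs
```

After projection, the marginalized directions carry no information, which is why only 3 to 4 nuisance amplitudes are needed regardless of the size of the data vector.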
Reliable evaluation of the quantal determinants of synaptic efficacy using Bayesian analysis
Beato, M.
2013-01-01
Communication between neurones in the central nervous system depends on synaptic transmission. The efficacy of synapses is determined by pre- and postsynaptic factors that can be characterized using quantal parameters such as the probability of neurotransmitter release, the number of release sites, and the quantal size. Existing methods of estimating the quantal parameters based on multiple probability fluctuation analysis (MPFA) are limited by their requirement for long recordings to acquire substantial data sets. We therefore devised an algorithm, termed Bayesian Quantal Analysis (BQA), that can yield accurate estimates of the quantal parameters from data sets as small as 60 observations for each of only two conditions of release probability. Computer simulations are used to compare its accuracy with that of MPFA while varying the number of observations and the simulated range in release probability. We challenge BQA with realistic complexities characteristic of complex synapses, such as increases in the intra- or intersite variances, and heterogeneity in release probabilities. Finally, we validate the method using experimental data obtained from electrophysiological recordings to show that the effect of an antagonist on postsynaptic receptors is correctly characterized by BQA through a specific reduction in the estimates of quantal size. Since BQA routinely yields reliable estimates of the quantal parameters from small data sets, it is ideally suited to identifying the locus of synaptic plasticity in experiments where repeated manipulations of the recording environment are unfeasible. PMID:23076101
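The quantal model underlying this kind of analysis treats each response as a binomially distributed number of released quanta times the quantal size, plus noise. A minimal grid-based maximum-likelihood sketch follows; it is not the authors' Bayesian implementation (a Bayesian version would put priors on the grid and normalize to a posterior instead of taking the maximum), and all parameter values are invented.

```python
import numpy as np
from math import comb

# Simulated synaptic responses: k released quanta (binomial) times quantal
# size q, plus Gaussian recording noise
rng = np.random.default_rng(5)
N_true, p_true, q_true, noise = 5, 0.5, 1.0, 0.2
k = rng.binomial(N_true, p_true, size=60)
y = k * q_true + rng.normal(0.0, noise, size=60)

def loglik(y, N, p, q, sd):
    """Log-likelihood of the quantal (binomial-Gaussian mixture) model."""
    ks = np.arange(N + 1)
    w = np.array([comb(N, i) * p**i * (1 - p)**(N - i) for i in ks])
    dens = (w[None, :]
            * np.exp(-0.5 * ((y[:, None] - ks[None, :] * q) / sd) ** 2)
            / (sd * np.sqrt(2 * np.pi))).sum(axis=1)
    return np.log(dens).sum()

# Grid search over (N, p, q) with the noise level assumed known
grid = ((loglik(y, N, p, q, noise), N, p, q)
        for N in range(2, 9)
        for p in np.linspace(0.2, 0.8, 13)
        for q in np.linspace(0.5, 1.5, 11))
_, N_hat, p_hat, q_hat = max(grid, key=lambda t: t[0])
```

Even with only 60 observations, the quantal size is well constrained by the spacing of the response clusters, while N and p are mainly constrained through their product.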
Quantitative Determination of Spring Water Quality Parameters via Electronic Tongue
Carbó, Noèlia; López Carrero, Javier; Garcia-Castillo, F. Javier; Olivas, Estela; Folch, Elisa; Alcañiz Fillol, Miguel; Soto, Juan
2017-01-01
The use of a voltammetric electronic tongue for the quantitative analysis of quality parameters in spring water is proposed here. The voltammetric electronic tongue consisted of a set of four noble electrodes (iridium, rhodium, platinum, and gold) housed inside a stainless steel cylinder. These noble metals have high durability and are undemanding in maintenance, features required for the development of future automated equipment. A pulse voltammetry study was conducted in 83 spring water samples to determine concentrations of nitrate (range: 6.9–115 mg/L), sulfate (32–472 mg/L), fluoride (0.08–0.26 mg/L), chloride (17–190 mg/L), and sodium (11–94 mg/L) as well as pH (7.3–7.8). These parameters were also determined by routine analytical methods in the spring water samples. A partial least squares (PLS) analysis was run to obtain a model to predict these parameters. Orthogonal signal correction (OSC) was applied in the preprocessing step. Calibration (67%) and validation (33%) sets were selected randomly. The electronic tongue showed good predictive power for the concentrations of nitrate, sulfate, chloride, and sodium as well as pH, but displayed a lower R2 and slope in the validation set for fluoride. Nitrate and fluoride concentrations were estimated with errors lower than 15%, whereas chloride, sulfate, and sodium concentrations as well as pH were estimated with errors below 10%. PMID:29295592
Optimization Methods in Sherpa
NASA Astrophysics Data System (ADS)
Siemiginowska, Aneta; Nguyen, Dan T.; Doe, Stephen M.; Refsdal, Brian L.
2009-09-01
Forward fitting is a standard technique used to model X-ray data. A statistic, usually weighted chi^2 or a Poisson likelihood (e.g., Cash), is minimized in the fitting process to obtain the set of best model parameters. Astronomical models often have complex forms with many parameters that can be correlated (e.g., an absorbed power law). Minimization is not trivial in such settings, as the statistical parameter space becomes multimodal and finding the global minimum is hard. Standard minimization algorithms can be found in many libraries of scientific functions, but they are usually focused on specific functions. Sherpa, designed as a general fitting and modeling application, instead requires very robust optimization methods that can be applied to a variety of astronomical data (X-ray spectra, images, timing, optical data, etc.). We developed several optimization algorithms in Sherpa targeting a wide range of minimization problems. Two local minimization methods were built: the Levenberg-Marquardt algorithm was obtained from the MINPACK subroutine LMDIF and modified to achieve the required robustness, and the Nelder-Mead simplex method was implemented in-house based on variations of the algorithm described in the literature. A global-search Monte Carlo method has been implemented following the differential evolution algorithm presented by Storn and Price (1997). We will present the methods in Sherpa and discuss their usage cases. We will focus on the application to Chandra data, showing both 1D and 2D examples. This work is supported by NASA contract NAS8-03060 (CXC).
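The global search method mentioned here follows the differential evolution algorithm of Storn and Price (1997). A minimal DE/rand/1/bin sketch, not Sherpa's actual implementation, demonstrated on the Rosenbrock function:

```python
import numpy as np

def diff_evolution(f, bounds, pop=20, gens=200, F=0.8, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin global minimizer (after Storn and Price, 1997)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    d = len(bounds)
    x = rng.uniform(lo, hi, size=(pop, d))
    cost = np.array([f(v) for v in x])
    for _ in range(gens):
        for i in range(pop):
            others = [j for j in range(pop) if j != i]
            a, b, c = x[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # mutation
            cross = rng.random(d) < CR                     # binomial crossover
            cross[rng.integers(d)] = True                  # keep >= 1 mutant dim
            trial = np.where(cross, mutant, x[i])
            fc = f(trial)
            if fc <= cost[i]:                              # greedy selection
                x[i], cost[i] = trial, fc
    best = cost.argmin()
    return x[best], cost[best]

# Rosenbrock function: narrow curved valley, global minimum f = 0 at (1, 1)
rosen = lambda v: (1.0 - v[0]) ** 2 + 100.0 * (v[1] - v[0] ** 2) ** 2
best_x, best_f = diff_evolution(rosen, [(-5.0, 5.0), (-5.0, 5.0)])
```

Because DE only compares function values, it works unchanged whether the statistic being minimized is chi^2 or a Cash likelihood.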
Virtual Ionosonde Construction by using ITS and IRI-2012 models
NASA Astrophysics Data System (ADS)
Kabasakal, Mehmet; Toker, Cenk
2016-07-01
An ionosonde is a type of radar used to examine several properties of the ionosphere, including the electron density and drift velocity. An ionosonde is an expensive device, and its installation requires special expertise and a suitable site clear of sources of radio interference. To overcome the difficulties of installing ionosonde hardware, the goal of this study is to construct a virtual ionosonde based on communication channel models, where the model parameters are determined by ray tracing obtained from the PHaRLAP software and the International Reference Ionosphere (IRI-2012) model. Although narrowband high frequency (HF) communication models have been widely used to represent the behaviour of the radio channel, they are applicable to a limited set of actual propagation conditions, and wideband models are needed to better understand the HF channel. In 1997, the Institute for Telecommunication Science (ITS) developed a wideband HF ionospheric model, the so-called ITS model; however, it has some restrictions in real-life applications. The ITS model parameters are grouped into two parts, deterministic and stochastic. The deterministic parameters are the delay time (tau_c) of each reflection path based on the penetration frequency (f_p), the height (h_0) of the maximum electron density, and the half thickness (sigma) of the reflective layer. The stochastic parameters, delay spread (sigma_tau), delay rise time (sigma_c), Doppler spread (sigma_D), and Doppler shift (f_s), are used to calculate the impulse response of the channel. These parameters are generally difficult to obtain and are based on measured data, which may not be available in all cases. In order to obtain these parameters, we propose to integrate the PHaRLAP ray tracing toolbox and the IRI-2012 model.
When Total Electron Content (TEC) estimates obtained from GNSS measurements are input to IRI-2012, the model generates electron density profiles close to the actual profiles, which are used for ray tracing between the user defined geographical coordinates. Then, ITS model parameters are obtained from both ray tracing and also the IRI-2012 model. Finally, an ionosonde signal waveform is transmitted through the channel obtained from the ITS model to generate the ionogram. As an application, oblique sounding between two points is simulated with ITS channel model. M-sequence, Barker sequence and complementary sequences are used as sounding waveforms. The effects of channel on the oblique ionogram and sounding waveform characteristics are also investigated.
Al-Amri, Mohammad; Al Balushi, Hilal; Mashabi, Abdulrhman
2017-12-01
Self-paced treadmill walking is becoming increasingly popular for gait assessment and re-education, in both research and clinical settings. Its day-to-day repeatability is yet to be established. This study scrutinised the test-retest repeatability of key gait parameters obtained from the Gait Real-time Analysis Interactive Lab (GRAIL) system. Twenty-three male able-bodied adults (age: 34.56 ± 5.12 years) completed two separate gait assessments on the GRAIL system, separated by 5 ± 3 days. Key gait kinematic, kinetic, and spatial-temporal parameters were analysed. The Intraclass Correlation Coefficient (ICC), Standard Error of Measurement (SEM), Minimum Detectable Change (MDC), and the 95% limits of agreement were calculated to evaluate the repeatability of these gait parameters. Day-to-day agreement was excellent (ICCs > 0.87) for spatial-temporal parameters, with low MDC and SEM values (<0.153 and <0.055, respectively). The repeatability was higher for joint kinetic than for kinematic parameters, as reflected in small values of SEM (<0.13 Nm/kg and <3.4°) and MDC (<0.335 Nm/kg and <9.44°). The obtained values of all parameters fell within the 95% limits of agreement. Our findings demonstrate the repeatability of the GRAIL system available in our laboratory. The SEM and MDC values can be used to assist researchers and clinicians in distinguishing 'real' changes in gait performance over time.
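The repeatability statistics used here are related by SEM = SD·sqrt(1 − ICC) and MDC95 = 1.96·sqrt(2)·SEM. A sketch using a two-way ICC(2,1) on synthetic two-session data (the subject count matches the abstract, but the walking-speed values are invented):

```python
import numpy as np

def repeatability(day1, day2):
    """ICC(2,1) for two sessions, with SEM = SD*sqrt(1 - ICC) and
    MDC95 = 1.96*sqrt(2)*SEM derived from it."""
    x = np.column_stack([day1, day2])
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * x.mean(axis=1).var(ddof=1)    # between-subjects mean square
    ms_cols = n * x.mean(axis=0).var(ddof=1)    # between-sessions mean square
    ss_err = ((x - x.mean(axis=1, keepdims=True)
               - x.mean(axis=0) + grand) ** 2).sum()
    ms_err = ss_err / ((n - 1) * (k - 1))
    icc = (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                + k * (ms_cols - ms_err) / n)
    sem = x.std(ddof=1) * np.sqrt(1.0 - icc)
    mdc = 1.96 * np.sqrt(2.0) * sem
    return icc, sem, mdc

# Synthetic walking speeds (m/s) for 23 subjects measured on two days
rng = np.random.default_rng(6)
true = rng.normal(1.3, 0.15, 23)
day1 = true + rng.normal(0.0, 0.03, 23)
day2 = true + rng.normal(0.0, 0.03, 23)
icc, sem, mdc = repeatability(day1, day2)
```

A measured change smaller than the MDC cannot be distinguished from measurement noise, which is how the abstract proposes these values be used clinically.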
Asymptotic formulae for likelihood-based tests of new physics
NASA Astrophysics Data System (ADS)
Cowan, Glen; Cranmer, Kyle; Gross, Eilam; Vitells, Ofer
2011-02-01
We describe likelihood-based statistical tests for use in high energy physics for the discovery of new phenomena and for construction of confidence intervals on model parameters. We focus on the properties of the test procedures that allow one to account for systematic uncertainties. Explicit formulae for the asymptotic distributions of test statistics are derived using results of Wilks and Wald. We motivate and justify the use of a representative data set, called the "Asimov data set", which provides a simple method to obtain the median experimental sensitivity of a search or measurement as well as fluctuations about this expectation.
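For a single-bin counting experiment, evaluating the profile-likelihood test statistic on the Asimov data set gives the median discovery significance in closed form, Z_A = sqrt(2((s+b)ln(1+s/b) − s)), which reduces to the familiar s/sqrt(b) approximation when s ≪ b. A short sketch (the s and b values are arbitrary illustrations):

```python
import numpy as np

def asimov_significance(s, b):
    """Median discovery significance for a counting experiment with expected
    signal s and background b, from the Asimov data set:
    Z_A = sqrt(2*((s + b)*ln(1 + s/b) - s))."""
    return np.sqrt(2.0 * ((s + b) * np.log(1.0 + s / b) - s))

z_asimov = asimov_significance(s=10.0, b=100.0)
z_naive = 10.0 / np.sqrt(100.0)      # s / sqrt(b) approximation
```

The Asimov value is always slightly below s/sqrt(b), and the two agree in the small-signal limit.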
Equivalences of the multi-indexed orthogonal polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odake, Satoru
2014-01-15
Multi-indexed orthogonal polynomials describe eigenfunctions of exactly solvable shape-invariant quantum mechanical systems in one dimension obtained by the method of virtual states deletion. Multi-indexed orthogonal polynomials are labeled by a set of degrees of polynomial parts of virtual state wavefunctions. For multi-indexed orthogonal polynomials of Laguerre, Jacobi, Wilson, and Askey-Wilson types, two different index sets may give equivalent multi-indexed orthogonal polynomials. We clarify these equivalences. Multi-indexed orthogonal polynomials with both type I and II indices are proportional to those of type I indices only (or type II indices only) with shifted parameters.
Research on Turbofan Engine Model above Idle State Based on NARX Modeling Approach
NASA Astrophysics Data System (ADS)
Yu, Bing; Shu, Wenjun
2017-03-01
A nonlinear model for a turbofan engine above idle state, based on NARX, is studied. First, data sets for the JT9D engine are obtained via simulation from an existing model. Then, a nonlinear modeling scheme based on NARX is proposed, and several models with different parameters are built from these data sets. Finally, simulations were carried out to verify the accuracy and dynamic performance of the models; the results show that the NARX model reflects the dynamic characteristics of the turbofan engine with high accuracy.
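A NARX model that is linear in its parameters can be identified by ordinary least squares on lagged inputs and outputs. A toy sketch; the dynamic system and its coefficients are invented stand-ins, not the JT9D model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy dynamic system standing in for engine data: the output depends on
# lagged outputs and a nonlinear function of the lagged input
u = rng.uniform(-1.0, 1.0, 300)           # e.g. a fuel-flow command
y = np.zeros(300)                         # e.g. spool speed
for t in range(2, 300):
    y[t] = 0.6 * y[t-1] - 0.1 * y[t-2] + 0.4 * u[t-1] + 0.2 * u[t-1] ** 2

# NARX regressors: lagged outputs, lagged input, and a polynomial input term
T = np.arange(2, 300)
X = np.column_stack([y[T-1], y[T-2], u[T-1], u[T-1] ** 2])
theta, *_ = np.linalg.lstsq(X, y[T], rcond=None)
```

Because the toy system lies exactly in the regressor span and the data are noiseless, least squares recovers the true coefficients; with real engine data the regressor set itself becomes a modeling choice.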
A rotorcraft flight database for validation of vision-based ranging algorithms
NASA Technical Reports Server (NTRS)
Smith, Phillip N.
1992-01-01
A helicopter flight test experiment was conducted at the NASA Ames Research Center to obtain a database consisting of video imagery and accurate measurements of camera motion, camera calibration parameters, and true range information. The database was developed to allow verification of monocular passive range estimation algorithms for use in the autonomous navigation of rotorcraft during low altitude flight. The helicopter flight experiment is briefly described. Four data sets representative of the different helicopter maneuvers and the visual scenery encountered during the flight test are presented. These data sets will be made available to researchers in the computer vision community.
Mean field treatment of heterogeneous steady state kinetics
NASA Astrophysics Data System (ADS)
Geva, Nadav; Vaissier, Valerie; Shepherd, James; Van Voorhis, Troy
2017-10-01
We propose a method to quickly compute steady-state populations of species undergoing a set of chemical reactions whose rate constants are heterogeneous. Using an average environment in place of an explicit nearest-neighbor configuration, we obtain a set of equations describing a single fluctuating active site in the presence of an averaged bath. We apply this Mean Field Steady State (MFSS) method to a model of H2 production on a disordered surface for which the activation energy of the reaction varies from site to site. The MFSS populations quantitatively reproduce the kinetic Monte Carlo (KMC) results across the range of rate parameters considered.
Millimeter- and submillimeter-wave characterization of various fabrics.
Dunayevskiy, Ilya; Bortnik, Bartosz; Geary, Kevin; Lombardo, Russell; Jack, Michael; Fetterman, Harold
2007-08-20
Transmission measurements of 14 fabrics are presented in the millimeter-wave and submillimeter-wave electromagnetic regions from 130 GHz to 1.2 THz. Three independent sources and experimental set-ups were used to obtain accurate results over a wide spectral range. Reflectivity, a useful parameter for imaging applications, was also measured for a subset of samples in the submillimeter-wave regime along with polarization sensitivity of the transmitted beam and transmission through doubled layers. All of the measurements were performed in free space. Details of these experimental set-ups along with their respective challenges are presented.
ZERODUR: deterministic approach for strength design
NASA Astrophysics Data System (ADS)
Hartmann, Peter
2012-12-01
There is increasing demand for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems: the data sets were too small to obtain distribution parameters with sufficient accuracy, and also too small to decide on the validity of the model. This holds especially for the low failure probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher-load applications seemed infeasible. New data have been collected with enough specimens per set to enable tests of the applicability of the three-parameter Weibull distribution. This distribution proved to fit the data much better. Moreover, it delivers a lower threshold value, i.e., a minimum breakage stress, which removes the statistical uncertainty by introducing a deterministic method to calculate design strength. Considerations from fracture mechanics, proven reliable in proof-test qualifications of delicate structures made from brittle materials, allow fatigue due to stress corrosion to be included in a straightforward way. With the formulae derived, either lifetime can be calculated from a given stress, or allowable stress from a minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter Weibull distribution approach and are no longer subject to statistical uncertainty.
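The role of the threshold parameter can be made concrete by inverting the three-parameter Weibull failure probability F(s) = 1 - exp(-((s - s0)/eta)^m): as the target failure probability goes to zero, the design stress approaches the threshold s0 rather than an extrapolated tail value. The parameter values below are made up for illustration and are not ZERODUR certification data.

```python
import math

def weibull3_stress_at_pf(pf, threshold, scale, shape):
    """Invert F(s) = 1 - exp(-(((s - threshold)/scale)**shape))
    to get the stress at a target failure probability pf."""
    return threshold + scale * (-math.log(1.0 - pf)) ** (1.0 / shape)

# Hypothetical parameters (MPa), chosen only to show the behavior
threshold, scale, shape = 50.0, 30.0, 5.0

s_low = weibull3_stress_at_pf(1e-3, threshold, scale, shape)
s_lower = weibull3_stress_at_pf(1e-9, threshold, scale, shape)
# s_lower is closer to the threshold than s_low: the threshold acts as
# a deterministic minimum strength instead of a statistical extrapolation
```

With a two-parameter fit (threshold = 0) the same inversion would drive the design stress toward zero, which is exactly the problem the abstract describes.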
NASA Astrophysics Data System (ADS)
Abdeljaber, Osama; Avci, Onur; Inman, Daniel J.
2016-05-01
One of the major challenges in civil, mechanical, and aerospace engineering is to develop vibration suppression systems with high efficiency and low cost. Recent studies have shown that high damping performance at broadband frequencies can be achieved by incorporating periodic inserts with tunable dynamic properties as internal resonators in structural systems. Structures featuring these kinds of inserts are referred to as metamaterials inspired structures or metastructures. Chiral lattice inserts exhibit unique characteristics such as frequency bandgaps which can be tuned by varying the parameters that define the lattice topology. Recent analytical and experimental investigations have shown that broadband vibration attenuation can be achieved by including chiral lattices as internal resonators in beam-like structures. However, these studies have suggested that the performance of chiral lattice inserts can be maximized by utilizing an efficient optimization technique to obtain the optimal topology of the inserted lattice. In this study, an automated optimization procedure based on a genetic algorithm is applied to obtain the optimal set of parameters that will result in chiral lattice inserts tuned properly to reduce the global vibration levels of a finite-sized beam. Genetic algorithms are considered in this study due to their capability of dealing with complex and insufficiently understood optimization problems. In the optimization process, the basic parameters that govern the geometry of periodic chiral lattices including the number of circular nodes, the thickness of the ligaments, and the characteristic angle are considered. Additionally, a new set of parameters is introduced to enable the optimization process to explore non-periodic chiral designs. Numerical simulations are carried out to demonstrate the efficiency of the optimization process.
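The optimization loop described above can be sketched as a minimal genetic algorithm over the lattice parameters. The bounds, the placeholder objective, and the GA settings below are hypothetical; the actual study evaluates vibration levels of a finite-sized beam with a structural model.

```python
import random

random.seed(0)
# Hypothetical bounds on the chiral lattice parameters
BOUNDS = {"n_nodes": (3.0, 9.0), "thickness": (0.5, 3.0), "angle": (5.0, 60.0)}

def objective(ind):
    # Placeholder for the global vibration level returned by a structural
    # solver; a smooth bowl stands in for the real beam response
    return ((ind["n_nodes"] - 6.0) ** 2 + (ind["thickness"] - 1.2) ** 2
            + 0.01 * (ind["angle"] - 30.0) ** 2)

def random_ind():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def crossover(a, b):
    # Uniform crossover: each gene taken from either parent
    return {k: random.choice((a[k], b[k])) for k in BOUNDS}

def mutate(ind, rate=0.2):
    for k, (lo, hi) in BOUNDS.items():
        if random.random() < rate:
            ind[k] = min(hi, max(lo, ind[k] + random.gauss(0, 0.1 * (hi - lo))))
    return ind

pop = [random_ind() for _ in range(40)]
for _ in range(60):
    pop.sort(key=objective)          # rank by vibration level
    elite = pop[:10]                 # elitism: keep the best designs
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(30)]
best = min(pop, key=objective)
```

Because the elite is carried over unchanged, the best objective value is non-increasing across generations, which is the property that makes such black-box searches usable with expensive structural solvers.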
Machining of bone: Analysis of cutting force and surface roughness by turning process.
Noordin, M Y; Jiawkok, N; Ndaruhadi, P Y M W; Kurniawan, D
2015-11-01
There are millions of orthopedic surgeries and dental implantation procedures performed every year globally. Most of them involve machining of bones and cartilage. However, theoretical and analytical study of bone machining lags behind its practice and implementation. This study views bone machining as a machining process with bovine bone as the workpiece material. The turning process, which forms the basis of the drilling process used in practice, was investigated experimentally. The focus is on evaluating the effects of three machining parameters, that is, cutting speed, feed, and depth of cut, on the machining responses, that is, the cutting forces and surface roughness resulting from the turning process. Response surface methodology was used to quantify the relation between the machining parameters and the machining responses. The turning process was performed at various cutting speeds (29-156 m/min), depths of cut (0.03-0.37 mm), and feeds (0.023-0.11 mm/rev). Empirical models of the resulting cutting force and surface roughness as functions of cutting speed, depth of cut, and feed were developed. Within the range of machining parameters evaluated, the developed empirical models show that the most influential machining parameter for cutting force is depth of cut, followed by feed and cutting speed. The lowest cutting force was obtained at the lowest cutting speed, lowest depth of cut, and highest feed setting. For surface roughness, feed is the most significant machining condition, followed by cutting speed, while depth of cut showed no effect. The finest surface finish was obtained at the lowest cutting speed and feed setting. © IMechE 2015.
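Empirical machining models of this kind are often power laws fitted by least squares after a log transform. The sketch below fits such a model to synthetic data over the paper's parameter ranges; the model form, the generating exponents, and the noise level are all assumptions for illustration, not the paper's fitted coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic turning experiments over the reported parameter ranges:
# cutting speed v (m/min), feed f (mm/rev), depth of cut d (mm)
v = rng.uniform(29, 156, 30)
f = rng.uniform(0.023, 0.11, 30)
d = rng.uniform(0.03, 0.37, 30)
# Assume a power-law response F = C * v^a * f^b * d^c with small noise
F = 80.0 * v**-0.1 * f**0.4 * d**0.9 * rng.lognormal(0, 0.02, 30)

# The log transform turns the power law into a linear model that
# ordinary least squares can solve, the core of response surface fitting
X = np.column_stack([np.ones_like(v), np.log(v), np.log(f), np.log(d)])
coef, *_ = np.linalg.lstsq(X, np.log(F), rcond=None)
logC, a, b, c = coef
# Here |c| > |b| > |a| mirrors the qualitative finding that depth of
# cut dominates the cutting force, followed by feed and cutting speed
```

Comparing fitted exponent magnitudes is one simple way to rank parameter influence, complementing the formal ANOVA used in response surface methodology.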
Ensuring the consistency of Flow Duration Curve reconstructions: the 'quantile solidarity' approach
NASA Astrophysics Data System (ADS)
Poncelet, Carine; Andreassian, Vazken; Oudin, Ludovic
2015-04-01
Flow Duration Curves (FDCs) are a hydrologic tool describing the distribution of streamflows at a catchment outlet. FDCs are commonly used for calibrating hydrological models, managing water quality, and classifying catchments, among other applications. For gauged catchments, empirical FDCs can be computed from streamflow records. For ungauged catchments, on the other hand, FDCs cannot be obtained from streamflow records and must therefore be obtained in another manner, for example through reconstructions. Regression-based reconstructions estimate each quantile separately from the catchments' attributes (climatic or physical features). The advantage of this category of methods is that it is informative about the processes and is non-parametric. However, the large number of parameters required can cause unwanted artifacts, typically reconstructions that do not always produce increasing quantiles. In this paper we propose a new approach named Quantile Solidarity (QS), which is applied under strict proxy-basin test conditions (Klemeš, 1986) to a set of 600 French catchments. Half of the catchments are treated as gauged and used to calibrate the regression and compute its residuals. The QS approach is a three-step regionalization scheme, which first links quantile values to physical descriptors, then reduces the number of regression parameters, and finally exploits the spatial correlation of the residuals. The innovation is exploiting parameter continuity across the quantiles to dramatically reduce the number of parameters. The second half of the catchments is used as an independent validation set, over which we show that the QS approach ensures strictly increasing FDC reconstructions in ungauged conditions. Reference: V. KLEMEŠ (1986) Operational testing of hydrological simulation models, Hydrological Sciences Journal, 31:1, 13-24
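The artifact the abstract warns about, quantiles that fail to increase when each is regressed independently, can be repaired post hoc with a running maximum. This is a generic monotonicity fix, not the QS scheme itself, and the example values are hypothetical.

```python
def enforce_monotone(quantiles):
    """Restore a valid (non-decreasing) quantile sequence by taking a
    running maximum over independently reconstructed quantiles."""
    fixed, current = [], float("-inf")
    for q in quantiles:
        current = max(current, q)
        fixed.append(current)
    return fixed

# Hypothetical reconstructed flow quantiles (m3/s) with one inversion
raw = [0.2, 0.5, 0.4, 0.9, 1.5]
print(enforce_monotone(raw))  # [0.2, 0.5, 0.5, 0.9, 1.5]
```

QS instead prevents the inversions at the source by tying regression parameters together across quantiles, which is preferable to patching the output.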
Maranesi, E; Merlo, A; Fioretti, S; Zemp, D D; Campanini, I; Quadri, P
2016-02-01
Identification of future non-fallers, infrequent fallers and frequent fallers among older people would permit focusing the delivery of prevention programs on selected individuals. Posturographic parameters have been proven to differentiate between non-fallers and frequent fallers, but not between the former group and infrequent fallers. In this study, postural stability with eyes open and closed, on both a firm and a compliant surface, and while performing a cognitive task was assessed in a consecutive sample of 130 cognitively able elderly, mean age 77 (7) years, categorized as non-fallers (N=67), infrequent fallers (one/two falls, N=45) and frequent fallers (more than two falls, N=18) according to their fall history over the previous year. Principal Component Analysis was used to select the most significant features from a set of 17 posturographic parameters. Next, variables derived from the principal component analysis were used to test, in each task, differences between the three groups. One parameter based on a combination of a set of Centre of Pressure anterior-posterior variables obtained from the eyes-open on a compliant surface task was statistically different among all groups, thus distinguishing infrequent fallers from both non-fallers (P<0.05) and frequent fallers (P<0.05). For the first time, a method based on posturographic data to retrospectively discriminate infrequent fallers was obtained. The joint use of both the eyes-open on a compliant surface condition and this new parameter could be used, in a future study, to improve the performance of protocols and to verify the ability of this method to identify new fallers in elderly without cognitive impairment. Copyright © 2015 Elsevier Ltd. All rights reserved.
Teodoro, Tiago Quevedo; Visscher, Lucas; da Silva, Albérico Borges Ferreira; Haiduke, Roberto Luiz Andrade
2017-03-14
The f-block elements are addressed in this third part of a series of prolapse-free basis sets of quadruple-ζ quality (RPF-4Z). Relativistic adapted Gaussian basis sets (RAGBSs) are used as primitive sets of functions while correlating/polarization (C/P) functions are chosen by analyzing energy lowerings upon basis set increments in Dirac-Coulomb multireference configuration interaction calculations with single and double excitations of the valence spinors. These function exponents are obtained by applying the RAGBS parameters in a polynomial expression. Moreover, through the choice of C/P characteristic exponents from functions of lower angular momentum spaces, a reduction in the computational demand is attained in relativistic calculations based on the kinetic balance condition. The present study thus complements the RPF-4Z sets for the whole periodic table (Z ≤ 118). The sets are available as Supporting Information and can also be found at http://basis-sets.iqsc.usp.br .
Elastic, inelastic, and 1-nucleon transfer channels in the 7Li+120Sn system
NASA Astrophysics Data System (ADS)
Kundu, A.; Santra, S.; Pal, A.; Chattopadhyay, D.; Tripathi, R.; Roy, B. J.; Nag, T. N.; Nayak, B. K.; Saxena, A.; Kailas, S.
2017-03-01
Background: Simultaneous description of the major outgoing channels of a nuclear reaction by coupled-channels calculations using the same set of potential and coupling parameters is one of the difficult tasks to accomplish in nuclear reaction studies. Purpose: To measure the elastic, inelastic, and transfer cross sections for as many channels as possible in the 7Li+120Sn system at different beam energies and simultaneously describe them by a single set of model calculations using fresco. Methods: Projectile-like fragments were detected using six sets of Si-detector telescopes to measure the cross sections for elastic, inelastic, and 1-nucleon transfer channels at two beam energies of 28 and 30 MeV. Optical model analysis of the elastic data and coupled-reaction-channels (CRC) calculations that include around 30 reaction channels coupled directly to the entrance channel, with the respective structural parameters, were performed to understand the measured cross sections. Results: Structure information available in the literature for some of the identified states did not reproduce the present data. Cross sections obtained from CRC calculations using a modified but single set of potential and coupling parameters were able to simultaneously describe the measured data for all the channels at both energies, as well as the existing data for elastic and inelastic cross sections at 44 MeV. Conclusions: Non-reproduction of some of the cross sections using structure information available in the literature, which was extracted from reactions involving different projectiles, indicates that such measurements are probe dependent. New structural parameters were assigned for such states as well as for several new transfer states whose spectroscopic factors were not known.
The Kormendy relation of galaxies in the Frontier Fields clusters: Abell S1063 and MACS J1149.5+2223
NASA Astrophysics Data System (ADS)
Tortorelli, Luca; Mercurio, Amata; Paolillo, Maurizio; Rosati, Piero; Gargiulo, Adriana; Gobat, Raphael; Balestra, Italo; Caminha, G. B.; Annunziatella, Marianna; Grillo, Claudio; Lombardi, Marco; Nonino, Mario; Rettura, Alessandro; Sartoris, Barbara; Strazzullo, Veronica
2018-06-01
We analyse the Kormendy relations (KRs) of the two Frontier Fields clusters, Abell S1063, at z = 0.348, and MACS J1149.5+2223, at z = 0.542, exploiting very deep Hubble Space Telescope photometry and Very Large Telescope (VLT)/Multi Unit Spectroscopic Explorer (MUSE) integral field spectroscopy. With this novel data set, we are able to investigate how the KR parameters depend on the cluster galaxy sample selection and how this affects studies of galaxy evolution based on the KR. We define and compare four different galaxy samples according to (a) Sérsic indices: early-type (`ETG'), (b) visual inspection: `ellipticals', (c) colours: `red', (d) spectral properties: `passive'. The classification is performed for a complete sample of galaxies with mF814W ≤ 22.5 ABmag (M* ≳ 1010.0 M⊙). To derive robust galaxy structural parameters, we use two methods: (1) an iterative estimate of structural parameters using images of increasing size, in order to deal with closely separated galaxies, and (2) different background estimations, to deal with the intracluster light contamination. The comparison between the KRs obtained from the different samples suggests that the sample selection could affect the estimate of the best-fitting KR parameters. The KR built with ETGs is fully consistent with the ones obtained for the ellipticals and passive samples. On the other hand, the KR built on the red sample is only marginally consistent with those obtained with the other samples. We also release the photometric catalogue with structural parameters for the galaxies included in the present analysis.
Validation of Bayesian analysis of compartmental kinetic models in medical imaging.
Sitek, Arkadiusz; Li, Quanzheng; El Fakhri, Georges; Alpert, Nathaniel M
2016-10-01
Kinetic compartmental analysis is frequently used to compute physiologically relevant quantitative values from time series of images. In this paper, a new approach based on Bayesian analysis to obtain information about these parameters is presented and validated. The closed form of the posterior distribution of kinetic parameters is derived with a hierarchical prior to model the standard deviation of normally distributed noise. Markov chain Monte Carlo methods are used for numerical estimation of the posterior distribution. Computer simulations of the kinetics of F18-fluorodeoxyglucose (FDG) are used to demonstrate drawing statistical inferences about kinetic parameters and to validate the theory and implementation. Additionally, point estimates of kinetic parameters and the covariance of those estimates are determined using the classical non-linear least squares approach. Posteriors obtained using the methods proposed in this work are accurate, as no significant deviation from the expected shape of the posterior was found (one-sided P>0.08). It is demonstrated that the results obtained by the standard non-linear least-squares method fail to provide accurate estimation of uncertainty for the same data set (P<0.0001). The results of this work validate the new methods for computer simulations of FDG kinetics. Results show that in situations where the classical approach fails to accurately estimate uncertainty, Bayesian estimation provides accurate information about the uncertainties in the parameters. Although a particular example of FDG kinetics was used in the paper, the methods can be extended to different pharmaceuticals and imaging modalities. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
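The MCMC machinery can be sketched with a deliberately tiny model: a one-parameter washout curve C(t) = A exp(-kt) sampled by random-walk Metropolis. The model, noise level, and prior below are simplifications for illustration; the paper treats full FDG compartmental models with a hierarchical noise prior.

```python
import math, random

random.seed(0)
# Synthetic time-activity curve for a one-compartment washout model
# C(t) = A * exp(-k t); A fixed, k unknown (illustrative, not FDG kinetics)
A, k_true, sigma = 10.0, 0.3, 0.2
ts = [1.0 * i for i in range(1, 11)]
data = [A * math.exp(-k_true * t) + random.gauss(0, sigma) for t in ts]

def log_post(k):
    # Gaussian likelihood with known sigma, flat prior on k > 0
    if k <= 0:
        return float("-inf")
    sse = sum((y - A * math.exp(-k * t)) ** 2 for t, y in zip(ts, data))
    return -sse / (2 * sigma ** 2)

# Random-walk Metropolis: propose, then accept with probability
# min(1, posterior ratio); discard the first 5000 samples as burn-in
k, samples = 0.5, []
for i in range(20000):
    prop = k + random.gauss(0, 0.05)
    if math.log(random.random()) < log_post(prop) - log_post(k):
        k = prop
    if i >= 5000:
        samples.append(k)
k_mean = sum(samples) / len(samples)
```

Unlike a non-linear least-squares point estimate, the retained samples approximate the whole posterior, so credible intervals come directly from their spread.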
The Logical Problem of Language Change.
1995-07-01
…distribution Pi. For the most part we will assume in our simulations that this distribution is uniform on degree-0 (unembedded) sentences, exactly as in… The following table provides the unembedded (degree-0) sentences from each of the 8 grammars (languages) obtained by setting the 3 parameters of example 1 to different values. The languages are referred to as L1 through L8.
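The combinatorics behind the 8 languages follows directly from the parameter space: 3 binary parameters yield 2^3 settings. The tuple encoding below is of course just an illustration of that enumeration.

```python
from itertools import product

# Three binary syntactic parameters yield 2**3 = 8 grammar settings,
# labeled L1 through L8 as in the text
settings = list(product((0, 1), repeat=3))
languages = {f"L{i + 1}": p for i, p in enumerate(settings)}
print(len(languages))  # 8
```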
surrkick: Black-hole kicks from numerical-relativity surrogate models
NASA Astrophysics Data System (ADS)
Gerosa, Davide; Hébert, François; Stein, Leo C.
2018-04-01
surrkick quickly and reliably extracts recoils imparted to generic, precessing black hole binaries. It uses a numerical-relativity surrogate model to obtain the gravitational waveform for a given set of binary parameters, and from this waveform directly integrates the gravitational-wave linear momentum flux. This entirely bypasses the need for fitting formulae, which are typically used to model black-hole recoils in astrophysical contexts.
Recent developments in INPOP planetary ephemerides
NASA Astrophysics Data System (ADS)
Fienga, Agnes; Viswanathan, Vishnu; Laskar, Jacques; Manche, Hervé; Gastineau, Mickael
2015-08-01
We present here the new version of the INPOP planetary ephemerides based on an update of the observational data sets as well as new results in terms of asteroid masses and constraints obtained for the general relativity parameters PPN β, γ, J2 and the secular variation of G. New constraints on the hypothetical existence of a super-Earth beyond the orbit of Neptune will also be presented.
Standard Reference Material (SRM 1990) for Single Crystal Diffractometer Alignment
Wong-Ng, W.; Siegrist, T.; DeTitta, G.T.; Finger, L.W.; Evans, H.T.; Gabe, E.J.; Enright, G.D.; Armstrong, J.T.; Levenson, M.; Cook, L.P.; Hubbard, C.R.
2001-01-01
An international project was successfully completed which involved two major undertakings: (1) a round-robin to demonstrate the viability of the selected standard and (2) the certification of the lattice parameters of SRM 1990, a Standard Reference Material for single crystal diffractometer alignment. This SRM is a set of ≈3500 units of Cr-doped Al2O3, or ruby spheres [0.42±0.011 mole fraction % Cr (expanded uncertainty)]. The round-robin consisted of determination of lattice parameters of a pair of crystals: the ruby sphere as a standard, and a zeolite reference to serve as an unknown. Fifty pairs of crystals were dispatched from the Hauptman-Woodward Medical Research Institute to volunteers in x-ray laboratories world-wide. A total of 45 sets of data was received from 32 laboratories. The mean unit cell parameters of the ruby spheres were found to be a=4.7608 Å ± 0.0062 Å and c=12.9979 Å ± 0.020 Å (95 % intervals of the laboratory means). The sources of error in the outlier data were identified. The SRM project involved the certification of lattice parameters using four well-aligned single crystal diffractometers at (Bell Laboratories) Lucent Technologies and at NRC of Canada (39 ruby spheres), the quantification of the Cr content using a combined microprobe and SEM/EDS technique, and the evaluation of the mosaicity of the ruby spheres using a double-crystal spectrometry method. A confirmation of the lattice parameters was also conducted using a Guinier-Hägg camera. Systematic corrections for thermal expansion and refraction were applied. These rubies are rhombohedral, with space group R-3c. The certified mean unit cell parameters are a=4.76080±0.00029 Å and c=12.99568±0.00087 Å (expanded uncertainty). These certified lattice parameters fall well within the results obtained from the international round-robin study. The Guinier-Hägg transmission measurements on five samples of powdered rubies (a=4.7610 ű0.0013 Å and c=12.9954 ű0.0034 Å) agreed well with the values obtained from the single crystal spheres.
Kaiser, W; Faber, T S; Findeis, M
1996-01-01
The authors developed a computer program that detects myocardial infarction (MI) and left ventricular hypertrophy (LVH) in two steps: (1) by extracting parameter values from a 10-second, 12-lead electrocardiogram, and (2) by classifying the extracted parameter values with rule sets. Every disease has its dedicated set of rules; hence, there are separate rule sets for anterior MI, inferior MI, and LVH. If at least one rule is satisfied, the disease is said to be detected. The computer program automatically develops these rule sets. A database (learning set) of healthy subjects and patients with MI, LVH, and mixed MI+LVH was used. After defining the rule type, initial limits, and expected quality of the rules (positive predictive value, minimum number of patients), the program creates a set of rules by varying the limits. The general rule type is defined as: disease = lim1l < p1 ≤ lim1u and lim2l < p2 ≤ lim2u and ... and limnl < pn ≤ limnu. When defining the rule types, only parameters (p1 ... pn) known as clinical electrocardiographic criteria (amplitudes [mV] of Q, R, and T waves and the ST-segment; duration [ms] of the Q wave; frontal angle [degrees]) were used. This allowed the learned rule sets to be submitted to an independent investigator for medical verification. It also allowed the creation of explanatory texts with the rules. These advantages are not offered by the neurons of a neural network. The learned rules were checked against a test set and the following results were obtained: MI: sensitivity 76.2%, positive predictive value 98.6%; LVH: sensitivity 72.3%, positive predictive value 90.9%. The specificity ratings for MI are better than 98%; for LVH, better than 90%.
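The rule evaluation step above, a disjunction of conjunctions of interval tests lim_l < p ≤ lim_u, can be sketched directly. The parameter names and limits below are hypothetical placeholders, not the learned clinical limits.

```python
# Each rule is a conjunction of interval tests lim_l < p <= lim_u on ECG
# parameters; a disease is flagged if at least one of its rules holds.
# Parameter names and limits here are illustrative, not the learned ones.
RULES = {
    "anterior_MI": [
        {"Q_dur_ms": (30.0, 120.0), "R_amp_mV": (0.0, 0.3)},
    ],
    "LVH": [
        {"R_amp_mV": (2.5, 6.0)},
    ],
}

def detect(params, rules):
    found = []
    for disease, rule_list in rules.items():
        for rule in rule_list:
            # all interval tests must pass; a missing parameter fails
            if all(lo < params.get(p, float("nan")) <= hi
                   for p, (lo, hi) in rule.items()):
                found.append(disease)
                break
    return found

print(detect({"Q_dur_ms": 45.0, "R_amp_mV": 0.2}, RULES))  # ['anterior_MI']
```

Because every rule is a readable interval condition on clinical criteria, the learned sets can be inspected by a physician, which is the advantage the abstract claims over neural network weights.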
Quantitative knowledge acquisition for expert systems
NASA Technical Reports Server (NTRS)
Belkin, Brenda L.; Stengel, Robert F.
1991-01-01
A common problem in the design of expert systems is the definition of rules from data obtained in system operation or simulation. While it is relatively easy to collect data and to log the comments of human operators engaged in experiments, generalizing such information to a set of rules has not previously been a direct task. A statistical method is presented for generating rule bases from numerical data, motivated by an example based on aircraft navigation with multiple sensors. The specific objective is to design an expert system that selects a satisfactory suite of measurements from a dissimilar, redundant set, given an arbitrary navigation geometry and possible sensor failures. The systematic development of a Navigation Sensor Management (NSM) expert system from Kalman filter covariance data is described. The method invokes two statistical techniques: Analysis of Variance (ANOVA) and the ID3 algorithm. The ANOVA technique indicates whether variations of problem parameters give statistically different covariance results, and the ID3 algorithm identifies the relationships between the problem parameters using probabilistic knowledge extracted from a simulation example set. Both are detailed.
Vibrational spectra, DFT quantum chemical calculations and conformational analysis of P-iodoanisole.
Arivazhagan, M; Anitha Rexalin, D; Geethapriya, J
2013-09-01
The solid phase FT-IR and FT-Raman spectra of P-iodoanisole (P-IA) have been recorded in the regions 400-4000 and 50-4000 cm(-1), respectively. The spectra were interpreted in terms of fundamental modes, combination bands, and overtone bands. The structure of the molecule was optimized and the structural characteristics were determined by ab initio (HF) and density functional theory (B3LYP) methods with LanL2DZ as the basis set. A potential energy surface scan over the selected dihedral angle of P-IA was performed to identify the stable conformer. The optimized structural parameters and vibrational wavenumbers of the stable conformer have been predicted by the density functional B3LYP method with the LanL2DZ basis set (with effective core potential representation of electrons near the nuclei for post-third-row atoms). The nucleophilic and electrophilic sites obtained from the molecular electrostatic potential (MEP) surface were calculated. The temperature dependence of the thermodynamic properties has been analyzed, and several thermodynamic parameters have been calculated using B3LYP with the LanL2DZ basis set. Copyright © 2013 Elsevier B.V. All rights reserved.
Beef quality parameters estimation using ultrasound and color images
2015-01-01
Background: Beef quality measurement is a complex task with high economic impact. There is high interest in obtaining an automatic estimation of quality parameters in live cattle or post mortem. In this paper we set out to obtain beef quality estimates from the analysis of ultrasound (in vivo) and color images (post mortem), with the measurement of various parameters related to tenderness and amount of meat: rib eye area, percentage of intramuscular fat, and backfat (subcutaneous fat) thickness. Proposal: An algorithm based on curve evolution is implemented to calculate the rib eye area. The backfat thickness is estimated from the profile of distances between two previously detected curves that delimit the steak and the rib eye. A model based on Support Vector Regression (SVR) is trained to estimate the intramuscular fat percentage. A series of features extracted from a region of interest, previously detected in both ultrasound and color images, was proposed. In all cases, a complete evaluation was performed with different databases including: color and ultrasound images acquired by a beef industry expert, intramuscular fat estimations obtained by an expert using commercial software, and chemical analysis. Conclusions: The proposed algorithms give good results for calculating the rib eye area and the backfat thickness measure and profile. They are also promising in predicting the percentage of intramuscular fat.
Cluster state generation in one-dimensional Kitaev honeycomb model via shortcut to adiabaticity
NASA Astrophysics Data System (ADS)
Kyaw, Thi Ha; Kwek, Leong-Chuan
2018-04-01
We propose a means to obtain computationally useful resource states, also known as cluster states, for measurement-based quantum computation via the transitionless quantum driving algorithm. The idea is to cool the system to its unique ground state and tune some control parameters to arrive at a computationally useful resource state, which is one of the degenerate ground states. Even though there is a set of conserved quantities already present in the model Hamiltonian, which prevents the instantaneous state from evolving into other eigenstate subspaces, one cannot simply quench the control parameters to get the desired state: the state would not evolve at all. With the involvement of the shortcut Hamiltonian, we obtain cluster states in a fast-forward manner. We elaborate our proposal in the one-dimensional Kitaev honeycomb model, and show that the auxiliary Hamiltonian needed for the counterdiabatic driving involves M-body interactions.
Magnetic anisotropy in the Kitaev model systems Na2IrO3 and RuCl3
NASA Astrophysics Data System (ADS)
Chaloupka, Jiří; Khaliullin, Giniyat
2016-08-01
We study the ordered moment direction in the extended Kitaev-Heisenberg model relevant to honeycomb lattice magnets with strong spin-orbit coupling. We utilize numerical diagonalization and analyze the exact cluster ground states using a particular set of spin-coherent states, obtaining thereby quantum corrections to the magnetic anisotropy beyond conventional perturbative methods. It is found that the quantum fluctuations strongly modify the moment direction obtained at a classical level and are thus crucial for a precise quantification of the interactions. The results show that the moment direction is a sensitive probe of the model parameters in real materials. Focusing on the experimentally relevant zigzag phases of the model, we analyze the currently available neutron-diffraction and resonant x-ray-diffraction data on Na2IrO3 and RuCl3 and discuss the parameter regimes plausible in these Kitaev-Heisenberg model systems.
NASA Astrophysics Data System (ADS)
Xie, Gui-long; Zhang, Yong-hong; Huang, Shi-ping
2012-04-01
Using coarse-grained molecular dynamics simulations based on the Gay-Berne (GB) potential model, we have simulated the cooling process of liquid n-butanol. A new set of GB parameters is obtained by fitting the results of density functional theory calculations. The simulations are carried out in the range of 290-50 K with temperature decrements of 10 K. The cooling characteristics are determined from the variations of the density, the potential energy, and the orientational order parameter with temperature, whose slopes all show a discontinuity. Both the radial distribution function curves and the second-rank orientational correlation function curves exhibit splitting of the second peak. Using the discontinuous change of these thermodynamic and structural properties, we obtain an estimated glass transition temperature Tg=120±10 K, which is in good agreement with the experimental result of 110±1 K.
A new method to calculate the beam charge for an integrating current transformer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu Yuchi; Han Dan; Zhu Bin
2012-09-15
The integrating current transformer (ICT) is a magnetic sensor widely used to precisely measure the charge of an ultra-short-pulse charged particle beam generated by traditional accelerators and new laser-plasma particle accelerators. In this paper, we present a new method to calculate the beam charge in an ICT based on circuit analysis. The output transfer function shows an invariable signal profile for an ultra-short electron bunch, so the function can be used to evaluate the signal quality and calculate the beam charge through signal fitting. We obtain a set of parameters in the output function from a standard signal generated by an ultra-short electron bunch (about 1 ps in duration) at a radio frequency linear electron accelerator at Tsinghua University. These parameters can be used to obtain the beam charge by signal fitting with excellent accuracy.
NASA Astrophysics Data System (ADS)
Pignon, Baptiste; Sobotka, Vincent; Boyard, Nicolas; Delaunay, Didier
2017-10-01
Two analytical models are presented to determine cycle parameters of the thermoplastic injection process. The aim of these models is to quickly provide a first set of data for mold temperature and cooling time. The first model is specific to amorphous polymers; the second is dedicated to semi-crystalline polymers and takes crystallization into account. In both cases, the contact between the polymer and the mold can be treated as either perfect or imperfect (i.e., with a thermal contact resistance). Results from the models are compared with experimental data obtained with an instrumented mold for an acrylonitrile butadiene styrene (ABS) and a polypropylene (PP). Good agreement was obtained for the mold temperature variation and for the heat flux. In the case of the PP, the analytical crystallization times were compared with those given by a model coupling heat transfer and crystallization kinetics.
Isoplanatic patch of the human eye for arbitrary wavelengths
NASA Astrophysics Data System (ADS)
Han, Guoqing; Cao, Zhaoliang; Mu, Quanquan; Wang, Yukun; Li, Dayu; Wang, Shaoxin; Xu, Zihao; Wu, Daosheng; Hu, Lifa; Xuan, Li
2018-03-01
The isoplanatic patch of the human eye is a key parameter for adaptive optics systems (AOS) designed for retinal imaging. The field of view (FOV) is usually set to the same size as the isoplanatic patch to obtain high-resolution images. However, the isoplanatic patch has previously been measured only at a single specific wavelength. Here we investigate the wavelength dependence of this important parameter. An optical setup was designed and built in the laboratory to measure the isoplanatic patch at various wavelengths (655 nm, 730 nm and 808 nm). We also implemented the Navarro wide-angle eye model in Zemax software to validate our results, and the two approaches showed high consistency. The isoplanatic patch as a function of wavelength was obtained over the visible to near-infrared range and can be expressed as θ = 0.0028λ - 0.74. This work is beneficial for AOS design for retinal imaging.
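The reported empirical fit can be evaluated directly. The coefficients are taken verbatim from the abstract; the wavelength λ is in nm and the patch size θ is in the angular units used in the paper (an assumption, since the abstract does not restate them):

```python
def isoplanatic_patch(wavelength_nm):
    """Empirical fit reported in the abstract: theta = 0.0028 * lambda - 0.74,
    with lambda in nm and theta in the paper's angular units."""
    return 0.0028 * wavelength_nm - 0.74

# The three wavelengths measured in the study
for lam in (655, 730, 808):
    print(lam, round(isoplanatic_patch(lam), 3))   # 1.094, 1.304, 1.522
```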
Shape and spin of asteroid 967 Helionape
NASA Astrophysics Data System (ADS)
Apostolovska, G.; Kostov, A.; Donchev, Z.; Bebekovska, E. Vchkova; Kuzmanovska, O.
2018-04-01
Knowledge of the spin and shape parameters of asteroids is very important for understanding the conditions during the creation of our planetary system and the formation of asteroid populations. The main-belt asteroid and Flora family member 967 Helionape was observed during five apparitions. The observations were made at the Bulgarian National Astronomical Observatory (BNAO) Rozhen from March 2006 to March 2016. The lightcurve inversion method (Kaasalainen et al. (2001)), applied to 12 relative lightcurves obtained at various geometric conditions of the asteroid, reveals the spin vector, the sense of rotation and a preliminary shape model of the asteroid. Our aim is to contribute to increasing the set of asteroids with known spin and shape parameters. This can be done with dense lightcurves, obtained during a small number of apparitions, in combination with sparse data produced by photometric asteroid surveys such as the Gaia satellite (Hanuš (2011)).
Analysis of reflection effects in HS 2333+3927
NASA Astrophysics Data System (ADS)
Shimanskii, V. V.; Yakin, D. G.; Borisov, N. V.; Bikmaev, I. F.
2012-11-01
The results of photometric and spectroscopic observations of the pre-cataclysmic variable HS 2333+3927, which is a HW Vir binary system, are analyzed. The parameters of the sdB subdwarf companion ( T eff = 37 500 ± 500 K, log g = 5.7 ± 0.05) and the chemical composition of its atmosphere are refined using a spectrum of the binary system obtained at minimum brightness. Reflection effects can fully explain the observed brightness variations of HS 2333+3927, changes in the HI and HeI line profiles, and distortions of the radial-velocity curve of the primary star. A new method for determining the component-mass ratios in HW Vir binaries, based on their radial-velocity curves and models of irradiated atmospheres, is proposed. The set of parameters obtained for the binary components corresponds to models of horizontal-branch sdB subdwarfs and main-sequence stars.
9Be scattering with microscopic wave functions and the continuum-discretized coupled-channel method
NASA Astrophysics Data System (ADS)
Descouvemont, P.; Itagaki, N.
2018-01-01
We use microscopic 9Be wave functions defined in an α+α+n multicluster model to compute 9Be+target scattering cross sections. The parameter sets describing 9Be are generated in the spirit of the stochastic variational method, and the optimal solution is obtained by superposing Slater determinants and by diagonalizing the Hamiltonian. The 9Be three-body continuum is approximated by square-integrable wave functions. The 9Be microscopic wave functions are then used in continuum-discretized coupled-channel (CDCC) calculations of 9Be+208Pb and 9Be+27Al elastic scattering. Without any parameter fitting, we obtain fair agreement with experiment. For a heavy target, the influence of 9Be breakup is important, while it is weaker for light targets. This result confirms previous nonmicroscopic CDCC calculations. One of the main advantages of the microscopic CDCC is that it is based on nucleon-target interactions only; there is no adjustable parameter. The present work represents a first step towards more ambitious calculations involving heavier Be isotopes.
Numerical simulation of turbulent gas flames in tubes.
Salzano, E; Marra, F S; Russo, G; Lee, J H S
2002-12-02
Computational fluid dynamics (CFD) is an emerging technique for predicting the possible consequences of gas explosions and is often considered a powerful and accurate tool for obtaining detailed results. However, systematic analyses of the reliability of this approach for real-scale industrial configurations are still needed. Furthermore, few experimental data are available for comparison and validation. In this work, a set of well documented experimental data on flame acceleration in obstacle-filled tubes containing flammable gas-air mixtures has been simulated. In these experiments, terminal steady flame speeds corresponding to different propagation regimes were observed, allowing a clear and prompt characterisation of the numerical results with respect to numerical parameters (e.g., grid definition), geometrical parameters (e.g., blockage ratio) and mixture parameters (e.g., mixture reactivity). The CFD code AutoReaGas was used for the simulations. Numerical predictions were compared with the available experimental data and some insights into the code accuracy were obtained. Computational results are satisfactory for the relatively slower turbulent deflagration regimes, become only fair when the choking regime is observed, and transitions to quasi-detonation or Chapman-Jouguet (CJ) detonation were never predicted.
Event generator tunes obtained from underlying event and multiparton scattering measurements
Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; ...
2016-03-17
Here, new sets of parameters (“tunes”) for the underlying-event (UE) modelling of the pythia8, pythia6 and herwig++ Monte Carlo event generators are constructed using different parton distribution functions. Combined fits to CMS UE proton-proton (pp) data at √s = 7 TeV and to UE proton-antiproton (pp̄) data from the CDF experiment at lower √s are used to study the UE models and constrain their parameters, thereby providing improved predictions for proton-proton collisions at 13 TeV. In addition, it is investigated whether the values of the parameters obtained from fits to UE observables are consistent with the values determined from fitting observables sensitive to double-parton scattering processes. Finally, comparisons are presented of the UE tunes to “minimum bias” (MB) events, multijet, and Drell-Yan (qq̄ → Z/γ* → lepton-antilepton + jets) observables at 7 and 8 TeV, as well as predictions for MB and UE observables at 13 TeV.
Magenes, G; Bellazzi, R; Malovini, A; Signorini, M G
2016-08-01
The onset of fetal pathologies can be screened during pregnancy by means of Fetal Heart Rate (FHR) monitoring and analysis. Noticeable advances in understanding FHR variations were obtained in the last twenty years, thanks to the introduction of quantitative indices extracted from the FHR signal. This study aims to discriminate between normal and Intra Uterine Growth Restricted (IUGR) fetuses by applying data mining techniques to FHR parameters obtained from recordings in a population of 122 fetuses (61 healthy and 61 IUGR), through the standard CTG non-stress test. We computed N = 12 indices (4 related to time-domain FHR analysis, 4 to the frequency domain and 4 to non-linear analysis) and normalized them with respect to the gestational week. We compared, through a 10-fold cross-validation procedure, 15 data mining techniques in order to select the most reliable approach for identifying IUGR fetuses. The results of this comparison highlight that two techniques (Random Forest and Logistic Regression) show the best classification accuracy and that both outperform the best single parameter in terms of mean AUROC on the test sets.
On the nature of fast sausage waves in coronal loops
NASA Astrophysics Data System (ADS)
Bahari, Karam
2018-05-01
The effect of coronal loop parameters on the nature of fast sausage waves is investigated. Three coronal loop models are considered: a simple loop model, a current-carrying loop model, and a model with radially structured density called the "inner μ" profile. For all models, the magnetohydrodynamic (MHD) equations are solved analytically in the linear approximation and the restoring forces of the oscillations are obtained. The ratio of the magnetic tension force to the pressure-gradient force is obtained as a function of the distance from the loop axis. In the simple loop model, the fast sausage waves have a mixed Alfvénic and fast-MHD nature for all values of the loop parameters; in the current-carrying loop model with a thick annulus and low density contrast, the fast sausage waves can be considered purely Alfvénic in the core region of the loop; and in the "inner μ" profile, for each set of loop parameters, the wave can be considered purely Alfvénic in some regions of the loop.
Extended Le Chatelier's formula for carbon dioxide dilution effect on flammability limits.
Kondo, Shigeo; Takizawa, Kenji; Takahashi, Akifumi; Tokuhashi, Kazuaki
2006-11-02
Carbon dioxide dilution effect on the flammability limits was measured for various flammable gases. The obtained values were analyzed using the extended Le Chatelier's formula developed in a previous study. As a result, it has been found that the flammability limits of methane, propane, propylene, methyl formate, and 1,1-difluoroethane are adequately explained by the extended Le Chatelier's formula using a common set of parameter values. Ethylene, dimethyl ether, and ammonia behave differently from these compounds. The present result is very consistent with what was obtained in the case of nitrogen dilution.
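The abstract does not reproduce the extended formula itself. For orientation, the classic Le Chatelier mixing rule that it extends can be sketched as follows (the fuel names and limit values are illustrative textbook figures, not data from this study):

```python
def le_chatelier_lfl(fractions, limits):
    """Classic Le Chatelier mixing rule for the lower flammability limit of a
    fuel blend: LFL_mix = 1 / sum(y_i / LFL_i), where y_i is the mole fraction
    of fuel i within the fuel portion (summing to 1) and LFL_i is in vol%.
    The paper's *extended* formula adds terms for the diluent (CO2 here);
    this sketch shows only the classic rule it builds on."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(y / L for y, L in zip(fractions, limits))

# 50/50 methane (LFL ~5.0 vol%) / propane (LFL ~2.1 vol%) blend
print(round(le_chatelier_lfl([0.5, 0.5], [5.0, 2.1]), 2))   # 2.96
```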
The Dirac equation in Schwarzschild black hole coupled to a stationary electromagnetic field
NASA Astrophysics Data System (ADS)
Al-Badawi, A.; Owaidat, M. Q.
2017-08-01
We study the Dirac equation in a spacetime that represents the nonlinear superposition of the Schwarzschild solution and an external, stationary electromagnetic field. The set of equations representing the uncharged Dirac particle in the Newman-Penrose formalism is decoupled into radial and angular parts. We obtain exact analytical solutions of the angular equations and reduce the radial equations to wave equations with effective potentials. Finally, we study the potentials by plotting them as functions of the radial distance and examine the effect of the twisting parameter and the frequencies on the potentials.
Generalized gas-solid adsorption modeling: Single-component equilibria
Ladshaw, Austin; Yiacoumi, Sotira; Tsouris, Costas; ...
2015-01-07
Over the last several decades, modeling of gas-solid adsorption at equilibrium has generally been accomplished through the use of isotherms such as the Freundlich, Langmuir, Tóth, and other similar models. While these models are relatively easy to adapt for describing experimental data, their simplicity limits their generality across many different sets of data. This limitation forces engineers and scientists to test each different model in order to evaluate which one can best describe their data. Additionally, the parameters of these models all have a different physical interpretation, which may have an effect on how they can be further extended into kinetic, thermodynamic, and/or mass transfer models for engineering applications. Therefore, it is paramount to adopt not only a more general isotherm model, but also a concise methodology to reliably optimize for and obtain the parameters of that model. A model of particular interest is the Generalized Statistical Thermodynamic Adsorption (GSTA) isotherm. The GSTA isotherm has enormous flexibility, which could potentially be used to describe a variety of different adsorption systems, but utilizing this model can be fairly difficult due to that flexibility. To circumvent this complication, a comprehensive methodology and computer code has been developed that can perform a full equilibrium analysis of adsorption data for any gas-solid system using the GSTA model. The code has been developed in C/C++ and utilizes a Levenberg-Marquardt algorithm to handle the non-linear optimization of the model parameters. Since the GSTA model has an adjustable number of parameters, the code iteratively goes through all plausible numbers of parameters for each data set and then returns the best solution based on a set of scrutiny criteria. Data sets at different temperatures are analyzed serially and then linear correlations with temperature are made for the parameters of the model.
The end result is a full set of optimal GSTA parameters, both dimensional and non-dimensional, as well as the corresponding thermodynamic parameters necessary to predict the behavior of the system at temperatures for which data were not available. It will be shown that this code, utilizing the GSTA model, was able to describe a wide variety of gas-solid adsorption systems at equilibrium. In addition, a physical interpretation of these results is provided, as well as an alternate derivation of the GSTA model, which intends to reaffirm its physical meaning.
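The GSTA functional form is not reproduced in the abstract, and the authors' code is in C/C++. As an illustration of the same workflow, Levenberg-Marquardt nonlinear least squares over isotherm parameters, here is a minimal Python sketch using the simpler Langmuir isotherm as a hypothetical stand-in:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(p, q_max, K):
    """Langmuir isotherm: q = q_max * K * p / (1 + K * p)."""
    return q_max * K * p / (1.0 + K * p)

# Synthetic equilibrium data generated from known parameters (no noise)
p = np.linspace(0.1, 10.0, 20)
q = langmuir(p, 3.0, 0.8)

# method='lm' selects Levenberg-Marquardt, the same optimizer family used
# by the paper's code (which fits the more flexible GSTA model instead)
popt, _ = curve_fit(langmuir, p, q, p0=[1.0, 0.1], method='lm')
print(popt)   # ~ [3.0, 0.8]
```

For GSTA, the additional wrinkle described above is that the number of parameters is itself adjustable, so the fit would be repeated for each candidate parameter count and the winner chosen by scrutiny criteria (e.g., residual norm and parameter plausibility).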
Rüdt, Matthias; Gillet, Florian; Heege, Stefanie; Hitzler, Julian; Kalbfuss, Bernd; Guélat, Bertrand
2015-09-25
Application of model-based design is appealing to support the development of protein chromatography in the biopharmaceutical industry. However, the required efforts for parameter estimation are frequently perceived as time-consuming and expensive. In order to speed-up this work, a new parameter estimation approach for modelling ion-exchange chromatography in linear conditions was developed. It aims at reducing the time and protein demand for the model calibration. The method combines the estimation of kinetic and thermodynamic parameters based on the simultaneous variation of the gradient slope and the residence time in a set of five linear gradient elutions. The parameters are estimated from a Yamamoto plot and a gradient-adjusted Van Deemter plot. The combined approach increases the information extracted per experiment compared to the individual methods. As a proof of concept, the combined approach was successfully applied for a monoclonal antibody on a cation-exchanger and for a Fc-fusion protein on an anion-exchange resin. The individual parameter estimations for the mAb confirmed that the new approach maintained the accuracy of the usual Yamamoto and Van Deemter plots. In the second case, offline size-exclusion chromatography was performed in order to estimate the thermodynamic parameters of an impurity (high molecular weight species) simultaneously with the main product. Finally, the parameters obtained from the combined approach were used in a lumped kinetic model to simulate the chromatography runs. The simulated chromatograms obtained for a wide range of gradient lengths and residence times showed only small deviations compared to the experimental data. Copyright © 2015 Elsevier B.V. All rights reserved.
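A Yamamoto plot estimates the effective protein charge and equilibrium constant from the linearity of log(GH) versus log(I_R) across linear-gradient runs. The sketch below uses invented numbers and the commonly cited relation GH = I_R^(B+1) / (K_eq (B+1)); it illustrates the plot's algebra, not the authors' exact procedure:

```python
import numpy as np

# Hypothetical data: normalized gradient slope GH and the salt concentration
# I_R at which the protein elutes, from five linear-gradient experiments.
beta_true, Keq_true = 4.0, 1.0e3   # assumed effective charge and equilibrium constant

I_R = np.array([0.20, 0.25, 0.30, 0.35, 0.40])            # mol/L at elution
GH = I_R ** (beta_true + 1) / (Keq_true * (beta_true + 1))

# Yamamoto plot: log GH vs log I_R is a straight line of slope beta + 1,
# so a linear fit recovers both parameters.
slope, intercept = np.polyfit(np.log10(I_R), np.log10(GH), 1)
beta = slope - 1.0
Keq = 1.0 / ((beta + 1.0) * 10 ** intercept)
print(beta, Keq)   # recovers ~4.0 and ~1000
```

The combined approach described above additionally varies the residence time so that kinetic parameters can be extracted from a gradient-adjusted Van Deemter plot using the same set of runs.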
Number of independent parameters in the potentiometric titration of humic substances.
Lenoir, Thomas; Manceau, Alain
2010-03-16
With the advent of high-precision automatic titrators operating in pH stat mode, measuring the mass balance of protons in solid-solution mixtures against the pH of natural and synthetic polyelectrolytes is now routine. However, titration curves of complex molecules typically lack obvious inflection points, which complicates their analysis despite the high-precision measurements. The calculation of site densities and median proton affinity constants (pK) from such data can lead to considerable covariance between fit parameters. Knowing the number of independent parameters that can be freely varied during the least-squares minimization of a model fit to titration data is necessary to improve the model's applicability. This number was calculated for natural organic matter by applying principal component analysis (PCA) to a reference data set of 47 independent titration curves from fulvic and humic acids measured at I = 0.1 M. The complete data set was reconstructed statistically from pH 3.5 to 9.8 with only six parameters, compared to seven or eight generally adjusted with common semi-empirical speciation models for organic matter, and explains correlations that occur with the higher number of parameters. Existing proton-binding models are not necessarily overparametrized, but instead titration data lack the sensitivity needed to quantify the full set of binding properties of humic materials. Model-independent conditional pK values can be obtained directly from the derivative of titration data, and this approach is the most conservative. The apparent proton-binding constants of the 23 fulvic acids (FA) and 24 humic acids (HA) derived from a high-quality polynomial parametrization of the data set are pK(H,COOH)(FA) = 4.18 ± 0.21, pK(H,Ph-OH)(FA) = 9.29 ± 0.33, pK(H,COOH)(HA) = 4.49 ± 0.18, and pK(H,Ph-OH)(HA) = 9.29 ± 0.38. Their values at other ionic strengths are more reliably calculated with the empirical Davies equation than any existing model fit.
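The PCA step above, counting how many components are needed to statistically reconstruct a family of titration curves, can be sketched as follows (synthetic curves built from six invented basis shapes stand in for the 47 measured curves):

```python
import numpy as np

def n_components_for(curves, target=0.99):
    """Number of principal components needed to reach the target fraction
    of explained variance across a set of curves (one curve per row)."""
    X = curves - curves.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)
    frac = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(frac, target) + 1)

# Synthetic "titration curves": random mixtures of 6 basis shapes plus tiny noise
rng = np.random.default_rng(0)
pH = np.linspace(3.5, 9.8, 100)
basis = np.array([np.sin((k + 1) * pH) for k in range(6)])   # 6 underlying factors
curves = rng.random((47, 6)) @ basis + 1e-9 * rng.standard_normal((47, 100))
print(n_components_for(curves))   # 6, the number of factors actually present
```

The study's conclusion follows the same logic in reverse: since six components reconstruct the measured curves, fitting more than six free parameters cannot be constrained by the data.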
2015-01-01
The Mass, Metabolism and Length Explanation (MMLE) was advanced in 1984 to explain the relationship between metabolic rate and body mass for birds and mammals. This paper reports on a modernized version of MMLE. MMLE deterministically computes the absolute value of Basal Metabolic Rate (BMR) and body mass for individual animals. MMLE is thus distinct from other examinations of these topics that use species-averaged data to estimate the parameters in a statistically best-fit power-law relationship such as BMR = a(body mass)^b. Beginning with the proposition that BMR is proportional to the number of mitochondria in an animal, two primary equations are derived that compute BMR and body mass as functions of an individual animal’s characteristic length and sturdiness factor. The characteristic length is a measurable skeletal length associated with an animal’s means of propulsion. The sturdiness factor expresses how sturdy or gracile an animal is. Eight other parameters occur in the equations that vary little among animals in the same phylogenetic group. The present paper modernizes MMLE by explicitly treating Froude and Strouhal dynamic similarity of mammals’ skeletal musculature, revising the treatment of BMR and using new data to estimate numerical values for the parameters that occur in the equations. A mass and length data set with 575 entries from the orders Rodentia, Chiroptera, Artiodactyla, Carnivora, Perissodactyla and Proboscidea is used. A BMR and mass data set with 436 entries from the orders Rodentia, Chiroptera, Artiodactyla and Carnivora is also used. With the estimated parameter values, MMLE can calculate characteristic length and sturdiness factor values so that every BMR and mass datum from the BMR and mass data set can be computed exactly. Furthermore, MMLE can calculate characteristic length and sturdiness factor values so that every body mass and length datum from the mass and length data set can be computed exactly.
Whether or not MMLE can calculate a sturdiness factor value so that an individual animal’s BMR and body mass can be simultaneously computed given its characteristic length awaits analysis of a data set that simultaneously reports all three of these items for individual animals. However for many of the addressed MMLE homogeneous groups, MMLE can predict the exponent obtained by regression analysis of the BMR and mass data using the exponent obtained by regression analysis of the mass and length data. This argues that MMLE may be able to accurately simultaneously compute BMR and mass for an individual animal. PMID:26355655
Large scale study of multiple-molecule queries
2009-01-01
Background In ligand-based screening, as well as in other chemoinformatics applications, one seeks to effectively search large repositories of molecules in order to retrieve molecules that are typically similar to a single lead molecule. However, in some cases, multiple molecules from the same family are available to seed the query and search for other members of the same family. Multiple-molecule query methods have been less studied than single-molecule query methods. Furthermore, the previous studies have relied on proprietary data and sometimes have not used proper cross-validation methods to assess the results. In contrast, here we develop and compare multiple-molecule query methods using several large publicly available data sets and background sets. We also create a framework based on a strict cross-validation protocol to allow unbiased benchmarking for direct comparison in future studies across several performance metrics. Results Fourteen different multiple-molecule query methods were defined and benchmarked using: (1) 41 publicly available data sets of related molecules with similar biological activity; and (2) publicly available background data sets consisting of up to 175,000 molecules randomly extracted from the ChemDB database and other sources. Eight of the fourteen methods were parameter free, and six of them fit one or two free parameters to the data using a careful cross-validation protocol. All the methods were assessed and compared for their ability to retrieve members of the same family against the background data set by using several performance metrics including the Area Under the Accumulation Curve (AUAC), Area Under the Curve (AUC), F1-measure, and BEDROC metrics. Consistent with the previous literature, the best parameter-free methods are the MAX-SIM and MIN-RANK methods, which score a molecule against a family by the maximum similarity, or minimum ranking, obtained across the family.
One new parameterized method introduced in this study and two previously defined methods, the Exponential Tanimoto Discriminant (ETD), the Tanimoto Power Discriminant (TPD), and the Binary Kernel Discriminant (BKD), outperform most other methods but are more complex, requiring one or two parameters to be fit to the data. Conclusion Fourteen methods for multiple-molecule querying of chemical databases, including novel methods, (ETD) and (TPD), are validated using publicly available data sets, standard cross-validation protocols, and established metrics. The best results are obtained with ETD, TPD, BKD, MAX-SIM, and MIN-RANK. These results can be replicated and compared with the results of future studies using data freely downloadable from http://cdb.ics.uci.edu/. PMID:20298525
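The parameter-free MAX-SIM method scores a candidate by its maximum Tanimoto similarity to any member of the query family. A minimal sketch with toy fingerprints (Python sets of on-bits standing in for real molecular fingerprints):

```python
def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprints,
    represented here as sets of on-bit indices."""
    return len(a & b) / len(a | b)

def max_sim(query_family, candidate):
    """MAX-SIM: score a candidate molecule against a multi-molecule query
    by its maximum similarity to any family member."""
    return max(tanimoto(m, candidate) for m in query_family)

# Toy family of two fingerprints; the candidate shares 3 of 4 bits with one member
family = [{1, 2, 3, 4}, {2, 3, 4, 5}]
print(max_sim(family, {1, 2, 3}))   # 0.75
```

MIN-RANK works analogously at the ranking level: each family member ranks the whole database by similarity, and a candidate's score is the best (minimum) rank it achieves across those lists.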
Expert-guided optimization for 3D printing of soft and liquid materials.
Abdollahi, Sara; Davis, Alexander; Miller, John H; Feinberg, Adam W
2018-01-01
Additive manufacturing (AM) has rapidly emerged as a disruptive technology to build mechanical parts, enabling increased design complexity, low-cost customization and an ever-increasing range of materials. Yet these capabilities have also created an immense challenge in optimizing the large number of process parameters in order to achieve a high-performance part. This is especially true for AM of soft, deformable materials and for liquid-like resins that require experimental printing methods. Here, we developed an expert-guided optimization (EGO) strategy to provide structure in exploring and improving the 3D printing of liquid polydimethylsiloxane (PDMS) elastomer resin. EGO uses three steps, starting first with expert screening to select the parameter space, factors, and factor levels. Second is a hill-climbing algorithm to search the parameter space defined by the expert for the best set of parameters. Third is expert decision making to try new factors or a new parameter space to improve on the best current solution. We applied the algorithm to two calibration objects, a hollow cylinder and a five-sided hollow cube that were evaluated based on a multi-factor scoring system. The optimum print settings were then used to print complex PDMS and epoxy 3D objects, including a twisted vase, water drop, toe, and ear, at a level of detail and fidelity previously not obtained.
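The hill-climbing step in EGO can be sketched as a greedy search over a discrete factor grid: from the current setting, evaluate all single-factor changes and move to the best one until nothing improves. The factors, levels and scoring function below are invented for illustration, not the study's actual print parameters:

```python
def hill_climb(score, space, start):
    """Greedy hill-climbing over a discrete parameter grid: repeatedly move to
    the best single-factor change until no neighbour improves the score.
    `space` maps each factor name to its ordered list of levels."""
    current = dict(start)
    while True:
        neighbours = []
        for factor, levels in space.items():
            i = levels.index(current[factor])
            for j in (i - 1, i + 1):            # step one level up or down
                if 0 <= j < len(levels):
                    n = dict(current)
                    n[factor] = levels[j]
                    neighbours.append(n)
        best = max(neighbours, key=score)
        if score(best) <= score(current):       # local optimum reached
            return current
        current = best

# Hypothetical print-quality score with a single optimum at speed=30, pressure=2
space = {"speed": [10, 20, 30, 40], "pressure": [1, 2, 3]}
score = lambda p: -abs(p["speed"] - 30) - 10 * abs(p["pressure"] - 2)
print(hill_climb(score, space, {"speed": 10, "pressure": 1}))
```

In EGO the "score" is the multi-factor evaluation of a printed calibration object, so each evaluation is an actual print; the expert loop then decides whether to restart the climb in a new parameter space.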
Dynamic mechanical characterization of poro-viscoelastic materials
NASA Astrophysics Data System (ADS)
Renault, Amelie
Poro-viscoelastic materials are well modelled by the Biot-Allard equations. This model requires a number of geometrical parameters to describe the macroscopic geometry of the material, and elastic parameters to describe the elastic properties of the material skeleton. Several methods for characterizing the viscoelastic parameters of porous materials are studied in this thesis. Firstly, quasistatic and resonant characterization methods are described and analyzed. Secondly, a new inverse dynamic characterization of the same modulus is developed. The latter involves a two-layer metal-porous beam excited at its center, for which the input mobility is measured. The set-up is simpler than in previous methods. The parameters are obtained via an inversion procedure based on minimizing a cost function that compares the measured and calculated frequency response functions (FRFs). The calculation is done with a general laminate model. A parametric study identifies the optimal beam dimensions for maximum sensitivity of the inversion model. The advantage of using a code that does not take fluid-structure interactions into account is the low computation time; for most materials, the effect of this interaction on the elastic properties is negligible. Several materials are tested to demonstrate the performance of the method compared to the classical quasi-static approaches, and to establish its limitations and range of validity. Finally, conclusions about the use of these methods are given. Keywords: elastic parameters, porous materials, anisotropy, vibration.
NASA Astrophysics Data System (ADS)
Zheng, Jing; Lu, Jiren; Peng, Suping; Jiang, Tianqi
2018-02-01
Conventional arrival-picking algorithms cannot avoid manual tuning of parameters for the simultaneous identification of multiple events under different signal-to-noise ratios (SNRs). Therefore, in order to automatically obtain the arrivals of multiple events with high precision under different SNRs, this study proposes an algorithm that picks the arrivals of microseismic or acoustic emission events using deep recurrent neural networks. The arrival identification is performed in two important steps: a training phase and a testing phase. The training process is mathematically modelled by deep recurrent neural networks with a Long Short-Term Memory (LSTM) architecture. During the testing phase, the learned weights are used to identify the arrivals in microseismic/acoustic emission data sets. The data sets were obtained from acoustic emission rock-physics experiments. In order to obtain data sets at different SNRs, random noise was added to the raw experimental data sets. The results show that the proposed method attains a hit rate above 80 per cent at 0 dB SNR, and approximately 70 per cent at -5 dB SNR, with an absolute error within 10 sampling points. These results indicate that the proposed method has high picking precision and robustness.
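Generating data sets at a target SNR, as done here by adding random noise to the raw traces, amounts to scaling white noise against the signal power. A minimal sketch (synthetic sine trace; the power-based SNR definition is an assumption, as the paper may use a different convention):

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, rng):
    """Add white Gaussian noise scaled so the signal-to-noise ratio of the
    returned trace equals snr_db under the power-based definition
    SNR_dB = 10*log10(P_signal / P_noise)."""
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10.0)
    noise = rng.standard_normal(signal.shape) * np.sqrt(p_noise)
    return signal + noise

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 20 * np.pi, 4000))
noisy = add_noise_at_snr(clean, 0.0, rng)   # 0 dB: equal signal and noise power
print(np.mean((noisy - clean) ** 2))        # empirical noise power, ~= 0.5
```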
Otani, Kyoko; Nakazono, Akemi; Salgo, Ivan S; Lang, Roberto M; Takeuchi, Masaaki
2016-10-01
Echocardiographic determination of left heart chamber volumetric parameters by using manual tracings during multiple beats is tedious in atrial fibrillation (AF). The aim of this study was to determine the usefulness of fully automated left chamber quantification software with single-beat three-dimensional transthoracic echocardiographic data sets in patients with AF. Single-beat full-volume three-dimensional transthoracic echocardiographic data sets were prospectively acquired during consecutive multiple cardiac beats (≥10 beats) in 88 patients with AF. In protocol 1, left ventricular volumes, left ventricular ejection fraction, and maximal left atrial volume were validated using automated quantification against the manual tracing method in identical beats in 10 patients. In protocol 2, automated quantification-derived averaged values from multiple beats were compared with the corresponding values obtained from the indexed beat in all patients. Excellent correlations of left chamber parameters between automated quantification and the manual method were observed (r = 0.88-0.98) in protocol 1. The time required for the analysis with the automated quantification method (5 min) was significantly less compared with the manual method (27 min) (P < .0001). In protocol 2, there were excellent linear correlations between the averaged left chamber parameters and the corresponding values obtained from the indexed beat (r = 0.94-0.99), and test-retest variability of left chamber parameters was low (3.5%-4.8%). Three-dimensional transthoracic echocardiography with fully automated quantification software is a rapid and reliable way to measure averaged values of left heart chamber parameters during multiple consecutive beats. Thus, it is a potential new approach for left chamber quantification in patients with AF in daily routine practice. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
An application of the Krylov-FSP-SSA method to parameter fitting with maximum likelihood
NASA Astrophysics Data System (ADS)
Dinh, Khanh N.; Sidje, Roger B.
2017-12-01
Monte Carlo methods such as the stochastic simulation algorithm (SSA) have traditionally been employed in gene regulation problems. However, there has been increasing interest in directly obtaining the probability distribution of the molecules involved by solving the chemical master equation (CME). This requires addressing the curse of dimensionality that is inherent in most gene regulation problems. The finite state projection (FSP) seeks to address this challenge, and there have been variants that further reduce the size of the projection or that accelerate the resulting matrix exponential. The Krylov-FSP-SSA variant has proved numerically efficient by combining, on one hand, the SSA to adaptively drive the FSP, and on the other hand, adaptive Krylov techniques to evaluate the matrix exponential. Here we apply this Krylov-FSP-SSA to a mutually inhibitory gene network synthetically engineered in Saccharomyces cerevisiae, in which bimodality arises. We show numerically that the approach can efficiently approximate the transient probability distribution, and this has important implications for parameter fitting, where the CME has to be solved for many different parameter sets. The fitting scheme amounts to an optimization problem of finding the parameter set such that the transient probability distributions fit the observations with maximum likelihood. We compare five optimization schemes for this difficult problem, thereby providing further insight into this approach to parameter estimation, which is often applied to models in systems biology where there is a need to calibrate free parameters. Work supported by NSF grant DMS-1320849.
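The SSA driving step described above can be illustrated with a minimal Gillespie simulation. The birth-death model and rate constants below are hypothetical stand-ins, not the engineered yeast network from the study; the empirical distribution over many runs approximates the CME solution that the FSP computes directly:

```python
import math
import random

def ssa_birth_death(k_prod, k_deg, x0, t_end, seed=0):
    """Gillespie SSA for a birth-death process: 0 -> X (rate k_prod),
    X -> 0 (rate k_deg * x). Returns the state at time t_end."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    while True:
        a1 = k_prod            # production propensity
        a2 = k_deg * x         # degradation propensity
        a0 = a1 + a2
        t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
        if t >= t_end:
            return x
        x = x + 1 if rng.random() * a0 < a1 else x - 1

# Empirical distribution from many SSA runs approximates the CME solution;
# the stationary distribution here is Poisson with mean k_prod / k_deg = 10.
samples = [ssa_birth_death(10.0, 1.0, 0, 50.0, seed=i) for i in range(2000)]
mean = sum(samples) / len(samples)
```

In a fitting loop, the likelihood of observed counts would be evaluated against such a distribution for each candidate parameter set.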
Giesen, E B W; Ding, M; Dalstra, M; van Eijden, T M G J
2003-09-01
As several morphological parameters of cancellous bone express more or less the same architectural measure, we applied principal components analysis to group these measures and correlated these to the mechanical properties. Cylindrical specimens (n = 24) were obtained in different orientations from embalmed mandibular condyles; the angle of the first principal direction and the axis of the specimen, expressing the orientation of the trabeculae, ranged from 10 degrees to 87 degrees. Morphological parameters were determined by a method based on Archimedes' principle and by micro-CT scanning, and the mechanical properties were obtained by mechanical testing. The principal components analysis was used to obtain a set of independent components to describe the morphology. This set was entered into linear regression analyses for explaining the variance in mechanical properties. The principal components analysis revealed four components: amount of bone, number of trabeculae, trabecular orientation, and miscellaneous. They accounted for about 90% of the variance in the morphological variables. The component loadings indicated that a higher amount of bone was primarily associated with more plate-like trabeculae, and not with more or thicker trabeculae. The trabecular orientation was most determinative (about 50%) in explaining stiffness, strength, and failure energy. The amount of bone was second most determinative and increased the explained variance to about 72%. These results suggest that trabecular orientation and amount of bone are important in explaining the anisotropic mechanical properties of the cancellous bone of the mandibular condyle.
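The analysis pipeline above (PCA to group correlated morphological measures into independent components, then linear regression of mechanical properties on the component scores) can be sketched as follows. The data here are synthetic stand-ins, not the condyle measurements; two of the six columns are deliberately correlated to mimic redundant morphological measures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 24 specimens x 6 morphological variables.
n = 24
bone_amount = rng.normal(0.3, 0.05, n)
morph = np.column_stack([
    bone_amount,
    bone_amount * 2 + rng.normal(0, 0.02, n),   # redundant, correlated measure
    rng.normal(1.5, 0.2, n),                    # trabecular number
    rng.normal(40.0, 8.0, n),                   # orientation angle
    rng.normal(0.1, 0.02, n),
    rng.normal(0.2, 0.03, n),
])
stiffness = 500 * bone_amount + rng.normal(0, 5, n)

# Principal components analysis on standardized variables (via SVD).
Z = (morph - morph.mean(0)) / morph.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)     # variance fraction per component
scores = Z @ Vt.T                   # independent component scores

# Linear regression of stiffness on the four leading components.
k = 4
X = np.column_stack([np.ones(n), scores[:, :k]])
beta, *_ = np.linalg.lstsq(X, stiffness, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((stiffness - pred)**2) / np.sum((stiffness - stiffness.mean())**2)
```

The explained-variance fractions play the role of the "about 90%" figure in the abstract, and r2 corresponds to the explained variance in the mechanical properties.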
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Z. W., E-mail: zhuzhiwen@tju.edu.cn; Li, X. M., E-mail: lixinmiaotju@163.com; Xu, J., E-mail: xujia-ld@163.com
A kind of magnetic shape memory alloy (MSMA) microgripper is proposed in this paper, and its nonlinear dynamic characteristics are studied when stochastic perturbation is considered. Nonlinear differential items are introduced to explain the hysteretic phenomena of MSMA, and the constitutive relationships among strain, stress, and magnetic field intensity are obtained by the partial least-squares regression method. The nonlinear dynamic model of an MSMA microgripper subjected to in-plane stochastic excitation is developed. The stationary probability density function of the system’s response is obtained, the transition sets of the system are determined, and the conditions of stochastic bifurcation are obtained. The homoclinic and heteroclinic orbits of the system are given, and the boundary of the system’s safe basin is obtained by the stochastic Melnikov integral method. The numerical and experimental results show that the system’s motion depends on its parameters, and stochastic Hopf bifurcation appears as the parameters vary; the area of the safe basin decreases with increasing stochastic excitation, and the boundary of the safe basin becomes fractal. The results of this paper are helpful for the application of MSMA microgrippers in engineering fields.
Speaker verification system using acoustic data and non-acoustic data
Gable, Todd J [Walnut Creek, CA; Ng, Lawrence C [Danville, CA; Holzrichter, John F [Berkeley, CA; Burnett, Greg C [Livermore, CA
2006-03-21
A method and system for speech characterization. One embodiment includes a method for speaker verification which includes collecting data from a speaker, wherein the data comprises acoustic data and non-acoustic data. The data is used to generate a template that includes a first set of "template" parameters. The method further includes receiving a real-time identity claim from a claimant, and using acoustic data and non-acoustic data from the identity claim to generate a second set of parameters. The method further includes comparing the first set of parameters to the second set of parameters to determine whether the claimant is the speaker. The first set of parameters and the second set of parameters include at least one purely non-acoustic parameter, including a non-acoustic glottal shape parameter derived from averaging multiple glottal cycle waveforms.
NASA Astrophysics Data System (ADS)
Li, Xia; Welch, E. Brian; Arlinghaus, Lori R.; Bapsi Chakravarthy, A.; Xu, Lei; Farley, Jaime; Loveless, Mary E.; Mayer, Ingrid A.; Kelley, Mark C.; Meszoely, Ingrid M.; Means-Powell, Julie A.; Abramson, Vandana G.; Grau, Ana M.; Gore, John C.; Yankeelov, Thomas E.
2011-09-01
Quantitative analysis of dynamic contrast enhanced magnetic resonance imaging (DCE-MRI) data requires the accurate determination of the arterial input function (AIF). A novel method for obtaining the AIF is presented here and pharmacokinetic parameters derived from individual and population-based AIFs are then compared. A Philips 3.0 T Achieva MR scanner was used to obtain 20 DCE-MRI data sets from ten breast cancer patients prior to and after one cycle of chemotherapy. Using a semi-automated method to estimate the AIF from the axillary artery, we obtain the AIF for each patient, AIFind, and compute a population-averaged AIF, AIFpop. The extended standard model is used to estimate the physiological parameters using the two types of AIFs. The mean concordance correlation coefficient (CCC) for the AIFs segmented manually and by the proposed AIF tracking approach is 0.96, indicating accurate and automatic tracking of an AIF in DCE-MRI data of the breast is possible. Regarding the kinetic parameters, the CCC values for Ktrans, vp and ve as estimated by AIFind and AIFpop are 0.65, 0.74 and 0.31, respectively, based on the region of interest analysis. The average CCC values for the voxel-by-voxel analysis are 0.76, 0.84 and 0.68 for Ktrans, vp and ve, respectively. This work indicates that Ktrans and vp show good agreement between AIFpop and AIFind while there is a weak agreement on ve.
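The agreement metric used throughout the comparison above, the concordance correlation coefficient, can be computed directly from Lin's formula; this is a generic implementation, not the study's code:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean())**2)

# Perfect agreement gives CCC = 1; a constant offset lowers it even though
# the Pearson correlation stays perfect.
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(concordance_ccc(a, a))        # 1.0
print(concordance_ccc(a, a + 1.0))  # 0.8
```

Unlike the Pearson coefficient, the CCC penalizes both location and scale shifts, which is why it is preferred for comparing AIFind against AIFpop.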
Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm
NASA Technical Reports Server (NTRS)
Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.
1991-01-01
The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Delta(omega) = pi/mT for trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, is introduced for controlling the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
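The trapezoidal-rule approximation of the Bromwich integral with step Delta(omega) = pi/(mT) can be evaluated directly as a plain sum, as sketched below; the paper's contribution is evaluating this same sum efficiently via complex FFT or real FHT computations. The parameter values here are illustrative:

```python
import cmath
import math

def invert_laplace(F, t, a=0.5, T=None, m=4, N=4000):
    """Numerical inverse Laplace transform by trapezoidal-rule
    approximation of the Bromwich integral, with integration step
    h = pi/(m*T); m controls the accuracy of the quadrature."""
    if T is None:
        T = 2 * t if t > 0 else 1.0
    h = math.pi / (m * T)
    s = 0.5 * F(complex(a, 0.0)).real
    for k in range(1, N + 1):
        s += (F(complex(a, k * h)) * cmath.exp(1j * k * h * t)).real
    return math.exp(a * t) * h * s / math.pi

# Check against a known transform pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t).
approx = invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0)
exact = math.exp(-1.0)
```

Grouping the summands by residue of k modulo N turns this sum into the multiple sets of N-point FFT (or FHT) computations described in the abstract.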
Using Optimisation Techniques to Granulise Rough Set Partitions
NASA Astrophysics Data System (ADS)
Crossingham, Bodie; Marwala, Tshilidzi
2007-11-01
This paper presents an approach to optimise rough set partition sizes using various optimisation techniques. Three optimisation techniques are implemented to perform the granularisation process, namely, genetic algorithm (GA), hill climbing (HC) and simulated annealing (SA). These optimisation methods maximise the classification accuracy of the rough sets. The proposed rough set partition method is tested on a set of demographic properties of individuals obtained from the South African antenatal survey. The three techniques are compared in terms of their computational time, accuracy and number of rules produced when applied to the Human Immunodeficiency Virus (HIV) data set. The results of the optimised methods are compared to a well-known non-optimised discretisation method, equal-width-bin partitioning (EWB). The accuracies achieved after optimising the partitions using GA, HC and SA are 66.89%, 65.84% and 65.48% respectively, compared to the EWB accuracy of 59.86%. In addition to providing the plausibilities of the estimated HIV status, the rough sets also provide the linguistic rules describing how the demographic parameters drive the risk of HIV.
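One of the three optimisers, simulated annealing over the positions of partition cut points, can be sketched as below. The one-dimensional synthetic data, the majority-class-per-bin classifier and the cooling schedule are all illustrative stand-ins, not the antenatal survey data or the rough set machinery of the paper:

```python
import math
import random

random.seed(1)

# Synthetic stand-in data: one continuous attribute with a binary outcome.
data = [(random.gauss(30 if y == 0 else 45, 6), y)
        for y in (0, 1) for _ in range(200)]

def accuracy(cuts, data):
    """Classification accuracy of a majority-class-per-bin rule."""
    cuts = sorted(cuts)
    bins = {}
    for x, y in data:
        b = sum(x > c for c in cuts)          # index of the bin containing x
        bins.setdefault(b, []).append(y)
    correct = sum(max(ys.count(0), ys.count(1)) for ys in bins.values())
    return correct / len(data)

# Simulated annealing over the positions of three cut points.
cuts = [random.uniform(20, 55) for _ in range(3)]
best, best_acc = cuts[:], accuracy(cuts, data)
temp = 5.0
for _ in range(2000):
    cand = [c + random.gauss(0, 1.0) for c in cuts]
    delta = accuracy(cand, data) - accuracy(cuts, data)
    if delta > 0 or random.random() < math.exp(delta / temp):
        cuts = cand                           # accept (possibly worse) move
    acc = accuracy(cuts, data)
    if acc > best_acc:
        best, best_acc = cuts[:], acc
    temp *= 0.995                             # geometric cooling
```

GA and hill climbing differ only in how candidate cut-point sets are generated and accepted; the objective (classification accuracy of the discretised data) is the same.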
Development of a distributed-parameter mathematical model for simulation of cryogenic wind tunnels
NASA Technical Reports Server (NTRS)
Tripp, J. S.
1983-01-01
A one-dimensional distributed-parameter dynamic model of a cryogenic wind tunnel was developed which accounts for internal and external heat transfer, viscous momentum losses, and slotted-test-section dynamics. Boundary conditions imposed by liquid-nitrogen injection, gas venting, and the tunnel fan were included. A time-dependent numerical solution to the resultant set of partial differential equations was obtained on a CDC CYBER 203 vector-processing digital computer at a usable computational rate. Preliminary computational studies were performed by using parameters of the Langley 0.3-Meter Transonic Cryogenic Tunnel. Studies were performed by using parameters from the National Transonic Facility (NTF). The NTF wind-tunnel model was used in the design of control loops for Mach number, total temperature, and total pressure and for determining interactions between the control loops. It was employed in the application of optimal linear-regulator theory and eigenvalue-placement techniques to develop Mach number control laws.
NASA Astrophysics Data System (ADS)
Shrivastava, Akash; Mohanty, A. R.
2018-03-01
This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using a Kalman filter and a recursive least squares based input force estimation technique. The Kalman filter based input force estimation technique requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
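The recursive least squares stage with a forgetting factor can be sketched for a single-plane unbalance signal. The rotor model is reduced here to a noisy sinusoid with hypothetical amplitude, phase and speed, and the Kalman state estimator of the paper is omitted:

```python
import math
import random

random.seed(0)

# Hypothetical unbalance force signal: y = a*cos(w t) + b*sin(w t) + noise,
# with amplitude = hypot(a, b) and phase = atan2(-b, a).
a_true, b_true, w = 3.0, 4.0, 2 * math.pi * 25.0
dt, lam = 1e-3, 0.99                      # sample period, forgetting factor

# Recursive least squares with forgetting factor lam:
#   K = P*phi / (lam + phi'*P*phi),  P <- (P - K*phi'*P) / lam.
theta = [0.0, 0.0]                        # parameter estimate [a, b]
P = [[1e6, 0.0], [0.0, 1e6]]              # estimate covariance
for k in range(2000):
    t = k * dt
    phi = [math.cos(w * t), math.sin(w * t)]                 # regressor
    y = a_true * phi[0] + b_true * phi[1] + random.gauss(0, 0.1)
    Pphi = [P[0][0]*phi[0] + P[0][1]*phi[1], P[1][0]*phi[0] + P[1][1]*phi[1]]
    denom = lam + phi[0]*Pphi[0] + phi[1]*Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]                   # gain
    err = y - (theta[0]*phi[0] + theta[1]*phi[1])            # innovation
    theta = [theta[0] + K[0]*err, theta[1] + K[1]*err]
    P = [[(P[0][0] - K[0]*Pphi[0]) / lam, (P[0][1] - K[0]*Pphi[1]) / lam],
         [(P[1][0] - K[1]*Pphi[0]) / lam, (P[1][1] - K[1]*Pphi[1]) / lam]]

amplitude = math.hypot(theta[0], theta[1])   # unbalance amplitude estimate
phase = math.atan2(-theta[1], theta[0])      # unbalance phase estimate
```

The forgetting factor lam plays the same tuning role described in the abstract: values below 1 discount old data, trading noise rejection for tracking speed.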
Standing shocks in magnetized dissipative accretion flow around black holes
NASA Astrophysics Data System (ADS)
Sarkar, Biplob; Das, Santabrata
2018-02-01
We explore the global structure of the accretion flow around a Schwarzschild black hole where the accretion disc is threaded by toroidal magnetic fields. The accretion flow is optically thin and advection dominated. Synchrotron radiation is considered to be the active cooling mechanism in the flow. With this, we obtain the global transonic accretion solutions and show that the centrifugal barrier in the rotating magnetized accretion flow causes a discontinuous transition of the flow variables in the form of shock waves. The shock properties and the dynamics of the post-shock corona are affected by flow parameters such as viscosity, cooling rate and strength of the magnetic fields. The shock properties are investigated against these flow parameters. We further show that for a given set of boundary parameters at the outer edge of the disc, accretion flow around a black hole admits shock solutions when the flow parameters are tuned over a considerable range.
Free Convection Nanofluid Flow in the Stagnation-Point Region of a Three-Dimensional Body
Farooq, Umer
2014-01-01
Analytical results are presented for a steady three-dimensional free convection flow in the stagnation point region over a general curved isothermal surface placed in a nanofluid. The momentum equations in x- and y-directions, energy balance equation, and nanoparticle concentration equation are reduced to a set of four fully coupled nonlinear differential equations under appropriate similarity transformations. The well known technique optimal homotopy analysis method (OHAM) is used to obtain the exact solution explicitly, whose convergence is then checked in detail. Besides, the effects of the physical parameters, such as the Lewis number, the Brownian motion parameter, the thermophoresis parameter, and the buoyancy ratio on the profiles of velocities, temperature, and concentration, are studied and discussed. Furthermore the local skin friction coefficients in x- and y-directions, the local Nusselt number, and the local Sherwood number are examined for various values of the physical parameters. PMID:25114954
Determination of service standard time for liquid waste parameter in certification institution
NASA Astrophysics Data System (ADS)
Sembiring, M. T.; Kusumawaty, D.
2018-02-01
Baristand Industry Medan is a technical implementation unit under the Industrial Research and Development Agency of the Ministry of Industry. One of the services most often used at Baristand Industry Medan is the liquid waste testing service. The company set a service standard of 9 working days for testing services. In 2015, 89.66% of liquid waste testing services did not meet the company's specified service standard. The purpose of this research is to specify the standard time of each parameter in the liquid waste testing service. The method used is the stopwatch time study. There are 45 test parameters in the liquid waste laboratory. Times were measured for 4 samples per test parameter using a stopwatch. From the measurement results, the standard minimum service time obtained for liquid waste testing is 13 working days when E. coli testing is included.
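The stopwatch time study arithmetic reduces to rating and allowance adjustments applied to the mean observed time. The numbers below are illustrative only, not the laboratory's measurements; the rating and allowance fraction are hypothetical:

```python
# Stopwatch time study:
#   normal time   = mean observed time x performance rating
#   standard time = normal time x (1 + allowance fraction)
observed = [2.1, 2.3, 2.0, 2.2]   # observed times (days) for one test parameter
rating = 1.05                     # hypothetical performance rating
allowance = 0.15                  # hypothetical allowance fraction

mean_observed = sum(observed) / len(observed)
normal_time = mean_observed * rating
standard_time = normal_time * (1 + allowance)
print(round(standard_time, 2))    # 2.6
```

Summing the standard times along the longest required test sequence is what yields a service standard such as the 13 working days reported in the study.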
NASA Astrophysics Data System (ADS)
Teixidor, D.; Ferrer, I.; Ciurana, J.
2012-04-01
This paper reports the characterization of a laser machining (milling) process to manufacture micro-channels, in order to understand the influence of process parameters on the final features. Selection of operational process parameters is highly critical for successful laser micromachining. A set of designed experiments is carried out in a pulsed Nd:YAG laser system using AISI H13 hardened tool steel as the work material. Several micro-channels have been manufactured as micro-mold cavities, varying parameters such as scanning speed (SS), pulse intensity (PI) and pulse frequency (PF). Results are obtained by evaluating the dimensions and the surface finish of the micro-channels. The dimensions and shape of the micro-channels produced with the laser micro-milling process exhibit variations. In general, the use of low scanning speeds increases the quality of the feature in terms of both surface finish and dimensional accuracy.
A comparative study of electrochemical machining process parameters by using GA and Taguchi method
NASA Astrophysics Data System (ADS)
Soni, S. K.; Thomas, B.
2017-11-01
In electrochemical machining, the quality of the machined surface strongly depends on the selection of optimal parameter settings. This work deals with the application of the Taguchi method and a genetic algorithm using MATLAB to maximize the metal removal rate and minimize the surface roughness and overcut. In this paper a comparative study is presented for drilling of LM6 Al/B4C composites by comparing the significant impact of numerous machining process parameters, such as electrolyte concentration (g/l), machining voltage (V) and frequency (Hz), on the response parameters (surface roughness, material removal rate and overcut). A Taguchi L27 orthogonal array was chosen in Minitab 17 software for the investigation of the experimental results, and multiobjective optimization by genetic algorithm is carried out using MATLAB. After obtaining optimized results from the Taguchi method and the genetic algorithm, comparative results are presented.
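The genetic algorithm side of such a comparison can be sketched as a small real-coded GA over a smooth stand-in response surface. The fitness function, parameter ranges and GA settings below are hypothetical, not the regression model fitted to the drilling experiments:

```python
import random

random.seed(2)

# Hypothetical smooth response surface standing in for the fitted model
# (voltage v, electrolyte concentration c): maximise material removal
# rate while penalising overcut.
def fitness(v, c):
    mrr = -((v - 14.0)**2) / 10.0 - ((c - 80.0)**2) / 500.0 + 5.0
    overcut = 0.01 * v + 0.002 * c
    return mrr - overcut

# Minimal real-coded GA: tournament selection, blend crossover,
# Gaussian mutation.
pop = [(random.uniform(5, 25), random.uniform(40, 120)) for _ in range(40)]
for gen in range(60):
    def pick():
        a, b = random.sample(pop, 2)           # binary tournament
        return a if fitness(*a) > fitness(*b) else b
    nxt = []
    for _ in range(len(pop)):
        (v1, c1), (v2, c2) = pick(), pick()
        w = random.random()                    # blend crossover weight
        v = w * v1 + (1 - w) * v2 + random.gauss(0, 0.3)
        c = w * c1 + (1 - w) * c2 + random.gauss(0, 2.0)
        nxt.append((v, c))
    pop = nxt
best = max(pop, key=lambda s: fitness(*s))
```

In the study the fitness would instead be the multiobjective combination of measured surface roughness, material removal rate and overcut responses.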
The Total Gaussian Class of Quasiprobabilities and its Relation to Squeezed-State Excitations
NASA Technical Reports Server (NTRS)
Wuensche, Alfred
1996-01-01
The class of quasiprobabilities obtainable from the Wigner quasiprobability by convolutions with the general class of Gaussian functions is investigated. It can be described by a three-dimensional, in general, complex vector parameter with the property of additivity when composing convolutions. The diagonal representation of this class of quasiprobabilities is connected with a generalization of the displaced Fock states in direction of squeezing. The subclass with real vector parameter is considered more in detail. It is related to the most important kinds of boson operator ordering. The properties of a specific set of discrete excitations of squeezed coherent states are given.
NASA Astrophysics Data System (ADS)
Ganesh Kumar, K.; Rizwan-ul-Haq; Rudraswamy, N. G.; Gireesha, B. J.
The present study addresses the three-dimensional flow of a Prandtl fluid over a Riga plate in the presence of chemical reaction and a convective condition. The transformed set of boundary layer equations is solved numerically by the fourth-fifth order Runge-Kutta-Fehlberg (RKF45) method. The numerical results obtained for the flow and mass transfer characteristics are discussed for various physical parameters. Additionally, the skin friction coefficient and Sherwood number are also presented. It is found that the momentum boundary layer thickness is larger for higher values of α, and the solutal boundary layer is thinner for higher Schmidt number and chemical reaction parameter.
Crustal dynamics project data analysis, 1991: VLBI geodetic results, 1979 - 1990
NASA Technical Reports Server (NTRS)
Ma, C.; Ryan, J. W.; Caprette, D. S.
1992-01-01
The Goddard VLBI group reports the results of analyzing 1412 Mark II data sets acquired from fixed and mobile observing sites through the end of 1990 and available to the Crustal Dynamics Project. Three large solutions were used to obtain Earth rotation parameters, nutation offsets, global source positions, site velocities, and baseline evolution. Site positions are tabulated on a yearly basis from 1979 through 1992. Site velocities are presented in both geocentric Cartesian coordinates and topocentric coordinates. Baseline evolution is plotted for 175 baselines. Rates are computed for earth rotation and nutation parameters. Included are 104 sources, 88 fixed stations and mobile sites, and 688 baselines.
NASA Astrophysics Data System (ADS)
Harshan, S.; Roth, M.; Velasco, E.
2014-12-01
Forecasting of urban weather and climate is of great importance as our cities become more populated, and considering the combined effects of global warming and local land use changes which make urban inhabitants more vulnerable to e.g. heat waves and flash floods. In meso/global scale models, urban parameterization schemes are used to represent the urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address the above issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol's global variance decomposition method. The analysis showed that parameters related to roads, roofs and soil moisture have a significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to the simulations using the default parameter set.
The calibrated parameters from this optimization experiment can be used for further model validation studies to identify inherent deficiencies in model physics.
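The screening step, first-order Sobol indices from variance decomposition, can be illustrated with the classic pick-freeze (Saltelli) estimator on a toy function. The three-input model below is a stand-in for TEB, whose inputs and outputs are far richer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in model: strong dependence on x0, weak on x1, none on x2.
def model(x):
    return 4.0 * x[:, 0] + 0.4 * np.sin(2 * np.pi * x[:, 1]) + 0.0 * x[:, 2]

# First-order Sobol indices via the pick-freeze estimator:
#   S_i = mean(fB * (f(AB_i) - fA)) / var(f),
# where AB_i is sample A with only column i taken from sample B.
N, k = 20000, 3
A = rng.random((N, k))          # two independent input samples
B = rng.random((N, k))
fA, fB = model(A), model(B)
var = fA.var()
S = []
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]         # replace only column i with the second sample
    S.append(float(np.mean(fB * (model(ABi) - fA)) / var))
# Analytic values for this model: S ~ [0.94, 0.06, 0.0].
```

Inputs with indices near zero (like x2 here) are the ones a screening test would exclude from the subsequent calibration.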
Pinching parameters for open (super) strings
NASA Astrophysics Data System (ADS)
Playle, Sam; Sciuto, Stefano
2018-02-01
We present an approach to the parametrization of (super) Schottky space obtained by sewing together three-punctured discs with strips. Different cubic ribbon graphs classify distinct sets of pinching parameters; we show how they are mapped onto each other. The parametrization is particularly well-suited to describing the region within (super) moduli space where open bosonic or Neveu-Schwarz string propagators become very long and thin, which dominates the IR behaviour of string theories. We show how worldsheet objects such as the Green's function converge to graph theoretic objects such as the Symanzik polynomials in the α ' → 0 limit, allowing us to see how string theory reproduces the sum over Feynman graphs. The (super) string measure takes on a simple and elegant form when expressed in terms of these parameters.
A parameter estimation algorithm for spatial sine testing - Theory and evaluation
NASA Technical Reports Server (NTRS)
Rost, R. W.; Deblauwe, F.
1992-01-01
This paper presents the theory and an evaluation of a spatial sine testing parameter estimation algorithm that directly uses the measured forced modes of vibration and the measured force vector. The parameter estimation algorithm uses an ARMA model, and a recursive QR algorithm is applied for data reduction. In this first evaluation, the algorithm has been applied to a frequency response matrix (which is a particular set of forced modes of vibration) using a sliding frequency window. The objective of the sliding frequency window is to execute the analysis simultaneously with the data acquisition. Since the pole values and the modal density are obtained from this analysis during the acquisition, the analysis information can be used to help determine the forcing vectors during the experimental data acquisition.
Performance analysis of wideband data and television channels. [space shuttle communications
NASA Technical Reports Server (NTRS)
Geist, J. M.
1975-01-01
Several aspects of space shuttle communications are discussed, including the return link (shuttle-to-ground) relayed through a satellite repeater (TDRS). The repeater exhibits nonlinear amplification and an amplitude-dependent phase shift. Models were developed for various link configurations, and computer simulation programs based on these models are described. Certain analytical results on system performance were also obtained. For the system parameters assumed, the results indicate approximately 1 dB degradation relative to a link employing a linear repeater. While this degradation is dependent upon the repeater, filter bandwidths, and modulation parameters used, the programs can accommodate changes to any of these quantities. Thus the programs can be applied to determine the performance with any given set of parameters, or used as an aid in link design.
Singlet fermionic dark matter with Veltman conditions
NASA Astrophysics Data System (ADS)
Kim, Yeong Gyun; Lee, Kang Young; Nam, Soo-hyeon
2018-07-01
We reexamine a renormalizable model of a fermionic dark matter with a gauge singlet Dirac fermion and a real singlet scalar which can ameliorate the scalar mass hierarchy problem of the Standard Model (SM). Our model setup is the minimal extension of the SM for which a realistic dark matter (DM) candidate is provided and the cancellation of one-loop quadratic divergence to the scalar masses can be achieved by the Veltman condition (VC) simultaneously. This model extension, although renormalizable, can be considered as an effective low-energy theory valid up to cut-off energies about 10 TeV. We calculate the one-loop quadratic divergence contributions of the new scalar and fermionic DM singlets, and constrain the model parameters using the VC and the perturbative unitarity conditions. Taking into account the invisible Higgs decay measurement, we show the allowed region of new physics parameters satisfying the recent measurement of relic abundance. With the obtained parameter set, we predict the elastic scattering cross section of the new singlet fermion into target nuclei for a direct detection of the dark matter. We also perform the full analysis with arbitrary set of parameters without the VC as a comparison, and discuss the implication of the constraints by the VC in detail.
Analyzing chromatographic data using multilevel modeling.
Wiczling, Paweł
2018-06-01
It is relatively easy to collect chromatographic measurements for a large number of analytes, especially with gradient chromatographic methods coupled with mass spectrometry detection. Such data often have a hierarchical or clustered structure. For example, analytes with similar hydrophobicity and dissociation constant tend to be more alike in their retention than a randomly chosen set of analytes. Multilevel models recognize the existence of such data structures by assigning a model for each parameter, with its parameters also estimated from data. In this work, a multilevel model is proposed to describe retention time data obtained from a series of wide linear organic modifier gradients of different gradient duration and different mobile phase pH for a large set of acids and bases. The multilevel model consists of (1) the same deterministic equation describing the relationship between retention time and analyte-specific and instrument-specific parameters, (2) covariance relationships relating various physicochemical properties of the analyte to chromatographically specific parameters through quantitative structure-retention relationship based equations, and (3) stochastic components of intra-analyte and interanalyte variability. The model was implemented in Stan, which provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods. Graphical abstract: Relationships between log k and MeOH content for acidic, basic, and neutral compounds with different log P (CI, credible interval; PSA, polar surface area).
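The multilevel idea, analyte-specific parameters partially pooled toward a covariate-based population prediction, can be sketched without Stan using simple empirical-Bayes shrinkage. The data, the single covariate (log P) and the variance values below are all hypothetical stand-ins for the QSRR-based covariance relationships of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-level structure: each analyte has its own retention
# parameter, drawn around a population mean predicted from log P.
n_analytes, n_rep = 30, 5
logP = rng.normal(2.0, 1.0, n_analytes)
true_param = 1.0 + 0.5 * logP + rng.normal(0, 0.2, n_analytes)
obs = true_param[:, None] + rng.normal(0, 0.4, (n_analytes, n_rep))

# Population-level prediction from the covariate (ordinary least squares).
y_bar = obs.mean(axis=1)
X = np.column_stack([np.ones(n_analytes), logP])
beta, *_ = np.linalg.lstsq(X, y_bar, rcond=None)
pop_pred = X @ beta

# Partial pooling: each analyte estimate is pulled toward the population
# prediction, weighted by between- vs within-analyte variance (assumed known).
tau2 = 0.2**2                 # between-analyte variance
sigma2 = 0.4**2 / n_rep       # variance of each analyte's mean
w = tau2 / (tau2 + sigma2)    # shrinkage weight
shrunk = w * y_bar + (1 - w) * pop_pred
```

A full Bayesian treatment, as in the paper, additionally estimates the variances and the covariate coefficients jointly with the analyte-level parameters.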
The Quantum Approximation Optimization Algorithm for MaxCut: A Fermionic View
NASA Technical Reports Server (NTRS)
Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.
2017-01-01
Farhi et al. recently proposed a class of quantum algorithms, the Quantum Approximate Optimization Algorithm (QAOA), for approximately solving combinatorial optimization problems. A level-p QAOA circuit consists of p steps in which a classical Hamiltonian, derived from the cost function, is applied followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm. As p increases, however, the parameter search space grows quickly. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here, we analytically and numerically study parameter setting for QAOA applied to MAXCUT. For level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MAXCUT, the Ring of Disagrees, or the 1D antiferromagnetic ring, we provide an analysis for arbitrarily high level. Using a Fermionic representation, the evolution of the system under QAOA translates into quantum optimal control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of QAOA for any p. It also greatly simplifies numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional sub-manifold of interest; the search effort can be accordingly reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.
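For the Ring of Disagrees, level-1 QAOA behaviour can be checked numerically by direct statevector simulation. This brute-force sketch (a 4-vertex ring with a grid search over the two level-1 parameters) is independent of the paper's analytical treatment and is only feasible for tiny instances:

```python
import numpy as np
from itertools import product

n = 4                                    # Ring of Disagrees with 4 vertices
edges = [(i, (i + 1) % n) for i in range(n)]
dim = 2 ** n

# Diagonal cost operator: C|z> = cut(z)|z> for MaxCut.
z = np.arange(dim)
bits = (z[:, None] >> np.arange(n)) & 1
cut = sum(bits[:, i] ^ bits[:, j] for i, j in edges).astype(float)

def apply_mixer(state, beta):
    """Apply exp(-i*beta*X_q) for each qubit q; X_q permutes basis states."""
    for q in range(n):
        flipped = state[z ^ (1 << q)]
        state = np.cos(beta) * state - 1j * np.sin(beta) * flipped
    return state

def qaoa1_expectation(gamma, beta):
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)   # |+>^n
    state = np.exp(-1j * gamma * cut) * state               # phase separator
    state = apply_mixer(state, beta)                        # mixing unitary
    return float(np.sum(cut * np.abs(state) ** 2))

# Grid search over the two level-1 parameters (gamma, beta).
grid = np.linspace(0, np.pi, 60)
best = max(qaoa1_expectation(g, b) for g, b in product(grid, grid))
```

The optimized expectation exceeds the uniform-superposition value of half the edges, which is the baseline that any useful parameter setting must beat.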
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. 
Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
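The solution-space/null-space split at the heart of NSMC can be sketched with a linearized toy model: the SVD of a hypothetical Jacobian partitions parameter space, and null-space perturbations leave the (linearized) model fit untouched. All dimensions and values below are illustrative, not from the Culebra model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linearized model: J maps 8 parameters to 5 observations,
# so the null space of J is 3-dimensional.
J = rng.normal(size=(5, 8))
p_cal = rng.normal(size=8)           # a single calibrated parameter set

# SVD splits parameter space into solution space (V1) and null space (V2).
U, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-10))
V1, V2 = Vt[:rank].T, Vt[rank:].T    # columns span each subspace

# NSMC: keep the solution-space projection of the calibrated parameters
# fixed, and Monte Carlo sample the null-space components.
ensemble = []
for _ in range(100):
    xi = rng.normal(size=V2.shape[1])
    p = V1 @ (V1.T @ p_cal) + V2 @ xi
    ensemble.append(p)
ensemble = np.asarray(ensemble)

# Every ensemble member reproduces the calibrated fit: J @ p is unchanged
# (exactly so for a linear model; only approximately in the nonlinear case).
print(np.allclose(J @ ensemble[0], J @ p_cal))
```

In the real nonlinear setting the null-space fields must still be re-checked (and possibly re-calibrated) against the data, which is where the computational savings over full MSP calibration come from.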
Prediction of pump cavitation performance
NASA Technical Reports Server (NTRS)
Moore, R. D.
1974-01-01
A method for predicting pump cavitation performance with various liquids, liquid temperatures, and rotative speeds is presented. Use of the method requires that two sets of test data be available for the pump of interest. Good agreement between predicted and experimental results of cavitation performance was obtained for several pumps operated in liquids which exhibit a wide range of properties. Two cavitation parameters which qualitatively evaluate pump cavitation performance are also presented.
Attitude dynamics of spin-stabilized satellites with flexible appendages
NASA Technical Reports Server (NTRS)
Renard, M. L.
1973-01-01
Equations of motion and computer programs have been developed for analyzing the motion of a spin-stabilized spacecraft having long, flexible appendages. Stability charts were derived, or can be redrawn with the desired accuracy for any particular set of design parameters. Simulation graphs of variables of interest are readily obtainable on line using program FLEXAT. Finally, applications to actual satellites, such as UK-4 and IMP-1 have been considered.
Meteor astronomy using a forward scatter set-up
NASA Astrophysics Data System (ADS)
Wislez, Jean-Marc
2006-08-01
An overview of the classical theory of the reflection of radio waves off meteor trails is given: the reflection conditions and mechanisms are discussed, and typical (t,A)-profiles of radio meteors are derived. Various configurations of the receiving station(s) are proposed. The goal is to give the radio observer more insight into the possibilities, limitations and relevant parameters of forward scattering, and into how to obtain these through observations.
Determination of Watershed Lag Equation for Philippine Hydrology
NASA Astrophysics Data System (ADS)
Cipriano, F. R.; Lagmay, A. M. F. A.; Uichanco, C.; Mendoza, J.; Sabio, G.; Punay, K. N.; Oquindo, M. R.; Horritt, M.
2014-12-01
Widespread flooding is a major problem in the Philippines. The country experiences heavy rainfall throughout the year, and several areas are prone to flood hazards because of its unique topography. Flooding causes human casualties and destroys infrastructure, and the country's government has undertaken various efforts to mitigate these hazards. One of the solutions was to create flood hazard maps of different floodplains and use them to predict the possible consequences of different rain scenarios. Producing these maps requires several types of data, part of which comes from calculating hydrological components. This paper presents how an important parameter, the time-to-peak of the watershed (Tp), was calculated. Time-to-peak is defined as the time at which the largest discharge of the watershed occurs. It is computed using a lag time equation developed specifically for the Philippine setting. The equation involves three measurable parameters, namely, watershed length (L), maximum potential retention (S), and watershed slope (Y). This approach is based on a similar method developed by CH2M Hill and Horritt for Taiwan, whose meteorological and hydrological parameters are similar to those of the Philippines. Data from fourteen water level sensors covering 67 storms from all the regions in the country were used to estimate the time-to-peak. The sensors were chosen through a screening process that considered their distance from the sea, the availability of recorded data, and the catchment size. Values of Tp from the different sensors were generated from the general lag time equation in the Natural Resource Conservation Management handbook of the US Department of Agriculture. The calculated Tp values were plotted against the values obtained from the expression L^0.8 (S+1)^0.7 / Y^0.5. Regression analysis was then used to obtain the final equation for calculating the time-to-peak specifically for rivers in the Philippine setting. The calculated values can then be used as a parameter for modeling different flood scenarios in the country.
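The lag expression referenced above follows the NRCS form. A minimal sketch, assuming US customary units and the standard 1900 coefficient from the NRCS handbook; the input values are illustrative, not from the Philippine data set:

```python
def nrcs_lag_hours(L_ft, CN, Y_pct):
    """NRCS watershed lag (hours).

    L_ft  : hydraulic length of the watershed in feet
    CN    : runoff curve number (dimensionless)
    Y_pct : average watershed slope in percent
    """
    S = 1000.0 / CN - 10.0  # maximum potential retention (inches)
    return L_ft**0.8 * (S + 1.0)**0.7 / (1900.0 * Y_pct**0.5)

# Illustrative watershed: 12,000 ft long, CN = 75, 4% slope.
print(round(nrcs_lag_hours(L_ft=12000, CN=75, Y_pct=4.0), 2))  # about 1.35 h
```

Regression on observed time-to-peak values against L^0.8 (S+1)^0.7 / Y^0.5, as described in the abstract, amounts to refitting the leading coefficient (and possibly the exponents) for local conditions.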
Spérandio, Mathieu; Pocquet, Mathieu; Guo, Lisha; Ni, Bing-Jie; Vanrolleghem, Peter A; Yuan, Zhiguo
2016-03-01
Five activated sludge models describing N2O production by ammonium-oxidising bacteria (AOB) were compared against four different long-term process data sets. Each model considers one of the two known N2O production pathways of AOB, namely the AOB denitrification pathway and the hydroxylamine oxidation pathway, with specific kinetic expressions. A satisfactory calibration could be obtained in most cases, but none of the models was able to describe all the N2O data obtained in the different systems with the same parameter set. The variability of the parameters can be attributed to undescribed local concentration heterogeneities, physiological adaptation of micro-organisms, a microbial population switch, or regulation between multiple AOB pathways. It could also stem from a dependence of the N2O production pathways on the nitrite (or free nitrous acid, FNA) concentrations and other operational conditions in the different systems. This work gives an overview of the potential and limits of single-pathway AOB models. By indicating the conditions under which each single-pathway model is likely to explain the experimental observations, it will also facilitate future work on models in which the two main N2O pathways active in AOB are represented together.
Demonstration of a vectorial optical field generator with adaptive closed-loop control.
Chen, Jian; Kong, Lingjiang; Zhan, Qiwen
2017-12-01
We experimentally demonstrate a vectorial optical field generator (VOF-Gen) with adaptive closed-loop control. The closed-loop capability is illustrated by calibrating the polarization modulation of the system. To calibrate the polarization ratio modulation, we generate a 45° linearly polarized beam and pass it through a linear analyzer whose transmission axis is orthogonal to the incident polarization. For the retardation calibration, a circularly polarized beam is employed, and a circular polarization analyzer of opposite chirality is placed in front of the CCD detector. In both cases, the closed loop automatically sweeps the corresponding calibration parameters over pre-set ranges, generates the phase patterns applied to the spatial light modulators, and records the intensity distribution of the output beam with the CCD camera. The optimized calibration parameters are those that minimize the total intensity in each case. Several typical kinds of vectorial optical beams are created with and without the obtained calibration parameters, and full Stokes parameter measurements are carried out to quantitatively analyze the polarization distribution of the generated beams. The comparisons clearly show that the obtained calibration parameters remarkably improve the accuracy of the polarization modulation of the VOF-Gen, especially for generating elliptically polarized beams with large ellipticity, indicating the significance of the presented closed loop in enhancing the performance of the VOF-Gen.
NASA Astrophysics Data System (ADS)
EL-Kalaawy, O. H.; Moawad, S. M.; Wael, Shrouk
The propagation of nonlinear waves in an unmagnetized strongly coupled dusty plasma with Boltzmann-distributed electrons, iso-nonthermally distributed ions and negatively charged dust grains is considered. The basic set of fluid equations is reduced to the Schamel Kadomtsev-Petviashvili (S-KP) equation using the reductive perturbation method. The variational principle and conservation laws of the S-KP equation are obtained. Painlevé analysis shows that the S-KP equation is non-integrable. A set of new exact solutions is obtained by auto-Bäcklund transformations. The stability analysis is discussed for the existence of dust acoustic solitary waves (DASWs), and it is found that the physical parameters have strong effects on the stability criterion. In addition, the electric field and the true Mach number of this solution are investigated. Finally, the physical meanings of the solutions are studied.
Detection of the toughest: Pedestrian injury risk as a smooth function of age.
Niebuhr, Tobias; Junge, Mirko
2017-07-04
Though it is common to refer to age-specific groups (e.g., children, adults, elderly), smooth trends conditional on age are largely ignored in the literature. The present study examines the pedestrian injury risk in full-frontal pedestrian-to-passenger-car accidents and incorporates age, in addition to collision speed and injury severity, as a plug-in parameter. Recent work introduced a model for pedestrian injury risk functions using explicit formulae with easily interpretable model parameters. This model is expanded with pedestrian age as another model parameter. Using the German In-Depth Accident Study (GIDAS) to obtain age-specific risk proportions, the model parameters are fitted to the raw data and then smoothed by broken-line regression. The approach supplies explicit probabilities for pedestrian injury risk conditional on pedestrian age, collision speed, and the injury severity under investigation. All results are consistent with each other in the sense that risks for more severe injuries are less probable than those for less severe injuries. As a side product, the approach indicates specific ages at which the risk behavior fundamentally changes. These threshold values can be interpreted as the most robust ages for pedestrians. The obtained age-wise risk functions can be aggregated and adapted to any population. The presented approach is formulated in such general terms that it can be directly used for other data sets or additional parameters, for example the pedestrian's sex. Thus far, no other study using age as a plug-in parameter can be found.
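Broken-line regression of the kind used to smooth the fitted parameters can be illustrated with a two-segment continuous least-squares fit that scans candidate breakpoints. The data below are synthetic, with a slope change planted at age 60; nothing here is taken from GIDAS.

```python
import numpy as np

def broken_line_fit(x, y, breakpoints):
    """Two-segment continuous piecewise-linear least squares.

    Scans candidate breakpoints and returns (rss, breakpoint, coefficients)
    for the one with the smallest residual sum of squares. A toy stand-in
    for the broken-line smoothing used in the study.
    """
    best = None
    for c in breakpoints:
        # Basis: intercept, x, and a hinge term that activates past c.
        X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - c)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((X @ beta - y) ** 2))
        if best is None or rss < best[0]:
            best = (rss, c, beta)
    return best

# Synthetic risk-like data whose slope changes at age 60.
rng = np.random.default_rng(1)
age = np.linspace(20, 90, 200)
risk = 0.1 + 0.002 * age + 0.015 * np.maximum(0.0, age - 60)
risk += rng.normal(scale=0.005, size=age.size)

rss, knot, beta = broken_line_fit(age, risk, np.arange(30, 81, 1.0))
print(knot)  # recovered breakpoint, close to the planted value of 60
```

The recovered breakpoint plays the role of the "threshold ages" discussed in the abstract: the age at which the fitted risk behavior changes slope.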
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tolls, J.; Sijm, D.T.H.M.
1995-10-01
A statistical analysis was done of the relationship between hydrophobicity and bioconcentration parameters (uptake and elimination rate constants and bioconcentration factor) predicted by the diffusive mass-transfer (DMT) concept of bioconcentration developed previously. The authors employed polychlorinated biphenyls and benzenes (PCB/zs) as model compounds and the octanol/water partition coefficient as the hydrophobicity parameter. They conclude that the model is consistent with the data. Subsequently, they applied the DMT concept to a set of preliminary bioconcentration data for surfactants, using the critical micelle concentration (CMC) as the hydrophobicity parameter. The obtained relationships qualitatively agree with the DMT concept, indicating that hydrophobicity has a great influence on surfactant bioconcentration. Finally, they investigated the hydrophobicity-bioconcentration relationships of surfactants and PCB/zs using aqueous solubility as a common hydrophobicity parameter and found the relationships between the bioconcentration parameters and hydrophobicity to agree with the DMT concept. These findings are based on total radiolabel data and therefore need to be confirmed using compound-specific surfactant bioconcentration data.
The Structural Parameters of the Globular Clusters in M31 with PAndAS
NASA Astrophysics Data System (ADS)
Woodley, Kristin; Pan-Andromeda Archaeological Survey (PAndAS)
2012-05-01
The Pan-Andromeda Archaeological Survey (PAndAS) has obtained images with the Canada France Hawaii Telescope using the instrument MegaCam, covering over 400 square degrees in the sky and extending beyond 150 kpc in radius from the center of M31. With this extensive data set, we have measured the structural parameters of all confirmed globular clusters in M31 as well as for a large fraction of the candidate globular clusters in the Revised Bologna Catalog V.4 (Galleti et al. 2004, A&A, 416, 917). In this paper, we present their parameters, including their core-, effective (half-light)-, and tidal radii, as well as their ellipticities measured in a homogeneous manner with ISHAPE (Larsen 1999, A&AS, 139, 393). We examine these parameters as functions of radial position, luminosity, color, metallicity, and age. We also use our measurements as an additional parameter to help constrain the candidacy of the unconfirmed globular clusters.
CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.
Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos
2013-12-31
Input features for medical image classification algorithms are extracted from raw images through a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric, and their output varies greatly with changes in parameters. Most previously reported results perform registration with a fixed parameter setting and use the output as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates into variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way than with the conventional approach.
Parameter space of experimental chaotic circuits with high-precision control parameters.
de Sousa, Francisco F G; Rubinger, Rero M; Sartorelli, José C; Albuquerque, Holokx A; Baptista, Murilo S
2016-08-01
We report high-resolution measurements that experimentally confirm the spiral cascade structure and the scaling relationship of shrimps in Chua's circuit. Circuits constructed using high-precision control components allow for a comprehensive characterization of the circuit's behavior through high-resolution parameter spaces. To illustrate the power of this technological development for the creation and study of chaotic circuits, we constructed a Chua circuit and studied its high-resolution parameter space. The reliability and stability of the designed component allowed us to acquire data over long periods of time (∼21 weeks), a data set from which an accurate estimation of Lyapunov exponents for the circuit characterization was possible. Moreover, this data, rigorously characterized by the Lyapunov exponents, allows us to confirm experimentally that shrimps, stable islands embedded in a domain of chaos in the parameter spaces, can be observed in the laboratory. Finally, we confirm that their sizes decay exponentially with the period of the attractor, a result expected from maps of the quadratic family.
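For readers who want to reproduce dynamics of this type numerically, a minimal integration of Chua's circuit equations can be sketched as below. The parameter values are the classic dimensionless double-scroll set from the literature, not the values of the experimental circuit described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classic double-scroll parameter set (illustrative, not the paper's circuit).
alpha, beta_, m0, m1 = 15.6, 28.0, -8.0 / 7.0, -5.0 / 7.0

def chua(t, u):
    x, y, z = u
    # Piecewise-linear Chua diode characteristic.
    h = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1.0) - abs(x - 1.0))
    return [alpha * (y - x - h), x - y + z, -beta_ * y]

sol = solve_ivp(chua, (0.0, 200.0), [0.1, 0.0, 0.0], max_step=0.01)

# The chaotic trajectory is bounded: it wanders on the double-scroll
# attractor instead of diverging.
print(bool(np.all(np.abs(sol.y[0]) < 10.0)))
```

Estimating Lyapunov exponents, as done in the paper, additionally requires tracking the growth rate of small perturbations along such trajectories (e.g., with the Benettin algorithm).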
Wahman, David G.; Wulfeck-Kleier, Karen A.; Pressman, Jonathan G.
2009-01-01
Monochloramine disinfection kinetics were determined for the pure-culture ammonia-oxidizing bacterium Nitrosomonas europaea (ATCC 19718) by two culture-independent methods, namely, Live/Dead BacLight (LD) and propidium monoazide quantitative PCR (PMA-qPCR). Both methods were first verified with mixtures of heat-killed (nonviable) and non-heat-killed (viable) cells before a series of batch disinfection experiments with stationary-phase cultures (batch grown for 7 days) at pH 8.0, 25°C, and 5, 10, and 20 mg Cl2/liter monochloramine. Two data sets were generated based on the viability method used, either (i) LD or (ii) PMA-qPCR. These two data sets were used to estimate kinetic parameters for the delayed Chick-Watson disinfection model through a Bayesian analysis implemented in WinBUGS. This analysis provided parameter estimates of 490 mg Cl2-min/liter for the lag coefficient (b) and 1.6 × 10−3 to 4.0 × 10−3 liter/mg Cl2-min for the Chick-Watson disinfection rate constant (k). While estimates of b were similar for both data sets, the LD data set resulted in a greater k estimate than that obtained with the PMA-qPCR data set, implying that the PMA-qPCR viability measure was more conservative than LD. For N. europaea, the lag phase was not previously reported for culture-independent methods and may have implications for nitrification in drinking water distribution systems. This is the first published application of a PMA-qPCR method for disinfection kinetic model parameter estimation as well as its application to N. europaea or monochloramine. Ultimately, this PMA-qPCR method will allow evaluation of monochloramine disinfection kinetics for mixed-culture bacteria in drinking water distribution systems. PMID:19561179
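The delayed Chick-Watson model is compact enough to state directly: no inactivation until the accumulated Ct exposure exceeds the lag coefficient b, then first-order decay with rate constant k. A minimal sketch using the parameter estimates quoted in the abstract:

```python
import numpy as np

def delayed_chick_watson(Ct, b, k):
    """Log survival ln(N/N0) under the delayed Chick-Watson model.

    No inactivation until the Ct exposure (mg Cl2-min/liter) exceeds the
    lag coefficient b; first-order decay with rate constant k afterwards.
    """
    Ct = np.asarray(Ct, dtype=float)
    return np.where(Ct < b, 0.0, -k * (Ct - b))

# Estimates from the abstract: b = 490 mg Cl2-min/liter and k at the upper
# end of the reported range, 4.0e-3 liter/mg Cl2-min.
b, k = 490.0, 4.0e-3
# ln(N/N0) is 0 below the lag, and -k*(Ct - b) = -4.0 at Ct = 1490.
print(delayed_chick_watson([100.0, 490.0, 1490.0], b, k))
```

The lag coefficient b is what distinguishes this from the plain Chick-Watson law, and is the quantity the abstract flags as newly reported for culture-independent methods.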
Constraining Galactic cosmic-ray parameters with Z ≤ 2 nuclei
NASA Astrophysics Data System (ADS)
Coste, B.; Derome, L.; Maurin, D.; Putze, A.
2012-03-01
Context. The secondary-to-primary B/C ratio is widely used for studying Galactic cosmic-ray propagation processes. The 2H/4He and 3He/4He ratios probe a different Z/A regime, which provides a test for the "universality" of propagation. Aims: We revisit the constraints on diffusion-model parameters set by the quartet (1H, 2H, 3He, 4He), using the most recent data as well as updated formulae for the inelastic and production cross-sections. Methods: Our analysis relies on the USINE propagation package and a Markov Chain Monte Carlo technique to estimate the probability density functions of the parameters. Simulated data were also used to validate analysis strategies. Results: The fragmentation of CNO cosmic rays (resp. NeMgSiFe) on the interstellar medium during their propagation contributes to 20% (resp. 20%) of the 2H and 15% (resp. 10%) of the 3He flux at high energy. The C to Fe elements are also responsible for up to 10% of the 4He flux measured at 1 GeV/n. The analysis of 3He/4He (and to a lesser extent 2H/4He) data shows that the transport parameters are consistent with those from the B/C analysis: the diffusion model with δ ~ 0.7 (diffusion slope), Vc ~ 20 km s-1 (galactic wind), Va ~ 40 km s-1 (reacceleration) is favoured, but the combination δ ~ 0.2, Vc ~ 0, and Va ~ 80 km s-1 is a close second. The confidence intervals on the parameters show that the constraints set by the quartet data can compete with those derived from the B/C data. These constraints are tighter when adding the 3He (or 2H) flux measurements, and the tightest when the He flux is added as well. For the latter, the analysis of simulated and real data shows an increased sensitivity to biases. Using the secondary-to-primary ratio along with a loose prior on the source parameters is recommended to obtain the most robust constraints on the transport parameters. Conclusions: Light nuclei should be systematically considered in the analysis of transport parameters. 
They provide independent constraints that can compete with those obtained from the B/C analysis.
An unbiased Hessian representation for Monte Carlo PDFs.
Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Latorre, José Ignacio; Rojo, Juan
We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then that, when applied to a Hessian PDF set (MMHT14) which was transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as a combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather smaller set of parameters (MC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available, together with (through LHAPDF6) a Hessian representation of the NNPDF3.0 set and the MC-H PDF set.
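The idea of representing an ensemble of replicas by a much smaller linear basis can be illustrated with plain PCA on synthetic replicas. Note the hedge: the actual mc2hessian method selects actual replicas as the basis via a genetic algorithm; this toy uses principal directions for brevity, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "Monte Carlo PDF set": 100 replicas of a 50-component vector
# (a stand-in for PDF values on an (x, Q) grid); the true variability
# is concentrated in only 5 directions by construction.
basis = rng.normal(size=(5, 50))
replicas = rng.normal(size=(100, 5)) @ basis

mean = replicas.mean(axis=0)
dev = replicas - mean

# Principal directions of the replica fluctuations: a small number of
# Hessian-style eigenvector members captures almost all the variance.
U, s, Vt = np.linalg.svd(dev, full_matrices=False)
var_captured = np.sum(s[:5] ** 2) / np.sum(s ** 2)
members = mean + s[:5, None] * Vt[:5] / np.sqrt(len(replicas) - 1)

print(bool(var_captured > 0.999))
```

In the realistic case the replica fluctuations are not exactly low-rank, so the size of the retained basis trades accuracy against compactness, which is the trade-off the MC-H construction optimizes.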
Simulation of medical Q-switch flash-pumped Er:YAG laser
NASA Astrophysics Data System (ADS)
Wang, Yan-lin; Huang, Chuyun; Yao, Yucheng; Zou, Xiaolin
2011-01-01
The Er:YAG laser emits at 2940 nm, a wavelength strongly absorbed by water; the absorption coefficient is as high as 13,000 cm-1. Because of this strong water absorption, the erbium laser achieves a shallow penetration depth and causes little injury to surrounding tissue in most soft and hard tissues. At the same time, the interaction between 2940 nm radiation and water-saturated biological tissue is equivalent to instantaneous heating within a limited volume, resulting in micro-explosions that remove tissue. Different parameter settings can be used to cut enamel, dentin, caries, and soft tissue. For the development and optimization of a laser system, modeling is a practical way to predict how various parameters influence laser performance. To address the low output power of existing erbium lasers, the performance of a flash-pumped Er:YAG laser was simulated to predict the optical output theoretically. A rate equation model was developed to predict the evolution of the population densities of the various manifolds, and Q-switched laser output was simulated for different design parameters. The results show that the Er:YAG laser can achieve a maximum average output power of 9.8 W under the given parameters. The model can be used to identify laser systems that meet application requirements.
NASA Astrophysics Data System (ADS)
Zhao, Yun-wei; Zhu, Zi-qiang; Lu, Guang-yin; Han, Bo
2018-03-01
The sine and cosine transforms implemented with digital filters have been used in transient electromagnetic methods for a few decades. Kong (2007) proposed a method of obtaining filter coefficients, which are computed in the sample domain via a Hankel transform pair. However, the curve shape of the Hankel transform pair changes with a parameter, which is usually set to 1 or 3 when obtaining the digital filter coefficients of the sine and cosine transforms. First, this study investigates the influence of this parameter on the digital filter algorithm for sine and cosine transforms, based on the digital filter algorithm for the Hankel transform and the relationship between the sine and cosine functions and the ±1/2-order Bessel functions of the first kind. The results show that the choice of this parameter strongly influences the precision of the digital filter algorithm. Second, given the optimal choice of the parameter, an optimal sampling interval s is also found to exist that achieves the best precision. Finally, this study proposes four groups of sine and cosine transform digital filter coefficients of different lengths, which may help to develop the digital filter algorithm for sine and cosine transforms and promote its application.
NASA Astrophysics Data System (ADS)
Vance, Fredrick W.; Slone, Robert V.; Stern, Charlotte L.; Hupp, Joseph T.
2000-03-01
Electroabsorption or Stark spectroscopy has been used to evaluate the systems (NC)5M(II)-CN-Ru(III)(NH3)5(1-) and (NC)5M(II)-CN-Ru(III)(NH3)4py(1-), where M(II) = Fe(II) or Ru(II). When a pyridine ligand is present in the axial position on the Ru(III) acceptor, the effective optical electron transfer distance, as measured by the change in dipole moment |Δμ|, is increased by more than 35% relative to the ammine-substituted counterpart. Comparison of the charge transfer distances to the crystal structure of Na[(CN)5Fe-CN-Ru(NH3)4py]·6H2O reveals that the Stark-derived distances are ~50% to ~90% of the geometric separation of the metal centers. The differences result in an upward revision of the Hush delocalization parameter, c_b^2, and of the electronic coupling matrix element, H_ab, relative to the parameters obtained exclusively from electronic absorption measurements. The revised parameters are compared to those obtained via electrochemical techniques and found to be in only fair agreement. We conclude that the absorption/electroabsorption analysis likely yields a more reliable set of mixing and coupling parameters.
NASA Astrophysics Data System (ADS)
Luo, Ning; Illman, Walter A.
2016-09-01
Analyses are presented of long-term hydrographs perturbed by variable pumping/injection events in a confined aquifer at a municipal water-supply well field in the Region of Waterloo, Ontario (Canada). Such records are typically not considered for aquifer test analysis. Here, the water-level variations are fingerprinted to pumping/injection rate changes using the Theis model implemented in the WELLS code coupled with PEST. Analyses of these records yield a set of transmissivity (T) and storativity (S) estimates between each monitoring and production borehole. These individual estimates are found to poorly predict water-level variations at nearby monitoring boreholes not used in the calibration effort. On the other hand, the geometric means of the individual T and S estimates are similar to those obtained from previous pumping tests conducted at the same site and adequately predict water-level variations in other boreholes. The analyses reveal that long-term municipal water-level records are amenable to analyses using a simple analytical solution to estimate aquifer parameters. However, uniform parameters estimated with analytical solutions should be considered as first rough estimates. More accurate hydraulic parameters should be obtained by calibrating a three-dimensional numerical model that rigorously captures the complexities of the site with these data.
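The Theis model used to fingerprint the water-level records has a closed form, s = Q/(4πT) W(u) with u = r²S/(4Tt), where W(u) is the well function (exponential integral). A minimal implementation with illustrative inputs, not the Waterloo estimates:

```python
import numpy as np
from scipy.special import exp1

def theis_drawdown(r, t, Q, T, S):
    """Theis (1935) drawdown s = Q/(4 pi T) * W(u), u = r^2 S / (4 T t).

    r : radial distance from the pumping well (m)
    t : time since pumping began (s)
    Q : pumping rate (m^3/s)
    T : transmissivity (m^2/s)
    S : storativity (-)
    """
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)  # W(u) = exponential integral E1

# Illustrative values: 50 m from the well, after one day of pumping.
s = theis_drawdown(r=50.0, t=86400.0, Q=0.02, T=5e-3, S=2e-4)
print(round(s, 2))
```

Calibration as described in the abstract then amounts to adjusting T and S so that superposed Theis responses to the recorded rate changes reproduce the observed hydrograph.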
Optimization of processing parameters of amaranth grits before grinding into flour
NASA Astrophysics Data System (ADS)
Zharkova, I. M.; Safonova, Yu A.; Slepokurova, Yu I.
2018-05-01
Results are presented from experimental studies of how the infrared treatment (IR processing) parameters applied to amaranth grits before grinding into flour influence the composition and properties of the resulting product. Using regression factor analysis, the optimal thermal processing conditions for the amaranth grits were obtained: conveyor belt speed of 0.049 m/s, grit temperature in the tempering silo of 65.4 °C, grit layer thickness on the belt of 3-5 mm, and lamp power of 69.2 kW/m2. The research confirmed that thermal treatment of the amaranth grains in the IR unit yields flour with smaller starch grains, increased water-holding ability, and a changed glycemic index value. Mathematical processing of the experimental data established the dependence of the structural and technological characteristics of the amaranth flour on the IR processing parameters of the grits. The model results agree well with the experimental ones, demonstrating the effectiveness of optimization based on mathematical planning of the experiment for determining the influence of the optimal heat treatment parameters on the functional and technological properties of the resulting flour.
Solar system expansion and strong equivalence principle as seen by the NASA MESSENGER mission.
Genova, Antonio; Mazarico, Erwan; Goossens, Sander; Lemoine, Frank G; Neumann, Gregory A; Smith, David E; Zuber, Maria T
2018-01-18
The NASA MESSENGER mission explored the innermost planet of the solar system and obtained a rich data set of range measurements for the determination of Mercury's ephemeris. Here we use these precise data collected over 7 years to estimate parameters related to general relativity and the evolution of the Sun. These results confirm the validity of the strong equivalence principle with a significantly refined uncertainty of the Nordtvedt parameter η = (-6.6 ± 7.2) × 10−5. By assuming a metric theory of gravitation, we retrieved the post-Newtonian parameter β = 1 + (-1.6 ± 1.8) × 10−5 and the Sun's gravitational oblateness, J2 = (2.246 ± 0.022) × 10−7. Finally, we obtain an estimate of the time variation of the Sun's gravitational parameter, (dGM/dt)/GM = (-6.13 ± 1.47) × 10−14 yr−1, which is consistent with the expected solar mass loss due to the solar wind and interior processes. This measurement allows us to constrain |dG/dt|/G to be <4 × 10−14 per year.
NASA Astrophysics Data System (ADS)
Tubino, Federica
2018-03-01
The effect of human-structure interaction in the vertical direction for footbridges is studied based on a probabilistic approach. The bridge is modeled as a continuous dynamic system, while pedestrians are schematized as moving single-degree-of-freedom systems with random dynamic properties. The non-dimensional form of the equations of motion allows us to obtain results that can be applied in a very wide set of cases. An extensive Monte Carlo simulation campaign is performed, varying the main non-dimensional parameters identified, and the mean values and coefficients of variation of the damping ratio and of the non-dimensional natural frequency of the coupled system are reported. The results obtained can be interpreted from two different points of view. If the characterization of pedestrians' equivalent dynamic parameters is assumed as uncertain, as revealed from a current literature review, then the paper provides a range of possible variations of the coupled system damping ratio and natural frequency as a function of pedestrians' parameters. Assuming that a reliable characterization of pedestrians' dynamic parameters is available (which is not the case at present, but could be in the future), the results presented can be adopted to estimate the damping ratio and natural frequency of the coupled footbridge-pedestrian system for a very wide range of real structures.
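The coupled damping ratio and natural frequency discussed above can be extracted from a toy two-DOF model (one bridge mode plus one equivalent pedestrian SDOF system) via the eigenvalues of the state matrix. All masses, frequencies and damping ratios below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy two-DOF model: one bridge mode coupled to a single equivalent
# pedestrian SDOF system (illustrative values only).
M_b, f_b, z_b = 50e3, 2.0, 0.005    # bridge modal mass (kg), freq (Hz), damping
m_p, f_p, z_p = 5e3, 2.0, 0.3       # crowd mass (kg), freq (Hz), damping

w_b, w_p = 2 * np.pi * f_b, 2 * np.pi * f_p
k_b, c_b = M_b * w_b**2, 2 * z_b * M_b * w_b
k_p, c_p = m_p * w_p**2, 2 * z_p * m_p * w_p

M = np.diag([M_b, m_p])
C = np.array([[c_b + c_p, -c_p], [-c_p, c_p]])
K = np.array([[k_b + k_p, -k_p], [-k_p, k_p]])

# First-order form; eigenvalues lam = -zeta*w +/- i*w*sqrt(1 - zeta^2).
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
lam = np.linalg.eigvals(A)
lam = lam[np.imag(lam) > 0]          # one eigenvalue per underdamped mode
zeta = -np.real(lam) / np.abs(lam)

# The pedestrians act like damped absorbers: the least-damped coupled
# mode still has a damping ratio well above the bare structural 0.5%.
print(bool(zeta.min() > z_b))
```

This deterministic calculation corresponds to a single draw of the pedestrian parameters; the paper's Monte Carlo campaign repeats it over random pedestrian properties to obtain means and coefficients of variation.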
Dynamical structure of magnetized dissipative accretion flow around black holes
NASA Astrophysics Data System (ADS)
Sarkar, Biplob; Das, Santabrata
2016-09-01
We study the global structure of optically thin, advection-dominated, magnetized accretion flow around black holes. We consider the magnetic field to be turbulent in nature and dominated by its toroidal component. With this, we obtain the complete set of accretion solutions for dissipative flows in which bremsstrahlung is the dominant cooling mechanism. We show that a rotating magnetized accretion flow experiences a virtual barrier around the black hole due to centrifugal repulsion, which can trigger a discontinuous transition of the flow variables in the form of shock waves. We examine the properties of the shock waves and find that the dynamics of the post-shock corona (PSC) is controlled by the flow parameters, namely the viscosity, the cooling rate, and the strength of the magnetic field. We delineate the region of parameter space that admits standing shocks and observe that shocks can form for a wide range of flow parameters. We obtain the critical viscosity parameter that allows global accretion solutions including shocks. We estimate the energy dissipation at the PSC, from which a part of the accreting matter can be deflected as outflows and jets. We compare the maximum energy that could be extracted from the PSC with the observed radio luminosities of several supermassive black hole sources and discuss the observational implications of our analysis.
Dausman, Alyssa M.; Doherty, John; Langevin, Christian D.
2010-01-01
Pilot points for parameter estimation were creatively used to address heterogeneity at both the well field and regional scales in a variable-density groundwater flow and solute transport model designed to test multiple hypotheses for upward migration of fresh effluent injected into a highly transmissive saline carbonate aquifer. Two sets of pilot points were used within multiple model layers: one set of inner pilot points (totaling 158) with high spatial density to represent hydraulic conductivity at the site, and a second set of outer points (totaling 36) with lower spatial density to represent hydraulic conductivity farther from the site. Use of a lower spatial density outside the site allowed (1) the total number of pilot points to be reduced while maintaining the flexibility to accommodate heterogeneity at different scales, and (2) development of a model with greater areal extent, so that boundary conditions could be placed where they have a limited effect on the area of interest. The parameters associated with the inner pilot points were log-transformed hydraulic conductivity multipliers applied to the conductivity field obtained by interpolation from the outer pilot points. This dual inner-outer parameterization (with inner parameters acting as multipliers on outer parameters) allowed a smooth transition of hydraulic conductivity from the site scale, where greater spatial variability of hydraulic properties exists, to the regional scale, where less spatial variability was necessary for model calibration. While the model is highly parameterized to accommodate potential aquifer heterogeneity, the total number of pilot points is kept to a minimum to enable reasonable calibration run times.
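The dual inner-outer multiplier parameterization can be illustrated with a minimal sketch. The inverse-distance interpolator, grid geometry, and point statistics below are stand-in assumptions (an actual study of this kind would typically use a geostatistical interpolator such as kriging, driven by a calibration tool); only the structure — an outer background log-conductivity field times inner log-multipliers — follows the description above:

```python
import numpy as np

rng = np.random.default_rng(1)

def idw(xy_pts, vals, xy_grid, power=2.0):
    """Inverse-distance-weighted interpolation (stand-in for kriging)."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_pts[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9)**power
    return (w * vals).sum(axis=1) / w.sum(axis=1)

# regional model grid (20 x 20 cells) and sparse outer pilot points that
# carry log10(K) directly
grid = np.array([(x, y) for x in range(20) for y in range(20)], float)
outer_xy   = rng.uniform(0, 20, size=(36, 2))
outer_logK = rng.normal(1.0, 0.5, size=36)       # log10 hydraulic conductivity

# dense inner pilot points near the site carry log10 *multipliers*
inner_xy   = rng.uniform(8, 12, size=(158, 2))
inner_mult = rng.normal(0.0, 0.3, size=158)      # 0.0 => multiplier of 1

logK_background = idw(outer_xy, outer_logK, grid)
log_mult = idw(inner_xy, inner_mult, grid)

# final field: outer background scaled by the inner multiplier field
K = 10.0**(logK_background + log_mult)
```

Working in log space keeps the field strictly positive and makes a multiplier of 1 correspond to an inner parameter of 0, so the inner points smoothly vanish into the regional field far from the site.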
NASA Astrophysics Data System (ADS)
Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.
2017-05-01
Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution cannot easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and a PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation, for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime-algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators, and with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA- and PSO-optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and the additions required to generate successful problem-specific parameter sets.
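Two of the PSO modifications named above — the exponential velocity (inertia) decay and the "execution best" attractor added to the velocity update — can be sketched as follows. The sphere function stands in for the MSER detection fitness, and all coefficients, bounds, and names (`c3`, `exec_best`, `decay`) are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(x):
    """Toy stand-in for the MSER stop-sign detection fitness (sphere)."""
    return np.sum(x**2, axis=1)

def pso(n_particles=30, dims=4, iters=100,
        c1=1.5, c2=1.5, c3=0.5, w0=0.9, decay=0.02, exec_best=None):
    lo, hi = -5.0, 5.0
    x = rng.uniform(lo, hi, (n_particles, dims))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), fitness(x)
    g = pbest[np.argmin(pbest_f)].copy()
    for t in range(iters):
        w = w0 * np.exp(-decay * t)          # exponential inertia decay
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
        if exec_best is not None:            # omnipresent "execution best"
            v += c3 * rng.random(x.shape) * (exec_best - x)
        x = np.clip(x + v, lo, hi)
        f = fitness(x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

best, f1 = pso()
best2, f2 = pso(exec_best=best)   # second run attracted toward the prior best
print(f1, f2)
```

The decaying inertia shifts the swarm from exploration to exploitation over time, while the extra attractive term pulls every particle toward the best solution seen across executions.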
Parameter extraction with neural networks
NASA Astrophysics Data System (ADS)
Cazzanti, Luca; Khan, Mumit; Cerrina, Franco
1998-06-01
In semiconductor processing, the modeling of the process is becoming more and more important. While the ultimate goal is that of developing a set of tools for designing a complete process (Technology CAD), it is also necessary to have modules to simulate the various technologies and, in particular, to optimize specific steps. This need is particularly acute in lithography, where the continuous decrease in CD forces the technologies to operate near their limits. In the development of a 'model' for a physical process, we face several levels of challenges. First, it is necessary to develop a 'physical model,' i.e. a rational description of the process itself on the basis of known physical laws. Second, we need an 'algorithmic model' to represent in a virtual environment the behavior of the 'physical model.' After a 'complete' model has been developed and verified, it becomes possible to do performance analysis. In many cases the input parameters are poorly known or not accessible directly to experiment. It would be extremely useful to obtain the values of these 'hidden' parameters from experimental results by comparing model to data. This problem is particularly severe because the complexity and costs associated with semiconductor processing make a simple trial-and-error approach infeasible and cost-inefficient. Even when computer models of the process already exist, obtaining data through simulations may be time consuming. Neural networks (NN) are powerful computational tools to predict the behavior of a system from an existing data set. They are able to adaptively 'learn' input/output mappings and to act as universal function approximators. In this paper we use artificial neural networks to build a mapping from the input parameters of the process to output parameters which are indicative of the performance of the process.
Once the NN has been 'trained,' it is also possible to run the process 'in reverse' and extract the values of the inputs which yield outputs with the desired characteristics. Using this method, we can extract optimum values for the parameters and determine the process latitude very quickly.
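The forward-then-reverse use of the network can be sketched in a minimal form. The 'process' below is a toy analytic function standing in for a lithography simulator, and the tiny one-hidden-layer network and candidate-search inversion are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(3)

# toy "process": one output (e.g. a CD metric) from two input parameters
def process(p):
    return np.sin(p[:, 0]) + 0.5 * p[:, 1]**2

# training data from simulated experiments
P = rng.uniform(-2, 2, (500, 2))
y = process(P)

# one-hidden-layer network trained by plain full-batch gradient descent
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, (h @ W2 + b2).ravel()

lr = 0.05
for _ in range(5000):
    h, pred = forward(P)
    err = pred - y                                 # gradient of 0.5*MSE
    gW2 = h.T @ err[:, None] / len(P); gb2 = err.mean()
    dh = (err[:, None] @ W2.T) * (1 - h**2)        # backprop through tanh
    gW1 = P.T @ dh / len(P); gb1 = dh.mean(axis=0)
    W2 -= lr*gW2; b2 -= lr*gb2; W1 -= lr*gW1; b1 -= lr*gb1

# "in reverse": search the input space for parameters whose predicted
# output matches a desired target value
target = 1.0
cand = rng.uniform(-2, 2, (20000, 2))
_, out = forward(cand)
p_star = cand[np.argmin(np.abs(out - target))]
print(p_star, process(p_star[None])[0])
```

Because the trained network evaluates in microseconds, scanning many candidate inputs this way is far cheaper than rerunning the underlying simulator, which is the speed advantage the authors exploit for parameter extraction and process-latitude estimation.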