Development and system identification of a light unmanned aircraft for flying qualities research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, M.E.; Andrisani, D. II
This paper describes the design, construction, flight testing and system identification of a light weight remotely piloted aircraft and its use in studying flying qualities in the longitudinal axis. The short period approximation to the longitudinal dynamics of the aircraft was used. Parameters in this model were determined a priori using various empirical estimators. These parameters were then estimated from flight data using a maximum likelihood parameter identification method. A comparison of the parameter values revealed that the stability derivatives obtained from the empirical estimators were reasonably close to the flight test results. However, the control derivatives determined by themore » empirical estimators were too large by a factor of two. The aircraft was also flown to determine how the longitudinal flying qualities of light weight remotely piloted aircraft compared to full size manned aircraft. It was shown that light weight remotely piloted aircraft require much faster short period dynamics to achieve level I flying qualities in an up-and-away flight task.« less
Method and system for determining formation porosity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pittman, R.W.; Hermes, C.E.
1977-12-27
The invention discloses a method and/or system for measuring formation porosity from drilling response. It involves measuring a number of drilling parameters and includes determination of tooth dullness as well as determining a reference torque empirically. One of the drilling parameters is the torque applied to the drill string.
Prediction of the Dynamic Yield Strength of Metals Using Two Structural-Temporal Parameters
NASA Astrophysics Data System (ADS)
Selyutina, N. S.; Petrov, Yu. V.
2018-02-01
The behavior of the yield strength of steel and a number of aluminum alloys is investigated in a wide range of strain rates, based on the incubation time criterion of yield and the empirical models of Johnson-Cook and Cowper-Symonds. In this paper, expressions for the parameters of the empirical models are derived through the characteristics of the incubation time criterion; a satisfactory agreement of these data and experimental results is obtained. The parameters of the empirical models can depend on some strain rate. The independence of the characteristics of the incubation time criterion of yield from the loading history and their connection with the structural and temporal features of the plastic deformation process give advantage of the approach based on the concept of incubation time with respect to empirical models and an effective and convenient equation for determining the yield strength in a wider range of strain rates.
Ponec, R; Amat, L; Carbó-Dorca, R
1999-05-01
Since the dawn of quantitative structure-properties relationships (QSPR), empirical parameters related to structural, electronic and hydrophobic molecular properties have been used as molecular descriptors to determine such relationships. Among all these parameters, Hammett sigma constants and the logarithm of the octanol-water partition coefficient, log P, have been massively employed in QSPR studies. In the present paper, a new molecular descriptor, based on quantum similarity measures (QSM), is proposed as a general substitute of these empirical parameters. This work continues previous analyses related to the use of QSM to QSPR, introducing molecular quantum self-similarity measures (MQS-SM) as a single working parameter in some cases. The use of MQS-SM as a molecular descriptor is first confirmed from the correlation with the aforementioned empirical parameters. The Hammett equation has been examined using MQS-SM for a series of substituted carboxylic acids. Then, for a series of aliphatic alcohols and acetic acid esters, log P values have been correlated with the self-similarity measure between density functions in water and octanol of a given molecule. And finally, some examples and applications of MQS-SM to determine QSAR are presented. In all studied cases MQS-SM appeared to be excellent molecular descriptors usable in general QSPR applications of chemical interest.
NASA Astrophysics Data System (ADS)
Ponec, Robert; Amat, Lluís; Carbó-dorca, Ramon
1999-05-01
Since the dawn of quantitative structure-properties relationships (QSPR), empirical parameters related to structural, electronic and hydrophobic molecular properties have been used as molecular descriptors to determine such relationships. Among all these parameters, Hammett σ constants and the logarithm of the octanol- water partition coefficient, log P, have been massively employed in QSPR studies. In the present paper, a new molecular descriptor, based on quantum similarity measures (QSM), is proposed as a general substitute of these empirical parameters. This work continues previous analyses related to the use of QSM to QSPR, introducing molecular quantum self-similarity measures (MQS-SM) as a single working parameter in some cases. The use of MQS-SM as a molecular descriptor is first confirmed from the correlation with the aforementioned empirical parameters. The Hammett equation has been examined using MQS-SM for a series of substituted carboxylic acids. Then, for a series of aliphatic alcohols and acetic acid esters, log P values have been correlated with the self-similarity measure between density functions in water and octanol of a given molecule. And finally, some examples and applications of MQS-SM to determine QSAR are presented. In all studied cases MQS-SM appeared to be excellent molecular descriptors usable in general QSPR applications of chemical interest.
Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model
Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.
2013-01-01
One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. PMID:22777874
NASA Technical Reports Server (NTRS)
English, Robert E; Cavicchi, Richard H
1951-01-01
Empirical methods of Ainley and Kochendorfer and Nettles were used to predict performances of nine turbine designs. Measured and predicted performances were compared. Appropriate values of blade-loss parameter were determined for the method of Kochendorfer and Nettles. The measured design-point efficiencies were lower than predicted by as much as 0.09 (Ainley and 0.07 (Kochendorfer and Nettles). For the method of Kochendorfer and Nettles, appropriate values of blade-loss parameter ranged from 0.63 to 0.87 and the off-design performance was accurately predicted.
Empirical flow parameters : a tool for hydraulic model validity
Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.
2013-01-01
The objectives of this project were (1) To determine and present from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1 and to produce empirical distributions of the various flow parameters to provide a methodology to "check if model results are way off!"; (2) To produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas to provide a secondary way to compare such values to a conventional hydraulic modeling approach. (3.) To present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.
NASA Astrophysics Data System (ADS)
Li, Y. K.; Chen, Y. W.; Cheng, X. W.; Wu, C.; Cheng, B.
2018-05-01
In this paper, the valence electron structure parameters of Zr(x)Ti(x)Hf(x)Nb(x)Mo(x) alloys were calculated based on the empirical electron theory of solids and molecules (EET), and their performance through these parameters were predicted. Subsequently, the alloys with special valence electron structure parameters were prepared byarc melting. The hardness and high-temperature mechanical properties were analyzed to verify the prediction. Research shows that the influence of shared electron number nA on the strongest bond determines the strength of these alloys and the experiments are consistent with the theoretical prediction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Scott; Haslauer, Claus P.; Cirpka, Olaf A.
2017-01-05
The key points of this presentation were to approach the problem of linking breakthrough curve shape (RP-CTRW transition distribution) to structural parameters from a Monte Carlo approach and to use the Monte Carlo analysis to determine any empirical error
Benchmarking test of empirical root water uptake models
NASA Astrophysics Data System (ADS)
dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman
2017-01-01
Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation
. For medium and high R, the Feddes RWU model cannot mimic properly the root uptake dynamics as predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios. For high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model
. The proposed models are better in predicting RWU patterns similar to the physical model. The statistical indices point to them as the best alternatives for mimicking RWU predictions of the physical model.
Electrostatics of cysteine residues in proteins: parameterization and validation of a simple model.
Salsbury, Freddie R; Poole, Leslie B; Fetrow, Jacquelyn S
2012-11-01
One of the most popular and simple models for the calculation of pK(a) s from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pK(a) s. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pK(a) s; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pK(a) s. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pK(a) values (where the calculation should reproduce the pK(a) within experimental error). Both the general behavior of cysteines in proteins and the perturbed pK(a) in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pK(a) should be shifted, and validation of force field parameters for cysteine residues. Copyright © 2012 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Stevens, Daniel J.; Stassun, Keivan G.; Gaudi, B. Scott
2017-12-01
We present bolometric fluxes and angular diameters for over 1.6 million stars in the Tycho-2 catalog, determined using previously determined empirical color-temperature and color-flux relations. We vet these relations via full fits to the full broadband spectral energy distributions for a subset of benchmark stars and perform quality checks against the large set of stars for which spectroscopically determined parameters are available from LAMOST, RAVE, and/or APOGEE. We then estimate radii for the 355,502 Tycho-2 stars in our sample whose Gaia DR1 parallaxes are precise to ≲ 10 % . For these stars, we achieve effective temperature, bolometric flux, and angular diameter uncertainties of the order of 1%-2% and radius uncertainties of order 8%, and we explore the effect that imposing spectroscopic effective temperature priors has on these uncertainties. These stellar parameters are shown to be reliable for stars with {T}{eff} ≲ 7000 K. The over half a million bolometric fluxes and angular diameters presented here will serve as an immediate trove of empirical stellar radii with the Gaia second data release, at which point effective temperature uncertainties will dominate the radius uncertainties. Already, dwarf, subgiant, and giant populations are readily identifiable in our purely empirical luminosity-effective temperature (theoretical) Hertzsprung-Russell diagrams.
Multistate modelling extended by behavioural rules: An application to migration.
Klabunde, Anna; Zinn, Sabine; Willekens, Frans; Leuchter, Matthias
2017-10-01
We propose to extend demographic multistate models by adding a behavioural element: behavioural rules explain intentions and thus transitions. Our framework is inspired by the Theory of Planned Behaviour. We exemplify our approach with a model of migration from Senegal to France. Model parameters are determined using empirical data where available. Parameters for which no empirical correspondence exists are determined by calibration. Age- and period-specific migration rates are used for model validation. Our approach adds to the toolkit of demographic projection by allowing for shocks and social influence, which alter behaviour in non-linear ways, while sticking to the general framework of multistate modelling. Our simulations yield that higher income growth in Senegal leads to higher emigration rates in the medium term, while a decrease in fertility yields lower emigration rates.
A Formal Approach to Empirical Dynamic Model Optimization and Validation
NASA Technical Reports Server (NTRS)
Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
Efficient estimation of Pareto model: Some modified percentile estimators.
Bhatti, Sajjad Haider; Hussain, Shahzad; Ahmad, Tanvir; Aslam, Muhammad; Aftab, Muhammad; Raza, Muhammad Ali
2018-01-01
The article proposes three modified percentile estimators for parameter estimation of the Pareto distribution. These modifications are based on median, geometric mean and expectation of empirical cumulative distribution function of first-order statistic. The proposed modified estimators are compared with traditional percentile estimators through a Monte Carlo simulation for different parameter combinations with varying sample sizes. Performance of different estimators is assessed in terms of total mean square error and total relative deviation. It is determined that modified percentile estimator based on expectation of empirical cumulative distribution function of first-order statistic provides efficient and precise parameter estimates compared to other estimators considered. The simulation results were further confirmed using two real life examples where maximum likelihood and moment estimators were also considered.
NASA Astrophysics Data System (ADS)
Golikov, N. S.; Timofeev, I. P.
2018-05-01
Efficiency increase of jaw crushers makes the foundation of rational kinematics and stiffening of the elements of the machine possible. Foundation of rational kinematics includes establishment of connection between operation mode parameters of the crusher and its technical characteristics. The main purpose of this research is just to establish such a connection. Therefore this article shows analytical procedure of getting connection between operation mode parameters of the crusher and its capacity. Theoretical, empirical and semi-empirical methods of capacity determination of a single-toggle jaw crusher are given, taking into account physico-mechanical properties of crushed material and kinematics of the working mechanism. When developing a mathematical model, the method of closed vector polygons by V. A. Zinoviev was used. The expressions obtained in the article give an opportunity to solve important scientific and technical problems, connected with finding the rational kinematics of the jaw crusher mechanism, carrying out a comparative assessment of different crushers and giving the recommendations about updating the available jaw crushers.
Long term pavement performance computed parameter : frost penetration
DOT National Transportation Integrated Search
2008-11-01
As the pavement design process moves toward mechanistic-empirical techniques, knowledge of seasonal changes in pavement structural characteristics becomes critical. Specifically, frost penetration information is necessary for determining the effect o...
Arefin, Md Shamsul
2012-01-01
This work presents a technique for the chirality (n, m) assignment of semiconducting single wall carbon nanotubes by solving a set of empirical equations of the tight binding model parameters. The empirical equations of the nearest neighbor hopping parameters, relating the term (2n− m) with the first and second optical transition energies of the semiconducting single wall carbon nanotubes, are also proposed. They provide almost the same level of accuracy for lower and higher diameter nanotubes. An algorithm is presented to determine the chiral index (n, m) of any unknown semiconducting tube by solving these empirical equations using values of radial breathing mode frequency and the first or second optical transition energy from resonant Raman spectroscopy. In this paper, the chirality of 55 semiconducting nanotubes is assigned using the first and second optical transition energies. Unlike the existing methods of chirality assignment, this technique does not require graphical comparison or pattern recognition between existing experimental and theoretical Kataura plot. PMID:28348319
Empirical evaluation of interest-level criteria
NASA Astrophysics Data System (ADS)
Sahar, Sigal; Mansour, Yishay
1999-02-01
Efficient association rule mining algorithms already exist, however, as the size of databases increases, the number of patterns mined by the algorithms increases to such an extent that their manual evaluation becomes impractical. Automatic evaluation methods are, therefore, required in order to sift through the initial list of rules, which the datamining algorithm outputs. These evaluation methods, or criteria, rank the association rules mined from the dataset. We empirically examined several such statistical criteria: new criteria, as well as previously known ones. The empirical evaluation was conducted using several databases, including a large real-life dataset, acquired from an order-by-phone grocery store, a dataset composed from www proxy logs, and several datasets from the UCI repository. We were interested in discovering whether the ranking performed by the various criteria is similar or easily distinguishable. Our evaluation detected, when significant differences exist, three patterns of behavior in the eight criteria we examined. There is an obvious dilemma in determining how many association rules to choose (in accordance with support and confidence parameters). The tradeoff is between having stringent parameters and, therefore, few rules, or lenient parameters and, thus, a multitude of rules. In many cases, our empirical evaluation revealed that most of the rules found by the comparably strict parameters ranked highly according to the interestingness criteria, when using lax parameters (producing significantly more association rules). Finally, we discuss the association rules that ranked highest, explain why these results are sound, and how they direct future research.
Rotational characterization of methyl methacrylate: Internal dynamics and structure determination
NASA Astrophysics Data System (ADS)
Herbers, Sven; Wachsmuth, Dennis; Obenchain, Daniel A.; Grabow, Jens-Uwe
2018-01-01
Rotational constants, Watson's S centrifugal distortion coefficients, and internal rotation parameters of the two most stable conformers of methyl methacrylate were retrieved from the microwave spectrum. Splittings of rotational energy levels were caused by two non equivalent methyl tops. Constraining the centrifugal distortion coefficients and internal rotation parameters to the values of the main isotopologues, the rotational constants of all single substituted 13C and 18O isotopologues were determined. From these rotational constants the substitution structures and semi-empirical zero point structures of both conformers were precisely determined.
Unleashing Empirical Equations with "Nonlinear Fitting" and "GUM Tree Calculator"
NASA Astrophysics Data System (ADS)
Lovell-Smith, J. W.; Saunders, P.; Feistel, R.
2017-10-01
Empirical equations having large numbers of fitted parameters, such as the international standard reference equations published by the International Association for the Properties of Water and Steam (IAPWS), which form the basis of the "Thermodynamic Equation of Seawater—2010" (TEOS-10), provide the means to calculate many quantities very accurately. The parameters of these equations are found by least-squares fitting to large bodies of measurement data. However, the usefulness of these equations is limited since uncertainties are not readily available for most of the quantities able to be calculated, the covariance of the measurement data is not considered, and further propagation of the uncertainty in the calculated result is restricted since the covariance of calculated quantities is unknown. In this paper, we present two tools developed at MSL that are particularly useful in unleashing the full power of such empirical equations. "Nonlinear Fitting" enables propagation of the covariance of the measurement data into the parameters using generalized least-squares methods. The parameter covariance then may be published along with the equations. Then, when using these large, complex equations, "GUM Tree Calculator" enables the simultaneous calculation of any derived quantity and its uncertainty, by automatic propagation of the parameter covariance into the calculated quantity. We demonstrate these tools in exploratory work to determine and propagate uncertainties associated with the IAPWS-95 parameters.
NASA Astrophysics Data System (ADS)
Tel, E.; Aydın, A.; Kaplan, A.; Şarer, B.
2008-09-01
In the hybrid reactor, tritium self-sufficiency must be maintained for a commercial power plant. For self-sustaining (D-T) fusion driver tritium breeding ratio should be greater than 1.05. Working out the systematics of ( n, t) reaction cross-sections are of great importance for the definition of the excitation function character for the given reaction taking place on various nuclei at energies up to 20 MeV. In this study we have investigated asymmetry term effect for the ( n, t) reaction cross-sections at 14-15 neutron incident energy. It has been discussed the odd-even effect and the pairing effect considering binding energy systematic of the nuclear shell model for the new experimental data and new cross-sections formulas ( n, t) reactions developed by Tel et al. We have determined a different parameter groups by the classification of nuclei into even-even, even-odd and odd-even for ( n, t) reactions cross-sections. The obtained empirical and semi-empirical formulas by fitting two parameter for ( n, t) reactions were given. All calculated results have been compared with the experimental data and the other semi-empirical formulas.
Nedorezov, Lev V; Löhr, Bernhard L; Sadykova, Dinara L
2008-10-07
The applicability of discrete mathematical models for the description of diamondback moth (DBM) (Plutella xylostella L.) population dynamics was investigated. The parameter values for several well-known discrete time models (Skellam, Moran-Ricker, Hassell, Maynard Smith-Slatkin, and discrete logistic models) were estimated for an experimental time series from a highland cabbage-growing area in eastern Kenya. For all sets of parameters, boundaries of confidence domains were determined. Maximum calculated birth rates varied between 1.086 and 1.359 when empirical values were used for parameter estimation. After fitting of the models to the empirical trajectory, all birth rate values resulted considerably higher (1.742-3.526). The carrying capacity was determined between 13.0 and 39.9DBM/plant, after fitting of the models these values declined to 6.48-9.3, all values well within the range encountered empirically. The application of the Durbin-Watson criteria for comparison of theoretical and experimental population trajectories produced negative correlations with all models. A test of residual value groupings for randomness showed that their distribution is non-stochastic. In consequence, we conclude that DBM dynamics cannot be explained as a result of intra-population self-regulative mechanisms only (=by any of the models tested) and that more comprehensive models are required for the explanation of DBM population dynamics.
Modified empirical Solar Radiation Pressure model for IRNSS constellation
NASA Astrophysics Data System (ADS)
Rajaiah, K.; Manamohan, K.; Nirmala, S.; Ratnakara, S. C.
2017-11-01
Navigation with Indian Constellation (NAVIC) also known as Indian Regional Navigation Satellite System (IRNSS) is India's regional navigation system designed to provide position accuracy better than 20 m over India and the region extending to 1500 km around India. The reduced dynamic precise orbit estimation is utilized to determine the orbit broadcast parameters for IRNSS constellation. The estimation is mainly affected by the parameterization of dynamic models especially Solar Radiation Pressure (SRP) model which is a non-gravitational force depending on shape and attitude dynamics of the spacecraft. An empirical nine parameter solar radiation pressure model is developed for IRNSS constellation, using two-way range measurements from IRNSS C-band ranging system. The paper addresses the development of modified SRP empirical model for IRNSS (IRNSS SRP Empirical Model, ISEM). The performance of the ISEM was assessed based on overlap consistency, long term prediction, Satellite Laser Ranging (SLR) residuals and compared with ECOM9, ECOM5 and new-ECOM9 models developed by Center for Orbit Determination in Europe (CODE). For IRNSS Geostationary Earth Orbit (GEO) and Inclined Geosynchronous Orbit (IGSO) satellites, ISEM has shown promising results with overlap RMS error better than 5.3 m and 3.5 m respectively. Long term orbit prediction using numerical integration has improved with error better than 80%, 26% and 7.8% in comparison to ECOM9, ECOM5 and new-ECOM9 respectively. Further, SLR based orbit determination with ISEM shows 70%, 47% and 39% improvement over 10 days orbit prediction in comparison to ECOM9, ECOM5 and new-ECOM9 respectively and also highlights the importance of wide baseline tracking network.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagayama, K., E-mail: nagayama@aero.kyushu-u.ac.jp
The dimensionless material parameter R introduced by Wu and Jing into the Rice-Walsh equation of state (EOS) has been deduced from the LASL shock Hugoniot data for porous Al and Cu. It was found that the parameter R/p decays smoothly with shock pressure p and displays small experimental scatter in the high pressure region. This finding led to the conclusion that the parameter has only a weak temperature dependence and is well approximated by a function of pressure alone, and the Grüneisen parameter should be temperature dependent under compression. The thermodynamic formulation of the Rice-Walsh EOS for Al and Cumore » was realized using the empirically determined function R(p) for each material and their known shock Hugoniot. It was then possible to reproduce porous shock Hugoniot for these metals. For most degrees of porosity, agreement between the porous data and the calculated Hugoniots using the empirical function described was very good. However, slight discrepancies were seen for Hugoniots with very high porosity. Two new thermal variables were introduced after further analysis, which enabled the calculation of the cold compression curve for these metals. The Grüneisen parameters along full-density and porous Hugoniot curve were calculated using a thermodynamic identity connecting R and the Grüneisen parameter. It was shown that the Grüneisen parameter is strongly temperature dependent. The present analysis suggested that the Rice-Walsh type EOS is a preferable choice for the analysis with its simple form, pressure-dependent empirical Wu-Jing parameter, and its compatibility with porous shock data.« less
Fire spread characteristics determined in the laboratory
Richard C. Rothermel; Hal E. Anderson
1966-01-01
Fuel beds of ponderosa pine needles and white pine needles were burned under controlled environmental conditions to determine the effects of fuel moisture and windspeed upon the rate of fire spread. Empirical formulas are presented to show the effect of these parameters. A discussion of rate of spread and some simple experiments show how fuel may be preheated before...
Determination of coefficient of thermal expansion effects on Louisiana's PCC pavement design.
DOT National Transportation Integrated Search
2011-12-01
With the development of the Mechanistic Empirical Pavement Design Guide (MEPDG) as a new pavement design tool, the : coefficient of thermal expansion (CTE) is now considered a more important design parameter in estimating pavement : performance inclu...
NASA Astrophysics Data System (ADS)
Margueron, Jérôme; Hoffmann Casali, Rudiney; Gulminelli, Francesca
2018-02-01
Employing recently proposed metamodeling for the nucleonic matter equation of state, we analyze neutron star global properties such as masses, radii, momentum of inertia, and others. The impact of the uncertainty on empirical parameters on these global properties is analyzed in a Bayesian statistical approach. Physical constraints, such as causality and stability, are imposed on the equation of state and different hypotheses for the direct Urca (dUrca) process are investigated. In addition, only metamodels with maximum masses above 2 M⊙ are selected. Our main results are the following: the equation of state exhibits a universal behavior against the dUrca hypothesis under the condition of charge neutrality and β equilibrium; neutron stars, if composed exclusively of nucleons and leptons, have a radius of 12.7 ±0.4 km for masses ranging from 1 up to 2 M⊙ ; a small radius lower than 11 km is very marginally compatible with our present knowledge of the nuclear empirical parameters; and finally, the most important empirical parameters which are still affected by large uncertainties and play an important role in determining the radius of neutrons stars are the slope and curvature of the symmetry energy (Lsym and Ksym) and, to a lower extent, the skewness parameters (Qsat /sym).
2014-01-01
To describe flow or transport phenomena in porous media, relations between aquifer hydraulic conductivity and effective porosity can prove useful, avoiding the need to perform expensive and time consuming measurements. The practical applications generally require the determination of this parameter at field scale, while most of the empirical and semiempirical formulas, based on grain size analysis and allowing determination of the hydraulic conductivity from the porosity, are related to the laboratory scale and thus are not representative of the aquifer volumes to which one refers. Therefore, following the grain size distribution methodology, a new experimental relation between hydraulic conductivity and effective porosity, representative of aquifer volumes at field scale, is given for a confined aquifer. The experimental values used to determine this law were obtained for both parameters using only field measurements methods. The experimental results found, also if in the strict sense valid only for the investigated aquifer, can give useful suggestions for other alluvial aquifers with analogous characteristics of grain-size distribution. Limited to the investigated range, a useful comparison with the best known empirical formulas based on grain size analysis was carried out. The experimental data allowed also investigation of the existence of a scaling behaviour for both parameters considered. PMID:25180202
Systematics of capture and fusion dynamics in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Wang, Bing; Wen, Kai; Zhao, Wei-Juan; Zhao, En-Guang; Zhou, Shan-Gui
2017-03-01
We perform a systematic study of capture excitation functions by using an empirical coupled-channel (ECC) model. In this model, a barrier distribution is used to take effectively into account the effects of couplings between the relative motion and intrinsic degrees of freedom. The shape of the barrier distribution is of an asymmetric Gaussian form. The effect of neutron transfer channels is also included in the barrier distribution. Based on the interaction potential between the projectile and the target, empirical formulas are proposed to determine the parameters of the barrier distribution. Theoretical estimates for barrier distributions and calculated capture cross sections together with experimental cross sections of 220 reaction systems with 182 ⩽ZPZT ⩽ 1640 are tabulated. The results show that the ECC model together with the empirical formulas for parameters of the barrier distribution work quite well in the energy region around the Coulomb barrier. This ECC model can provide prediction of capture cross sections for the synthesis of superheavy nuclei as well as valuable information on capture and fusion dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayhurst, Thomas Laine
1980-08-06
Techniques for applying ab-initio calculations to the is of atomic spectra are investigated, along with the relationship between the semi-empirical and ab-initio forms of Slater-Condon theory. Slater-Condon theory is reviewed with a focus on the essential features that lead to the effective Hamiltonians associated with the semi-empirical form of the theory. Ab-initio spectroscopic parameters are calculated from wavefunctions obtained via self-consistent field methods, while multi-configuration Hamiltonian matrices are constructed and diagonalized with computer codes written by Robert Cowan of Los Alamos Scientific Laboratory. Group theoretical analysis demonstrates that wavefunctions more general than Slater determinants (i.e. wavefunctions with radial correlations betweenmore » electrons) lead to essentially the same parameterization of effective Hamiltonians. In the spirit of this analysis, a strategy is developed for adjusting ab-initio values of the spectroscopic parameters, reproducing parameters obtained by fitting the corresponding effective Hamiltonian. Secondary parameters are used to "screen" the calculated (primary) spectroscopic parameters, their values determined by least squares. Extrapolations of the secondary parameters determined from analyzed spectra are attempted to correct calculations of atoms and ions without experimental levels. The adjustment strategy and extrapolations are tested on the K I sequence from K 0+ through Fe 7+, fitting to experimental levels for V 4+, and Cr 5+; unobserved levels and spectra are predicted for several members of the sequence. A related problem is also discussed: Energy levels of the Uranium hexahalide complexes, (UX 6) 2- for X= F, Cl, Br, and I, are fit to an effective Hamiltonian (the f 2 configuration in O h symmetry) with corrections proposed by Brian Judd.« less
Optimal design criteria - prediction vs. parameter estimation
NASA Astrophysics Data System (ADS)
Waldl, Helmut
2014-05-01
G-optimality is a popular design criterion for optimal prediction, it tries to minimize the kriging variance over the whole design region. A G-optimal design minimizes the maximum variance of all predicted values. If we use kriging methods for prediction it is self-evident to use the kriging variance as a measure of uncertainty for the estimates. Though the computation of the kriging variance and even more the computation of the empirical kriging variance is computationally very costly and finding the maximum kriging variance in high-dimensional regions can be time demanding such that we cannot really find the G-optimal design with nowadays available computer equipment in practice. We cannot always avoid this problem by using space-filling designs because small designs that minimize the empirical kriging variance are often non-space-filling. D-optimality is the design criterion related to parameter estimation. A D-optimal design maximizes the determinant of the information matrix of the estimates. D-optimality in terms of trend parameter estimation and D-optimality in terms of covariance parameter estimation yield basically different designs. The Pareto frontier of these two competing determinant criteria corresponds with designs that perform well under both criteria. Under certain conditions searching the G-optimal design on the above Pareto frontier yields almost as good results as searching the G-optimal design in the whole design region. In doing so the maximum of the empirical kriging variance has to be computed only a few times though. The method is demonstrated by means of a computer simulation experiment based on data provided by the Belgian institute Management Unit of the North Sea Mathematical Models (MUMM) that describe the evolution of inorganic and organic carbon and nutrients, phytoplankton, bacteria and zooplankton in the Southern Bight of the North Sea.
Impact of orbit modeling on DORIS station position and Earth rotation estimates
NASA Astrophysics Data System (ADS)
Štěpánek, Petr; Rodriguez-Solano, Carlos Javier; Hugentobler, Urs; Filler, Vratislav
2014-04-01
The high precision of estimated station coordinates and Earth rotation parameters (ERP) obtained from satellite geodetic techniques is based on the precise determination of the satellite orbit. This paper focuses on the analysis of the impact of different orbit parameterizations on the accuracy of station coordinates and the ERPs derived from DORIS observations. In a series of experiments the DORIS data from the complete year 2011 were processed with different orbit model settings. First, the impact of precise modeling of the non-conservative forces on geodetic parameters was compared with results obtained with an empirical-stochastic modeling approach. Second, the temporal spacing of drag scaling parameters was tested. Third, the impact of estimating once-per-revolution harmonic accelerations in cross-track direction was analyzed. And fourth, two different approaches for solar radiation pressure (SRP) handling were compared, namely adjusting SRP scaling parameter or fixing it on pre-defined values. Our analyses confirm that the empirical-stochastic orbit modeling approach, which does not require satellite attitude information and macro models, results for most of the monitored station parameters in comparable accuracy as the dynamical model that employs precise non-conservative force modeling. However, the dynamical orbit model leads to a reduction of the RMS values for the estimated rotation pole coordinates by 17% for x-pole and 12% for y-pole. The experiments show that adjusting atmospheric drag scaling parameters each 30 min is appropriate for DORIS solutions. Moreover, it was shown that the adjustment of cross-track once-per-revolution empirical parameter increases the RMS of the estimated Earth rotation pole coordinates. With recent data it was however not possible to confirm the previously known high annual variation in the estimated geocenter z-translation series as well as its mitigation by fixing the SRP parameters on pre-defined values.
GPS-Based Reduced Dynamic Orbit Determination Using Accelerometer Data
NASA Technical Reports Server (NTRS)
VanHelleputte, Tom; Visser, Pieter
2007-01-01
Currently two gravity field satellite missions, CHAMP and GRACE, are equipped with high sensitivity electrostatic accelerometers, measuring the non-conservative forces acting on the spacecraft in three orthogonal directions. During the gravity field recovery these measurements help to separate gravitational and non-gravitational contributions in the observed orbit perturbations. For precise orbit determination purposes all these missions have a dual-frequency GPS receiver on board. The reduced dynamic technique combines the dense and accurate GPS observations with physical models of the forces acting on the spacecraft, complemented by empirical accelerations, which are stochastic parameters adjusted in the orbit determination process. When the spacecraft carries an accelerometer, these measured accelerations can be used to replace the models of the non-conservative forces, such as air drag and solar radiation pressure. This approach is implemented in a batch least-squares estimator of the GPS High Precision Orbit Determination Software Tools (GHOST), developed at DLR/GSOC and DEOS. It is extensively tested with data of the CHAMP and GRACE satellites. As accelerometer observations typically can be affected by an unknown scale factor and bias in each measurement direction, they require calibration during processing. Therefore the estimated state vector is augmented with six parameters: a scale and bias factor for the three axes. In order to converge efficiently to a good solution, reasonable a priori values for the bias factor are necessary. These are calculated by combining the mean value of the accelerometer observations with the mean value of the non-conservative force models and empirical accelerations, estimated when using these models. When replacing the non-conservative force models with accelerometer observations and still estimating empirical accelerations, a good orbit precision is achieved. 100 days of GRACE B data processing results in a mean orbit fit of a few centimeters with respect to high-quality JPL reference orbits. This shows a slightly better consistency compared to the case when using force models. A purely dynamic orbit, without estimating empirical accelerations thus only adjusting six state parameters and the bias and scale factors, gives an orbit fit for the GRACE B test case below the decimeter level. The in orbit calibrated accelerometer observations can be used to validate the modelled accelerations and estimated empirical accelerations computed with the GHOST tools. In along track direction they show the best resemblance, with a mean correlation coefficient of 93% for the same period. In radial and normal direction the correlation is smaller. During days of high solar activity the benefit of using accelerometer observations is clearly visible. The observations during these days show fluctuations which the modelled and empirical accelerations can not follow.
NASA Astrophysics Data System (ADS)
Pitts, James Daniel
Rotary ultrasonic machining (RUM), a hybrid process combining ultrasonic machining and diamond grinding, was created to increase material removal rates for the fabrication of hard and brittle workpieces. The objective of this research was to experimentally derive empirical equations for the prediction of multiple machined surface roughness parameters for helically pocketed rotary ultrasonic machined Zerodur glass-ceramic workpieces by means of a systematic statistical experimental approach. A Taguchi parametric screening design of experiments was employed to systematically determine the RUM process parameters with the largest effect on mean surface roughness. Next empirically determined equations for the seven common surface quality metrics were developed via Box-Behnken surface response experimental trials. Validation trials were conducted resulting in predicted and experimental surface roughness in varying levels of agreement. The reductions in cutting force and tool wear associated with RUM, reported by previous researchers, was experimentally verified to also extended to helical pocketing of Zerodur glass-ceramic.
A method and data for video monitor sizing. [human CRT viewing requirements
NASA Technical Reports Server (NTRS)
Kirkpatrick, M., III; Shields, N. L., Jr.; Malone, T. B.; Guerin, E. G.
1976-01-01
The paper outlines an approach consisting of using analytical methods and empirical data to determine monitor size constraints based on the human operator's CRT viewing requirements in a context where panel space and volume considerations for the Space Shuttle aft cabin constrain the size of the monitor to be used. Two cases are examined: remote scene imaging and alphanumeric character display. The central parameter used to constrain monitor size is the ratio M/L where M is the monitor dimension and L the viewing distance. The study is restricted largely to 525 line video systems having an SNR of 32 db and bandwidth of 4.5 MHz. Degradation in these parameters would require changes in the empirically determined visual angle constants presented. The data and methods described are considered to apply to cases where operators are required to view via TV target objects which are well differentiated from the background and where the background is relatively sparse. It is also necessary to identify the critical target dimensions and cues.
Modeling depth from motion parallax with the motion/pursuit ratio
Nawrot, Mark; Ratzlaff, Michael; Leonard, Zachary; Stroyan, Keith
2014-01-01
The perception of unambiguous scaled depth from motion parallax relies on both retinal image motion and an extra-retinal pursuit eye movement signal. The motion/pursuit ratio represents a dynamic geometric model linking these two proximal cues to the ratio of depth to viewing distance. An important step in understanding the visual mechanisms serving the perception of depth from motion parallax is to determine the relationship between these stimulus parameters and empirically determined perceived depth magnitude. Observers compared perceived depth magnitude of dynamic motion parallax stimuli to static binocular disparity comparison stimuli at three different viewing distances, in both head-moving and head-stationary conditions. A stereo-viewing system provided ocular separation for stereo stimuli and monocular viewing of parallax stimuli. For each motion parallax stimulus, a point of subjective equality (PSE) was estimated for the amount of binocular disparity that generates the equivalent magnitude of perceived depth from motion parallax. Similar to previous results, perceived depth from motion parallax had significant foreshortening. Head-moving conditions produced even greater foreshortening due to the differences in the compensatory eye movement signal. An empirical version of the motion/pursuit law, termed the empirical motion/pursuit ratio, which models perceived depth magnitude from these stimulus parameters, is proposed. PMID:25339926
Waiting time distribution in public health care: empirics and theory.
Dimakou, Sofia; Dimakou, Ourania; Basso, Henrique S
2015-12-01
Excessive waiting times for elective surgery have been a long-standing concern in many national healthcare systems in the OECD. How do the hospital admission patterns that generate waiting lists affect different patients? What are the hospitals characteristics that determine waiting times? By developing a model of healthcare provision and analysing empirically the entire waiting time distribution we attempt to shed some light on those issues. We first build a theoretical model that describes the optimal waiting time distribution for capacity constraint hospitals. Secondly, employing duration analysis, we obtain empirical representations of that distribution across hospitals in the UK from 1997-2005. We observe important differences on the 'scale' and on the 'shape' of admission rates. Scale refers to how quickly patients are treated and shape represents trade-offs across duration-treatment profiles. By fitting the theoretical to the empirical distributions we estimate the main structural parameters of the model and are able to closely identify the main drivers of these empirical differences. We find that the level of resources allocated to elective surgery (budget and physical capacity), which determines how constrained the hospital is, explains differences in scale. Changes in benefits and costs structures of healthcare provision, which relate, respectively, to the desire to prioritise patients by duration and the reduction in costs due to delayed treatment, determine the shape, affecting short and long duration patients differently. JEL Classification I11; I18; H51.
Mori, J.; Frankel, A.
1990-01-01
Using small events as empirical Green functions, source parameters were estimated for 25 ML 3.4 to 4.4 events associated with the 1986 North Palm Springs earthquake. The static stress drops ranged from 3 to 80 bars, for moments of 0.7 to 11 ?? 1021 dyne-cm. There was a spatial pattern to the stress drops of the aftershocks which showed increasing values along the fault plane toward the northwest compared to relatively low values near the hypocenter of the mainshock. The highest values were outside the main area of slip, and are believed to reflect a loaded area of the fault that still has an higher level of stress which was not released during the main shock. -from Authors
An empirical model for dissolution profile and its application to floating dosage forms.
Weiss, Michael; Kriangkrai, Worawut; Sungthongjeen, Srisagul
2014-06-02
A sum of two inverse Gaussian functions is proposed as a highly flexible empirical model for fitting of in vitro dissolution profiles. The model was applied to quantitatively describe theophylline release from effervescent multi-layer coated floating tablets containing different amounts of the anti-tacking agents talc or glyceryl monostearate. Model parameters were estimated by nonlinear regression (mixed-effects modeling). The estimated parameters were used to determine the mean dissolution time, as well as to reconstruct the time course of release rate for each formulation, whereby the fractional release rate can serve as a diagnostic tool for classification of dissolution processes. The approach allows quantification of dissolution behavior and could provide additional insights into the underlying processes. Copyright © 2014 Elsevier B.V. All rights reserved.
Koštrun, Sanja; Munic Kos, Vesna; Matanović Škugor, Maja; Palej Jakopović, Ivana; Malnar, Ivica; Dragojević, Snježana; Ralić, Jovica; Alihodžić, Sulejman
2017-06-16
The aim of this study was to investigate lipophilicity and cellular accumulation of rationally designed azithromycin and clarithromycin derivatives at the molecular level. The effect of substitution site and substituent properties on a global physico-chemical profile and cellular accumulation of investigated compounds was studied using calculated structural parameters as well as experimentally determined lipophilicity. In silico models based on the 3D structure of molecules were generated to investigate conformational effect on studied properties and to enable prediction of lipophilicity and cellular accumulation for this class of molecules based on non-empirical parameters. The applicability of developed models was explored on a validation and test sets and compared with previously developed empirical models. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Erica A. Swenson; Amanda E. Rosenberger; Philip J. Howell
2007-01-01
Fish maturity status, sex ratio, and age and size at first maturity are important parameters in population assessments and life history studies. In most empirical studies of these variables, fish are sacrificed and dissected to obtain data. However, maturity status and the sex of mature individuals can be determined by inserting an endoscope through a small incision in...
NASA Astrophysics Data System (ADS)
Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.
2018-06-01
Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture or soil saturation encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs, derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate assuming (i) an annual rainfall pattern and (ii) a wet season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of posterior parameter distributions ranged from < 1 to 15 %. The parameter identifiability was not significantly improved in the more complex seasonal model; however, small differences in parameter values indicate that the annual model may have absorbed dry season dynamics. Parameter estimates were most constrained for scales and locations at which soil water dynamics are more sensitive to the fitted ecohydrological parameters of interest. In these cases, model inversion converged more slowly but ultimately provided better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework to obtain scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.
Yilmaz, A Erdem; Boncukcuoğlu, Recep; Kocakerim, M Muhtar
2007-06-01
In this study, it was investigated parameters affecting energy consumption in boron removal from boron containing wastewaters prepared synthetically, via electrocoagulation method. The solution pH, initial boron concentration, dose of supporting electrolyte, current density and temperature of solution were selected as experimental parameters affecting energy consumption. The obtained experimental results showed that boron removal efficiency reached up to 99% under optimum conditions, in which solution pH was 8.0, current density 6.0 mA/cm(2), initial boron concentration 100mg/L and solution temperature 293 K. The current density was an important parameter affecting energy consumption too. High current density applied to electrocoagulation cell increased energy consumption. Increasing solution temperature caused to decrease energy consumption that high temperature decreased potential applied under constant current density. That increasing initial boron concentration and dose of supporting electrolyte caused to increase specific conductivity of solution decreased energy consumption. As a result, it was seen that energy consumption for boron removal via electrocoagulation method could be minimized at optimum conditions. An empirical model was predicted by statistically. Experimentally obtained values were fitted with values predicted from empirical model being as following; [formula in text]. Unfortunately, the conditions obtained for optimum boron removal were not the conditions obtained for minimum energy consumption. It was determined that support electrolyte must be used for increase boron removal and decrease electrical energy consumption.
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-05-01
Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metrics. Different from the traditional optimization methods, two extra steps, one determines parameter sensitivity and the other chooses the optimum initial value of sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameters tuning during the model development stage.
NASA Astrophysics Data System (ADS)
Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan
2016-08-01
In the present research, three artificial intelligence methods including Gene Expression Programming (GEP), Artificial Neural Networks (ANN) and Adaptive Neuro-Fuzzy Inference System (ANFIS) as well as, 48 empirical equations (10, 12 and 26 equations were temperature-based, sunshine-based and meteorological parameters-based, respectively) were used to estimate daily solar radiation in Kerman, Iran in the period of 1992-2009. To develop the GEP, ANN and ANFIS models, depending on the used empirical equations, various combinations of minimum air temperature, maximum air temperature, mean air temperature, extraterrestrial radiation, actual sunshine duration, maximum possible sunshine duration, sunshine duration ratio, relative humidity and precipitation were considered as inputs in the mentioned intelligent methods. To compare the accuracy of empirical equations and intelligent models, root mean square error (RMSE), mean absolute error (MAE), mean absolute relative error (MARE) and determination coefficient (R2) indices were used. The results showed that in general, sunshine-based and meteorological parameters-based scenarios in ANN and ANFIS models presented high accuracy than mentioned empirical equations. Moreover, the most accurate method in the studied region was ANN11 scenario with five inputs. The values of RMSE, MAE, MARE and R2 indices for the mentioned model were 1.850 MJ m-2 day-1, 1.184 MJ m-2 day-1, 9.58% and 0.935, respectively.
Stability of cosmetic emulsion containing different amount of hemp oil.
Kowalska, M; Ziomek, M; Żbikowska, A
2015-08-01
The aim of the study was to determine the optimal conditions, that is the content of hemp oil and time of homogenization to obtain stable dispersion systems. For this purpose, six emulsions were prepared, their stability was examined empirically and the most correctly formulated emulsion composition was determined using a computer simulation. Variable parameters (oil content and homogenization time) were indicated by the optimization software based on Kleeman's method. Physical properties of the synthesized emulsions were studied by numerous techniques involving particle size analysis, optical microscopy, Turbiscan test and viscosity of emulsions. The emulsion containing 50 g of oil and being homogenized for 6 min had the highest stability. Empirically determined parameters proved to be consistent with the results obtained using the computer software. The computer simulation showed that the most stable emulsion should contain from 30 to 50 g of oil and should be homogenized for 2.5-6 min. The computer software based on Kleeman's method proved to be useful for quick optimization of the composition and production parameters of stable emulsion systems. Moreover, obtaining an emulsion system with proper stability justifies further research extended with sensory analysis, which will allow the application of such systems (containing hemp oil, beneficial for skin) in the cosmetic industry. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Depth-related gradients of viral activity in Lake Pavin.
Colombet, J; Sime-Ngando, T; Cauchie, H M; Fonty, G; Hoffmann, L; Demeure, G
2006-06-01
High-resolution vertical sampling and determination of viral and prokaryotic parameters in a deep volcanic lake shows that in the absence of thermal stratification but within light, oxygen, and chlorophyll gradients, host availability empirically is prevalent over the physical and chemical environments and favors lytic over lysogenic "viral life cycles."
Resistivity of liquid metals on Veljkovic-Slavic pseudopotential
NASA Astrophysics Data System (ADS)
Abdel-Azez, Khalef
1996-04-01
An empirical form of screened model pseudopotential, proposed by Veljkovic and Slavic, is exploited for the calculation of resistivity of seven liquid metals through the correct re- determination of its parameters. The model derives qualitative support from the close agreement obtained between the computed results and the experiment.
Kuehnapfel, Andreas; Ahnert, Peter; Loeffler, Markus; Scholz, Markus
2017-02-01
Body surface area is a physiological quantity relevant for many medical applications. In clinical practice, it is determined by empirical formulae. 3D laser-based anthropometry provides an easy and effective way to measure body surface area but is not ubiquitously available. We used data from laser-based anthropometry from a population-based study to assess validity of published and commonly used empirical formulae. We performed a large population-based study on adults collecting classical anthropometric measurements and 3D body surface assessments (N = 1435). We determined reliability of the 3D body surface assessment and validity of 18 different empirical formulae proposed in the literature. The performance of these formulae is studied in subsets of sex and BMI. Finally, improvements of parameter settings of formulae and adjustments for sex and BMI were considered. 3D body surface measurements show excellent intra- and inter-rater reliability of 0.998 (overall concordance correlation coefficient, OCCC was used as measure of agreement). Empirical formulae of Fujimoto and Watanabe, Shuter and Aslani and Sendroy and Cecchini performed best with excellent concordance with OCCC > 0.949 even in subgroups of sex and BMI. Re-parametrization of formulae and adjustment for sex and BMI slightly improved results. In adults, 3D laser-based body surface assessment is a reliable alternative to estimation by empirical formulae. However, there are empirical formulae showing excellent results even in subgroups of sex and BMI with only little room for improvement.
Prediction of compressibility parameters of the soils using artificial neural network.
Kurnaz, T Fikret; Dagdeviren, Ugur; Yildiz, Murat; Ozkan, Ozhan
2016-01-01
The compression index and recompression index are one of the important compressibility parameters to determine the settlement calculation for fine-grained soil layers. These parameters can be determined by carrying out laboratory oedometer test on undisturbed samples; however, the test is quite time-consuming and expensive. Therefore, many empirical formulas based on regression analysis have been presented to estimate the compressibility parameters using soil index properties. In this paper, an artificial neural network (ANN) model is suggested for prediction of compressibility parameters from basic soil properties. For this purpose, the input parameters are selected as the natural water content, initial void ratio, liquid limit and plasticity index. In this model, two output parameters, including compression index and recompression index, are predicted in a combined network structure. As the result of the study, proposed ANN model is successful for the prediction of the compression index, however the predicted recompression index values are not satisfying compared to the compression index.
Estimation of the viscosities of liquid binary alloys
NASA Astrophysics Data System (ADS)
Wu, Min; Su, Xiang-Yu
2018-01-01
As one of the most important physical and chemical properties, viscosity plays a critical role in physics and materials as a key parameter to quantitatively understanding the fluid transport process and reaction kinetics in metallurgical process design. Experimental and theoretical studies on liquid metals are problematic. Today, there are many empirical and semi-empirical models available with which to evaluate the viscosity of liquid metals and alloys. However, the parameter of mixed energy in these models is not easily determined, and most predictive models have been poorly applied. In the present study, a new thermodynamic parameter Δ G is proposed to predict liquid alloy viscosity. The prediction equation depends on basic physical and thermodynamic parameters, namely density, melting temperature, absolute atomic mass, electro-negativity, electron density, molar volume, Pauling radius, and mixing enthalpy. Our results show that the liquid alloy viscosity predicted using the proposed model is closely in line with the experimental values. In addition, if the component radius difference is greater than 0.03 nm at a certain temperature, the atomic size factor has a significant effect on the interaction of the binary liquid metal atoms. The proposed thermodynamic parameter Δ G also facilitates the study of other physical properties of liquid metals.
Systematic approach to developing empirical interatomic potentials for III-N semiconductors
NASA Astrophysics Data System (ADS)
Ito, Tomonori; Akiyama, Toru; Nakamura, Kohji
2016-05-01
A systematic approach to the derivation of empirical interatomic potentials is developed for III-N semiconductors with the aid of ab initio calculations. The parameter values of empirical potential based on bond order potential are determined by reproducing the cohesive energy differences among 3-fold coordinated hexagonal, 4-fold coordinated zinc blende, wurtzite, and 6-fold coordinated rocksalt structures in BN, AlN, GaN, and InN. The bond order p is successfully introduced as a function of the coordination number Z in the form of p = a exp(-bZn ) if Z ≤ 4 and p = (4/Z)α if Z ≥ 4 in empirical interatomic potential. Moreover, the energy difference between wurtzite and zinc blende structures can be successfully evaluated by considering interaction beyond the second-nearest neighbors as a function of ionicity. This approach is feasible for developing empirical interatomic potentials applicable to a system consisting of poorly coordinated atoms at surfaces and interfaces including nanostructures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenn, Mark E.; Driscoll, Charles; Zhou, Qingtao
2015-01-01
Empirical and dynamic biogeochemical modelling are complementary approaches for determining the critical load (CL) of atmospheric nitrogen (N) or other constituent deposition that an ecosystem can tolerate without causing ecological harm. The greatest benefits are obtained when these approaches are used in combination. Confounding environmental factors can complicate the determination of empirical CLs across depositional gradients, while the experimental application of N amendments for estimating the CL does not realistically mimic the effects of chronic atmospheric N deposition. Biogeochemical and vegetation simulation models can provide CL estimates and valuable ecosystem response information, allowing for past and future scenario testing withmore » various combinations of environmental factors, pollutants, pollutant control options, land management, and ecosystem response parameters. Even so, models are fundamentally gross simplifications of the real ecosystems they attempt to simulate. Empirical approaches are vital as a check on simulations and CL estimates, to parameterize models, and to elucidate mechanisms and responses under real world conditions. In this chapter, we provide examples of empirical and modelled N CL approaches in ecosystems from three regions of the United States: mixed conifer forest, desert scrub and pinyon- juniper woodland in California; alpine catchments in the Rocky Mountains; and lakes in the Adirondack region of New York state.« less
Marto, Aminaton; Jahed Armaghani, Danial; Tonnizam Mohamad, Edy; Makhtar, Ahmad Mahir
2014-01-01
Flyrock is one of the major disturbances induced by blasting which may cause severe damage to nearby structures. This phenomenon has to be precisely predicted and subsequently controlled through the changing in the blast design to minimize potential risk of blasting. The scope of this study is to predict flyrock induced by blasting through a novel approach based on the combination of imperialist competitive algorithm (ICA) and artificial neural network (ANN). For this purpose, the parameters of 113 blasting operations were accurately recorded and flyrock distances were measured for each operation. By applying the sensitivity analysis, maximum charge per delay and powder factor were determined as the most influential parameters on flyrock. In the light of this analysis, two new empirical predictors were developed to predict flyrock distance. For a comparison purpose, a predeveloped backpropagation (BP) ANN was developed and the results were compared with those of the proposed ICA-ANN model and empirical predictors. The results clearly showed the superiority of the proposed ICA-ANN model in comparison with the proposed BP-ANN model and empirical approaches. PMID:25147856
Marto, Aminaton; Hajihassani, Mohsen; Armaghani, Danial Jahed; Mohamad, Edy Tonnizam; Makhtar, Ahmad Mahir
2014-01-01
Flyrock is one of the major disturbances induced by blasting which may cause severe damage to nearby structures. This phenomenon has to be precisely predicted and subsequently controlled through the changing in the blast design to minimize potential risk of blasting. The scope of this study is to predict flyrock induced by blasting through a novel approach based on the combination of imperialist competitive algorithm (ICA) and artificial neural network (ANN). For this purpose, the parameters of 113 blasting operations were accurately recorded and flyrock distances were measured for each operation. By applying the sensitivity analysis, maximum charge per delay and powder factor were determined as the most influential parameters on flyrock. In the light of this analysis, two new empirical predictors were developed to predict flyrock distance. For a comparison purpose, a predeveloped backpropagation (BP) ANN was developed and the results were compared with those of the proposed ICA-ANN model and empirical predictors. The results clearly showed the superiority of the proposed ICA-ANN model in comparison with the proposed BP-ANN model and empirical approaches.
Empirical mass-loss rates for 25 O and early B stars, derived from Copernicus observations
NASA Technical Reports Server (NTRS)
Gathier, R.; Lamers, H. J. G. L. M.; Snow, T. P.
1981-01-01
Ultraviolet line profiles are fitted with theoretical line profiles in the cases of 25 stars covering a spectral type range from O4 to B1, including all luminosity classes. Ion column densities are compared for the determination of wind ionization, and it is found that the O VI/N V ratio is dependent on the mean density of the wind and not on effective temperature value, while the Si IV/N V ratio is temperature-dependent. The column densities are used to derive a mass-loss rate parameter that is empirically correlated against the mass-loss rate by means of standard stars with well-determined rates from IR or radio data. The empirical mass-loss rates obtained are compared with those derived by others and found to vary by as much as a factor of 10, which is shown to be due to uncertainties or errors in the ionization fractions of models used for wind ionization balance prediction.
An automatic and effective parameter optimization method for model tuning
NASA Astrophysics Data System (ADS)
Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.
2015-11-01
Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metrics. Different from the traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial value for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9 %. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach
ERIC Educational Resources Information Center
Rotondi, Michael A.; Donner, Allan
2009-01-01
The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
Depth-Related Gradients of Viral Activity in Lake Pavin
Colombet, J.; Sime-Ngando, T.; Cauchie, H. M.; Fonty, G.; Hoffmann, L.; Demeure, G.
2006-01-01
High-resolution vertical sampling and determination of viral and prokaryotic parameters in a deep volcanic lake shows that in the absence of thermal stratification but within light, oxygen, and chlorophyll gradients, host availability empirically is prevalent over the physical and chemical environments and favors lytic over lysogenic “viral life cycles.” PMID:16751565
Compounding approach for univariate time series with nonstationary variances
NASA Astrophysics Data System (ADS)
Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich
2015-12-01
A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
Compounding approach for univariate time series with nonstationary variances.
Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich
2015-12-01
A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
An empirical Bayes approach for the Poisson life distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1973-01-01
A smooth empirical Bayes estimator is derived for the intensity parameter (hazard rate) in the Poisson distribution as used in life testing. The reliability function is also estimated either by using the empirical Bayes estimate of the parameter, or by obtaining the expectation of the reliability function. The behavior of the empirical Bayes procedure is studied through Monte Carlo simulation in which estimates of mean-squared errors of the empirical Bayes estimators are compared with those of conventional estimators such as minimum variance unbiased or maximum likelihood. Results indicate a significant reduction in mean-squared error of the empirical Bayes estimators over the conventional variety.
Analysis of Ion Composition Estimation Accuracy for Incoherent Scatter Radars
NASA Astrophysics Data System (ADS)
Martínez Ledesma, M.; Diaz, M. A.
2017-12-01
The Incoherent Scatter Radar (ISR) is one of the most powerful sounding methods developed to estimate the Ionosphere. This radar system determines the plasma parameters by sending powerful electromagnetic pulses to the Ionosphere and analyzing the received backscatter. This analysis provides information about parameters such as electron and ion temperatures, electron densities, ion composition, and ion drift velocities. Nevertheless in some cases the ISR analysis has ambiguities in the determination of the plasma characteristics. It is of particular relevance the ion composition and temperature ambiguity obtained between the F1 and the lower F2 layers. In this case very similar signals are obtained with different mixtures of molecular ions (NO2+ and O2+) and atomic oxygen ions (O+), and consequently it is not possible to completely discriminate between them. The most common solution to solve this problem is the use of empirical or theoretical models of the ionosphere in the fitting of ambiguous data. More recent works take use of parameters estimated from the Plasma Line band of the radar to reduce the number of parameters to determine. In this work we propose to determine the error estimation of the ion composition ambiguity when using Plasma Line electron density measurements. The sensibility of the ion composition estimation has been also calculated depending on the accuracy of the ionospheric model, showing that the correct estimation is highly dependent on the capacity of the model to approximate the real values. Monte Carlo simulations of data fitting at different signal to noise (SNR) ratios have been done to obtain valid and invalid estimation probability curves. This analysis provides a method to determine the probability of erroneous estimation for different signal fluctuations. Also it can be used as an empirical method to compare the efficiency of the different algorithms and methods on when solving the ion composition ambiguity.
Parameter Optimization of PAL-XFEL Injector
NASA Astrophysics Data System (ADS)
Lee, Jaehyun; Ko, In Soo; Han, Jang-Hui; Hong, Juho; Yang, Haeryong; Min, Chang Ki; Kang, Heung-Sik
2018-05-01
A photoinjector is used as the electron source to generate a high peak current and low emittance beam for an X-ray free electron laser (FEL). The beam emittance is one of the critical parameters to determine the FEL performance together with the slice energy spread and the peak current. The Pohang Accelerator Laboratory X-ray Free Electron Laser (PAL-XFEL) was constructed in 2015, and the beam commissioning was carried out in spring 2016. The injector is running routinely for PAL-XFEL user operation. The operational parameters of the injector have been optimized experimentally, and these are somewhat different from the originally designed ones. Therefore, we study numerically the injector parameters based on the empirically optimized parameters and review the present operating condition.
Minimal residual method provides optimal regularization parameter for diffuse optical tomography
NASA Astrophysics Data System (ADS)
Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.
2012-10-01
The inverse problem in the diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of regularization parameter that is based on regularized minimal residual method (MRM) is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
Minimal residual method provides optimal regularization parameter for diffuse optical tomography.
Jagannath, Ravi Prasad K; Yalavarthy, Phaneendra K
2012-10-01
The inverse problem in the diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of regularization parameter that is based on regularized minimal residual method (MRM) is proposed and is compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
NASA Astrophysics Data System (ADS)
Coronel-Brizio, H. F.; Hernández-Montoya, A. R.
2005-08-01
The so-called Pareto-Levy or power-law distribution has been successfully used as a model to describe probabilities associated to extreme variations of stock markets indexes worldwide. The selection of the threshold parameter from empirical data and consequently, the determination of the exponent of the distribution, is often done using a simple graphical method based on a log-log scale, where a power-law probability plot shows a straight line with slope equal to the exponent of the power-law distribution. This procedure can be considered subjective, particularly with regard to the choice of the threshold or cutoff parameter. In this work, a more objective procedure based on a statistical measure of discrepancy between the empirical and the Pareto-Levy distribution is presented. The technique is illustrated for data sets from the New York Stock Exchange (DJIA) and the Mexican Stock Market (IPC).
Probabilistic analysis of tsunami hazards
Geist, E.L.; Parsons, T.
2006-01-01
Determining the likelihood of a disaster is a key component of any comprehensive hazard assessment. This is particularly true for tsunamis, even though most tsunami hazard assessments have in the past relied on scenario or deterministic type models. We discuss probabilistic tsunami hazard analysis (PTHA) from the standpoint of integrating computational methods with empirical analysis of past tsunami runup. PTHA is derived from probabilistic seismic hazard analysis (PSHA), with the main difference being that PTHA must account for far-field sources. The computational methods rely on numerical tsunami propagation models rather than empirical attenuation relationships as in PSHA in determining ground motions. Because a number of source parameters affect local tsunami runup height, PTHA can become complex and computationally intensive. Empirical analysis can function in one of two ways, depending on the length and completeness of the tsunami catalog. For site-specific studies where there is sufficient tsunami runup data available, hazard curves can primarily be derived from empirical analysis, with computational methods used to highlight deficiencies in the tsunami catalog. For region-wide analyses and sites where there are little to no tsunami data, a computationally based method such as Monte Carlo simulation is the primary method to establish tsunami hazards. Two case studies that describe how computational and empirical methods can be integrated are presented for Acapulco, Mexico (site-specific) and the U.S. Pacific Northwest coastline (region-wide analysis).
Equation of state for dense nucleonic matter from metamodeling. I. Foundational aspects
NASA Astrophysics Data System (ADS)
Margueron, Jérôme; Hoffmann Casali, Rudiney; Gulminelli, Francesca
2018-02-01
Metamodeling for the nucleonic equation of state (EOS), inspired from a Taylor expansion around the saturation density of symmetric nuclear matter, is proposed and parameterized in terms of the empirical parameters. The present knowledge of nuclear empirical parameters is first reviewed in order to estimate their average values and associated uncertainties, and thus defining the parameter space of the metamodeling. They are divided into isoscalar and isovector types, and ordered according to their power in the density expansion. The goodness of the metamodeling is analyzed against the predictions of the original models. In addition, since no correlation among the empirical parameters is assumed a priori, all arbitrary density dependences can be explored, which might not be accessible in existing functionals. Spurious correlations due to the assumed functional form are also removed. This meta-EOS allows direct relations between the uncertainties on the empirical parameters and the density dependence of the nuclear equation of state and its derivatives, and the mapping between the two can be done with standard Bayesian techniques. A sensitivity analysis shows that the more influential empirical parameters are the isovector parameters Lsym and Ksym, and that laboratory constraints at supersaturation densities are essential to reduce the present uncertainties. The present metamodeling for the EOS for nuclear matter is proposed for further applications in neutron stars and supernova matter.
ERIC Educational Resources Information Center
Leung, Kim Chau
2015-01-01
Previous meta-analyses of the effects of peer tutoring on academic achievement have been plagued with theoretical and methodological flaws. Specifically, these studies have not adopted both fixed and mixed effects models for analyzing the effect size; they have not evaluated the moderating effect of some commonly used parameters, such as comparing…
NASA Technical Reports Server (NTRS)
Young, Richard D.; Rose, Cheryl A.; Starnes, James H., Jr.
2000-01-01
Results of a geometrically nonlinear finite element parametric study to determine curvature correction factors or bulging factors that account for increased stresses due to curvature for longitudinal and circumferential cracks in unstiffened pressurized cylindrical shells are presented. Geometric parameters varied in the study include the shell radius, the shell wall thickness, and the crack length. The major results are presented in the form of contour plots of the bulging factor as a function of two nondimensional parameters: the shell curvature parameter, lambda, which is a function of the shell geometry, Poisson's ratio, and the crack length; and a loading parameter, eta, which is a function of the shell geometry, material properties, and the applied internal pressure. These plots identify the ranges of the shell curvature and loading parameters for which the effects of geometric nonlinearity are significant. Simple empirical expressions for the bulging factor are then derived from the numerical results and shown to predict accurately the nonlinear response of shells with longitudinal and circumferential cracks. The numerical results are also compared with analytical solutions based on linear shallow shell theory for thin shells, and with some other semi-empirical solutions from the literature, and limitations on the use of these other expressions are suggested.
El-Naas, Muftah H; Alhaija, Manal A; Al-Zuhair, Sulaiman
2017-03-01
The performance of an adsorption column packed with granular activated carbon was evaluated for the removal of phenols from refinery wastewater. The effects of phenol feed concentration (80-182 mg/l), feed flow rate (5-20 ml/min), and activated carbon packing mass (5-15 g) on the breakthrough characteristics of the adsorption system were determined. The continuous adsorption process was simulated using batch data and the parameters for a new empirical model were determined. Different dynamic models such as Adams-Bohart, Wolborsko, Thomas, and Yoon-Nelson models were also fitted to the experimental data for the sake of comparison. The empirical, Yoon-Nelson and Thomas models showed a high degree of fitting at different operation conditions, with the empirical model giving the best fit based on the Akaike information criterion (AIC). At an initial phenol concentration of 175 mg/l, packing mass of 10 g, a flow rate of 10 ml/min and a temperature of 25 °C, the SSE of the new empirical and Thomas models were identical (248.35) and very close to that of the Yoon-Nelson model (259.49). The values were significantly lower than that of the Adams-Bohart model, which was determined to be 19,358.48. The superiority of the new empirical model and the Thomas model was also confirmed from the values of the R 2 and AIC, which were 0.99 and 38.3, respectively, compared to 0.92 and 86.2 for Adams-Bohart model.
USAF (United States Air Force) Stability and Control DATCOM (Data Compendium)
1978-04-01
regression analysis involves the study of a group of variables to determine their effect on a given parameter. Because of the empirical nature of this...regression analysis of mathematical statistics. In general, a regression analysis involves the study of a group of variables to determine their effect on a...Excperiment, OSR TN 58-114, MIT Fluid Dynamics Research Group Rapt. 57-5, 1957. (U) 90. Kennet, H., and Ashley, H.: Review of Unsteady Aerodynamic Studies in
Towards a smart non-invasive fluid loss measurement system.
Suryadevara, N K; Mukhopadhyay, S C; Barrack, L
2015-04-01
In this article, a smart wireless sensing non-invasive system for estimating the amount of fluid loss, a person experiences while physical activity is presented. The system measures three external body parameters, Heart Rate, Galvanic Skin Response (GSR, or skin conductance), and Skin Temperature. These three parameters are entered into an empirically derived formula along with the user's body mass index, and estimation for the amount of fluid lost is determined. The core benefit of the developed system is the affluence usage in combining with smart home monitoring systems to care elderly people in ambient assisted living environments as well in automobiles to monitor the body parameters of a motorist.
NASA Astrophysics Data System (ADS)
Kumar, Pradeep; Dutta, B. K.; Chattopadhyay, J.
2017-04-01
The miniaturized specimens are used to determine mechanical properties of the materials, such as yield stress, ultimate stress, fracture toughness etc. Use of such specimens is essential whenever limited quantity of material is available for testing, such as aged/irradiated materials. The miniaturized small punch test (SPT) is a technique which is widely used to determine change in mechanical properties of the materials. Various empirical correlations are proposed in the literature to determine the value of fracture toughness (JIC) using this technique. bi-axial fracture strain is determined using SPT tests. This parameter is then used to determine JIC using available empirical correlations. The correlations between JIC and biaxial fracture strain quoted in the literature are based on experimental data acquired for large number of materials. There are number of such correlations available in the literature, which are generally not in agreement with each other. In the present work, an attempt has been made to determine the correlation between biaxial fracture strain (εqf) and crack initiation toughness (Ji) numerically. About one hundred materials are digitally generated by varying yield stress, ultimate stress, hardening coefficient and Gurson parameters. Such set of each material is then used to analyze a SPT specimen and a standard TPB specimen. Analysis of SPT specimen generated biaxial fracture strain (εqf) and analysis of TPB specimen generated value of Ji. A graph is then plotted between these two parameters for all the digitally generated materials. The best fit straight line determines the correlation. It has been also observed that it is possible to have variation in Ji for the same value of biaxial fracture strain (εqf) within a limit. Such variation in the value of Ji has been also ascertained using the graph. Experimental SPT data acquired earlier for three materials were then used to get Ji by using newly developed correlation. A reasonable comparison of calculated Ji with the values quoted in literature confirmed usefulness of the correlation.
Determination of a Limited Scope Network's Lightning Detection Efficiency
NASA Technical Reports Server (NTRS)
Rompala, John T.; Blakeslee, R.
2008-01-01
This paper outlines a modeling technique to map lightning detection efficiency variations over a region surveyed by a sparse array of ground based detectors. A reliable flash peak current distribution (PCD) for the region serves as the technique's base. This distribution is recast as an event probability distribution function. The technique then uses the PCD together with information regarding: site signal detection thresholds, type of solution algorithm used, and range attenuation; to formulate the probability that a flash at a specified location will yield a solution. Applying this technique to the full region produces detection efficiency contour maps specific to the parameters employed. These contours facilitate a comparative analysis of each parameter's effect on the network's detection efficiency. In an alternate application, this modeling technique gives an estimate of the number, strength, and distribution of events going undetected. This approach leads to a variety of event density contour maps. This application is also illustrated. The technique's base PCD can be empirical or analytical. A process for formulating an empirical PCD specific to the region and network being studied is presented. A new method for producing an analytical representation of the empirical PCD is also introduced.
Study of Parameters And Methods of LL-Ⅳ Distributed Hydrological Model in DMIP2
NASA Astrophysics Data System (ADS)
Li, L.; Wu, J.; Wang, X.; Yang, C.; Zhao, Y.; Zhou, H.
2008-05-01
: The Physics-based distributed hydrological model is considered as an important developing period from the traditional experience-hydrology to the physical hydrology. The Hydrology Laboratory of the NOAA National Weather Service proposes the first and second phase of the Distributed Model Intercomparison Project (DMIP),that it is a great epoch-making work. LL distributed hydrological model has been developed to the fourth generation since it was established in 1997 on the Fengman-I district reservoir area (11000 km2).The LL-I distributed hydrological model was born with the applications of flood control system in the Fengman-I in China. LL-II was developed under the DMIP-I support, it is combined with GIS, RS, GPS, radar rainfall measurement.LL-III was established along with Applications of LL Distributed Model on Water Resources which was supported by the 973-projects of The Ministry of Science and Technology of the People's Republic of China. LL-Ⅳ was developed to face China's water problem. Combined with Blue River and the Baron Fork River basin of DMIP-II, the convection-diffusion equation of non-saturated and saturated seepage was derived from the soil water dynamics and continuous equation. In view of the technical characteristics of the model, the advantage of using convection-diffusion equation to compute confluence overall is longer period of predictable, saving memory space, fast budgeting, clear physical concepts, etc. The determination of parameters of hydrological model is the key, including experience coefficients and parameters of physical parameters. There are methods of experience, inversion, and the optimization to determine the model parameters, and each has advantages and disadvantages. This paper briefly introduces the LL-Ⅳ distribution hydrological model equations, and particularly introduces methods of parameters determination and simulation results on Blue River and Baron Fork River basin for DMIP-II. The soil moisture diffusion coefficient and coefficient of hydraulic conductivity are involved all through the LL-Ⅳ distribution of runoff and slope convergence model, used mainly empirical formula to determine. It's used optimization methods to calculate the two parameters of evaporation capacity (coefficient of bare land and vegetation land), two parameters of interception and wave velocity of Overland Flow, interflow and groundwater. The approach of determining wave velocity of River Network confluence and diffusion coefficient is: 1. Estimate roughness based mainly on digital information such as land use, soil texture, etc. 2.Establish the empirical formula. Another method is called convection-diffusion numerical inversion.
NASA Astrophysics Data System (ADS)
Piecuch, C. G.; Huybers, P. J.; Tingley, M.
2016-12-01
Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy, biased, and gappy, featuring missing values, and reflecting land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirically Bayesian in nature: model parameters are treated as known and assigned point values. But, in reality, parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data, comparing to a full Bayes method that treats parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time; the 95% credible interval on sea level values from the empirical Bayes method in 1910 and 2010 is 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, empirical Bayes and full Bayes methods are applied to corrupted data of a known surrogate field. Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval only encompasses 70% of true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.
Bias-dependent hybrid PKI empirical-neural model of microwave FETs
NASA Astrophysics Data System (ADS)
Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera
2011-10-01
Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point. Bias-dependent analysis requires repeated extractions of the model parameters for each bias point. In order to make model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model including noise developed for one bias point and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing bias dependency of scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs involves the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid in the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron mobility transistor device.
Process Parameter Optimization for Wobbling Laser Spot Welding of Ti6Al4V Alloy
NASA Astrophysics Data System (ADS)
Vakili-Farahani, F.; Lungershausen, J.; Wasmer, K.
Laser beam welding (LBW) coupled with "wobble effect" (fast oscillation of the laser beam) is very promising for high precision micro-joining industry. For this process, similarly to the conventional LBW, the laser welding process parameters play a very significant role in determining the quality of a weld joint. Consequently, four process parameters (laser power, wobble frequency, number of rotations within a single laser pulse and focused position) and 5 responses (penetration, width, heat affected zone (HAZ), area of the fusion zone, area of HAZ and hardness) were investigated for spot welding of Ti6Al4V alloy (grade 5) using a design of experiments (DoE) approach. This paper presents experimental results showing the effects of variating the considered most important process parameters on the spot weld quality of Ti6Al4V alloy. Semi-empirical mathematical models were developed to correlate laser welding parameters to each of the measured weld responses. Adequacies of the models were then examined by various methods such as ANOVA. These models not only allows a better understanding of the wobble laser welding process and predict the process performance but also determines optimal process parameters. Therefore, optimal combination of process parameters was determined considering certain quality criteria set.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram
Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empiricallymore » measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.« less
Agent-Based Model with Asymmetric Trading and Herding for Complex Financial Systems
Chen, Jun-Jie; Zheng, Bo; Tan, Lei
2013-01-01
Background For complex financial systems, the negative and positive return-volatility correlations, i.e., the so-called leverage and anti-leverage effects, are particularly important for the understanding of the price dynamics. However, the microscopic origination of the leverage and anti-leverage effects is still not understood, and how to produce these effects in agent-based modeling remains open. On the other hand, in constructing microscopic models, it is a promising conception to determine model parameters from empirical data rather than from statistical fitting of the results. Methods To study the microscopic origination of the return-volatility correlation in financial systems, we take into account the individual and collective behaviors of investors in real markets, and construct an agent-based model. The agents are linked with each other and trade in groups, and particularly, two novel microscopic mechanisms, i.e., investors’ asymmetric trading and herding in bull and bear markets, are introduced. Further, we propose effective methods to determine the key parameters in our model from historical market data. Results With the model parameters determined for six representative stock-market indices in the world, respectively, we obtain the corresponding leverage or anti-leverage effect from the simulation, and the effect is in agreement with the empirical one on amplitude and duration. At the same time, our model produces other features of the real markets, such as the fat-tail distribution of returns and the long-term correlation of volatilities. Conclusions We reveal that for the leverage and anti-leverage effects, both the investors’ asymmetric trading and herding are essential generation mechanisms. Among the six markets, however, the investors’ trading is approximately symmetric for the five markets which exhibit the leverage effect, thus contributing very little. These two microscopic mechanisms and the methods for the determination of the key parameters can be applied to other complex systems with similar asymmetries. PMID:24278146
Agent-based model with asymmetric trading and herding for complex financial systems.
Chen, Jun-Jie; Zheng, Bo; Tan, Lei
2013-01-01
For complex financial systems, the negative and positive return-volatility correlations, i.e., the so-called leverage and anti-leverage effects, are particularly important for the understanding of the price dynamics. However, the microscopic origination of the leverage and anti-leverage effects is still not understood, and how to produce these effects in agent-based modeling remains open. On the other hand, in constructing microscopic models, it is a promising conception to determine model parameters from empirical data rather than from statistical fitting of the results. To study the microscopic origination of the return-volatility correlation in financial systems, we take into account the individual and collective behaviors of investors in real markets, and construct an agent-based model. The agents are linked with each other and trade in groups, and particularly, two novel microscopic mechanisms, i.e., investors' asymmetric trading and herding in bull and bear markets, are introduced. Further, we propose effective methods to determine the key parameters in our model from historical market data. With the model parameters determined for six representative stock-market indices in the world, respectively, we obtain the corresponding leverage or anti-leverage effect from the simulation, and the effect is in agreement with the empirical one on amplitude and duration. At the same time, our model produces other features of the real markets, such as the fat-tail distribution of returns and the long-term correlation of volatilities. We reveal that for the leverage and anti-leverage effects, both the investors' asymmetric trading and herding are essential generation mechanisms. Among the six markets, however, the investors' trading is approximately symmetric for the five markets which exhibit the leverage effect, thus contributing very little. These two microscopic mechanisms and the methods for the determination of the key parameters can be applied to other complex systems with similar asymmetries.
Flow processes in overexpanded chemical rocket nozzles. Part 1: Flow separation
NASA Technical Reports Server (NTRS)
Schmucker, R. H.
1984-01-01
An investigation was made of published nozzle flow separation data in order to determine the parameters which affect the separation conditions. A comparison of experimental data with empirical and theoretical separation prediction methods leads to the selection of suitable equations for the separation criterion. The results were used to predict flow separation of the main space shuttle engine.
Flow processes in overexpanded chemical rocket nozzles. Part 1: Flow separation
NASA Technical Reports Server (NTRS)
Schmucker, R. H.
1973-01-01
An investigation was made of published nozzle flow separation data in order to determine the parameters which affect the separation condition. A comparison of experimental data with empirical and theoretical separation prediction methods leads to the selection of suitable equations for the separation criterion. The results were used to predict flow separation of the main space shuttle engine.
ERIC Educational Resources Information Center
Oyetoro, Oyebode Stephen; Ojo, Oloyede Ezekiel
2017-01-01
The study determined a significant difference in teachers' overall evaluations of six recommended Financial Accounting Textbooks in Southwestern Nigeria. It also assessed the specific evaluation parameters that account for the difference. It adopted the survey research design. The multistage sampling technique was used to select a total of 80…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pei, Zongrui; Stocks, George Malcolm
The sensitivity in predicting glide behaviour of dislocations has been a long-standing problem in the framework of the Peierls-Nabarro model. The predictions of both the model itself and the analytic formulas based on it are too sensitive to the input parameters. In order to reveal the origin of this important problem in materials science, a new empirical-parameter-free formulation is proposed in the same framework. Unlike previous formulations, it includes only a limited small set of parameters all of which can be determined by convergence tests. Under special conditions the new formulation is reduced to its classic counterpart. In the lightmore » of this formulation, new relationships between Peierls stresses and the input parameters are identified, where the sensitivity is greatly reduced or even removed.« less
Free energy minimization to predict RNA secondary structures and computational RNA design.
Churkin, Alexander; Weinbrand, Lina; Barash, Danny
2015-01-01
Determining the RNA secondary structure from sequence data by computational predictions is a long-standing problem. Its solution has been approached in two distinctive ways. If a multiple sequence alignment of a collection of homologous sequences is available, the comparative method uses phylogeny to determine conserved base pairs that are more likely to form as a result of billions of years of evolution than by chance. In the case of single sequences, recursive algorithms that compute free energy structures by using empirically derived energy parameters have been developed. This latter approach of RNA folding prediction by energy minimization is widely used to predict RNA secondary structure from sequence. For a significant number of RNA molecules, the secondary structure of the RNA molecule is indicative of its function and its computational prediction by minimizing its free energy is important for its functional analysis. A general method for free energy minimization to predict RNA secondary structures is dynamic programming, although other optimization methods have been developed as well along with empirically derived energy parameters. In this chapter, we introduce and illustrate by examples the approach of free energy minimization to predict RNA secondary structures.
Undersampling power-law size distributions: effect on the assessment of extreme natural hazards
Geist, Eric L.; Parsons, Thomas E.
2014-01-01
The effect of undersampling on estimating the size of extreme natural hazards from historical data is examined. Tests using synthetic catalogs indicate that the tail of an empirical size distribution sampled from a pure Pareto probability distribution can range from having one-to-several unusually large events to appearing depleted, relative to the parent distribution. Both of these effects are artifacts caused by limited catalog length. It is more difficult to diagnose the artificially depleted empirical distributions, since one expects that a pure Pareto distribution is physically limited in some way. Using maximum likelihood methods and the method of moments, we estimate the power-law exponent and the corner size parameter of tapered Pareto distributions for several natural hazard examples: tsunamis, floods, and earthquakes. Each of these examples has varying catalog lengths and measurement thresholds, relative to the largest event sizes. In many cases where there are only several orders of magnitude between the measurement threshold and the largest events, joint two-parameter estimation techniques are necessary to account for estimation dependence between the power-law scaling exponent and the corner size parameter. Results indicate that whereas the corner size parameter of a tapered Pareto distribution can be estimated, its upper confidence bound cannot be determined and the estimate itself is often unstable with time. Correspondingly, one cannot statistically reject a pure Pareto null hypothesis using natural hazard catalog data. Although physical limits to the hazard source size and by attenuation mechanisms from source to site constrain the maximum hazard size, historical data alone often cannot reliably determine the corner size parameter. Probabilistic assessments incorporating theoretical constraints on source size and propagation effects are preferred over deterministic assessments of extreme natural hazards based on historic data.
Alternative Approaches to Evaluation in Empirical Microeconomics
ERIC Educational Resources Information Center
Blundell, Richard; Dias, Monica Costa
2009-01-01
This paper reviews some of the most popular policy evaluation methods in empirical microeconomics: social experiments, natural experiments, matching, instrumental variables, discontinuity design, and control functions. It discusses identification of traditionally used average parameters and more complex distributional parameters. The adequacy,…
NASA Astrophysics Data System (ADS)
Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; Hsieh, I. H.
2017-12-01
This study develops an innovative calibration method for regional groundwater modeling by using multi-class empirical orthogonal functions (EOFs). The developed method is an iterative approach. Prior to carrying out the iterative procedures, the groundwater storage hydrographs associated with the observation wells are calculated. The combined multi-class EOF amplitudes and EOF expansion coefficients of the storage hydrographs are then used to compute the initial gauss of the temporal and spatial pattern of multiple recharges. The initial guess of the hydrogeological parameters are also assigned according to in-situ pumping experiment. The recharges include net rainfall recharge and boundary recharge, and the hydrogeological parameters are riverbed leakage conductivity, horizontal hydraulic conductivity, vertical hydraulic conductivity, storage coefficient, and specific yield. The first step of the iterative algorithm is to conduct the numerical model (i.e. MODFLOW) by the initial guess / adjusted values of the recharges and parameters. Second, in order to determine the best EOF combination of the error storage hydrographs for determining the correction vectors, the objective function is devised as minimizing the root mean square error (RMSE) of the simulated storage hydrographs. The error storage hydrograph are the differences between the storage hydrographs computed from observed and simulated groundwater level fluctuations. Third, adjust the values of recharges and parameters and repeat the iterative procedures until the stopping criterion is reached. The established methodology was applied to the groundwater system of Ming-Chu Basin, Taiwan. The study period is from January 1st to December 2ed in 2012. Results showed that the optimal EOF combination for the multiple recharges and hydrogeological parameters can decrease the RMSE of the simulated storage hydrographs dramatically within three calibration iterations. It represents that the iterative approach that using EOF techniques can capture the groundwater flow tendency and detects the correction vector of the simulated error sources. Hence, the established EOF-based methodology can effectively and accurately identify the multiple recharges and hydrogeological parameters.
Estimation and confidence intervals for empirical mixing distributions
Link, W.A.; Sauer, J.R.
1995-01-01
Questions regarding collections of parameter estimates can frequently be expressed in terms of an empirical mixing distribution (EMD). This report discusses empirical Bayes estimation of an EMD, with emphasis on the construction of interval estimates. Estimation of the EMD is accomplished by substitution of estimates of prior parameters in the posterior mean of the EMD. This procedure is examined in a parametric model (the normal-normal mixture) and in a semi-parametric model. In both cases, the empirical Bayes bootstrap of Laird and Louis (1987, Journal of the American Statistical Association 82, 739-757) is used to assess the variability of the estimated EMD arising from the estimation of prior parameters. The proposed methods are applied to a meta-analysis of population trend estimates for groups of birds.
The First Empirical Determination of the Fe10+ and Fe13+ Freeze-in Distances in the Solar Corona
NASA Astrophysics Data System (ADS)
Boe, Benjamin; Habbal, Shadia; Druckmüller, Miloslav; Landi, Enrico; Kourkchi, Ehsan; Ding, Adalbert; Starha, Pavel; Hutton, Joseph
2018-06-01
Heavy ions are markers of the physical processes responsible for the density and temperature distribution throughout the fine-scale magnetic structures that define the shape of the solar corona. One of their properties, whose empirical determination has remained elusive, is the “freeze-in” distance (R f ) where they reach fixed ionization states that are adhered to during their expansion with the solar wind. We present the first empirical inference of R f for {Fe}}{10+} and {Fe}}{13+} derived from multi-wavelength imaging observations of the corresponding Fe XI ({Fe}}{10+}) 789.2 nm and Fe XIV ({Fe}}{13+}) 530.3 nm emission acquired during the 2015 March 20 total solar eclipse. We find that the two ions freeze-in at different heliocentric distances. In polar coronal holes (CHs) R f is around 1.45 R ⊙ for {Fe}}{10+} and below 1.25 R ⊙ for {Fe}}{13+}. Along open field lines in streamer regions, R f ranges from 1.4 to 2 R ⊙ for {Fe}}{10+} and from 1.5 to 2.2 R ⊙ for {Fe}}{13+}. These first empirical R f values: (1) reflect the differing plasma parameters between CHs and streamers and structures within them, including prominences and coronal mass ejections; (2) are well below the currently quoted values derived from empirical model studies; and (3) place doubt on the reliability of plasma diagnostics based on the assumption of ionization equilibrium beyond 1.2 R ⊙.
The effect of mechanical drawing on optical and structural properties of nylon 6 fibres
NASA Astrophysics Data System (ADS)
El-Bakary, M. A.
2007-09-01
The Pluta polarizing double-refracting interference microscope was attached to a mechanical drawing device to study the effect of cold drawing on the optical and structural properties of nylon 6 fibres. The microscope was used in its two positions for determining the refractive indices and birefringence of fibres. Different applied stresses and strain rates were obtained using the mechanical-drawing device. The effect of the applied stresses on the optical and physical parameters was investigated. The resulting optical parameters were utilized to investigate the polarizability per unit volume, the optical orientation factor, the orientation angle and the average work per chain. The refractive index and birefringence profiles were measured. Relationships between the average work per chain and optical parameters at different strains rates were determined. An empirical formula was deduced for these fibres. Micro-interferograms are given for illustration.
Inamdar, Shaukatali N; Ingole, Pravin P; Haram, Santosh K
2008-12-01
Band structure parameters such as the conduction band edge, the valence band edge and the quasi-particle gap of diffusing CdSe quantum dots (Q-dots) of various sizes were determined using cyclic voltammetry. These parameters are strongly dependent on the size of the Q-dots. The results obtained from voltammetric measurements are compared to spectroscopic and theoretical data. The fit obtained to the reported calculations based on the semi-empirical pseudopotential method (SEPM)-especially in the strong size-confinement region, is the best reported so far, according to our knowledge. For the smallest CdSe Q-dots, the difference between the quasi-particle gap and the optical band gap gives the electron-hole Coulombic interaction energy (J(e1,h1)). Interband states seen in the photoluminescence spectra were verified with cyclic voltammetry measurements.
STEAM: a software tool based on empirical analysis for micro electro mechanical systems
NASA Astrophysics Data System (ADS)
Devasia, Archana; Pasupuleti, Ajay; Sahin, Ferat
2006-03-01
In this research a generalized software framework that enables accurate computer aided design of MEMS devices is developed. The proposed simulation engine utilizes a novel material property estimation technique that generates effective material properties at the microscopic level. The material property models were developed based on empirical analysis and the behavior extraction of standard test structures. A literature review is provided on the physical phenomena that govern the mechanical behavior of thin films materials. This survey indicates that the present day models operate under a wide range of assumptions that may not be applicable to the micro-world. Thus, this methodology is foreseen to be an essential tool for MEMS designers as it would develop empirical models that relate the loading parameters, material properties, and the geometry of the microstructures with its performance characteristics. This process involves learning the relationship between the above parameters using non-parametric learning algorithms such as radial basis function networks and genetic algorithms. The proposed simulation engine has a graphical user interface (GUI) which is very adaptable, flexible, and transparent. The GUI is able to encompass all parameters associated with the determination of the desired material property so as to create models that provide an accurate estimation of the desired property. This technique was verified by fabricating and simulating bilayer cantilevers consisting of aluminum and glass (TEOS oxide) in our previous work. The results obtained were found to be very encouraging.
Semi-empirical master curve concept describing the rate capability of lithium insertion electrodes
NASA Astrophysics Data System (ADS)
Heubner, C.; Seeba, J.; Liebmann, T.; Nickol, A.; Börner, S.; Fritsch, M.; Nikolowski, K.; Wolter, M.; Schneider, M.; Michaelis, A.
2018-03-01
A simple semi-empirical master curve concept, describing the rate capability of porous insertion electrodes for lithium-ion batteries, is proposed. The model is based on the evaluation of the time constants of lithium diffusion in the liquid electrolyte and the solid active material. This theoretical approach is successfully verified by comprehensive experimental investigations of the rate capability of a large number of porous insertion electrodes with various active materials and design parameters. It turns out, that the rate capability of all investigated electrodes follows a simple master curve governed by the time constant of the rate limiting process. We demonstrate that the master curve concept can be used to determine optimum design criteria meeting specific requirements in terms of maximum gravimetric capacity for a desired rate capability. The model further reveals practical limits of the electrode design, attesting the empirically well-known and inevitable tradeoff between energy and power density.
What Drives the Variability of the Mid-Latitude Ionosphere?
NASA Astrophysics Data System (ADS)
Goncharenko, L. P.; Zhang, S.; Erickson, P. J.; Harvey, L.; Spraggs, M. E.; Maute, A. I.
2016-12-01
The state of the ionosphere is determined by the superposition of the regular changes and stochastic variations of the ionospheric parameters. Regular variations are represented by diurnal, seasonal and solar cycle changes, and can be well described by empirical models. Short-term perturbations that vary from a few seconds to a few hours or days can be induced in the ionosphere by solar flares, changes in solar wind, coronal mass ejections, travelling ionospheric disturbances, or meteorological influences. We use over 40 years of observations by the Millstone Hill incoherent scatter radar (42.6oN, 288.5oE) to develop an updated empirical model of ionospheric parameters, and wintertime data collected in 2004-2016 to study variability in ionospheric parameters. We also use NASA MERRA2 atmospheric reanalysis data to examine possible connections between the state of the stratosphere & mesosphere and the upper atmosphere (250-400km). A case of major SSW of January 2013 is selected for in-depth study and reveals large anomalies in ionospheric parameters. Modeling with the NCAR Thermospheric-Ionospheric-Mesospheric-Electrodynamics general Circulation Model (TIME-GCM) nudged by WACCM-GEOS5 simulation indicates that during the 2013 SSW the neutral and ion temperature in the polar through mid-latitude region deviates from the seasonal behavior.
Culture and Social Relationship as Factors of Affecting Communicative Non-verbal Behaviors
NASA Astrophysics Data System (ADS)
Akhter Lipi, Afia; Nakano, Yukiko; Rehm, Mathias
The goal of this paper is to link a bridge between social relationship and cultural variation to predict conversants' non-verbal behaviors. This idea serves as a basis of establishing a parameter based socio-cultural model, which determines non-verbal expressive parameters that specify the shapes of agent's nonverbal behaviors in HAI. As the first step, a comparative corpus analysis is done for two cultures in two specific social relationships. Next, by integrating the cultural and social parameters factors with the empirical data from corpus analysis, we establish a model that predicts posture. The predictions from our model successfully demonstrate that both cultural background and social relationship moderate communicative non-verbal behaviors.
A discrete element method-based approach to predict the breakage of coal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, Varun; Sun, Xin; Xu, Wei
Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been determined by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments, with limited predictive capabilities for new coals and processes. Our work presents a Discrete Element Method (DEM)-based computational approach to model coal particle breakage with experimentally characterized coal physical properties. We also examined the effect of select operating parameters on the breakagemore » behavior of coal particles.« less
A discrete element method-based approach to predict the breakage of coal
Gupta, Varun; Sun, Xin; Xu, Wei; ...
2017-08-05
Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been determined by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments, with limited predictive capabilities for new coals and processes. Our work presents a Discrete Element Method (DEM)-based computational approach to model coal particle breakage with experimentally characterized coal physical properties. We also examined the effect of select operating parameters on the breakagemore » behavior of coal particles.« less
Experimental Evaluation of Equivalent-Fluid Models for Melamine Foam
NASA Technical Reports Server (NTRS)
Allen, Albert R.; Schiller, Noah H.
2016-01-01
Melamine foam is a soft porous material commonly used in noise control applications. Many models exist to represent porous materials at various levels of fidelity. This work focuses on rigid frame equivalent fluid models, which represent the foam as a fluid with a complex speed of sound and density. There are several empirical models available to determine these frequency dependent parameters based on an estimate of the material flow resistivity. Alternatively, these properties can be experimentally educed using an impedance tube setup. Since vibroacoustic models are generally sensitive to these properties, this paper assesses the accuracy of several empirical models relative to impedance tube measurements collected with melamine foam samples. Diffuse field sound absorption measurements collected using large test articles in a laboratory are also compared with absorption predictions determined using model-based and measured foam properties. Melamine foam slabs of various thicknesses are considered.
Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O
2018-01-01
Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence, plotable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM, provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus if computable, scatterplots of the conditionally independent empirical Bayes predictors from a MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.
NASA Astrophysics Data System (ADS)
Abd-Elmotaal, Hussein; Kühtreiber, Norbert
2016-04-01
In the framework of the IAG African Geoid Project, there are a lot of large data gaps in its gravity database. These gaps are filled initially using unequal weight least-squares prediction technique. This technique uses a generalized Hirvonen covariance function model to replace the empirically determined covariance function. The generalized Hirvonen covariance function model has a sensitive parameter which is related to the curvature parameter of the covariance function at the origin. This paper studies the effect of the curvature parameter on the least-squares prediction results, especially in the large data gaps as appearing in the African gravity database. An optimum estimation of the curvature parameter has also been carried out. A wide comparison among the results obtained in this research along with their obtained accuracy is given and thoroughly discussed.
Origin of the sensitivity in modeling the glide behaviour of dislocations
Pei, Zongrui; Stocks, George Malcolm
2018-03-26
The sensitivity in predicting glide behaviour of dislocations has been a long-standing problem in the framework of the Peierls-Nabarro model. The predictions of both the model itself and the analytic formulas based on it are too sensitive to the input parameters. In order to reveal the origin of this important problem in materials science, a new empirical-parameter-free formulation is proposed in the same framework. Unlike previous formulations, it includes only a limited small set of parameters all of which can be determined by convergence tests. Under special conditions the new formulation is reduced to its classic counterpart. In the lightmore » of this formulation, new relationships between Peierls stresses and the input parameters are identified, where the sensitivity is greatly reduced or even removed.« less
Identification of AR(I)MA processes for modelling temporal correlations of GPS observations
NASA Astrophysics Data System (ADS)
Luo, X.; Mayer, M.; Heck, B.
2009-04-01
In many geodetic applications observations of the Global Positioning System (GPS) are routinely processed by means of the least-squares method. However, this algorithm delivers reliable estimates of unknown parameters und realistic accuracy measures only if both the functional and stochastic models are appropriately defined within GPS data processing. One deficiency of the stochastic model used in many GPS software products consists in neglecting temporal correlations of GPS observations. In practice the knowledge of the temporal stochastic behaviour of GPS observations can be improved by analysing time series of residuals resulting from the least-squares evaluation. This paper presents an approach based on the theory of autoregressive (integrated) moving average (AR(I)MA) processes to model temporal correlations of GPS observations using time series of observation residuals. A practicable integration of AR(I)MA models in GPS data processing requires the determination of the order parameters of AR(I)MA processes at first. In case of GPS, the identification of AR(I)MA processes could be affected by various factors impacting GPS positioning results, e.g. baseline length, multipath effects, observation weighting, or weather variations. The influences of these factors on AR(I)MA identification are empirically analysed based on a large amount of representative residual time series resulting from differential GPS post-processing using 1-Hz observation data collected within the permanent SAPOS® (Satellite Positioning Service of the German State Survey) network. Both short and long time series are modelled by means of AR(I)MA processes. The final order parameters are determined based on the whole residual database; the corresponding empirical distribution functions illustrate that multipath and weather variations seem to affect the identification of AR(I)MA processes much more significantly than baseline length and observation weighting. Additionally, the modelling results of temporal correlations using high-order AR(I)MA processes are compared with those by means of first order autoregressive (AR(1)) processes and empirically estimated autocorrelation functions.
Maximum Entropy for the International Division of Labor.
Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang
2015-01-01
As a result of the international division of labor, the trade value distribution on different products substantiated by international trade flows can be regarded as one country's strategy for competition. According to the empirical data of trade flows, countries may spend a large fraction of export values on ubiquitous and competitive products. Meanwhile, countries may also diversify their exports share on different types of products to reduce the risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares on different products under the product's complexity constraint once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to empirical data. The empirical data is consistent with maximum entropy subject to a constraint on the expected value of the product complexity for each country. One country's strategy is mainly determined by the types of products this country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter.
Maximum Entropy for the International Division of Labor
Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang
2015-01-01
As a result of the international division of labor, the trade value distribution on different products substantiated by international trade flows can be regarded as one country’s strategy for competition. According to the empirical data of trade flows, countries may spend a large fraction of export values on ubiquitous and competitive products. Meanwhile, countries may also diversify their exports share on different types of products to reduce the risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares on different products under the product’s complexity constraint once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to empirical data. The empirical data is consistent with maximum entropy subject to a constraint on the expected value of the product complexity for each country. One country’s strategy is mainly determined by the types of products this country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter. PMID:26172052
NASA Astrophysics Data System (ADS)
Elrefai, Ahmed L.; Sasayama, Teruyoshi; Yoshida, Takashi; Enpuku, Keiji
2018-05-01
We studied the magnetization (M-H) curve of immobilized magnetic nanoparticles (MNPs) used for biomedical applications. First, we performed numerical simulation on the DC M-H curve over a wide range of MNPs parameters. Based on the simulation results, we obtained an empirical expression for DC M-H curve. The empirical expression was compared with the measured M-H curves of various MNP samples, and quantitative agreements were obtained between them. We can also estimate the basic parameters of MNP from the comparison. Therefore, the empirical expression is useful for analyzing the M-H curve of immobilized MNPs for specific biomedical applications.
NASA Astrophysics Data System (ADS)
Carozza, D. A.; Bianchi, D.; Galbraith, E. D.
2015-12-01
Environmental change and the exploitation of marine resources have had profound impacts on marine communities, with potential implications for ocean biogeochemistry and food security. In order to study such global-scale problems, it is helpful to have computationally efficient numerical models that predict the first-order features of fish biomass production as a function of the environment, based on empirical and mechanistic understandings of marine ecosystems. Here we describe the ecological module of the BiOeconomic mArine Trophic Size-spectrum (BOATS) model, which takes an Earth-system approach to modeling fish biomass at the global scale. The ecological model is designed to be used on an Earth System model grid, and determines size spectra of fish biomass by explicitly resolving life history as a function of local temperature and net primary production. Biomass production is limited by the availability of photosynthetic energy to upper trophic levels, following empirical trophic efficiency scalings, and by well-established empirical temperature-dependent growth rates. Natural mortality is calculated using an empirical size-based relationship, while reproduction and recruitment depend on both the food availability to larvae from net primary production and the production of eggs by mature adult fish. We describe predicted biomass spectra and compare them to observations, and conduct a sensitivity study to determine how the change as a function of net primary production and temperature. The model relies on a limited number of parameters compared to similar modeling efforts, while retaining realistic representations of biological and ecological processes, and is computationally efficient, allowing extensive parameter-space analyses even when implemented globally. As such, it enables the exploration of the linkages between ocean biogeochemistry, climate, and upper trophic levels at the global scale, as well as a representation of fish biomass for idealized studies of fisheries.
NASA Astrophysics Data System (ADS)
Carozza, David Anthony; Bianchi, Daniele; Galbraith, Eric Douglas
2016-04-01
Environmental change and the exploitation of marine resources have had profound impacts on marine communities, with potential implications for ocean biogeochemistry and food security. In order to study such global-scale problems, it is helpful to have computationally efficient numerical models that predict the first-order features of fish biomass production as a function of the environment, based on empirical and mechanistic understandings of marine ecosystems. Here we describe the ecological module of the BiOeconomic mArine Trophic Size-spectrum (BOATS) model, which takes an Earth-system approach to modelling fish biomass at the global scale. The ecological model is designed to be used on an Earth-system model grid, and determines size spectra of fish biomass by explicitly resolving life history as a function of local temperature and net primary production. Biomass production is limited by the availability of photosynthetic energy to upper trophic levels, following empirical trophic efficiency scalings, and by well-established empirical temperature-dependent growth rates. Natural mortality is calculated using an empirical size-based relationship, while reproduction and recruitment depend on both the food availability to larvae from net primary production and the production of eggs by mature adult fish. We describe predicted biomass spectra and compare them to observations, and conduct a sensitivity study to determine how they change as a function of net primary production and temperature. The model relies on a limited number of parameters compared to similar modelling efforts, while retaining reasonably realistic representations of biological and ecological processes, and is computationally efficient, allowing extensive parameter-space analyses even when implemented globally. As such, it enables the exploration of the linkages between ocean biogeochemistry, climate, and upper trophic levels at the global scale, as well as a representation of fish biomass for idealized studies of fisheries.
Soil Erosion as a stochastic process
NASA Astrophysics Data System (ADS)
Casper, Markus C.
2015-04-01
The main tools to provide estimations concerning risk and amount of erosion are different types of soil erosion models: on the one hand, there are empirically based model concepts on the other hand there are more physically based or process based models. However, both types of models have substantial weak points. All empirical model concepts are only capable of providing rough estimates over larger temporal and spatial scales, they do not account for many driving factors that are in the scope of scenario related analysis. In addition, the physically based models contain important empirical parts and hence, the demand for universality and transferability is not given. As a common feature, we find, that all models rely on parameters and input variables, which are to certain, extend spatially and temporally averaged. A central question is whether the apparent heterogeneity of soil properties or the random nature of driving forces needs to be better considered in our modelling concepts. Traditionally, researchers have attempted to remove spatial and temporal variability through homogenization. However, homogenization has been achieved through physical manipulation of the system, or by statistical averaging procedures. The price for obtaining this homogenized (average) model concepts of soils and soil related processes has often been a failure to recognize the profound importance of heterogeneity in many of the properties and processes that we study. Especially soil infiltrability and the resistance (also called "critical shear stress" or "critical stream power") are the most important empirical factors of physically based erosion models. The erosion resistance is theoretically a substrate specific parameter, but in reality, the threshold where soil erosion begins is determined experimentally. The soil infiltrability is often calculated with empirical relationships (e.g. based on grain size distribution). Consequently, to better fit reality, this value needs to be corrected experimentally. To overcome this disadvantage of our actual models, soil erosion models are needed that are able to use stochastic directly variables and parameter distributions. There are only some minor approaches in this direction. The most advanced is the model "STOSEM" proposed by Sidorchuk in 2005. In this model, only a small part of the soil erosion processes is described, the aggregate detachment and the aggregate transport by flowing water. The concept is highly simplified, for example, many parameters are temporally invariant. Nevertheless, the main problem is that our existing measurements and experiments are not geared to provide stochastic parameters (e.g. as probability density functions); in the best case they deliver a statistical validation of the mean values. Again, we get effective parameters, spatially and temporally averaged. There is an urgent need for laboratory and field experiments on overland flow structure, raindrop effects and erosion rate, which deliver information on spatial and temporal structure of soil and surface properties and processes.
Earth orbital teleoperator systems evaluation
NASA Technical Reports Server (NTRS)
Shields, N. L., Jr.; Slaughter, P. H.; Brye, R. G.; Henderson, D. E.
1979-01-01
The mechanical extension of the human operator to remote and specialized environments poses a series of complex operational questions. A technical and scientific team was organized to investigate these questions through conducting specific laboratory and analytical studies. The intent of the studies was to determine the human operator requirements for remotely manned systems and to determine the particular effects that various system parameters have on human operator performance. In so doing, certain design criteria based on empirically derived data concerning the ultimate control system, the human operator, were added to the Teleoperator Development Program.
NASA Astrophysics Data System (ADS)
Wałęga, Andrzej; Młyński, Dariusz; Wachulec, Katarzyna
2017-12-01
The aim of the study was to assess the applicability of asymptotic functions for determining the value of CN parameter as a function of precipitation depth in mountain and upland catchments. The analyses were carried out in two catchments: the Rudawa, left tributary of the Vistula, and the Kamienica, right tributary of the Dunajec. The input material included data on precipitation and flows for a multi-year period 1980-2012, obtained from IMGW PIB in Warsaw. Two models were used to determine empirical values of CNobs parameter as a function of precipitation depth: standard Hawkins model and 2-CN model allowing for a heterogeneous nature of a catchment area. The study analyses confirmed that asymptotic functions properly described P-CNobs relationship for the entire range of precipitation variability. In the case of high rainfalls, CNobs remained above or below the commonly accepted average antecedent moisture conditions AMCII. The study calculations indicated that the runoff amount calculated according to the original SCS-CN method might be underestimated, and this could adversely affect the values of design flows required for the design of hydraulic engineering projects. In catchments with heterogeneous land cover, the results of CNobs were more accurate when 2-CN model was used instead of the standard Hawkins model. 2-CN model is more precise in accounting for differences in runoff formation depending on retention capacity of the substrate. It was also demonstrated that the commonly accepted initial abstraction coefficient λ = 0.20 yielded too big initial loss of precipitation in the analyzed catchments and, therefore, the computed direct runoff was underestimated. The best results were obtained for λ = 0.05.
NASA Astrophysics Data System (ADS)
Nasruddin, Syaka, Darwin R. B.; Alhamid, M. Idrus
2012-06-01
Various binary mixtures of carbon dioxide and hydrocarbons, especially propane or ethane, as alternative natural refrigerants to Chlorofluorocarbons (CFCs) or Hydro fluorocarbons (HFCs) are presented in this paper. Their environmental performance is friendly, with an ozone depletion potential (ODP) of zero and Global-warming potential (GWP) smaller than 20. The capillary tube performance for the alternative refrigerant HFC HCand mixed refrigerants have been widely studied. However, studies that discuss the performance of the capillary tube to a mixture of natural refrigerants, in particular a mixture of azeotrope carbon dioxide and ethane is still undeveloped. A method of empirical correlation to determine the mass flow rate and pipe length has an important role in the design of the capillary tube for industrial refrigeration. Based on the variables that effect the rate of mass flow of refrigerant in the capillary tube, the Buckingham Pi theorem formulated eight non-dimensional parameters to be developed into an empirical equations correlation. Furthermore, non-linear regression analysis used to determine the co-efficiency and exponent of this empirical correlation based on experimental verification of the results database.
AXIALLY ORIENTED SECTIONS OF NUMMULITIDS: A TOOL TO INTERPRET LARGER BENTHIC FORAMINIFERAL DEPOSITS
Hohenegger, Johann; Briguglio, Antonino
2015-01-01
The “critical shear velocity” and “settling velocity” of foraminiferal shells are important parameters for determining hydrodynamic conditions during deposition of Nummulites banks. These can be estimated by determining the size, shape, and density of nummulitid shells examined in axial sections cut perpendicular to the bedding plane. Shell size and shape can be determined directly from the shell diameter and thickness, but density must be calculated indirectly from the thin section. Calculations using the half-tori method approximate shell densities by equalizing the chamber volume of each half whorl, based on the half whorl’s lumen area and its center of gravity. Results from this method yield the same lumen volumes produced empirically by micro-computed tomography. The derived hydrodynamic parameters help estimate the minimum flow velocities needed to entrain nummulitid tests and provide a potential tool to account for the nature of their accumulations. PMID:26166914
AXIALLY ORIENTED SECTIONS OF NUMMULITIDS: A TOOL TO INTERPRET LARGER BENTHIC FORAMINIFERAL DEPOSITS.
Hohenegger, Johann; Briguglio, Antonino
2012-04-01
The "critical shear velocity" and "settling velocity" of foraminiferal shells are important parameters for determining hydrodynamic conditions during deposition of Nummulites banks. These can be estimated by determining the size, shape, and density of nummulitid shells examined in axial sections cut perpendicular to the bedding plane. Shell size and shape can be determined directly from the shell diameter and thickness, but density must be calculated indirectly from the thin section. Calculations using the half-tori method approximate shell densities by equalizing the chamber volume of each half whorl, based on the half whorl's lumen area and its center of gravity. Results from this method yield the same lumen volumes produced empirically by micro-computed tomography. The derived hydrodynamic parameters help estimate the minimum flow velocities needed to entrain nummulitid tests and provide a potential tool to account for the nature of their accumulations.
NASA Astrophysics Data System (ADS)
Jafari, Fereshteh Sadat; Ahmadi-Shokouh, Javad
2018-02-01
A frequency-selective surface (FSS) structure is proposed for characterization of the permittivity of industrial oil using a transmission/reflection (TR) measurement scheme in the X-band. Moreover, a parameter study is presented to distinguish the dielectric constant and loss characteristics of test materials. To model the loss empirically, we used CuO nanoparticles artificially mixed with an industrial oil. In this study, the resonant frequency of the FSS is the basic parameter used to determine the material characteristics, including resonance properties such as the magnitude of transmission ( S 21), bandwidth, and frequency shift. The results reveal that the proposed FSS structure and setup can act well as a sensor for characterization of the dielectric properties of industrial oil.
NASA Astrophysics Data System (ADS)
Wu, M.; Ahmadein, M.; Kharicha, A.; Ludwig, A.; Li, J. H.; Schumacher, P.
2012-07-01
Empirical knowledge about the formation of the as-cast structure, mostly obtained before 1980s, has revealed two critical issues: one is the origin of the equiaxed crystals; one is the competing growth of the columnar and equiaxed structures, and the columnar-to-equiaxed transition (CET). Unfortunately, the application of empirical knowledge to predict and control the as-cast structure was very limited, as the flow and crystal transport were not considered. Therefore, a 5-phase mixed columnar-equiaxed solidification model was recently proposed by the current authors based on modeling the multiphase transport phenomena. The motivation of the recent work is to determine and evaluate the necessary modeling parameters, and to validate the mixed columnar-equiaxed solidification model by comparison with laboratory castings. In this regard an experimental method was recommended for in-situ determination of the nucleation parameters. Additionally, some classical experiments of the Al-Cu ingots were conducted and the as-cast structural information including distinct columnar and equiaxed zones, macrosegregation, and grain size distribution were analysed. The final simulation results exhibited good agreement with experiments in the case of high pouring temperature, whereas disagreement in the case of low pouring temperature. The reasons for the disagreement are discussed.
Charge Transport in Nonaqueous Liquid Electrolytes: A Paradigm Shift
2015-05-18
that provide inadequate descriptions of experimental data, often using empirical equations whose fitting parameters have no physical significance...provide inadequate descriptions of experimental data, often using empirical equations whose fitting parameters have no physical significance...Ea The hydrodynamic model, utilizing the Stokes equation describes isothermal conductivity, self-diffusion coefficient, and the dielectric
A collinearity diagnosis of the GNSS geocenter determination
NASA Astrophysics Data System (ADS)
Rebischung, Paul; Altamimi, Zuheir; Springer, Tim
2014-01-01
The problem of observing geocenter motion from global navigation satellite system (GNSS) solutions through the network shift approach is addressed from the perspective of collinearity (or multicollinearity) among the parameters of a least-squares regression. A collinearity diagnosis, based on the notion of variance inflation factor, is therefore developed and allows handling several peculiarities of the GNSS geocenter determination problem. Its application reveals that the determination of all three components of geocenter motion with GNSS suffers from serious collinearity issues, with a comparable level as in the problem of determining the terrestrial scale simultaneously with the GNSS satellite phase center offsets. The inability of current GNSS, as opposed to satellite laser ranging, to properly sense geocenter motion is mostly explained by the estimation, in the GNSS case, of epoch-wise station and satellite clock offsets simultaneously with tropospheric parameters. The empirical satellite accelerations, as estimated by most Analysis Centers of the International GNSS Service, slightly amplify the collinearity of the geocenter coordinate, but their role remains secondary.
A semi-empirical model relating micro structure to acoustic properties of bimodal porous material
NASA Astrophysics Data System (ADS)
Mosanenzadeh, Shahrzad Ghaffari; Doutres, Olivier; Naguib, Hani E.; Park, Chul B.; Atalla, Noureddine
2015-01-01
Complex morphology of open cell porous media makes it difficult to link microstructural parameters and acoustic behavior of these materials. While morphology determines the overall sound absorption and noise damping effectiveness of a porous structure, little is known on the influence of microstructural configuration on the macroscopic properties. In the present research, a novel bimodal porous structure was designed and developed solely for modeling purposes. For the developed porous structure, it is possible to have direct control on morphological parameters and avoid complications raised by intricate pore geometries. A semi-empirical model is developed to relate microstructural parameters to macroscopic characteristics of porous material using precise characterization results based on the designed bimodal porous structures. This model specifically links macroscopic parameters including static airflow resistivity ( σ ) , thermal characteristic length ( Λ ' ) , viscous characteristic length ( Λ ) , and dynamic tortuosity ( α ∞ ) to microstructural factors such as cell wall thickness ( 2 t ) and reticulation rate ( R w ) . The developed model makes it possible to design the morphology of porous media to achieve optimum sound absorption performance based on the application in hand. This study makes the base for understanding the role of microstructural geometry and morphological factors on the overall macroscopic parameters of porous materials specifically for acoustic capabilities. The next step is to include other microstructural parameters as well to generalize the developed model. In the present paper, pore size was kept constant for eight categories of bimodal foams to study the effect of secondary porous structure on macroscopic properties and overall acoustic behavior of porous media.
Attenuation and source properties at the Coso Geothermal area, California
Hough, S.E.; Lees, J.M.; Monastero, F.
1999-01-01
We use a multiple-empirical Green's function method to determine source properties of small (M -0.4 to 1.3) earthquakes and P- and S-wave attenuation at the Coso Geothermal Field, California. Source properties of a previously identified set of clustered events from the Coso geothermal region are first analyzed using an empirical Green's function (EGF) method. Stress-drop values of at least 0.5-1 MPa are inferred for all of the events; in many cases, the corner frequency is outside the usable bandwidth, and the stress drop can only be constrained as being higher than 3 MPa. P- and S-wave stress-drop estimates are identical to the resolution limits of the data. These results are indistinguishable from numerous EGF studies of M 2-5 earthquakes, suggesting a similarity in rupture processes that extends to events that are both tiny and induced, providing further support for Byerlee's Law. Whole-path Q estimates for P and S waves are determined using the multiple-empirical Green's function (MEGF) method of Hough (1997), whereby spectra from clusters of colocated events at a given station are inverted for a single attenuation parameter, ??, with source parameters constrained from EGF analysis. The ?? estimates, which we infer to be resolved to within 0.01 sec or better, exhibit almost as much scatter as a function of hypocentral distance as do values from previous single-spectrum studies for which much higher uncertainties in individual ?? estimates are expected. The variability in ?? estimates determined here therefore suggests real lateral variability in Q structure. Although the ray-path coverage is too sparse to yield a complete three-dimensional attenuation tomographic image, we invert the inferred ?? value for three-dimensional structure using a damped least-squares method, and the results do reveal significant lateral variability in Q structure. The inferred attenuation variability corresponds to the heat-flow variations within the geothermal region. A central low-Q region corresponds well with the central high-heat flow region; additional detailed structure is also suggested.
NASA Astrophysics Data System (ADS)
Kant Garg, Girish; Garg, Suman; Sangwan, K. S.
2018-04-01
The manufacturing sector consumes huge energy demand and the machine tools used in this sector have very less energy efficiency. Selection of the optimum machining parameters for machine tools is significant for energy saving and for reduction of environmental emission. In this work an empirical model is developed to minimize the power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 Aluminum with coated tungsten inserts. The relationship between the power consumption and machining parameters is adequately modeled. This model is used for formulation of minimum power consumption criterion as a function of optimal machining parameters using desirability function approach. The influence of machining parameters on the energy consumption has been found using the analysis of variance. The validation of the developed empirical model is proved using the confirmation experiments. The results indicate that the developed model is effective and has potential to be adopted by the industry for minimum power consumption of machine tools.
NASA Astrophysics Data System (ADS)
Pekkarinen, J.; Kujanpää, V.
This study is focused to determine empirically, which microstructural changes occur in ferritic and duplex stainless steels when heat input is controlled by welding parameters. Test welds were done autogenously bead-on-plate without shielding gas using 5 kW fiber laser. For comparison, some gas tungsten arc welds were made. Used test material were 1.4016 (AISI 430) and 1.4003 (low-carbon ferritic) type steels in ferritic steels group and 1.4162 (low-alloyed duplex, LDX2101) and 1.4462 (AISI 2205) type steels in duplex steels group. Microstructural changes in welds were identified and examined using optical metallographic methods.
Baldoví, José J; Gaita-Ariño, Alejandro; Coronado, Eugenio
2015-07-28
In a previous study, we introduced the Radial Effective Charge (REC) model to study the magnetic properties of lanthanide single ion magnets. Now, we perform an empirical determination of the effective charges (Zi) and radial displacements (Dr) of this model using spectroscopic data. This systematic study allows us to relate Dr and Zi with chemical factors such as the coordination number and the electronegativities of the metal and the donor atoms. This strategy is being used to drastically reduce the number of free parameters in the modeling of the magnetic and spectroscopic properties of f-element complexes.
Xu, Y.; Xia, J.; Miller, R.D.
2006-01-01
Multichannel analysis of surface waves is a developing method widely used in shallow subsurface investigations. The field procedures and related parameters are very important for successful applications. Among these parameters, the source-receiver offset range is seldom discussed in theory and normally determined by empirical or semi-quantitative methods in current practice. This paper discusses the problem from a theoretical perspective. A formula for quantitatively evaluating a layered homogenous elastic model was developed. The analytical results based on simple models and experimental data demonstrate that the formula is correct for surface wave surveys for near-surface applications. ?? 2005 Elsevier B.V. All rights reserved.
Estimating procedure times for surgeries by determining location parameters for the lognormal model.
Spangler, William E; Strum, David P; Vargas, Luis G; May, Jerrold H
2004-05-01
We present an empirical study of methods for estimating the location parameter of the lognormal distribution. Our results identify the best order statistic to use, and indicate that using the best order statistic instead of the median may lead to less frequent incorrect rejection of the lognormal model, more accurate critical value estimates, and higher goodness-of-fit. Using simulation data, we constructed and compared two models for identifying the best order statistic, one based on conventional nonlinear regression and the other using a data mining/machine learning technique. Better surgical procedure time estimates may lead to improved surgical operations.
Empirical scaling laws for coronal heating
NASA Technical Reports Server (NTRS)
Golub, L.
1983-01-01
The origins and uses of scaling laws in studies of stellar outer atmospheres are reviewed with particular emphasis on the properties of coronal loops. Some evidence is presented for a fundamental structuring of the solar corona and the thermodynamics of scaling laws are discussed. It is found that magnetic field-related scaling laws can be obtained by relating coronal pressure, temperature, and magnetic field strength. Available data validate this method. Some parameters of the theory, however, must be treated as adjustable, and it is considered necessary to examine data from other stars in order to determine the validity of the parameters. Using detailed observational data, the applicability of single loop models is examined.
Fractal Theory for Permeability Prediction, Venezuelan and USA Wells
NASA Astrophysics Data System (ADS)
Aldana, Milagrosa; Altamiranda, Dignorah; Cabrera, Ana
2014-05-01
Inferring petrophysical parameters such as permeability, porosity, water saturation, capillary pressure, etc, from the analysis of well logs or other available core data has always been of critical importance in the oil industry. Permeability in particular, which is considered to be a complex parameter, has been inferred using both empirical and theoretical techniques. The main goal of this work is to predict permeability values on different wells using Fractal Theory, based on a method proposed by Pape et al. (1999). This approach uses the relationship between permeability and the geometric form of the pore space of the rock. This method is based on the modified equation of Kozeny-Carman and a fractal pattern, which allows determining permeability as a function of the cementation exponent, porosity and the fractal dimension. Data from wells located in Venezuela and the United States of America are analyzed. Employing data of porosity and permeability obtained from core samples, and applying the Fractal Theory method, we calculated the prediction equations for each well. At the beginning, this was achieved by training with 50% of the data available for each well. Afterwards, these equations were tested inferring over 100% of the data to analyze possible trends in their distribution. This procedure gave excellent results in all the wells in spite of their geographic distance, generating permeability models with the potential to accurately predict permeability logs in the remaining parts of the well for which there are no core samples, using even porority logs. Additionally, empirical models were used to determine permeability and the results were compared with those obtained by applying the fractal method. The results indicated that, although there are empirical equations that give a proper adjustment, the prediction results obtained using fractal theory give a better fit to the core reference data.
NASA Astrophysics Data System (ADS)
Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede
2017-10-01
Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
NASA Astrophysics Data System (ADS)
Kopacz, Michał
2017-09-01
The paper attempts to assess the impact of variability of selected geological (deposit) parameters on the value and risks of projects in the hard coal mining industry. The study was based on simulated discounted cash flow analysis, while the results were verified for three existing bituminous coal seams. The Monte Carlo simulation was based on nonparametric bootstrap method, while correlations between individual deposit parameters were replicated with use of an empirical copula. The calculations take into account the uncertainty towards the parameters of empirical distributions of the deposit variables. The Net Present Value (NPV) and the Internal Rate of Return (IRR) were selected as the main measures of value and risk, respectively. The impact of volatility and correlation of deposit parameters were analyzed in two aspects, by identifying the overall effect of the correlated variability of the parameters and the indywidual impact of the correlation on the NPV and IRR. For this purpose, a differential approach, allowing determining the value of the possible errors in calculation of these measures in numerical terms, has been used. Based on the study it can be concluded that the mean value of the overall effect of the variability does not exceed 11.8% of NPV and 2.4 percentage points of IRR. Neglecting the correlations results in overestimating the NPV and the IRR by up to 4.4%, and 0.4 percentage point respectively. It should be noted, however, that the differences in NPV and IRR values can vary significantly, while their interpretation depends on the likelihood of implementation. Generalizing the obtained results, based on the average values, the maximum value of the risk premium in the given calculation conditions of the "X" deposit, and the correspondingly large datasets (greater than 2500), should not be higher than 2.4 percentage points. The impact of the analyzed geological parameters on the NPV and IRR depends primarily on their co-existence, which can be measured by the strength of correlation. In the analyzed case, the correlations result in limiting the range of variation of the geological parameters and economics results (the empirical copula reduces the NPV and IRR in probabilistic approach). However, this is due to the adjustment of the calculation under conditions similar to those prevailing in the deposit.
Multifactorial modelling of high-temperature treatment of timber in the saturated water steam medium
NASA Astrophysics Data System (ADS)
Prosvirnikov, D. B.; Safin, R. G.; Ziatdinova, D. F.; Timerbaev, N. F.; Lashkov, V. A.
2016-04-01
The paper analyses experimental data obtained in studies of high-temperature treatment of softwood and hardwood in an environment of saturated water steam. Data were processed in the Curve Expert software for the purpose of statistical modelling of processes and phenomena occurring during this process. The multifactorial modelling resulted in the empirical dependences, allowing determining the main parameters of this type of hydrothermal treatment with high accuracy.
Sorption and reemission of formaldehyde by gypsum wallboard. Report for June 1990-August 1992
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, J.C.S.
1993-01-01
The paper gives results of an analysis of the sorption and desorption of formaldehyde by unpainted wallboard, using a mass transfer model based on the Langmuir sorption isotherm. The sorption and desorption rate constants are determined by short-term experimental data. Long-term sorption and desorption curves are developed by the mass transfer model without any adjustable parameters. Compared with other empirically developed models, the mass transfer model has more extensive applicability and provides an elucidation of the sorption and desorption mechanism that empirical models cannot. The mass transfer model is also more feasible and accurate than empirical models for applications suchmore » as scale-up and exposure assessment. For a typical indoor environment, the model predicts that gypsum wallboard is a much stronger sink for formaldehyde than for other indoor air pollutants such as tetrachloroethylene and ethylbenzene. The strong sink effects are reflected by the high equilibrium capacity and slow decay of the desorption curve.« less
Empirical microeconomics action functionals
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Du, Xin; Tanputraman, Winson
2015-06-01
A statistical generalization of microeconomics has been made in Baaquie (2013), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is modeled by an action functional-and the focus of this paper is to empirically determine the action functionals for different commodities. The correlation functions of the model are defined using a Feynman path integral. The model is calibrated using the unequal time correlation of the market commodity prices as well as their cubic and quartic moments using a perturbation expansion. The consistency of the perturbation expansion is verified by a numerical evaluation of the path integral. Nine commodities drawn from the energy, metal and grain sectors are studied and their market behavior is described by the model to an accuracy of over 90% using only six parameters. The paper empirically establishes the existence of the action functional for commodity prices that was postulated to exist in Baaquie (2013).
Inferring Spatial Variations of Microstructural Properties from Macroscopic Mechanical Response
Liu, Tengxiao; Hall, Timothy J.; Barbone, Paul E.; Oberai, Assad A.
2016-01-01
Disease alters tissue microstructure, which in turn affects the macroscopic mechanical properties of tissue. In elasticity imaging, the macroscopic response is measured and is used to infer the spatial distribution of the elastic constitutive parameters. When an empirical constitutive model is used these parameters cannot be linked to the microstructure. However, when the constitutive model is derived from a microstructural representation of the material, it allows for the possibility of inferring the local averages of the spatial distribution of the microstructural parameters. This idea forms the basis of this study. In particular, we first derive a constitutive model by homogenizing the mechanical response of a network of elastic, tortuous fibers. Thereafter, we use this model in an inverse problem to determine the spatial distribution of the microstructural parameters. We solve the inverse problem as a constrained minimization problem, and develop efficient methods for solving it. We apply these methods to displacement fields obtained by deforming gelatin-agar co-gels, and determine the spatial distribution of agar concentration and fiber tortuosity, thereby demonstrating that it is possible to image local averages of microstructural parameters from macroscopic measurements of deformation. PMID:27655420
New approaches in agent-based modeling of complex financial systems
NASA Astrophysics Data System (ADS)
Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei
2017-12-01
Agent-based modeling is a powerful simulation technique to understand the collective behavior and microscopic interaction in complex financial systems. Recently, the concept for determining the key parameters of agent-based models from empirical data instead of setting them artificially was suggested. We first review several agent-based models and the new approaches to determine the key model parameters from historical market data. Based on the agents' behaviors with heterogeneous personal preferences and interactions, these models are successful in explaining the microscopic origination of the temporal and spatial correlations of financial markets. We then present a novel paradigm combining big-data analysis with agent-based modeling. Specifically, from internet query and stock market data, we extract the information driving forces and develop an agent-based model to simulate the dynamic behaviors of complex financial systems.
Low resolution spectroscopic investigation of Am stars using Automated method
NASA Astrophysics Data System (ADS)
Sharma, Kaushal; Joshi, Santosh; Singh, Harinder P.
2018-04-01
The automated method of full spectrum fitting gives reliable estimates of stellar atmospheric parameters (Teff, log g and [Fe/H]) for late A, F, G, and early K type stars. Recently, the technique was further improved in the cooler regime and the validity range was extended up to a spectral type of M6 - M7 (Teff˜ 2900 K). The present study aims to explore the application of this method on the low-resolution spectra of Am stars, a class of chemically peculiar stars, to examine its robustness for these objects. We use ULySS with the Medium-resolution INT Library of Empirical Spectra (MILES) V2 spectral interpolator for parameter determination. The determined Teff and log g values are found to be in good agreement with those obtained from high-resolution spectroscopy.
Artifact interactions retard technological improvement: An empirical study
Magee, Christopher L.
2017-01-01
Empirical research has shown performance improvement of many different technological domains occurs exponentially but with widely varying improvement rates. What causes some technologies to improve faster than others do? Previous quantitative modeling research has identified artifact interactions, where a design change in one component influences others, as an important determinant of improvement rates. The models predict that improvement rate for a domain is proportional to the inverse of the domain’s interaction parameter. However, no empirical research has previously studied and tested the dependence of improvement rates on artifact interactions. A challenge to testing the dependence is that any method for measuring interactions has to be applicable to a wide variety of technologies. Here we propose a novel patent-based method that is both technology domain-agnostic and less costly than alternative methods. We use textual content from patent sets in 27 domains to find the influence of interactions on improvement rates. Qualitative analysis identified six specific keywords that signal artifact interactions. Patent sets from each domain were then examined to determine the total count of these 6 keywords in each domain, giving an estimate of artifact interactions in each domain. It is found that improvement rates are positively correlated with the inverse of the total count of keywords with Pearson correlation coefficient of +0.56 with a p-value of 0.002. The results agree with model predictions, and provide, for the first time, empirical evidence that artifact interactions have a retarding effect on improvement rates of technological domains. PMID:28777798
An Initial Non-Equilibrium Porous-Media Model for CFD Simulation of Stirling Regenerators
NASA Technical Reports Server (NTRS)
Tew, Roy C.; Simon, Terry; Gedeon, David; Ibrahim, Mounir; Rong, Wei
2006-01-01
The objective of this paper is to define empirical parameters for an initial thermal non-equilibrium porous-media model for use in Computational Fluid Dynamics (CFD) codes for simulation of Stirling regenerators. The two codes currently used at Glenn Research Center for Stirling modeling are Fluent and CFD-ACE. The codes porous-media models are equilibrium models, which assume solid matrix and fluid are in thermal equilibrium. This is believed to be a poor assumption for Stirling regenerators; Stirling 1-D regenerator models, used in Stirling design, use non-equilibrium regenerator models and suggest regenerator matrix and gas average temperatures can differ by several degrees at a given axial location and time during the cycle. Experimentally based information was used to define: hydrodynamic dispersion, permeability, inertial coefficient, fluid effective thermal conductivity, and fluid-solid heat transfer coefficient. Solid effective thermal conductivity was also estimated. Determination of model parameters was based on planned use in a CFD model of Infinia's Stirling Technology Demonstration Converter (TDC), which uses a random-fiber regenerator matrix. Emphasis is on use of available data to define empirical parameters needed in a thermal non-equilibrium porous media model for Stirling regenerator simulation. Such a model has not yet been implemented by the authors or their associates.
Empirical scoring functions for advanced protein-ligand docking with PLANTS.
Korb, Oliver; Stützle, Thomas; Exner, Thomas E
2009-01-01
In this paper we present two empirical scoring functions, PLANTS(CHEMPLP) and PLANTS(PLP), designed for our docking algorithm PLANTS (Protein-Ligand ANT System), which is based on ant colony optimization (ACO). They are related, regarding their functional form, to parts of already published scoring functions and force fields. The parametrization procedure described here was able to identify several parameter settings showing an excellent performance for the task of pose prediction on two test sets comprising 298 complexes in total. Up to 87% of the complexes of the Astex diverse set and 77% of the CCDC/Astex clean listnc (noncovalently bound complexes of the clean list) could be reproduced with root-mean-square deviations of less than 2 A with respect to the experimentally determined structures. A comparison with the state-of-the-art docking tool GOLD clearly shows that this is, especially for the druglike Astex diverse set, an improvement in pose prediction performance. Additionally, optimized parameter settings for the search algorithm were identified, which can be used to balance pose prediction reliability and search speed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasylkivska, Veronika S.; Huerta, Nicolas J.
Determining the spatiotemporal characteristics of natural and induced seismic events holds the opportunity to gain new insights into why these events occur. Linking the seismicity characteristics with other geologic, geographic, natural, or anthropogenic factors could help to identify the causes and suggest mitigation strategies that reduce the risk associated with such events. The nearest-neighbor approach utilized in this work represents a practical first step toward identifying statistically correlated clusters of recorded earthquake events. Detailed study of the Oklahoma earthquake catalog’s inherent errors, empirical model parameters, and model assumptions is presented. We found that the cluster analysis results are stable withmore » respect to empirical parameters (e.g., fractal dimension) but were sensitive to epicenter location errors and seismicity rates. Most critically, we show that the patterns in the distribution of earthquake clusters in Oklahoma are primarily defined by spatial relationships between events. This observation is a stark contrast to California (also known for induced seismicity) where a comparable cluster distribution is defined by both spatial and temporal interactions between events. These results highlight the difficulty in understanding the mechanisms and behavior of induced seismicity but provide insights for future work.« less
On the Effectiveness of Security Countermeasures for Critical Infrastructures.
Hausken, Kjell; He, Fei
2016-04-01
A game-theoretic model is developed where an infrastructure of N targets is protected against terrorism threats. An original threat score is determined by the terrorist's threat against each target and the government's inherent protection level and original protection. The final threat score is impacted by the government's additional protection. We investigate and verify the effectiveness of countermeasures using empirical data and two methods. The first is to estimate the model's parameter values to minimize the sum of the squared differences between the government's additional resource investment predicted by the model and the empirical data. The second is to develop a multivariate regression model where the final threat score varies approximately linearly relative to the original threat score, sectors, and threat scenarios, and depends nonlinearly on the additional resource investment. The model and method are offered as tools, and as a way of thinking, to determine optimal resource investments across vulnerable targets subject to terrorism threats. © 2014 Society for Risk Analysis.
NASA Astrophysics Data System (ADS)
Martins, Luciano; Díez-Herrero, Andrés; Bodoque, Jose M.; Bateira, Carlos
2016-04-01
The perception of flood risk by the responsible authorities on the flood management disasters and mitigation strategies should be based on an overall evaluation of the uncertainties associated with the procedures for risk assessment and mapping production. This contribution presents the results of the development of mapping evaluation of the time of concentration (tc). This parameter reflects the time-space at which a watershed responds to rainfall events and is the most frequently utilized time parameter, and is of great importance in many hydrologic analysis. Accurate estimates of the tc are very important, for instance, if tc is under-estimated, the result is an over-estimated peak discharge and vice versa, resulting significant variations on the flooded areas, and could have important consequences in terms of the land use and occupation of territory, as management's own flood risk. The methology used evaluate 20 different empirical, semi-empirical and kinematics equations of tc calculation, due to different cartographic scales (1:200000; 1:100000; 1:25000; LIDAR 5x5m &1x1m) in in two hydrographic basins with distinct dimensions and geomorphological characteristics, located in the Gredos Mountain range (Spain). The results suggest that the changes in the cartographic scale, has not influence as significant as one might expect. The most important variations occur in the characteristics of the fequations, use different morphometricparameters in the calculations. Some just are based on geomorphological criteria and other magnify the hydraulic characteristics of the channels, resulting in very different tc values. However, we highlighting the role of cartographic scale particularly in the application of semi-empirical equations that take into account changes in land use and occupation. In this case, the determination of parameters, such as flow coefficient, curve number and roughness coefficient are very sensitive to cartographic scale. Sensitivity analysis demonstrates that the empirical equations are simpler (e.g Giandotti, Chow, Temez), since it is based only on the geometrical characteristics of the basin and therefore the results tend not to reflect the dynamic range leadings to worse results of tc.The application of these equations based on local parameters should not be applied to other regions that have distinct geomorphological and climatic characteristics, since greatly influences the results.The semi-empirical and kinematics equations (e.g SCS, Kinematic Wave) tc is reflected mainly in the form of the hydrograph, particularly in the Lag-time. Thats seems be an appropriate to the integrated analysis of hydrographic basins. Moreover, these methods are fundamental to understand spatio-temporal dynamics within the basin, even if some parameters are difficult to calculate. The best way to calibrate and evaluate the obtained concentration time values, should be based on known events, calibrated by rating curves records.
NASA Astrophysics Data System (ADS)
Verbeke, C.; Asvestari, E.; Scolini, C.; Pomoell, J.; Poedts, S.; Kilpua, E.
2017-12-01
Coronal Mass Ejections (CMEs) are one of the big influencers on the coronal and interplanetary dynamics. Understanding their origin and evolution from the Sun to the Earth is crucial in order to determine the impact on our Earth and society. One of the key parameters that determine the geo-effectiveness of the coronal mass ejection is its internal magnetic configuration. We present a detailed parameter study of the Gibson-Low flux rope model. We focus on changes in the input parameters and how these changes affect the characteristics of the CME at Earth. Recently, the Gibson-Low flux rope model has been implemented into the inner heliosphere model EUHFORIA, a magnetohydrodynamics forecasting model of large-scale dynamics from 0.1 AU up to 2 AU. Coronagraph observations can be used to constrain the kinematics and morphology of the flux rope. One of the key parameters, the magnetic field, is difficult to determine directly from observations. In this work, we approach the problem by conducting a parameter study in which flux ropes with varying magnetic configurations are simulated. We then use the obtained dataset to look for signatures in imaging observations and in-situ observations in order to find an empirical way of constraining the parameters related to the magnetic field of the flux rope. In particular, we focus on events observed by at least two spacecraft (STEREO + L1) in order to discuss the merits of using observations from multiple viewpoints in constraining the parameters.
An Empirical Bayes Approach to Spatial Analysis
NASA Technical Reports Server (NTRS)
Morris, C. N.; Kostal, H.
1983-01-01
Multi-channel LANDSAT data are collected in several passes over agricultural areas during the growing season. How empirical Bayes modeling can be used to develop crop identification and discrimination techniques that account for spatial correlation in such data is considered. The approach models the unobservable parameters and the data separately, hoping to take advantage of the fact that the bulk of spatial correlation lies in the parameter process. The problem is then framed in terms of estimating posterior probabilities of crop types for each spatial area. Some empirical Bayes spatial estimation methods are used to estimate the logits of these probabilities.
Empirical Prediction of Aircraft Landing Gear Noise
NASA Technical Reports Server (NTRS)
Golub, Robert A. (Technical Monitor); Guo, Yue-Ping
2005-01-01
This report documents a semi-empirical/semi-analytical method for landing gear noise prediction. The method is based on scaling laws of the theory of aerodynamic noise generation and correlation of these scaling laws with current available test data. The former gives the method a sound theoretical foundation and the latter quantitatively determines the relations between the parameters of the landing gear assembly and the far field noise, enabling practical predictions of aircraft landing gear noise, both for parametric trends and for absolute noise levels. The prediction model is validated by wind tunnel test data for an isolated Boeing 737 landing gear and by flight data for the Boeing 777 airplane. In both cases, the predictions agree well with data, both in parametric trends and in absolute noise levels.
Empirical Bayes estimation of proportions with application to cowbird parasitism rates
Link, W.A.; Hahn, D.C.
1996-01-01
Bayesian models provide a structure for studying collections of parameters such as are considered in the investigation of communities, ecosystems, and landscapes. This structure allows for improved estimation of individual parameters, by considering them in the context of a group of related parameters. Individual estimates are differentially adjusted toward an overall mean, with the magnitude of their adjustment based on their precision. Consequently, Bayesian estimation allows for a more credible identification of extreme values in a collection of estimates. Bayesian models regard individual parameters as values sampled from a specified probability distribution, called a prior. The requirement that the prior be known is often regarded as an unattractive feature of Bayesian analysis and may be the reason why Bayesian analyses are not frequently applied in ecological studies. Empirical Bayes methods provide an alternative approach that incorporates the structural advantages of Bayesian models while requiring a less stringent specification of prior knowledge. Rather than requiring that the prior distribution be known, empirical Bayes methods require only that it be in a certain family of distributions, indexed by hyperparameters that can be estimated from the available data. This structure is of interest per se, in addition to its value in allowing for improved estimation of individual parameters; for example, hypotheses regarding the existence of distinct subgroups in a collection of parameters can be considered under the empirical Bayes framework by allowing the hyperparameters to vary among subgroups. Though empirical Bayes methods have been applied in a variety of contexts, they have received little attention in the ecological literature. We describe the empirical Bayes approach in application to estimation of proportions, using data obtained in a community-wide study of cowbird parasitism rates for illustration. Since observed proportions based on small sample sizes are heavily adjusted toward the mean, extreme values among empirical Bayes estimates identify those species for which there is the greatest evidence of extreme parasitism rates. Applying a subgroup analysis to our data on cowbird parasitism rates, we conclude that parasitism rates for Neotropical Migrants as a group are no greater than those of Resident/Short-distance Migrant species in this forest community. Our data and analyses demonstrate that the parasitism rates for certain Neotropical Migrant species are remarkably low (Wood Thrush and Rose-breasted Grosbeak) while those for others are remarkably high (Ovenbird and Red-eyed Vireo).
Martínez-López, Brais; Gontard, Nathalie; Peyron, Stéphane
2018-03-01
A reliable prediction of migration levels of plastic additives into food requires a robust estimation of diffusivity. Predictive modelling of diffusivity as recommended by the EU commission is carried out using a semi-empirical equation that relies on two polymer-dependent parameters. These parameters were determined for the polymers most used by packaging industry (LLDPE, HDPE, PP, PET, PS, HIPS) from the diffusivity data available at that time. In the specific case of general purpose polystyrene, the diffusivity data published since then shows that the use of the equation with the original parameters results in systematic underestimation of diffusivity. The goal of this study was therefore, to propose an update of the aforementioned parameters for PS on the basis of up to date diffusivity data, so the equation can be used for a reasoned overestimation of diffusivity.
NASA Astrophysics Data System (ADS)
Ishtiaq, K. S.; Abdul-Aziz, O. I.
2014-12-01
We developed a scaling-based, simple empirical model for spatio-temporally robust prediction of the diurnal cycles of wetland net ecosystem exchange (NEE) by using an extended stochastic harmonic algorithm (ESHA). A reference-time observation from each diurnal cycle was utilized as the scaling parameter to normalize and collapse hourly observed NEE of different days into a single, dimensionless diurnal curve. The modeling concept was tested by parameterizing the unique diurnal curve and predicting hourly NEE of May to October (summer growing and fall seasons) between 2002-12 for diverse wetland ecosystems, as available in the U.S. AmeriFLUX network. As an example, the Taylor Slough short hydroperiod marsh site in the Florida Everglades had data for four consecutive growing seasons from 2009-12; results showed impressive modeling efficiency (coefficient of determination, R2 = 0.66) and accuracy (ratio of root-mean-square-error to the standard deviation of observations, RSR = 0.58). Model validation was performed with an independent year of NEE data, indicating equally impressive performance (R2 = 0.68, RSR = 0.57). The model included a parsimonious set of estimated parameters, which exhibited spatio-temporal robustness by collapsing onto narrow ranges. Model robustness was further investigated by analytically deriving and quantifying parameter sensitivity coefficients and a first-order uncertainty measure. The relatively robust, empirical NEE model can be applied for simulating continuous (e.g., hourly) NEE time-series from a single reference observation (or a set of limited observations) at different wetland sites of comparable hydro-climatology, biogeochemistry, and ecology. The method can also be used for a robust gap-filling of missing data in observed time-series of periodic ecohydrological variables for wetland or other ecosystems.
Optical-model potential for electron and positron elastic scattering by atoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salvat, Francesc
2003-07-01
An optical-model potential for systematic calculations of elastic scattering of electrons and positrons by atoms and positive ions is proposed. The electrostatic interaction is determined from the Dirac-Hartree-Fock self-consistent atomic electron density. In the case of electron projectiles, the exchange interaction is described by means of the local-approximation of Furness and McCarthy. The correlation-polarization potential is obtained by combining the correlation potential derived from the local density approximation with a long-range polarization interaction, which is represented by means of a Buckingham potential with an empirical energy-dependent cutoff parameter. The absorption potential is obtained from the local-density approximation, using the Born-Ochkurmore » approximation and the Lindhard dielectric function to describe the binary collisions with a free-electron gas. The strength of the absorption potential is adjusted by means of an empirical parameter, which has been determined by fitting available absolute elastic differential cross-section data for noble gases and mercury. The Dirac partial-wave analysis with this optical-model potential provides a realistic description of elastic scattering of electrons and positrons with energies in the range from {approx}100 eV up to {approx}5 keV. At higher energies, correlation-polarization and absorption corrections are small and the usual static-exchange approximation is sufficiently accurate for most practical purposes.« less
Efficiencies for production of atomic nitrogen and oxygen by relativistic proton impact in air
NASA Technical Reports Server (NTRS)
Porter, H. S.; Jackman, C. H.; Green, A. E. S.
1976-01-01
Relativistic electron and proton impact cross sections are obtained and represented by analytic forms which span the energy range from threshold to 1 GeV. For ionization processes, the Massey-Mohr continuum generalized oscillator strength surface is parameterized. Parameters are determined by simultaneous fitting to (1) empirical data, (2) the Bethe sum rule, and (3) doubly differential cross sections for ionization. Branching ratios for dissociation and predissociation from important states of N2 and O2 are determined. The efficiency for the production of atomic nitrogen and oxygen by protons with kinetic energy less than 1 GeV is determined using these branching ratio and cross section assignments.
Chaves, Paula; Simões, Daniela; Paço, Maria; Pinho, Francisco; Duarte, José Alberto; Ribeiro, Fernando
2017-12-01
Deep friction massage is one of several physiotherapy interventions suggested for the management of tendinopathy. To determine the prevalence of deep friction massage use in clinical practice, to characterize the application parameters used by physiotherapists, and to identify empirical model-based patterns of deep friction massage application in degenerative tendinopathy. observational, analytical, cross-sectional and national web-based survey. 478 physiotherapists were selected through snow-ball sampling method. The participants completed an online questionnaire about personal and professional characteristics as well as specific questions regarding the use of deep friction massage. Characterization of deep friction massage parameters used by physiotherapists were presented as counts and proportions. Latent class analysis was used to identify the empirical model-based patterns. Crude and adjusted odds ratios and 95% confidence intervals were computed. The use of deep friction massage was reported by 88.1% of the participants; tendinopathy was the clinical condition where it was most frequently used (84.9%) and, from these, 55.9% reported its use in degenerative tendinopathy. The "duration of application" parameters in chronic phase and "frequency of application" in acute and chronic phases are those that diverge most from those recommended by the author of deep friction massage. We found a high prevalence of deep friction massage use, namely in degenerative tendinopathy. Our results have shown that the application parameters are heterogeneous and diverse. This is reflected by the identification of two application patterns, although none is in complete agreement with Cyriax's description. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gravitational-wave cosmography with LISA and the Hubble tension
NASA Astrophysics Data System (ADS)
Kyutoku, Koutarou; Seto, Naoki
2017-04-01
We propose that stellar-mass binary black holes like GW150914 will become a tool to explore the local Universe within ˜100 Mpc in the era of the Laser Interferometer Space Antenna (LISA). High calibration accuracy and annual motion of LISA could enable us to localize up to ≈60 binaries more accurately than the error volume of ≈100 Mpc3 without electromagnetic counterparts under moderately optimistic assumptions. This accuracy will give us a fair chance to determine the host object solely by gravitational waves. By combining the luminosity distance extracted from gravitational waves with the cosmological redshift determined from the host, the local value of the Hubble parameter will be determined up to a few % without relying on the empirically constructed distance ladder. Gravitational-wave cosmography would pave the way for resolution of the disputed Hubble tension, where the local and global measurements disagree in the value of the Hubble parameter at 3.4 σ level, which amounts to ≈9 %.
Wang, Tianmiao; Wu, Yao; Liang, Jianhong; Han, Chenhao; Chen, Jiao; Zhao, Qiteng
2015-04-24
Skid-steering mobile robots are widely used because of their simple mechanism and robustness. However, due to the complex wheel-ground interactions and the kinematic constraints, it is a challenge to understand the kinematics and dynamics of such a robotic platform. In this paper, we develop an analysis and experimental kinematic scheme for a skid-steering wheeled vehicle based-on a laser scanner sensor. The kinematics model is established based on the boundedness of the instantaneous centers of rotation (ICR) of treads on the 2D motion plane. The kinematic parameters (the ICR coefficient , the path curvature variable and robot speed ), including the effect of vehicle dynamics, are introduced to describe the kinematics model. Then, an exact but costly dynamic model is used and the simulation of this model's stationary response for the vehicle shows a qualitative relationship for the specified parameters and . Moreover, the parameters of the kinematic model are determined based-on a laser scanner localization experimental analysis method with a skid-steering robotic platform, Pioneer P3-AT. The relationship between the ICR coefficient and two physical factors is studied, i.e., the radius of the path curvature and the robot speed . An empirical function-based relationship between the ICR coefficient of the robot and the path parameters is derived. To validate the obtained results, it is empirically demonstrated that the proposed kinematics model significantly improves the dead-reckoning performance of this skid-steering robot.
Schade, Alexander; Behme, Nicole; Spange, Stefan
2014-02-17
The four empirical solvent polarity parameters according to the Catalán scale--solvent acidity (SA), solvent basicity (SB), solvent polarizability (SP), and solvent dipolarity (SdP)--of 64 ionic liquids (ILs) were determined by the solvatochromic method. The SA parameter was determined solely by using [Fe(II)(1,10-phenanthroline)2(CN)2] (Fe), the SB parameter by using the pair of structurally comparable dyes 3-(4-amino-3-methylphenyl)-7-phenylbenzo[1,2-b:4,5-b']difuran-2,6-dione (ABF) and 3-(4-N,N-dimethylaminophenyl)-7-phenylbenzo[1,2-b:4,5-b']-difuran-2,6-dione (DMe-ABF), and the SP and SdP parameters by using the homomorphic pair of 4-tert-butyl-2-(dicyanomethylene)-5-[4-(diethylamino)benzylidene]-Δ(3)-thiazoline (Th) and 2-[4-(N,N-dimethylamino)benzylidene]malononitrile (BMN). The separation of SP and SdP for a set of 64 various ILs was performed for the first time. Correlation analyses of SP with physicochemical data related to ionization potentials of anions of ILs as well as with theoretical data show the correctness of the applied method. The found correlations of the Catalán parameters with each other and with the alkyl-chain length of 1-alkyl-3-methylimidazolium-type ILs gives new information about interactions within ILs. An analytical comparison of the determined Catalán parameters with the established Kamlet-Taft parameters and the Gutmann acceptor and donor numbers is also presented. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Kozioł, Michał
2017-10-01
The article presents a parametric model describing the registered distributions spectrum of optical radiation emitted by electrical discharges generated in the systems: the needle- needle, the needleplate and in the system for surface discharges. Generation of electrical discharges and registration of the emitted radiation was carried out in three different electrical insulating oils: fabric new, operated (used) and operated with air bubbles. For registration of optical spectra in the range of ultraviolet, visible and near infrared a high resolution spectrophotometer was. The proposed mathematical model was developed in a regression procedure using gauss-sigmoid type function. The dependent variable was the intensity of the recorded optical signals. In order to estimate the optimal parameters of the model an evolutionary algorithm was used. The optimization procedure was performed in Matlab environment. For determination of the matching quality of theoretical parameters of the regression function to the empirical data determination coefficient R2 was applied.
Reflectometer for pseudo-Brewster angle spectrometry (BAIRS)
NASA Astrophysics Data System (ADS)
Potter, Roy F.
2000-10-01
A simple, robust reflectometer, pre-set for several angles of incidence (AOI), has been designed and used for determining the optical parameters of opaque samples having a specular surface. A single, linear polarizing element permits the measurement of perpendicular(s) and parallel (p) reflectence at each AOI. The BAIRS algorithm determines the empirical optical parameters for the subject surface at the pseudo-Brewster AOI, based on the measurement of p/s at two AOI's and, in turn the optical constants n and k (or (epsilon) 1 and (epsilon) 2). Radiation sources in current use, are a stabilized tungsten-halide lamp or a deuterium lamp for the visible and near UV spectral regions. Silica fiber optics and lenses deliver input and output radiation from the source and to a CCD array scanned diffraction spectrometer. Results for a sample of GaAs will be presented along with a discussion of dispersion features in the optical constant spectra.
NASA Astrophysics Data System (ADS)
Machado, Pablo; Campos, Patrick T.; Lima, Glauber R.; Rosa, Fernanda A.; Flores, Alex F. C.; Bonacorso, Helio G.; Zanatta, Nilo; Martins, Marcos A. P.
2009-01-01
The crystal structures of four novel analgesic agents, methyl 5-hydroxy-3- or 4-methyl-5-trichloro[trifluoro]methyl-4,5-dihydro-1 H-pyrazole-1-carboxylate, have been determined by X-ray diffractometry. The data demonstrated that the molecular packing was stabilized mainly by O sbnd H⋯O hydrogen bonds of the 5-hydroxy and 1-carboxymethyl groups. The 4,5-dihydro-1 H-pyrazole rings were obtained as almost planar structures showing RMS deviation at a range of 0.0052-0.0805 Å. Additionally, computational investigation using semi-empirical AM1 and PM3 methods were performed to find a correlation between experimental and calculated geometrical parameters. The data obtained suggest that the structural data furnished by the AM1 method is in better agreement with those experimentally determined for the above compounds.
Zhao, Xueli; Arsenault, Andre; Lavoie, Kim L; Meloche, Bernard; Bacon, Simon L
2007-01-01
Forearm Endothelial Function (FEF) is a marker that has been shown to discriminate patients with cardiovascular disease (CVD). FEF has been assessed using several parameters: the Rate of Uptake Ratio (RUR), EWUR (Elbow-to-Wrist Uptake Ratio) and EWRUR (Elbow-to-Wrist Relative Uptake Ratio). However, the modeling functions of FEF require more robust models. The present study was designed to compare an empirical method with quantitative modeling techniques to better estimate the physiological parameters and understand the complex dynamic processes. The fitted time activity curves of the forearms, estimating blood and muscle components, were assessed using both an empirical method and a two-compartment model. Although correlational analyses suggested a good correlation between the methods for RUR (r=.90) and EWUR (r=.79), but not EWRUR (r=.34), Altman-Bland plots found poor agreement between the methods for all 3 parameters. These results indicate that there is a large discrepancy between the empirical and computational method for FEF. Further work is needed to establish the physiological and mathematical validity of the 2 modeling methods.
NASA Technical Reports Server (NTRS)
Smith, S. D.; Tevepaugh, J. A.; Penny, M. M.
1975-01-01
The exhaust plumes of the space shuttle solid rocket motors can have a significant effect on the base pressure and base drag of the shuttle vehicle. A parametric analysis was conducted to assess the sensitivity of the initial plume expansion angle of analytical solid rocket motor flow fields to various analytical input parameters and operating conditions. The results of the analysis are presented and conclusions reached regarding the sensitivity of the initial plume expansion angle to each parameter investigated. Operating conditions parametrically varied were chamber pressure, nozzle inlet angle, nozzle throat radius of curvature ratio and propellant particle loading. Empirical particle parameters investigated were mean size, local drag coefficient and local heat transfer coefficient. Sensitivity of the initial plume expansion angle to gas thermochemistry model and local drag coefficient model assumptions were determined.
AAA gunnermodel based on observer theory. [predicting a gunner's tracking response
NASA Technical Reports Server (NTRS)
Kou, R. S.; Glass, B. C.; Day, C. N.; Vikmanis, M. M.
1978-01-01
The Luenberger observer theory is used to develop a predictive model of a gunner's tracking response in antiaircraft artillery systems. This model is composed of an observer, a feedback controller and a remnant element. An important feature of the model is that the structure is simple, hence a computer simulation requires only a short execution time. A parameter identification program based on the least squares curve fitting method and the Gauss Newton gradient algorithm is developed to determine the parameter values of the gunner model. Thus, a systematic procedure exists for identifying model parameters for a given antiaircraft tracking task. Model predictions of tracking errors are compared with human tracking data obtained from manned simulation experiments. Model predictions are in excellent agreement with the empirical data for several flyby and maneuvering target trajectories.
The momentum transfer of incompressible turbulent separated flow due to cavities with steps
NASA Technical Reports Server (NTRS)
White, R. E.; Norton, D. J.
1977-01-01
An experimental study was conducted using a plate test bed having a turbulent boundary layer to determine the momentum transfer to the faces of step/cavity combinations on the plate. Experimental data were obtained from configurations including an isolated configuration and an array of blocks in tile patterns. A momentum transfer correlation model of pressure forces on an isolated step/cavity was developed with experimental results to relate flow and geometry parameters. Results of the experiments reveal that isolated step/cavity excrecences do not have a unique and unifying parameter group due in part to cavity depth effects and in part to width parameter scale effects. Drag predictions for tile patterns by a kinetic pressure empirical method predict experimental results well. Trends were not, however, predicted by a method of variable roughness density phenomenology.
How market structure drives commodity prices
NASA Astrophysics Data System (ADS)
Li, Bin; Wong, K. Y. Michael; Chan, Amos H. M.; So, Tsz Yan; Heimonen, Hermanni; Wei, Junyi; Saad, David
2017-11-01
We introduce an agent-based model, in which agents set their prices to maximize profit. At steady state the market self-organizes into three groups: excess producers, consumers and balanced agents, with prices determined by their own resource level and a couple of macroscopic parameters that emerge naturally from the analysis, akin to mean-field parameters in statistical mechanics. When resources are scarce prices rise sharply below a turning point that marks the disappearance of excess producers. To compare the model with real empirical data, we study the relationship between commodity prices and stock-to-use ratios in a range of commodities such as agricultural products and metals. By introducing an elasticity parameter to mitigate noise and long-term changes in commodities data, we confirm the trend of rising prices, provide evidence for turning points, and indicate yield points for less essential commodities.
NASA Astrophysics Data System (ADS)
Reaver, N.; Kaplan, D. A.; Jawitz, J. W.
2017-12-01
The Budyko hypothesis states that a catchment's long-term water and energy balances are dependent on two relatively easy to measure quantities: rainfall depth and potential evaporation. This hypothesis is expressed as a simple function, the Budyko equation, which allows for the prediction of a catchment's actual evapotranspiration and discharge from measured rainfall depth and potential evaporation, data which are widely available. However, the two main analytically derived forms of the Budyko equation contain a single unknown watershed parameter, whose value varies across catchments; variation in this parameter has been used to explain the hydrological behavior of different catchments. The watershed parameter is generally thought of as a lumped quantity that represents the influence of all catchment biophysical features (e.g. soil type and depth, vegetation type, timing of rainfall, etc). Previous work has shown that the parameter is statistically correlated with catchment properties, but an explicit expression has been elusive. While the watershed parameter can be determined empirically by fitting the Budyko equation to measured data in gauged catchments where actual evapotranspiration can be estimated, this limits the utility of the framework for predicting impacts to catchment hydrology due to changing climate and land use. In this study, we developed an analytical solution for the lumped catchment parameter for both forms of the Budyko equation. We combined these solutions with a statistical soil moisture model to obtain analytical solutions for the Budyko equation parameter as a function of measurable catchment physical features, including rooting depth, soil porosity, and soil wilting point. We tested the predictive power of these solutions using the U.S. catchments in the MOPEX database. We also compared the Budyko equation parameter estimates generated from our analytical solutions (i.e. predicted parameters) with those obtained through the calibration of the Budyko equation to discharge data (i.e. empirical parameters), and found good agreement. These results suggest that it is possible to predict the Budyko equation watershed parameter directly from physical features, even for ungauged catchments.
Systematic effects in LOD from SLR observations
NASA Astrophysics Data System (ADS)
Bloßfeld, Mathis; Gerstl, Michael; Hugentobler, Urs; Angermann, Detlef; Müller, Horst
2014-09-01
Beside the estimation of station coordinates and the Earth’s gravity field, laser ranging observations to near-Earth satellites can be used to determine the rotation of the Earth. One parameter of this rotation is ΔLOD (excess Length Of Day) which describes the excess revolution time of the Earth w.r.t. 86,400 s. Due to correlations among the different parameter groups, it is difficult to obtain reliable estimates for all parameters. In the official ΔLOD products of the International Earth Rotation and Reference Systems Service (IERS), the ΔLOD information determined from laser ranging observations is excluded from the processing. In this paper, we study the existing correlations between ΔLOD, the orbital node Ω, the even zonal gravity field coefficients, cross-track empirical accelerations and relativistic accelerations caused by the Lense-Thirring and deSitter effect in detail using first order Gaussian perturbation equations. We found discrepancies due to different a priories by using different gravity field models of up to 1.0 ms for polar orbits at an altitude of 500 km and up to 40.0 ms, if the gravity field coefficients are estimated using only observations to LAGEOS 1. If observations to LAGEOS 2 are included, reliable ΔLOD estimates can be achieved. Nevertheless, an impact of the a priori gravity field even on the multi-satellite ΔLOD estimates can be clearly identified. Furthermore, we investigate the effect of empirical cross-track accelerations and the effect of relativistic accelerations of near-Earth satellites on ΔLOD. A total effect of 0.0088 ms is caused by not modeled Lense-Thirring and deSitter terms. The partial derivatives of these accelerations w.r.t. the position and velocity of the satellite cause very small variations (0.1 μs) on ΔLOD.
Empirical Green's function analysis: Taking the next step
Hough, S.E.
1997-01-01
An extension of the empirical Green's function (EGF) method is presented that involves determination of source parameters using standard EGF deconvolution, followed by inversion for a common attenuation parameter for a set of colocated events. Recordings of three or more colocated events can thus be used to constrain a single path attenuation estimate. I apply this method to recordings from the 1995-1996 Ridgecrest, California, earthquake sequence; I analyze four clusters consisting of 13 total events with magnitudes between 2.6 and 4.9. I first obtain corner frequencies, which are used to infer Brune stress drop estimates. I obtain stress drop values of 0.3-53 MPa (with all but one between 0.3 and 11 MPa), with no resolved increase of stress drop with moment. With the corner frequencies constrained, the inferred attenuation parameters are very consistent; they imply an average shear wave quality factor of approximately 20-25 for alluvial sediments within the Indian Wells Valley. Although the resultant spectral fitting (using corner frequency and ??) is good, the residuals are consistent among the clusters analyzed. Their spectral shape is similar to the the theoretical one-dimensional response of a layered low-velocity structure in the valley (an absolute site response cannot be determined by this method, because of an ambiguity between absolute response and source spectral amplitudes). I show that even this subtle site response can significantly bias estimates of corner frequency and ??, if it is ignored in an inversion for only source and path effects. The multiple-EGF method presented in this paper is analogous to a joint inversion for source, path, and site effects; the use of colocated sets of earthquakes appears to offer significant advantages in improving resolution of all three estimates, especially if data are from a single site or sites with similar site response.
A pore-pressure diffusion model for estimating landslide-inducing rainfall
Reid, M.E.
1994-01-01
Many types of landslide movement are induced by large rainstorms, and empirical rainfall intensity/duration thresholds for initiating movement have been determined for various parts of the world. In this paper, I present a simple pressure diffusion model that provides a physically based hydrologic link between rainfall intensity/duration at the ground surface and destabilizing pore-water pressures at depth. The model approximates rainfall infiltration as a sinusoidally varying flux over time and uses physical parameters that can be determined independently. Using a comprehensive data set from an intensively monitored landslide, I demonstrate that the model is capable of distinguishing movement-inducing rainstorms. -Author
NASA Technical Reports Server (NTRS)
Murphy, P. C.
1986-01-01
An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. With the fitted surface, sensitivity information can be updated at each iteration with less computational effort than that required by either a finite-difference method or integration of the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, and thus provides flexibility to use model equations in any convenient format. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. The degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels and to predict the degree of agreement between CR bounds and search estimates.
van der Eerden, M M; Vlaspolder, F; de Graaff, C S; Groot, T; Bronsveld, W; Jansen, H; Boersma, W
2005-01-01
Background: There is much controversy about the ideal approach to the management of community acquired pneumonia (CAP). Recommendations differ from a pathogen directed approach to an empirical strategy with broad spectrum antibiotics. Methods: In a prospective randomised open study performed between 1998 and 2000, a pathogen directed treatment (PDT) approach was compared with an empirical broad spectrum antibiotic treatment (EAT) strategy according to the ATS guidelines of 1993 in 262 hospitalised patients with CAP. Clinical efficacy was primarily determined by the length of hospital stay (LOS). Secondary outcome parameters for clinical efficacy were assessment of therapeutic failure on antibiotics, 30 day mortality, duration of antibiotic treatment, resolution of fever, side effects, and quality of life. Results: Three hundred and three patients were enrolled in the study; 41 were excluded, leaving 262 with results available for analysis. No significant differences were found between the two treatment groups in LOS, 30 day mortality, clinical failure, or resolution of fever. Side effects, although they did not have a significant influence on the outcome parameters, occurred more frequently in patients in the EAT group than in those in the PDT group (60% v 17%, 95% CI –0.5 to –0.3; p<0.001). Conclusions: An EAT strategy with broad spectrum antibiotics for the management of hospitalised patients with CAP has comparable clinical efficacy to a PDT approach. PMID:16061709
Extremes in ecology: Avoiding the misleading effects of sampling variation in summary analyses
Link, W.A.; Sauer, J.R.
1996-01-01
Surveys such as the North American Breeding Bird Survey (BBS) produce large collections of parameter estimates. One's natural inclination when confronted with lists of parameter estimates is to look for the extreme values: in the BBS, these correspond to the species that appear to have the greatest changes in population size through time. Unfortunately, extreme estimates are liable to correspond to the most poorly estimated parameters. Consequently, the most extreme parameters may not match up with the most extreme parameter estimates. The ranking of parameter values on the basis of their estimates are a difficult statistical problem. We use data from the BBS and simulations to illustrate the potential misleading effects of sampling variation in rankings of parameters. We describe empirical Bayes and constrained empirical Bayes procedures which provide partial solutions to the problem of ranking in the presence of sampling variation.
NASA Technical Reports Server (NTRS)
Murphy, Patrick Charles
1985-01-01
An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.
Thermodynamics of concentrated solid solution alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Michael C.; Zhang, C.; Gao, P.
This study reviews the three main approaches for predicting the formation of concentrated solid solution alloys (CSSA) and for modeling their thermodynamic properties, in particular, utilizing the methodologies of empirical thermo-physical parameters, CALPHAD method, and first-principles calculations combined with hybrid Monte Carlo/Molecular Dynamics (MC/MD) simulations. In order to speed up CSSA development, a variety of empirical parameters based on Hume-Rothery rules have been developed. Herein, these parameters have been systematically and critically evaluated for their efficiency in predicting solid solution formation. The phase stability of representative CSSA systems is then illustrated from the perspectives of phase diagrams and nucleation drivingmore » force plots of the σ phase using CALPHAD method. The temperature-dependent total entropies of the FCC, BCC, HCP, and σ phases in equimolar compositions of various systems are presented next, followed by the thermodynamic properties of mixing of the BCC phase in Al-containing and Ti-containing refractory metal systems. First-principles calculations on model FCC, BCC and HCP CSSA reveal the presence of both positive and negative vibrational entropies of mixing, while the calculated electronic entropies of mixing are negligible. Temperature dependent configurational entropy is determined from the atomic structures obtained from MC/MD simulations. Current status and challenges in using these methodologies as they pertain to thermodynamic property analysis and CSSA design are discussed.« less
Thermodynamics of concentrated solid solution alloys
Gao, Michael C.; Zhang, C.; Gao, P.; ...
2017-10-12
This study reviews the three main approaches for predicting the formation of concentrated solid solution alloys (CSSA) and for modeling their thermodynamic properties, in particular, utilizing the methodologies of empirical thermo-physical parameters, CALPHAD method, and first-principles calculations combined with hybrid Monte Carlo/Molecular Dynamics (MC/MD) simulations. In order to speed up CSSA development, a variety of empirical parameters based on Hume-Rothery rules have been developed. Herein, these parameters have been systematically and critically evaluated for their efficiency in predicting solid solution formation. The phase stability of representative CSSA systems is then illustrated from the perspectives of phase diagrams and nucleation drivingmore » force plots of the σ phase using CALPHAD method. The temperature-dependent total entropies of the FCC, BCC, HCP, and σ phases in equimolar compositions of various systems are presented next, followed by the thermodynamic properties of mixing of the BCC phase in Al-containing and Ti-containing refractory metal systems. First-principles calculations on model FCC, BCC and HCP CSSA reveal the presence of both positive and negative vibrational entropies of mixing, while the calculated electronic entropies of mixing are negligible. Temperature dependent configurational entropy is determined from the atomic structures obtained from MC/MD simulations. Current status and challenges in using these methodologies as they pertain to thermodynamic property analysis and CSSA design are discussed.« less
NASA Astrophysics Data System (ADS)
Markov, M.; Levin, V.; Markova, I.
2018-02-01
The paper presents an approach to determine the effective electromagnetic parameters of suspensions of ellipsoidal dielectric particles with surface conductivity. This approach takes into account the existence of critical porosity that corresponds to the maximum packing volume fraction of solid inclusions. The approach is based on the Generalized Differential Effective Medium (GDEM) method. We have introduced a model of suspensions containing ellipsoidal inclusions of two types. Inclusions of the first type (phase 1) represent solid grains, and inclusions of the second type (phase 2) contain material with the same physical properties as the host (phase 0). In this model, with increasing porosity the concentration of the host decreases, and it tends to zero near the critical porosity. The proposed model has been used to simulate the effective electromagnetic parameters of concentrated suspensions. We have compared the modeling results for electrical conductivity and dielectric permittivity with the empirical equations. The results obtained have shown that the GDEM model describes the effective electrical conductivity and dielectric permittivity of suspensions in a wide range of inclusion concentrations.
An empirical model for polarized and cross-polarized scattering from a vegetation layer
NASA Technical Reports Server (NTRS)
Liu, H. L.; Fung, A. K.
1988-01-01
An empirical model for scattering from a vegetation layer above an irregular ground surface is developed in terms of the first-order solution for like-polarized scattering and the second-order solution for cross-polarized scattering. The effects of multiple scattering within the layer and at the surface-volume boundary are compensated by using a correction factor based on the matrix doubling method. The major feature of this model is that all parameters in the model are physical parameters of the vegetation medium. There are no regression parameters. Comparisons of this empirical model with theoretical matrix-doubling method and radar measurements indicate good agreements in polarization, angular trends, and k sub a up to 4, where k is the wave number and a is the disk radius. The computational time is shortened by a factor of 8, relative to the theoretical model calculation.
Modelling of the combustion velocity in UIT-85 on sustainable alternative gas fuel
NASA Astrophysics Data System (ADS)
Smolenskaya, N. M.; Korneev, N. V.
2017-05-01
The flame propagation velocity is one of the determining parameters characterizing the intensity of combustion process in the cylinder of an engine with spark ignition. Strengthening of requirements for toxicity and efficiency of the ICE contributes to gradual transition to sustainable alternative fuels, which include the mixture of natural gas with hydrogen. Currently, studies of conditions and regularities of combustion of this fuel to improve efficiency of its application are carried out in many countries. Therefore, the work is devoted to modeling the average propagation velocities of natural gas flame front laced with hydrogen to 15% by weight of the fuel, and determining the possibility of assessing the heat release characteristics on the average velocities of the flame front propagation in the primary and secondary phases of combustion. Experimental studies, conducted the on single cylinder universal installation UIT-85, showed the presence of relationship of the heat release characteristics with the parameters of the flame front propagation. Based on the analysis of experimental data, the empirical dependences for determination of average velocities of flame front propagation in the first and main phases of combustion, taking into account the change in various parameters of engine operation with spark ignition, were obtained. The obtained results allow to determine the characteristics of heat dissipation and to assess the impact of addition of hydrogen to the natural gas combustion process, that is needed to identify ways of improvement of the combustion process efficiency, including when you change the throttling parameters.
NASA Technical Reports Server (NTRS)
Parsons, David S.; Ordway, David; Johnson, Kenneth
2013-01-01
This experimental study seeks to quantify the impact various composite parameters have on the structural response of a composite structure in a pyroshock environment. The prediction of an aerospace structure's response to pyroshock induced loading is largely dependent on empirical databases created from collections of development and flight test data. While there is significant structural response data due to pyroshock induced loading for metallic structures, there is much less data available for composite structures. One challenge of developing a composite pyroshock response database as well as empirical prediction methods for composite structures is the large number of parameters associated with composite materials. This experimental study uses data from a test series planned using design of experiments (DOE) methods. Statistical analysis methods are then used to identify which composite material parameters most greatly influence a flat composite panel's structural response to pyroshock induced loading. The parameters considered are panel thickness, type of ply, ply orientation, and pyroshock level induced into the panel. The results of this test will aid in future large scale testing by eliminating insignificant parameters as well as aid in the development of empirical scaling methods for composite structures' response to pyroshock induced loading.
NASA Technical Reports Server (NTRS)
Parsons, David S.; Ordway, David O.; Johnson, Kenneth L.
2013-01-01
This experimental study seeks to quantify the impact various composite parameters have on the structural response of a composite structure in a pyroshock environment. The prediction of an aerospace structure's response to pyroshock induced loading is largely dependent on empirical databases created from collections of development and flight test data. While there is significant structural response data due to pyroshock induced loading for metallic structures, there is much less data available for composite structures. One challenge of developing a composite pyroshock response database as well as empirical prediction methods for composite structures is the large number of parameters associated with composite materials. This experimental study uses data from a test series planned using design of experiments (DOE) methods. Statistical analysis methods are then used to identify which composite material parameters most greatly influence a flat composite panel's structural response to pyroshock induced loading. The parameters considered are panel thickness, type of ply, ply orientation, and pyroshock level induced into the panel. The results of this test will aid in future large scale testing by eliminating insignificant parameters as well as aid in the development of empirical scaling methods for composite structures' response to pyroshock induced loading.
Limits of Predictability in Commuting Flows in the Absence of Data for Calibration
Yang, Yingxiang; Herrera, Carlos; Eagle, Nathan; González, Marta C.
2014-01-01
The estimation of commuting flows at different spatial scales is a fundamental problem for different areas of study. Many current methods rely on parameters requiring calibration from empirical trip volumes. Their values are often not generalizable to cases without calibration data. To solve this problem we develop a statistical expression to calculate commuting trips with a quantitative functional form to estimate the model parameter when empirical trip data is not available. We calculate commuting trip volumes at scales from within a city to an entire country, introducing a scaling parameter α to the recently proposed parameter free radiation model. The model requires only widely available population and facility density distributions. The parameter can be interpreted as the influence of the region scale and the degree of heterogeneity in the facility distribution. We explore in detail the scaling limitations of this problem, namely under which conditions the proposed model can be applied without trip data for calibration. On the other hand, when empirical trip data is available, we show that the proposed model's estimation accuracy is as good as other existing models. We validated the model in different regions in the U.S., then successfully applied it in three different countries. PMID:25012599
TU-FG-201-09: Predicting Accelerator Dysfunction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Able, C; Nguyen, C; Baydush, A
Purpose: To develop an integrated statistical process control (SPC) framework using digital performance and component data accumulated within the accelerator system that can detect dysfunction prior to unscheduled downtime. Methods: Seven digital accelerators were monitored for twelve to 18 months. The accelerators were operated in a ‘run to failure mode’ with the individual institutions determining when service would be initiated. Institutions were required to submit detailed service reports. Trajectory and text log files resulting from a robust daily VMAT QA delivery were decoded and evaluated using Individual and Moving Range (I/MR) control charts. The SPC evaluation was presented in amore » customized dashboard interface that allows the user to review 525 monitored parameters (480 MLC parameters). Chart limits were calculated using a hybrid technique that includes the standard SPC 3σ limits and an empirical factor based on the parameter/system specification. The individual (I) grand mean values and control limit ranges of the I/MR charts of all accelerators were compared using statistical (ranked analysis of variance (ANOVA)) and graphical analyses to determine consistency of operating parameters. Results: When an alarm or warning was directly connected to field service, process control charts predicted dysfunction consistently on beam generation related parameters (BGP)– RF Driver Voltage, Gun Grid Voltage, and Forward Power (W); beam uniformity parameters – angle and position steering coil currents; and Gantry position accuracy parameter: cross correlation max-value. Control charts for individual MLC – cross correlation max-value/position detected 50% to 60% of MLCs serviced prior to dysfunction or failure. In general, non-random changes were detected 5 to 80 days prior to a service intervention. The ANOVA comparison of BGP determined that each accelerator parameter operated at a distinct value. Conclusion: The SPC framework shows promise. Long term monitoring coordinated with service will be required to definitively determine the effectiveness of the model. Varian Medical System, Inc. provided funding in support of the research presented.« less
NASA Astrophysics Data System (ADS)
Lindgren, Sara; Heiter, Ulrike
2017-08-01
Context. Reliable metallicity values for late K and M dwarfs are important for studies of the chemical evolution of the Galaxy and advancement of planet formation theory in low-mass environments. Historically it has been challenging to determine the stellar parameters of low-mass stars because of their low surface temperature, which causes several molecules to form in the photospheric layers. In our work we use the fact that infrared high-resolution spectrographs have opened up a new window for investigating M dwarfs. This enables us to use similar methods as for warmer solar-like stars. Aims: Metallicity determination with high-resolution spectra is more accurate than with low-resolution spectra, but it is rather time consuming. In this paper we expand our sample analyzed with this precise method both in metallicity and effective temperature to build a calibration sample for a future revised empirical calibration. Methods: Because of the relatively few molecular lines in the J band, continuum rectification is possible for high-resolution spectra, allowing the stellar parameters to be determined with greater accuracy than with optical spectra. We obtained high-resolution spectra with the CRIRES spectrograph at the Very Large Telescope (VLT). The metallicity was determined using synthetic spectral fitting of several atomic species. For M dwarfs that are cooler than 3575 K, the line strengths of FeH lines were used to determine the effective temperatures, while for warmer stars a photometric calibration was used. Results: We analyzed 16 targets with a range of effective temperature from 3350-4550 K. The resulting metallicities lie between -0.5< [M/H] < +0.4. A few targets have previously been analyzed using low-resolution spectra and we find a rather good agreement with our values. A comparison with available photometric calibrations shows varying agreement and the spread within all empirical calibrations is large. Conclusions: Including the targets from our previous paper, we analyzed 28 M dwarfs with high-resolution infrared spectra. The targets spread approximately one dex in metallicity and 1400 K in effective temperature. For individual M dwarfs we achieve uncertainties of 0.05 dex and 100 K on average. Based on data obtained at ESO-VLT, Paranal Observatory, Chile, Program ID 090.D-0796(A).
Modeling Infrared Signal Reflections to Characterize Indoor Multipath Propagation
De-La-Llana-Calvo, Álvaro; Lázaro-Galilea, José Luis; Gardel-Vicente, Alfredo; Rodríguez-Navarro, David; Bravo-Muñoz, Ignacio; Tsirigotis, Georgios; Iglesias-Miguel, Juan
2017-01-01
In this paper, we propose a model to characterize Infrared (IR) signal reflections on any kind of surface material, together with a simplified procedure to compute the model parameters. The model works within the framework of Local Positioning Systems (LPS) based on IR signals (IR-LPS) to evaluate the behavior of transmitted signal Multipaths (MP), which are the main cause of error in IR-LPS, and makes several contributions to mitigation methods. Current methods are based on physics, optics, geometry and empirical methods, but these do not meet our requirements because of the need to apply several different restrictions and employ complex tools. We propose a simplified model based on only two reflection components, together with a method for determining the model parameters based on 12 empirical measurements that are easily performed in the real environment where the IR-LPS is being applied. Our experimental results show that the model provides a comprehensive solution to the real behavior of IR MP, yielding small errors when comparing real and modeled data (the mean error ranges from 1% to 4% depending on the environment surface materials). Other state-of-the-art methods yielded mean errors ranging from 15% to 40% in test measurements. PMID:28406436
Son, H S; Hong, Y S; Park, W M; Yu, M A; Lee, C H
2009-03-01
To estimate true Brix and alcoholic strength of must and wines without distillation, a novel approach using a refractometer and a hydrometer was developed. Initial Brix (I.B.), apparent refractometer Brix (A.R.), and apparent hydrometer Brix (A.H.) of must were measured by refractometer and hydrometer, respectively. Alcohol content (A) was determined with a hydrometer after distillation and true Brix (T.B.) was measured in distilled wines using a refractometer. Strong proportional correlations among A.R., A.H., T.B., and A in sugar solutions containing varying alcohol concentrations were observed in preliminary experiments. Similar proportional relationships among the parameters were also observed in must, which is a far more complex system than the sugar solution. To estimate T.B. and A of must during alcoholic fermentation, a total of 6 planar equations were empirically derived from the relationships among the experimental parameters. The empirical equations were then tested to estimate T.B. and A in 17 wine products, and resulted in good estimations of both quality factors. This novel approach was rapid, easy, and practical for use in routine analyses or for monitoring quality of must during fermentation and final wine products in a winery and/or laboratory.
Estimation of stochastic volatility with long memory for index prices of FTSE Bursa Malaysia KLCI
NASA Astrophysics Data System (ADS)
Chen, Kho Chia; Bahar, Arifah; Kane, Ibrahim Lawal; Ting, Chee-Ming; Rahman, Haliza Abd
2015-02-01
In recent years, modeling in long memory properties or fractionally integrated processes in stochastic volatility has been applied in the financial time series. A time series with structural breaks can generate a strong persistence in the autocorrelation function, which is an observed behaviour of a long memory process. This paper considers the structural break of data in order to determine true long memory time series data. Unlike usual short memory models for log volatility, the fractional Ornstein-Uhlenbeck process is neither a Markovian process nor can it be easily transformed into a Markovian process. This makes the likelihood evaluation and parameter estimation for the long memory stochastic volatility (LMSV) model challenging tasks. The drift and volatility parameters of the fractional Ornstein-Unlenbeck model are estimated separately using the least square estimator (lse) and quadratic generalized variations (qgv) method respectively. Finally, the empirical distribution of unobserved volatility is estimated using the particle filtering with sequential important sampling-resampling (SIR) method. The mean square error (MSE) between the estimated and empirical volatility indicates that the performance of the model towards the index prices of FTSE Bursa Malaysia KLCI is fairly well.
Vasylkivska, Veronika S.; Huerta, Nicolas J.
2017-06-24
Determining the spatiotemporal characteristics of natural and induced seismic events holds the opportunity to gain new insights into why these events occur. Linking the seismicity characteristics with other geologic, geographic, natural, or anthropogenic factors could help to identify the causes and suggest mitigation strategies that reduce the risk associated with such events. The nearest-neighbor approach utilized in this work represents a practical first step toward identifying statistically correlated clusters of recorded earthquake events. Detailed study of the Oklahoma earthquake catalog’s inherent errors, empirical model parameters, and model assumptions is presented. We found that the cluster analysis results are stable withmore » respect to empirical parameters (e.g., fractal dimension) but were sensitive to epicenter location errors and seismicity rates. Most critically, we show that the patterns in the distribution of earthquake clusters in Oklahoma are primarily defined by spatial relationships between events. This observation is a stark contrast to California (also known for induced seismicity) where a comparable cluster distribution is defined by both spatial and temporal interactions between events. These results highlight the difficulty in understanding the mechanisms and behavior of induced seismicity but provide insights for future work.« less
NASA Astrophysics Data System (ADS)
Vasylkivska, Veronika S.; Huerta, Nicolas J.
2017-07-01
Determining the spatiotemporal characteristics of natural and induced seismic events holds the opportunity to gain new insights into why these events occur. Linking the seismicity characteristics with other geologic, geographic, natural, or anthropogenic factors could help to identify the causes and suggest mitigation strategies that reduce the risk associated with such events. The nearest-neighbor approach utilized in this work represents a practical first step toward identifying statistically correlated clusters of recorded earthquake events. Detailed study of the Oklahoma earthquake catalog's inherent errors, empirical model parameters, and model assumptions is presented. We found that the cluster analysis results are stable with respect to empirical parameters (e.g., fractal dimension) but were sensitive to epicenter location errors and seismicity rates. Most critically, we show that the patterns in the distribution of earthquake clusters in Oklahoma are primarily defined by spatial relationships between events. This observation is a stark contrast to California (also known for induced seismicity) where a comparable cluster distribution is defined by both spatial and temporal interactions between events. These results highlight the difficulty in understanding the mechanisms and behavior of induced seismicity but provide insights for future work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasylkivska, Veronika S.; Huerta, Nicolas J.
Determining the spatiotemporal characteristics of natural and induced seismic events holds the opportunity to gain new insights into why these events occur. Linking the seismicity characteristics with other geologic, geographic, natural, or anthropogenic factors could help to identify the causes and suggest mitigation strategies that reduce the risk associated with such events. The nearest-neighbor approach utilized in this work represents a practical first step toward identifying statistically correlated clusters of recorded earthquake events. Detailed study of the Oklahoma earthquake catalog’s inherent errors, empirical model parameters, and model assumptions is presented. We found that the cluster analysis results are stable withmore » respect to empirical parameters (e.g., fractal dimension) but were sensitive to epicenter location errors and seismicity rates. Most critically, we show that the patterns in the distribution of earthquake clusters in Oklahoma are primarily defined by spatial relationships between events. This observation is a stark contrast to California (also known for induced seismicity) where a comparable cluster distribution is defined by both spatial and temporal interactions between events. These results highlight the difficulty in understanding the mechanisms and behavior of induced seismicity but provide insights for future work.« less
Empirical Modeling of the Plasmasphere Dynamics Using Neural Networks
NASA Astrophysics Data System (ADS)
Zhelavskaya, I. S.; Shprits, Y.; Spasojevic, M.
2017-12-01
We present a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural-network-based Upper hybrid Resonance Determination) algorithm for the period of October 1, 2012 - July 1, 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2 ≤ L ≤ 6 and all local times. We validate and test the model by measuring its performance on independent datasets withheld from the training set and by comparing the model predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in-situ observations by using machine learning techniques.
Reconstruction of normal forms by learning informed observation geometries from data.
Yair, Or; Talmon, Ronen; Coifman, Ronald R; Kevrekidis, Ioannis G
2017-09-19
The discovery of physical laws consistent with empirical observations is at the heart of (applied) science and engineering. These laws typically take the form of nonlinear differential equations depending on parameters; dynamical systems theory provides, through the appropriate normal forms, an "intrinsic" prototypical characterization of the types of dynamical regimes accessible to a given model. Using an implementation of data-informed geometry learning, we directly reconstruct the relevant "normal forms": a quantitative mapping from empirical observations to prototypical realizations of the underlying dynamics. Interestingly, the state variables and the parameters of these realizations are inferred from the empirical observations; without prior knowledge or understanding, they parametrize the dynamics intrinsically without explicit reference to fundamental physical quantities.
Variations in embodied energy and carbon emission intensities of construction materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan Omar, Wan-Mohd-Sabki; School of Environmental Engineering, Universiti Malaysia Perlis, 02600 Arau, Perlis; Doh, Jeung-Hwan, E-mail: j.doh@griffith.edu.au
2014-11-15
Identification of parameter variation allows us to conduct more detailed life cycle assessment (LCA) of energy and carbon emission material over their lifecycle. Previous research studies have demonstrated that hybrid LCA (HLCA) can generally overcome the problems of incompleteness and accuracy of embodied energy (EE) and carbon (EC) emission assessment. Unfortunately, the current interpretation and quantification procedure has not been extensively and empirically studied in a qualitative manner, especially in hybridising between the process LCA and I-O LCA. To determine this weakness, this study empirically demonstrates the changes in EE and EC intensities caused by variations to key parameters inmore » material production. Using Australia and Malaysia as a case study, the results are compared with previous hybrid models to identify key parameters and issues. The parameters considered in this study are technological changes, energy tariffs, primary energy factors, disaggregation constant, emission factors, and material price fluctuation. It was found that changes in technological efficiency, energy tariffs and material prices caused significant variations in the model. Finally, the comparison of hybrid models revealed that non-energy intensive materials greatly influence the variations due to high indirect energy and carbon emission in upstream boundary of material production, and as such, any decision related to these materials should be considered carefully. - Highlights: • We investigate the EE and EC intensity variation in Australia and Malaysia. • The influences of parameter variations on hybrid LCA model were evaluated. • Key significant contribution to the EE and EC intensity variation were identified. • High indirect EE and EC content caused significant variation in hybrid LCA models. • Non-energy intensive material caused variation between hybrid LCA models.« less
NASA Astrophysics Data System (ADS)
Zielke, O.; Arrowsmith, J.
2007-12-01
In order to determine the magnitude of pre-historic earthquakes, surface rupture length, average and maximum surface displacement are utilized, assuming that an earthquake of a specific size will cause surface features of correlated size. The well known Wells and Coppersmith (1994) paper and other studies defined empirical relationships between these and other parameters, based on historic events with independently known magnitude and rupture characteristics. However, these relationships show relatively large standard deviations and they are based only on a small number of events. To improve these first-order empirical relationships, the observation location relative to the rupture extent within the regional tectonic framework should be accounted for. This however cannot be done based on natural seismicity because of the limited size of datasets on large earthquakes. We have developed the numerical model FIMozFric, based on derivations by Okada (1992) to create synthetic seismic records for a given fault or fault system under the influence of either slip- or stress boundary conditions. Our model features A) the introduction of an upper and lower aseismic zone, B) a simple Coulomb friction law, C) bulk parameters simulating fault heterogeneity, and D) a fault interaction algorithm handling the large number of fault patches (typically 5,000-10,000). The joint implementation of these features produces well behaved synthetic seismic catalogs and realistic relationships among magnitude and surface rupture characteristics which are well within the error of the results by Wells and Coppersmith (1994). Furthermore, we use the synthetic seismic records to show that the relationships between magntiude and rupture characteristics are a function of the observation location within the regional tectonic framework. The model presented here can to provide paleoseismologists with a tool to improve magnitude estimates from surface rupture characteristics, by incorporating the regional and local structural context which can be determined in the field: Assuming a paleoseismologist measures the offset along a fault caused by an earthquake, our model can be used to determine the probability distribution of magnitudes which are capable of producing the observed offset, accounting for regional tectonic setting and observation location.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Selle, J E
Attempts were made to apply the Kaufman method of calculating binary phase diagrams to the calculation of binary phase diagrams between the rare earths, actinides, and the refractory transition metals. Difficulties were encountered in applying the method to the rare earths and actinides, and modifications were necessary to provide accurate representation of known diagrams. To calculate the interaction parameters for rare earth-rare earth diagrams, it was necessary to use the atomic volumes for each of the phases: liquid, body-centered cubic, hexagonal close-packed, and face-centered cubic. Determination of the atomic volumes of each of these phases for each element is discussedmore » in detail. In some cases, empirical means were necessary. Results are presented on the calculation of rare earth-rare earth, rare earth-actinide, and actinide-actinide diagrams. For rare earth-refractory transition metal diagrams and actinide-refractory transition metal diagrams, empirical means were required to develop values for the enthalpy of vaporization for rare earth elements and values for the constant (C) required when intermediate phases are present. Results of using the values determined for each element are presented.« less
GRAM-86 - FOUR DIMENSIONAL GLOBAL REFERENCE ATMOSPHERE MODEL
NASA Technical Reports Server (NTRS)
Johnson, D.
1994-01-01
The Four-D Global Reference Atmosphere program was developed from an empirical atmospheric model which generates values for pressure, density, temperature, and winds from surface level to orbital altitudes. This program can be used to generate altitude profiles of atmospheric parameters along any simulated trajectory through the atmosphere. The program was developed for design applications in the Space Shuttle program, such as the simulation of external tank re-entry trajectories. Other potential applications would be global circulation and diffusion studies, and generating profiles for comparison with other atmospheric measurement techniques, such as satellite measured temperature profiles and infrasonic measurement of wind profiles. The program is an amalgamation of two empirical atmospheric models for the low (25km) and the high (90km) atmosphere, with a newly developed latitude-longitude dependent model for the middle atmosphere. The high atmospheric region above 115km is simulated entirely by the Jacchia (1970) model. The Jacchia program sections are in separate subroutines so that other thermosphericexospheric models could easily be adapted if required for special applications. The atmospheric region between 30km and 90km is simulated by a latitude-longitude dependent empirical model modification of the latitude dependent empirical model of Groves (1971). Between 90km and 115km a smooth transition between the modified Groves values and the Jacchia values is accomplished by a fairing technique. Below 25km the atmospheric parameters are computed by the 4-D worldwide atmospheric model of Spiegler and Fowler (1972). This data set is not included. Between 25km and 30km an interpolation scheme is used between the 4-D results and the modified Groves values. The output parameters consist of components for: (1) latitude, longitude, and altitude dependent monthly and annual means, (2) quasi-biennial oscillations (QBO), and (3) random perturbations to partially simulate the variability due to synoptic, diurnal, planetary wave, and gravity wave variations. Quasi-biennial and random variation perturbations are computed from parameters determined by various empirical studies and are added to the monthly mean values. The UNIVAC version of GRAM is written in UNIVAC FORTRAN and has been implemented on a UNIVAC 1110 under control of EXEC 8 with a central memory requirement of approximately 30K of 36 bit words. The GRAM program was developed in 1976 and GRAM-86 was released in 1986. The monthly data files were last updated in 1986. The DEC VAX version of GRAM is written in FORTRAN 77 and has been implemented on a DEC VAX 11/780 under control of VMS 4.X with a central memory requirement of approximately 100K of 8 bit bytes. The GRAM program was originally developed in 1976 and later converted to the VAX in 1986 (GRAM-86). The monthly data files were last updated in 1986.
Camargo, M; Giarrizzo, T; Isaac, V J
2015-08-01
This study estimates the main biological parameters, including growth rates, asymptotic length, mortality, consumption by biomass, biological yield, and biomass, for the most abundant fish species found on the middle Xingu River, prior to the construction of the Belo Monte Dam. The specimens collected in experimental catches were analysed with empirical equations and length-based FISAT methods. For the 63 fish species studied, high growth rates (K) and high natural mortality (M) were related to early sexual maturation and low longevity. The predominance of species with short life cycles and a reduced number of age classes, determines high rates of stock turnover, which indicates high productivity for fisheries, and a low risk of overfishing.
Empirical Bayes methods for smoothing data and for simultaneous estimation of many parameters.
Yanagimoto, T; Kashiwagi, N
1990-01-01
A recent successful development is found in a series of innovative, new statistical methods for smoothing data that are based on the empirical Bayes method. This paper emphasizes their practical usefulness in medical sciences and their theoretically close relationship with the problem of simultaneous estimation of parameters, depending on strata. The paper also presents two examples of analyzing epidemiological data obtained in Japan using the smoothing methods to illustrate their favorable performance. PMID:2148512
A pore-network model for foam formation and propagation in porous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharabaf, H.; Yortsos, Y.C.
1996-12-31
We present a pore-network model, based on a pores-and-throats representation of the porous medium, to simulate the generation and mobilization of foams in porous media. The model allows for various parameters or processes, empirically treated in current models, to be quantified and interpreted. Contrary to previous works, we also consider a dynamic (invasion) in addition to a static process. We focus on the properties of the displacement, the onset of foam flow and mobilization, the foam texture and the sweep efficiencies obtained. The model simulates an invasion process, in which gas invades a porous medium occupied by a surfactant solution.more » The controlling parameter is the snap-off probability, which in turn determines the foam quality for various size distributions of pores and throats. For the front to advance, the applied pressure gradient needs to be sufficiently high to displace a series of lamellae along a minimum capillary resistance (threshold) path. We determine this path using a novel algorithm. The fraction of the flowing lamellae, X{sub f} (and, consequently, the fraction of the trapped lamellae, X{sub f}) which are currently empirical, are also calculated. The model allows the delineation of conditions tinder which high-quality (strong) or low-quality (weak) foams form. In either case, the sweep efficiencies in displacements in various media are calculated. In particular, the invasion by foam of low permeability layers during injection in a heterogeneous system is demonstrated.« less
Wang, Tianmiao; Wu, Yao; Liang, Jianhong; Han, Chenhao; Chen, Jiao; Zhao, Qiteng
2015-01-01
Skid-steering mobile robots are widely used because of their simple mechanism and robustness. However, due to the complex wheel-ground interactions and the kinematic constraints, it is a challenge to understand the kinematics and dynamics of such a robotic platform. In this paper, we develop an analysis and experimental kinematic scheme for a skid-steering wheeled vehicle based-on a laser scanner sensor. The kinematics model is established based on the boundedness of the instantaneous centers of rotation (ICR) of treads on the 2D motion plane. The kinematic parameters (the ICR coefficient χ, the path curvature variable λ and robot speed v), including the effect of vehicle dynamics, are introduced to describe the kinematics model. Then, an exact but costly dynamic model is used and the simulation of this model’s stationary response for the vehicle shows a qualitative relationship for the specified parameters χ and λ. Moreover, the parameters of the kinematic model are determined based-on a laser scanner localization experimental analysis method with a skid-steering robotic platform, Pioneer P3-AT. The relationship between the ICR coefficient χ and two physical factors is studied, i.e., the radius of the path curvature λ and the robot speed v. An empirical function-based relationship between the ICR coefficient of the robot and the path parameters is derived. To validate the obtained results, it is empirically demonstrated that the proposed kinematics model significantly improves the dead-reckoning performance of this skid–steering robot. PMID:25919370
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golubev, A.; Balashov, Y.; Mavrin, S.
Washout coefficient Λ is widely used as a parameter in washout models. These models describes overall HTO washout with rain by a first-order kinetic equation, while washout coefficient Λ depends on the type of rain event and rain intensity and empirical parameters a, b. The washout coefficient is a macroscopic parameter and we have considered in this paper its relationship with a microscopic rate K of HTO isotopic exchange in atmospheric humidity and drops of rainwater. We have shown that the empirical parameters a, b can be represented through the rain event characteristics using the relationships of molecular impact rate,more » rain intensity and specific rain water content while washout coefficient Λ can be represented through the exchange rate K, rain intensity, raindrop diameter and terminal raindrop velocity.« less
Pekalski, A A; Zevenbergen, J F; Braithwaite, M; Lemkowitz, S M; Pasman, H J
2005-02-14
Experimental and theoretical investigation of explosive decomposition of ethylene oxide (EO) at fixed initial experimental parameters (T=100 degrees C, P=4 bar) in a 20-l sphere was conducted. Safety-related parameters, namely the maximum explosion pressure, the maximum rate of pressure rise, and the Kd values, were experimentally determined for pure ethylene oxide and ethylene oxide diluted with nitrogen. The influence of the ignition energy on the explosion parameters was also studied. All these dependencies are quantified in empirical formulas. Additionally, the effect of turbulence on explosive decomposition of ethylene oxide was investigated. In contrast to previous studies, it is found that turbulence significantly influences the explosion severity parameters, mostly the rate of pressure rise. Thermodynamic models are used to calculate the maximum explosion pressure of pure and of nitrogen-diluted ethylene oxide, at different initial temperatures. Soot formation was experimentally observed. Relation between the amounts of soot formed and the explosion pressure was experimentally observed and was calculated.
NASA Astrophysics Data System (ADS)
Claure, Yuri Navarro; Matsubara, Edson Takashi; Padovani, Carlos; Prati, Ronaldo Cristiano
2018-03-01
Traditional methods for estimating timing parameters in hydrological science require a rigorous study of the relations of flow resistance, slope, flow regime, watershed size, water velocity, and other local variables. These studies are mostly based on empirical observations, where the timing parameter is estimated using empirically derived formulas. The application of these studies to other locations is not always direct. The locations in which equations are used should have comparable characteristics to the locations from which such equations have been derived. To overcome this barrier, in this work, we developed a data-driven approach to estimate timing parameters such as travel time. Our proposal estimates timing parameters using historical data of the location without the need of adapting or using empirical formulas from other locations. The proposal only uses one variable measured at two different locations on the same river (for instance, two river-level measurements, one upstream and the other downstream on the same river). The recorded data from each location generates two time series. Our method aligns these two time series using derivative dynamic time warping (DDTW) and perceptually important points (PIP). Using data from timing parameters, a polynomial function generalizes the data by inducing a polynomial water travel time estimator, called PolyWaTT. To evaluate the potential of our proposal, we applied PolyWaTT to three different watersheds: a floodplain ecosystem located in the part of Brazil known as Pantanal, the world's largest tropical wetland area; and the Missouri River and the Pearl River, in United States of America. We compared our proposal with empirical formulas and a data-driven state-of-the-art method. The experimental results demonstrate that PolyWaTT showed a lower mean absolute error than all other methods tested in this study, and for longer distances the mean absolute error achieved by PolyWaTT is three times smaller than empirical formulas.
Representing Micro-Macro Linkages by Actor-Based Dynamic Network Models
Snijders, Tom A.B.; Steglich, Christian E.G.
2014-01-01
Stochastic actor-based models for network dynamics have the primary aim of statistical inference about processes of network change, but may be regarded as a kind of agent-based models. Similar to many other agent-based models, they are based on local rules for actor behavior. Different from many other agent-based models, by including elements of generalized linear statistical models they aim to be realistic detailed representations of network dynamics in empirical data sets. Statistical parallels to micro-macro considerations can be found in the estimation of parameters determining local actor behavior from empirical data, and the assessment of goodness of fit from the correspondence with network-level descriptives. This article studies several network-level consequences of dynamic actor-based models applied to represent cross-sectional network data. Two examples illustrate how network-level characteristics can be obtained as emergent features implied by micro-specifications of actor-based models. PMID:25960578
Single crystal to polycrystal neutron transmission simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dessieux, Luc Lucius; Stoica, Alexandru Dan; Bingham, Philip R.
A collection of routines for calculation of the total cross section that determines the attenuation of neutrons by crystalline solids is presented. The total cross section is calculated semi-empirically as a function of crystal structure, neutron energy, temperature, and crystal orientation. The semi-empirical formula includes the contribution of parasitic Bragg scattering to the total cross section using both the crystal’s mosaic spread value and its orientation with respect to the neutron beam direction as parameters. These routines allow users to enter a distribution of crystal orientations for calculation of total cross sections of user defined powder or pseudo powder distributions,more » which enables simulation of non-uniformities such as texture and strain. In conclusion, the spectra for neutron transmission simulations in the neutron thermal energy range (2 meV–100 meV) are presented for single crystal and polycrystal samples and compared to measurements.« less
Optimal thresholds for the estimation of area rain-rate moments by the threshold method
NASA Technical Reports Server (NTRS)
Short, David A.; Shimizu, Kunio; Kedem, Benjamin
1993-01-01
Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
Droplet breakup in accelerating gas flows. Part 2: Secondary atomization
NASA Technical Reports Server (NTRS)
Zajac, L. J.
1973-01-01
An experimental investigation to determine the effects of an accelerating gas flow on the atomization characteristics of liquid sprays was conducted. The sprays were produced by impinging two liquid jets. The liquid was molten wax and the gas was nitrogen. The use of molten wax allowed for a quantitative measure of the resulting dropsize distribution. The results of this study, indicate that a significant amount of droplet breakup will occur as a result of the action of the gas on the liquid droplets. Empirical correlations are presented in terms of parameters that were found to affect the mass median dropsize most significantly, the orifice diameter, the liquid injection velocity, and the maximum gas velocity. An empirical correlation for the normalized dropsize distribution is also presented. These correlations are in a form that may be incorporated readily into existing combustion model computer codes for the purpose of calculating rocket engine combustion performance.
NASA Astrophysics Data System (ADS)
Joyce, M.; Chaboyer, B.
2018-03-01
Theoretical stellar evolution models are constructed and tailored to the best known, observationally derived characteristics of metal-poor ([Fe/H] ∼ ‑2.3) stars representing a range of evolutionary phases: subgiant HD 140283, globular cluster M92, and four single, main sequence stars with well-determined parallaxes: HIP 46120, HIP 54639, HIP 106924, and WOLF 1137. It is found that the use of a solar-calibrated value of the mixing length parameter α MLT in models of these objects is ineffective at reproducing their observed properties. Empirically calibrated values of α MLT are presented for each object, accounting for uncertainties in the input physics employed in the models. It is advocated that the implementation of an adaptive mixing length is necessary in order for stellar evolution models to maintain fidelity in the era of high-precision observations.
Single crystal to polycrystal neutron transmission simulation
Dessieux, Luc Lucius; Stoica, Alexandru Dan; Bingham, Philip R.
2018-02-02
A collection of routines for calculation of the total cross section that determines the attenuation of neutrons by crystalline solids is presented. The total cross section is calculated semi-empirically as a function of crystal structure, neutron energy, temperature, and crystal orientation. The semi-empirical formula includes the contribution of parasitic Bragg scattering to the total cross section using both the crystal’s mosaic spread value and its orientation with respect to the neutron beam direction as parameters. These routines allow users to enter a distribution of crystal orientations for calculation of total cross sections of user defined powder or pseudo powder distributions,more » which enables simulation of non-uniformities such as texture and strain. In conclusion, the spectra for neutron transmission simulations in the neutron thermal energy range (2 meV–100 meV) are presented for single crystal and polycrystal samples and compared to measurements.« less
Uncertainty quantification in Eulerian-Lagrangian models for particle-laden flows
NASA Astrophysics Data System (ADS)
Fountoulakis, Vasileios; Jacobs, Gustaaf; Udaykumar, Hs
2017-11-01
A common approach to ameliorate the computational burden in simulations of particle-laden flows is to use a point-particle based Eulerian-Lagrangian model, which traces individual particles in their Lagrangian frame and models particles as mathematical points. The particle motion is determined by Stokes drag law, which is empirically corrected for Reynolds number, Mach number and other parameters. The empirical corrections are subject to uncertainty. Treating them as random variables renders the coupled system of PDEs and ODEs stochastic. An approach to quantify the propagation of this parametric uncertainty to the particle solution variables is proposed. The approach is based on averaging of the governing equations and allows for estimation of the first moments of the quantities of interest. We demonstrate the feasibility of our proposed methodology of uncertainty quantification of particle-laden flows on one-dimensional linear and nonlinear Eulerian-Lagrangian systems. This research is supported by AFOSR under Grant FA9550-16-1-0008.
Empirical Allometric Models to Estimate Total Needle Biomass For Loblolly Pine
Hector M. de los Santos-Posadas; Bruce E. Borders
2002-01-01
Empirical geometric models based on the cone surface formula were adapted and used to estimate total dry needle biomass (TNB) and live branch basal area (LBBA). The results suggest that the empirical geometric equations produced good fit and stable parameters while estimating TNB and LBBA. The data used include trees form a spacing study of 12 years old and a set of...
DOT National Transportation Integrated Search
2009-02-01
The resilient modulus (MR) input parameters in the Mechanistic-Empirical Pavement Design Guide (MEPDG) program have a significant effect on the projected pavement performance. The MEPDG program uses three different levels of inputs depending on the d...
Direct determination of surface albedos from satellite imagery
NASA Technical Reports Server (NTRS)
Mekler, Y.; Joseph, J. H.
1983-01-01
An empirical method to measure the spectral surface albedo of surfaces from Landsat imagery is presented and analyzed. The empiricism in the method is due only to the fact that three parameters of the solution must be determined for each spectral photograph of an image on the basis of independently known albedos at three points. The approach is otherwise based on exact solutions of the radiative transfer equation for upwelling intensity. Application of the method allows the routine construction of spectral albedo maps from satelite imagery, without requiring detailed knowledge of the atmospheric aerosol content, as long as the optical depth is less than 0.75, and of the calibration of the satellite sensor.
Principles of parametric estimation in modeling language competition
Zhang, Menghan; Gong, Tao
2013-01-01
It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka–Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data. PMID:23716678
Principles of parametric estimation in modeling language competition.
Zhang, Menghan; Gong, Tao
2013-06-11
It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka-Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data.
Application of empirical and dynamical closure methods to simple climate models
NASA Astrophysics Data System (ADS)
Padilla, Lauren Elizabeth
This dissertation applies empirically- and physically-based methods for closure of uncertain parameters and processes to three model systems that lie on the simple end of climate model complexity. Each model isolates one of three sources of closure uncertainty: uncertain observational data, large dimension, and wide ranging length scales. They serve as efficient test systems toward extension of the methods to more realistic climate models. The empirical approach uses the Unscented Kalman Filter (UKF) to estimate the transient climate sensitivity (TCS) parameter in a globally-averaged energy balance model. Uncertainty in climate forcing and historical temperature make TCS difficult to determine. A range of probabilistic estimates of TCS computed for various assumptions about past forcing and natural variability corroborate ranges reported in the IPCC AR4 found by different means. Also computed are estimates of how quickly uncertainty in TCS may be expected to diminish in the future as additional observations become available. For higher system dimensions the UKF approach may become prohibitively expensive. A modified UKF algorithm is developed in which the error covariance is represented by a reduced-rank approximation, substantially reducing the number of model evaluations required to provide probability densities for unknown parameters. The method estimates the state and parameters of an abstract atmospheric model, known as Lorenz 96, with accuracy close to that of a full-order UKF for 30-60% rank reduction. The physical approach to closure uses the Multiscale Modeling Framework (MMF) to demonstrate closure of small-scale, nonlinear processes that would not be resolved directly in climate models. A one-dimensional, abstract test model with a broad spatial spectrum is developed. The test model couples the Kuramoto-Sivashinsky equation to a transport equation that includes cloud formation and precipitation-like processes. In the test model, three main sources of MMF error are evaluated independently. Loss of nonlinear multi-scale interactions and periodic boundary conditions in closure models were dominant sources of error. Using a reduced order modeling approach to maximize energy content allowed reduction of the closure model dimension up to 75% without loss in accuracy. MMF and a comparable alternative model peformed equally well compared to direct numerical simulation.
Dynamics of bloggers’ communities: Bipartite networks from empirical data and agent-based modeling
NASA Astrophysics Data System (ADS)
Mitrović, Marija; Tadić, Bosiljka
2012-11-01
We present an analysis of the empirical data and the agent-based modeling of the emotional behavior of users on the Web portals where the user interaction is mediated by posted comments, like Blogs and Diggs. We consider the dataset of discussion-driven popular Diggs, in which all comments are screened by machine-learning emotion detection in the text, to determine positive and negative valence (attractiveness and aversiveness) of each comment. By mapping the data onto a suitable bipartite network, we perform an analysis of the network topology and the related time-series of the emotional comments. The agent-based model is then introduced to simulate the dynamics and to capture the emergence of the emotional behaviors and communities. The agents are linked to posts on a bipartite network, whose structure evolves through their actions on the posts. The emotional states (arousal and valence) of each agent fluctuate in time, subject to the current contents of the posts to which the agent is exposed. By an agent’s action on a post its current emotions are transferred to the post. The model rules and the key parameters are inferred from the considered empirical data to ensure their realistic values and mutual consistency. The model assumes that the emotional arousal over posts drives the agent’s action. The simulations are preformed for the case of constant flux of agents and the results are analyzed in full analogy with the empirical data. The main conclusions are that the emotion-driven dynamics leads to long-range temporal correlations and emergent networks with community structure, that are comparable with the ones in the empirical system of popular posts. In view of pure emotion-driven agents actions, this type of comparisons provide a quantitative measure for the role of emotions in the dynamics on real blogs. Furthermore, the model reveals the underlying mechanisms which relate the post popularity with the emotion dynamics and the prevalence of negative emotions (critique). We also demonstrate how the community structure is tuned by varying a relevant parameter in the model. All data used in these works are fully anonymized.
NASA Astrophysics Data System (ADS)
Kim, R.-S.; Cho, K.-S.; Moon, Y.-J.; Dryer, M.; Lee, J.; Yi, Y.; Kim, K.-H.; Wang, H.; Park, Y.-D.; Kim, Yong Ha
2010-12-01
In this study, we discuss the general behaviors of geomagnetic storm strength associated with observed parameters of coronal mass ejection (CME) such as speed (V) and earthward direction (D) of CMEs as well as the longitude (L) and magnetic field orientation (M) of overlaying potential fields of the CME source region, and we develop an empirical model to predict geomagnetic storm occurrence with its strength (gauged by the Dst index) in terms of these CME parameters. For this we select 66 halo or partial halo CMEs associated with M-class and X-class solar flares, which have clearly identifiable source regions, from 1997 to 2003. After examining how each of these CME parameters correlates with the geoeffectiveness of the CMEs, we find several properties as follows: (1) Parameter D best correlates with storm strength Dst; (2) the majority of geoeffective CMEs have been originated from solar longitude 15°W, and CMEs originated away from this longitude tend to produce weaker storms; (3) correlations between Dst and the CME parameters improve if CMEs are separated into two groups depending on whether their magnetic fields are oriented southward or northward in their source regions. Based on these observations, we present two empirical expressions for Dst in terms of L, V, and D for two groups of CMEs, respectively. This is a new attempt to predict not only the occurrence of geomagnetic storms, but also the storm strength (Dst) solely based on the CME parameters.
The effect of seasonal birth pulses on pathogen persistence in wild mammal populations.
Peel, A J; Pulliam, J R C; Luis, A D; Plowright, R K; O'Shea, T J; Hayman, D T S; Wood, J L N; Webb, C T; Restif, O
2014-07-07
The notion of a critical community size (CCS), or population size that is likely to result in long-term persistence of a communicable disease, has been developed based on the empirical observations of acute immunizing infections in human populations, and extended for use in wildlife populations. Seasonal birth pulses are frequently observed in wildlife and are expected to impact infection dynamics, yet their effect on pathogen persistence and CCS have not been considered. To investigate this issue theoretically, we use stochastic epidemiological models to ask how host life-history traits and infection parameters interact to determine pathogen persistence within a closed population. We fit seasonal birth pulse models to data from diverse mammalian species in order to identify realistic parameter ranges. When varying the synchrony of the birth pulse with all other parameters being constant, our model predicted that the CCS can vary by more than two orders of magnitude. Tighter birth pulses tended to drive pathogen extinction by creating large amplitude oscillations in prevalence, especially with high demographic turnover and short infectious periods. Parameters affecting the relative timing of the epidemic and birth pulse peaks determined the intensity and direction of the effect of pre-existing immunity in the population on the pathogen's ability to persist beyond the initial epidemic following its introduction.
The effect of seasonal birth pulses on pathogen persistence in wild mammal populations
Peel, A. J.; Pulliam, J. R. C.; Luis, A. D.; Plowright, R. K.; O'Shea, T. J.; Hayman, D. T. S.; Wood, J. L. N.; Webb, C. T.; Restif, O.
2014-01-01
The notion of a critical community size (CCS), or population size that is likely to result in long-term persistence of a communicable disease, has been developed based on the empirical observations of acute immunizing infections in human populations, and extended for use in wildlife populations. Seasonal birth pulses are frequently observed in wildlife and are expected to impact infection dynamics, yet their effect on pathogen persistence and CCS have not been considered. To investigate this issue theoretically, we use stochastic epidemiological models to ask how host life-history traits and infection parameters interact to determine pathogen persistence within a closed population. We fit seasonal birth pulse models to data from diverse mammalian species in order to identify realistic parameter ranges. When varying the synchrony of the birth pulse with all other parameters being constant, our model predicted that the CCS can vary by more than two orders of magnitude. Tighter birth pulses tended to drive pathogen extinction by creating large amplitude oscillations in prevalence, especially with high demographic turnover and short infectious periods. Parameters affecting the relative timing of the epidemic and birth pulse peaks determined the intensity and direction of the effect of pre-existing immunity in the population on the pathogen's ability to persist beyond the initial epidemic following its introduction. PMID:24827436
McBride, Devin W.; Rodgers, Victor G. J.
2013-01-01
The activity coefficient is largely considered an empirical parameter that was traditionally introduced to correct the non-ideality observed in thermodynamic systems such as osmotic pressure. Here, the activity coefficient of free-solvent is related to physically realistic parameters and a mathematical expression is developed to directly predict the activity coefficients of free-solvent, for aqueous protein solutions up to near-saturation concentrations. The model is based on the free-solvent model, which has previously been shown to provide excellent prediction of the osmotic pressure of concentrated and crowded globular proteins in aqueous solutions up to near-saturation concentrations. Thus, this model uses only the independently determined, physically realizable quantities: mole fraction, solvent accessible surface area, and ion binding, in its prediction. Predictions are presented for the activity coefficients of free-solvent for near-saturated protein solutions containing either bovine serum albumin or hemoglobin. As a verification step, the predictability of the model for the activity coefficient of sucrose solutions was evaluated. The predicted activity coefficients of free-solvent are compared to the calculated activity coefficients of free-solvent based on osmotic pressure data. It is observed that the predicted activity coefficients are increasingly dependent on the solute-solvent parameters as the protein concentration increases to near-saturation concentrations. PMID:24324733
Rime ice accretion and its effect on airfoil performance. Ph.D. Thesis. Final Report
NASA Technical Reports Server (NTRS)
Bragg, M. B.
1982-01-01
A methodology was developed to predict the growth of rime ice, and the resulting aerodynamic penalty on unprotected, subcritical, airfoil surfaces. The system of equations governing the trajectory of a water droplet in the airfoil flowfield is developed and a numerical solution is obtained to predict the mass flux of super cooled water droplets freezing on impact. A rime ice shape is predicted. The effect of time on the ice growth is modeled by a time-stepping procedure where the flowfield and droplet mass flux are updated periodically through the ice accretion process. Two similarity parameters, the trajectory similarity parameter and accumulation parameter, are found to govern the accretion of rime ice. In addition, an analytical solution is presented for Langmuir's classical modified inertia parameter. The aerodynamic evaluation of the effect of the ice accretion on airfoil performance is determined using an existing airfoil analysis code with empirical corrections. The change in maximum lift coefficient is found from an analysis of the new iced airfoil shape. The drag correction needed due to the severe surface roughness is formulated from existing iced airfoil and rough airfoil data. A small scale wind tunnel test was conducted to determine the change in airfoil performance due to a simulated rime ice shape.
An Analysis Method for Superconducting Resonator Parameter Extraction with Complex Baseline Removal
NASA Technical Reports Server (NTRS)
Cataldo, Giuseppe
2014-01-01
A new semi-empirical model is proposed for extracting the quality (Q) factors of arrays of superconducting microwave kinetic inductance detectors (MKIDs). The determination of the total internal and coupling Q factors enables the computation of the loss in the superconducting transmission lines. The method used allows the simultaneous analysis of multiple interacting discrete resonators with the presence of a complex spectral baseline arising from reflections in the system. The baseline removal allows an unbiased estimate of the device response as measured in a cryogenic instrumentation setting.
Adsorption of basic dyes on granular activated carbon and natural zeolite.
Meshko, V; Markovska, L; Mincheva, M; Rodrigues, A E
2001-10-01
The adsorption of basic dyes from aqueous solution onto granular activated carbon and natural zeolite has been studied using an agitated batch adsorber. The influence of agitation, initial dye concentration and adsorbent mass has been studied. The parameters of Langmuir and Freundlich adsorption isotherms have been determined using the adsorption data. Homogeneous diffusion model (solid diffusion) combined with external mass transfer resistance is proposed for the kinetic investigation. The dependence of solid diffusion coefficient on initial concentration and mass adsorbent is represented by the simple empirical equations.
Jovian ultraviolet auroral activity, 1981-1991
NASA Technical Reports Server (NTRS)
Livengood, T. A.; Moos, H. W.; Ballester, G. E.; Prange, R. M.
1992-01-01
IUE observations of H2 UV emissions for the 1981-1991 period are presently used to investigate the auroral brightness distribution on the surface of Jupiter. The brightness, which is diagnostic of energy input to the atmosphere as well as of magnetospheric processes, is determined by comparing model-predicted brightnesses against empirical ones. The north and south aurorae appear to be correlated in brightness and in variations of the longitude of peak brightness. There are strong fluctuations in all the parameters of the brightness distribution on much shorter time scales than those of solar maximum-minimum.
Analysis of Spin Financial Market by GARCH Model
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2013-08-01
A spin model is used for simulations of financial markets. To determine return volatility in the spin financial market we use the GARCH model often used for volatility estimation in empirical finance. We apply the Bayesian inference performed by the Markov Chain Monte Carlo method to the parameter estimation of the GARCH model. It is found that volatility determined by the GARCH model exhibits "volatility clustering" also observed in the real financial markets. Using volatility determined by the GARCH model we examine the mixture-of-distribution hypothesis (MDH) suggested for the asset return dynamics. We find that the returns standardized by volatility are approximately standard normal random variables. Moreover we find that the absolute standardized returns show no significant autocorrelation. These findings are consistent with the view of the MDH for the return dynamics.
A Multicenter Evaluation of Prolonged Empiric Antibiotic Therapy in Adult ICUs in the United States.
Thomas, Zachariah; Bandali, Farooq; Sankaranarayanan, Jayashri; Reardon, Tom; Olsen, Keith M
2015-12-01
The purpose of this study is to determine the rate of prolonged empiric antibiotic therapy in adult ICUs in the United States. Our secondary objective is to examine the relationship between the prolonged empiric antibiotic therapy rate and certain ICU characteristics. Multicenter, prospective, observational, 72-hour snapshot study. Sixty-seven ICUs from 32 hospitals in the United States. Nine hundred ninety-eight patients admitted to the ICU between midnight on June 20, 2011, and June 21, 2011, were included in the study. None. Antibiotic orders were categorized as prophylactic, definitive, empiric, or prolonged empiric antibiotic therapy. Prolonged empiric antibiotic therapy was defined as empiric antibiotics that continued for at least 72 hours in the absence of adjudicated infection. Standard definitions from the Centers for Disease Control and Prevention were used to determine infection. Prolonged empiric antibiotic therapy rate was determined as the ratio of the total number of empiric antibiotics continued for at least 72 hours divided by the total number of empiric antibiotics. Univariate analysis of factors associated with the ICU prolonged empiric antibiotic therapy rate was conducted using Student t test. A total of 660 unique antibiotics were prescribed as empiric therapy to 364 patients. Of the empiric antibiotics, 333 of 660 (50%) were continued for at least 72 hours in instances where Centers for Disease Control and Prevention infection criteria were not met. Suspected pneumonia accounted for approximately 60% of empiric antibiotic use. The most frequently prescribed empiric antibiotics were vancomycin and piperacillin/tazobactam. ICUs that utilized invasive techniques for the diagnosis of ventilator-associated pneumonia had lower rates of prolonged empiric antibiotic therapy than those that did not, 45.1% versus 59.5% (p = 0.03). No other institutional factor was significantly associated with prolonged empiric antibiotic therapy rate. Half of all empiric antibiotics ordered in critically ill patients are continued for at least 72 hours in absence of adjudicated infection. Additional studies are needed to confirm these findings and determine the risks and benefits of prolonged empiric therapy in the critically ill.
An Empirical Bayes Approach to Mantel-Haenszel DIF Analysis.
ERIC Educational Resources Information Center
Zwick, Rebecca; Thayer, Dorothy T.; Lewis, Charles
1999-01-01
Developed an empirical Bayes enhancement to Mantel-Haenszel (MH) analysis of differential item functioning (DIF) in which it is assumed that the MH statistics are normally distributed and that the prior distribution of underlying DIF parameters is also normal. (Author/SLD)
Non-linear Parameter Estimates from Non-stationary MEG Data
Martínez-Vargas, Juan D.; López, Jose D.; Baker, Adam; Castellanos-Dominguez, German; Woolrich, Mark W.; Barnes, Gareth
2016-01-01
We demonstrate a method to estimate key electrophysiological parameters from resting state data. In this paper, we focus on the estimation of head-position parameters. The recovery of these parameters is especially challenging as they are non-linearly related to the measured field. In order to do this we use an empirical Bayesian scheme to estimate the cortical current distribution due to a range of laterally shifted head-models. We compare different methods of approaching this problem from the division of M/EEG data into stationary sections and performing separate source inversions, to explaining all of the M/EEG data with a single inversion. We demonstrate this through estimation of head position in both simulated and empirical resting state MEG data collected using a head-cast. PMID:27597815
SU-E-T-439: An Improved Formula of Scatter-To-Primary Ratio for Photon Dose Calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, T
2014-06-01
Purpose: Scatter-to-primary ratio (SPR) is an important dosimetric quantity that describes the contribution from the scatter photons in an external photon beam. The purpose of this study is to develop an improved analytical formula to describe SPR as a function of circular field size (r) and depth (d) using Monte Carlo (MC) simulation. Methods: MC simulation was performed for Mohan photon spectra (Co-60, 4, 6, 10, 15, 23 MV) using EGSNRC code. Point-spread scatter dose kernels in water are generated. The scatter-to-primary ratio (SPR) is also calculated using MC simulation as a function of field size for circular field sizemore » with radius r and depth d. The doses from forward scatter and backscatter photons are calculated using a convolution of the point-spread scatter dose kernel and by accounting for scatter photons contributing to dose before (z'd) reaching the depth of interest, d, where z' is the location of scatter photons, respectively. The depth dependence of the ratio of the forward scatter and backscatter doses is determined as a function of depth and field size. Results: We are able to improve the existing 3-parameter (a, w, d0) empirical formula for SPR by introducing depth dependence for one of the parameter d0, which becomes 0 for deeper depths. The depth dependence of d0 can be directly calculated as a ratio of backscatter-to-forward scatter doses for otherwise the same field and depth. With the improved empirical formula, we can fit SPR for all megavoltage photon beams to within 2%. Existing 3-parameter formula cannot fit SPR data for Co-60 to better than 3.1%. Conclusion: An improved empirical formula is developed to fit SPR for all megavoltage photon energies to within 2%.« less
EMPIRE: A code for nuclear astrophysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palumbo, A.
The nuclear reaction code EMPIRE is presented as a useful tool for nuclear astrophysics. EMPIRE combines a variety of the reaction models with a comprehensive library of input parameters providing a diversity of options for the user. With exclusion of the directsemidirect capture all reaction mechanisms relevant to the nuclear astrophysics energy range of interest are implemented in the code. Comparison to experimental data show consistent agreement for all relevant channels.
Martinez, G T; Rosenauer, A; De Backer, A; Verbeeck, J; Van Aert, S
2014-02-01
High angle annular dark field scanning transmission electron microscopy (HAADF STEM) images provide sample information which is sensitive to the chemical composition. The image intensities indeed scale with the mean atomic number Z. To some extent, chemically different atomic column types can therefore be visually distinguished. However, in order to quantify the atomic column composition with high accuracy and precision, model-based methods are necessary. Therefore, an empirical incoherent parametric imaging model can be used of which the unknown parameters are determined using statistical parameter estimation theory (Van Aert et al., 2009, [1]). In this paper, it will be shown how this method can be combined with frozen lattice multislice simulations in order to evolve from a relative toward an absolute quantification of the composition of single atomic columns with mixed atom types. Furthermore, the validity of the model assumptions are explored and discussed. © 2013 Published by Elsevier B.V. All rights reserved.
An Initial Non-Equilibrium Porous-Media Model for CFD Simulation of Stirling Regenerators
NASA Technical Reports Server (NTRS)
Tew, Roy; Simon, Terry; Gedeon, David; Ibrahim, Mounir; Rong, Wei
2006-01-01
The objective of this paper is to define empirical parameters (or closwre models) for an initial thermai non-equilibrium porous-media model for use in Computational Fluid Dynamics (CFD) codes for simulation of Stirling regenerators. The two CFD codes currently being used at Glenn Research Center (GRC) for Stirling engine modeling are Fluent and CFD-ACE. The porous-media models available in each of these codes are equilibrium models, which assmne that the solid matrix and the fluid are in thermal equilibrium at each spatial location within the porous medium. This is believed to be a poor assumption for the oscillating-flow environment within Stirling regenerators; Stirling 1-D regenerator models, used in Stirling design, we non-equilibrium regenerator models and suggest regenerator matrix and gas average temperatures can differ by several degrees at a given axial location end time during the cycle. A NASA regenerator research grant has been providing experimental and computational results to support definition of various empirical coefficients needed in defining a noa-equilibrium, macroscopic, porous-media model (i.e., to define "closure" relations). The grant effort is being led by Cleveland State University, with subcontractor assistance from the University of Minnesota, Gedeon Associates, and Sunpower, Inc. Friction-factor and heat-transfer correlations based on data taken with the NASAlSunpower oscillating-flow test rig also provide experimentally based correlations that are useful in defining parameters for the porous-media model; these correlations are documented in Gedeon Associates' Sage Stirling-Code Manuals. These sources of experimentally based information were used to define the following terms and parameters needed in the non-equilibrium porous-media model: hydrodynamic dispersion, permeability, inertial coefficient, fluid effective thermal conductivity (including themal dispersion and estimate of tortuosity effects}, and fluid-solid heat transfer coefficient. Solid effective thermal conductivity (including the effect of tortuosity) was also estimated. Determination of the porous-media model parameters was based on planned use in a CFD model of Infinia's Stirling Technology Demonstration Convertor (TDC), which uses a random-fiber regenerator matrix. The non-equilibrium porous-media model presented is considered to be an initial, or "draft," model for possible incorporation in commercial CFD codes, with the expectation that the empirical parameters will likely need to be updated once resulting Stirling CFD model regenerator and engine results have been analyzed. The emphasis of the paper is on use of available data to define empirical parameters (and closure models) needed in a thermal non-equilibrium porous-media model for Stirling regenerator simulation. Such a model has not yet been implemented by the authors or their associates. However, it is anticipated that a thermal non-equilibrium model such as that presented here, when iacorporated in the CFD codes, will improve our ability to accurately model Stirling regenerators with CFD relative to current thermal-equilibrium porous-media models.
Statistical microeconomics and commodity prices: theory and empirical results.
Baaquie, Belal E
2016-01-13
A review is made of the statistical generalization of microeconomics by Baaquie (Baaquie 2013 Phys. A 392, 4400-4416. (doi:10.1016/j.physa.2013.05.008)), where the market price of every traded commodity, at each instant of time, is considered to be an independent random variable. The dynamics of commodity market prices is given by the unequal time correlation function and is modelled by the Feynman path integral based on an action functional. The correlation functions of the model are defined using the path integral. The existence of the action functional for commodity prices that was postulated to exist in Baaquie (Baaquie 2013 Phys. A 392, 4400-4416. (doi:10.1016/j.physa.2013.05.008)) has been empirically ascertained in Baaquie et al. (Baaquie et al. 2015 Phys. A 428, 19-37. (doi:10.1016/j.physa.2015.02.030)). The model's action functionals for different commodities has been empirically determined and calibrated using the unequal time correlation functions of the market commodity prices using a perturbation expansion (Baaquie et al. 2015 Phys. A 428, 19-37. (doi:10.1016/j.physa.2015.02.030)). Nine commodities drawn from the energy, metal and grain sectors are empirically studied and their auto-correlation for up to 300 days is described by the model to an accuracy of R(2)>0.90-using only six parameters. © 2015 The Author(s).
A Bayesian estimation of a stochastic predator-prey model of economic fluctuations
NASA Astrophysics Data System (ADS)
Dibeh, Ghassan; Luchinsky, Dmitry G.; Luchinskaya, Daria D.; Smelyanskiy, Vadim N.
2007-06-01
In this paper, we develop a Bayesian framework for the empirical estimation of the parameters of one of the best known nonlinear models of the business cycle: The Marx-inspired model of a growth cycle introduced by R. M. Goodwin. The model predicts a series of closed cycles representing the dynamics of labor's share and the employment rate in the capitalist economy. The Bayesian framework is used to empirically estimate a modified Goodwin model. The original model is extended in two ways. First, we allow for exogenous periodic variations of the otherwise steady growth rates of the labor force and productivity per worker. Second, we allow for stochastic variations of those parameters. The resultant modified Goodwin model is a stochastic predator-prey model with periodic forcing. The model is then estimated using a newly developed Bayesian estimation method on data sets representing growth cycles in France and Italy during the years 1960-2005. Results show that inference of the parameters of the stochastic Goodwin model can be achieved. The comparison of the dynamics of the Goodwin model with the inferred values of parameters demonstrates quantitative agreement with the growth cycle empirical data.
Estimating standard errors in feature network models.
Frank, Laurence E; Heiser, Willem J
2007-05-01
Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
Solvent empirical scales and their importance for the study of intermolecular interactions
NASA Astrophysics Data System (ADS)
Babusca, Daniela; Benchea, Andreea Celia; Morosanu, Ana Cezarina; Dimitriu, Dan Gheorghe; Dorohoi, Dana Ortansa
2017-01-01
The solvent empirical scales were developed in order to classify the solvents regarding their influence on the absorption or fluorescence spectra of different spectrally active molecules. The intermolecular interactions in binary solutions of three molecule having an intramolecular charge transfer visible absorption band are studied in this paper: 5-[2-(1,2,2,4-tetramethyl-1,2,3,4-tetrahydroquinolin-6-yl)-vinyl]-thiophene-2-carbaldehyde (QTC), 1-cyano-2-{5-[2-(1,2,2,4-tetramethyl-1,2,3,4-tetrahydroquinolin-6-yl)-vinyl]-thiophen-2-yl}-vinyl)-phosphonic acid diethyl ester (QTCP) and p-phenyl pyridazinium-p-nitro-phenacylid (PPNP). The solvent empirical scales with a single parameter (Z scale of Kosower, ET (30) or ETN scale of Reichardt and Dimroth) can be used to describe the strength of intermolecular interactions. The contributions of each type of interactions to the total spectral shift are evaluated using the solvent multiple parameters empirical scales defined by Kamlet and Taft and by Catalan et al.
NASA Astrophysics Data System (ADS)
Kamiyama, M.; Orourke, M. J.; Flores-Berrones, R.
1992-09-01
A new type of semi-empirical expression for scaling strong-motion peaks in terms of seismic source, propagation path, and local site conditions is derived. Peak acceleration, peak velocity, and peak displacement are analyzed in a similar fashion because they are interrelated. However, emphasis is placed on the peak velocity which is a key ground motion parameter for lifeline earthquake engineering studies. With the help of seismic source theories, the semi-empirical model is derived using strong motions obtained in Japan. In the derivation, statistical considerations are used in the selection of the model itself and the model parameters. Earthquake magnitude M and hypocentral distance r are selected as independent variables and the dummy variables are introduced to identify the amplification factor due to individual local site conditions. The resulting semi-empirical expressions for the peak acceleration, velocity, and displacement are then compared with strong-motion data observed during three earthquakes in the U.S. and Mexico.
A Proposed Change to ITU-R Recommendation 681
NASA Technical Reports Server (NTRS)
Davarian, F.
1996-01-01
Recommendation 681 of the International Telecommunications Union (ITU) provides five models for the prediction of propagation effects on land mobile satellite links: empirical roadside shadowing (ERS), attenuation frequency scaling, fade duration distribution, non-fade duration distribution, and fading due to multipath. Because the above prediction models have been empirically derived using a limited amount of data, these schemes work only for restricted ranges of link parameters. With the first two models, for example, the frequency and elevation angle parameters are restricted to 0.8 to 2.7 GHz and 20 to 60 degrees, respectively. Recently measured data have enabled us to enhance the range of the first two schemes. Moreover, for convenience, they have been combined into a single scheme named the extended empirical roadside shadowing (EERS) model.
Parr, Wendy V.; Valentin, Dominique; Reedman, Phil; Grose, Claire; Green, James A.
2017-01-01
The study’s aim was to investigate a central tenet of biodynamic philosophy as applied to wine tasting, namely that wines taste different in systematic ways on days determined by the lunar cycle. Nineteen New Zealand wine professionals tasted blind 12 Pinot noir wines at times determined within the biodynamic calendar for wine drinkers as being favourable (Fruit day) and unfavourable (Root day) for wine tasting. Tasters rated each wine four times, twice on a Fruit day and twice on a Root day, using 20 experimenter-provided descriptors. Wine descriptors spanned a range of varietal-relevant aroma, taste, and mouthfeel characteristics, and were selected with the aim of elucidating both qualitative and quantitative aspects of each wine’s perceived aromatic, taste, and structural aspects including overall wine quality and liking. A post-experimental questionnaire was completed by each participant to determine their degree of knowledge about the purpose of the study, and their awareness of the existence of the biodynamic wine drinkers’ calendar. Basic wine physico-chemical parameters were determined for the wines tasted on each of a Fruit day and a Root day. Results demonstrated that the wines were judged differentially on all attributes measured although type of day as determined by the biodynamic calendar for wine drinkers did not influence systematically any of the wine characteristics evaluated. The findings highlight the importance of testing experimentally practices that are based on anecdotal evidence but that lend themselves to empirical investigation. PMID:28046047
Parr, Wendy V; Valentin, Dominique; Reedman, Phil; Grose, Claire; Green, James A
2017-01-01
The study's aim was to investigate a central tenet of biodynamic philosophy as applied to wine tasting, namely that wines taste different in systematic ways on days determined by the lunar cycle. Nineteen New Zealand wine professionals tasted blind 12 Pinot noir wines at times determined within the biodynamic calendar for wine drinkers as being favourable (Fruit day) and unfavourable (Root day) for wine tasting. Tasters rated each wine four times, twice on a Fruit day and twice on a Root day, using 20 experimenter-provided descriptors. Wine descriptors spanned a range of varietal-relevant aroma, taste, and mouthfeel characteristics, and were selected with the aim of elucidating both qualitative and quantitative aspects of each wine's perceived aromatic, taste, and structural aspects including overall wine quality and liking. A post-experimental questionnaire was completed by each participant to determine their degree of knowledge about the purpose of the study, and their awareness of the existence of the biodynamic wine drinkers' calendar. Basic wine physico-chemical parameters were determined for the wines tasted on each of a Fruit day and a Root day. Results demonstrated that the wines were judged differentially on all attributes measured although type of day as determined by the biodynamic calendar for wine drinkers did not influence systematically any of the wine characteristics evaluated. The findings highlight the importance of testing experimentally practices that are based on anecdotal evidence but that lend themselves to empirical investigation.
Constitutive Equation with Varying Parameters for Superplastic Flow Behavior
NASA Astrophysics Data System (ADS)
Guan, Zhiping; Ren, Mingwen; Jia, Hongjie; Zhao, Po; Ma, Pinkui
2014-03-01
In this study, constitutive equations for superplastic materials with an extra large elongation were investigated through mechanical analysis. From the view of phenomenology, firstly, some traditional empirical constitutive relations were standardized by restricting some strain paths and parameter conditions, and the coefficients in these relations were strictly given new mechanical definitions. Subsequently, a new, general constitutive equation with varying parameters was theoretically deduced based on the general mechanical equation of state. The superplastic tension test data of Zn-5%Al alloy at 340 °C under strain rates, velocities, and loads were employed for building a new constitutive equation and examining its validity. Analysis results indicated that the constitutive equation with varying parameters could characterize superplastic flow behavior in practical superplastic forming with high prediction accuracy and without any restriction of strain path or deformation condition, showing good industrial or scientific interest. On the contrary, those empirical equations have low prediction capabilities due to constant parameters and poor applicability because of the limit of special strain path or parameter conditions based on strict phenomenology.
NASA Astrophysics Data System (ADS)
Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O'Brien, Katherine R.
2017-01-01
When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
Adams, Matthew P; Collier, Catherine J; Uthicke, Sven; Ow, Yan X; Langlois, Lucas; O'Brien, Katherine R
2017-01-04
When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (T opt ) for maximum photosynthetic rate (P max ). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike.
Adams, Matthew P.; Collier, Catherine J.; Uthicke, Sven; Ow, Yan X.; Langlois, Lucas; O’Brien, Katherine R.
2017-01-01
When several models can describe a biological process, the equation that best fits the data is typically considered the best. However, models are most useful when they also possess biologically-meaningful parameters. In particular, model parameters should be stable, physically interpretable, and transferable to other contexts, e.g. for direct indication of system state, or usage in other model types. As an example of implementing these recommended requirements for model parameters, we evaluated twelve published empirical models for temperature-dependent tropical seagrass photosynthesis, based on two criteria: (1) goodness of fit, and (2) how easily biologically-meaningful parameters can be obtained. All models were formulated in terms of parameters characterising the thermal optimum (Topt) for maximum photosynthetic rate (Pmax). These parameters indicate the upper thermal limits of seagrass photosynthetic capacity, and hence can be used to assess the vulnerability of seagrass to temperature change. Our study exemplifies an approach to model selection which optimises the usefulness of empirical models for both modellers and ecologists alike. PMID:28051123
NASA Astrophysics Data System (ADS)
Chirra, Prathyush; Leo, Patrick; Yim, Michael; Bloch, B. Nicolas; Rastinehad, Ardeshir R.; Purysko, Andrei; Rosen, Mark; Madabhushi, Anant; Viswanath, Satish
2018-02-01
The recent advent of radiomics has enabled the development of prognostic and predictive tools which use routine imaging, but a key question that still remains is how reproducible these features may be across multiple sites and scanners. This is especially relevant in the context of MRI data, where signal intensity values lack tissue specific, quantitative meaning, as well as being dependent on acquisition parameters (magnetic field strength, image resolution, type of receiver coil). In this paper we present the first empirical study of the reproducibility of 5 different radiomic feature families in a multi-site setting; specifically, for characterizing prostate MRI appearance. Our cohort comprised 147 patient T2w MRI datasets from 4 different sites, all of which were first pre-processed to correct acquisition-related for artifacts such as bias field, differing voxel resolutions, as well as intensity drift (non-standardness). 406 3D voxel wise radiomic features were extracted and evaluated in a cross-site setting to determine how reproducible they were within a relatively homogeneous non-tumor tissue region; using 2 different measures of reproducibility: Multivariate Coefficient of Variation and Instability Score. Our results demonstrated that Haralick features were most reproducible between all 4 sites. By comparison, Laws features were among the least reproducible between sites, as well as performing highly variably across their entire parameter space. Similarly, the Gabor feature family demonstrated good cross-site reproducibility, but for certain parameter combinations alone. These trends indicate that despite extensive pre-processing, only a subset of radiomic features and associated parameters may be reproducible enough for use within radiomics-based machine learning classifier schemes.
Modeling noisy resonant system response
NASA Astrophysics Data System (ADS)
Weber, Patrick Thomas; Walrath, David Edwin
2017-02-01
In this paper, a theory-based model replicating empirical acoustic resonant signals is presented and studied to understand sources of noise present in acoustic signals. Statistical properties of empirical signals are quantified and a noise amplitude parameter, which models frequency and amplitude-based noise, is created, defined, and presented. This theory-driven model isolates each phenomenon and allows for parameters to be independently studied. Using seven independent degrees of freedom, this model will accurately reproduce qualitative and quantitative properties measured from laboratory data. Results are presented and demonstrate success in replicating qualitative and quantitative properties of experimental data.
Gravitation theory - Empirical status from solar system experiments.
NASA Technical Reports Server (NTRS)
Nordtvedt, K. L., Jr.
1972-01-01
Review of historical and recent experiments which speak in favor of a post-Newtonian relativistic gravitational theory. The topics include the foundational experiments, metric theories of gravity, experiments designed to differentiate among the metric theories, and tests of Machian concepts of gravity. It is shown that the metric field for any metric theory can be specified by a series of potential terms with several parameters. It is pointed out that empirical results available up to date yield values of the parameters which are consistent with the prediction of Einstein's general relativity.
Soil Moisture Estimate Under Forest Using a Semi-Empirical Model at P-Band
NASA Technical Reports Server (NTRS)
Truong-Loi, My-Linh; Saatchi, Sassan; Jaruwatanadilok, Sermsak
2013-01-01
Here we present the result of a semi-empirical inversion model for soil moisture retrieval using the three backscattering coefficients: sigma(sub HH), sigma(sub VV) and sigma(sub HV). In this paper we focus on the soil moisture estimate and use the biomass as an ancillary parameter estimated automatically from the algorithm and used as a validation parameter, We will first remind the model analytical formulation. Then we will sow some results obtained with real SAR data and compare them to ground estimates.
Creep and stress relaxation modeling of polycrystalline ceramic fibers
NASA Technical Reports Server (NTRS)
Dicarlo, James A.; Morscher, Gregory N.
1994-01-01
A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading about 800 C, these fibers display creep related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the Bend Stress Relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model, but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model tensile creep predictions based on the BSR test results with the literature data show good agreement, supporting both the predictive capability of the model and the use of the BSR text as a simple method for parameter determination for other fibers.
Creep and stress relaxation modeling of polycrystalline ceramic fibers
NASA Technical Reports Server (NTRS)
Dicarlo, James A.; Morscher, Gregory N.
1991-01-01
A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 C, these fibers display creep-related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of mechanistic-based empirical model which describes primary stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the bend stress relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model predictions and BSR test results with the literature tensile creep data show good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.
NASA Astrophysics Data System (ADS)
Amri, N.; Hashim, M. I.; Ismail, N.; Rohman, F. S.; Bashah, N. A. A.
2017-09-01
Electrocoagulation (EC) is a promising technology that extensively used to remove fluoride ions efficiently from industrial wastewater. However, it has received very little consideration and understanding on mechanism and factors that affecting the fluoride removal process. In order to determine the efficiency of fluoride removal in EC process, the effect of operating parameters such as voltage and electrolysis time were investigated in this study. A batch experiment with monopolar aluminium electrodes was conducted to identify the model of fluoride removal using empirical model equation. The EC process was investigated using several parameters which include voltage (3 - 12 V) and electrolysis time (0 - 60 minutes) at a constant initial fluoride concentration of 25 mg/L. The result shows that the fluoride removal efficiency increased steadily with increasing voltage and electrolysis time. The best fluoride removal efficiency was obtained with 94.8 % removal at 25 mg/L initial fluoride concentration, voltage of 12 V and 60 minutes electrolysis time. The results indicated that the rate constant, k and number of order, n decreased as the voltage increased. The rate of fluoride removal model was developed based on the empirical model equation using the correlation of k and n. Overall, the result showed that EC process can be considered as a potential alternative technology for fluoride removal in wastewater.
Shen, L; Levine, S H; Catchen, G L
1987-07-01
This paper describes an optimization method for determining the beta dose distribution in tissue, and it describes the associated testing and verification. The method uses electron transport theory and optimization techniques to analyze the responses of a three-element thermoluminescent dosimeter (TLD) system. Specifically, the method determines the effective beta energy distribution incident on the dosimeter system, and thus the system performs as a beta spectrometer. Electron transport theory provides the mathematical model for performing the optimization calculation. In this calculation, parameters are determined that produce calculated doses for each of the chip/absorber components in the three-element TLD system. The resulting optimized parameters describe an effective incident beta distribution. This method can be used to determine the beta dose specifically at 7 mg X cm-2 or at any depth of interest. The doses at 7 mg X cm-2 in tissue determined by this method are compared to those experimentally determined using an extrapolation chamber. For a great variety of pure beta sources having different incident beta energy distributions, good agreement is found. The results are also compared to those produced by a commonly used empirical algorithm. Although the optimization method produces somewhat better results, the advantage of the optimization method is that its performance is not sensitive to the specific method of calibration.
Using Whole-House Field Tests to Empirically Derive Moisture Buffering Model Inputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woods, J.; Winkler, J.; Christensen, D.
2014-08-01
Building energy simulations can be used to predict a building's interior conditions, along with the energy use associated with keeping these conditions comfortable. These models simulate the loads on the building (e.g., internal gains, envelope heat transfer), determine the operation of the space conditioning equipment, and then calculate the building's temperature and humidity throughout the year. The indoor temperature and humidity are affected not only by the loads and the space conditioning equipment, but also by the capacitance of the building materials, which buffer changes in temperature and humidity. This research developed an empirical method to extract whole-house model inputsmore » for use with a more accurate moisture capacitance model (the effective moisture penetration depth model). The experimental approach was to subject the materials in the house to a square-wave relative humidity profile, measure all of the moisture transfer terms (e.g., infiltration, air conditioner condensate) and calculate the only unmeasured term: the moisture absorption into the materials. After validating the method with laboratory measurements, we performed the tests in a field house. A least-squares fit of an analytical solution to the measured moisture absorption curves was used to determine the three independent model parameters representing the moisture buffering potential of this house and its furnishings. Follow on tests with realistic latent and sensible loads showed good agreement with the derived parameters, especially compared to the commonly-used effective capacitance approach. These results show that the EMPD model, once the inputs are known, is an accurate moisture buffering model.« less
Damping parameter study of a perforated plate with bias flow
NASA Astrophysics Data System (ADS)
Mazdeh, Alireza
One of the main impediments to successful operation of combustion systems in industrial and aerospace applications including gas turbines, ramjets, rocket motors, afterburners (augmenters) and even large heaters/boilers is the dynamic instability also known as thermo-acoustic instability. Concerns with this ongoing problem have grown with the introduction of Lean Premixed Combustion (LPC) systems developed to address the environmental concerns associated with the conventional combustion systems. The most common way to mitigate thermo-acoustic instability is adding acoustic damping to the combustor using acoustic liners. Recently damping properties of bias flow initially introduced to liners only for cooling purposes have been recognized and proven to be an asset in enhancing the damping effectiveness of liners. Acoustic liners are currently being designed using empirical design rules followed by build-test-improve steps; basically by trial and error. There is growing concerns on the lack of reliability associated with the experimental evaluation of the acoustic liners with small size apertures. The development of physics-based tools in assisting the design of such liners has become of great interest to practitioners recently. This dissertation focuses primarily on how Large-Eddy Simulations (LES) or similar techniques such as Scaled Adaptive Simulation (SAS) can be used to characterize damping properties of bias flow. The dissertation also reviews assumptions made in the existing analytical, semi-empirical, and numerical models, provides a criteria to rank order the existing models, and identifies the best existing theoretical model. Flow field calculations by LES provide good insight into the mechanisms that led to acoustic damping. Comparison of simulation results with empirical and analytical studies shows that LES simulation is a viable alternative to the empirical and analytical methods and can accurately predict the damping behavior of liners. Currently the role of LES for research studies concerned with damping properties of liners is limited to validation of other empirical or theoretical approaches. This research has shown that LES can go beyond that and can be used for performing parametric studies to characterize the sensitivity of acoustic properties of multi--perforated liners to the changes in the geometry and flow conditions and be used as a tool to design acoustic liners. The conducted research provides an insightful understanding about the contribution of different flow and geometry parameters such as perforated plate thickness, aperture radius, porosity factors and bias flow velocity. While the study agrees with previous observations obtained by analytical or experimental methods, it also quantifies the impact from these parameters on the acoustic impedance of perforated plate, a key parameter to determine the acoustic performance of any system. The conducted study has also explored the limitations and capabilities of commercial tool when are applied for performing simulation studies on damping properties of liners. The overall agreement between LES results and previous studies proves that commercial tools can be effectively used for these applications under certain conditions.
GRAM 88 - 4D GLOBAL REFERENCE ATMOSPHERE MODEL-1988
NASA Technical Reports Server (NTRS)
Johnson, D. L.
1994-01-01
The Four-D Global Reference Atmosphere program was developed from an empirical atmospheric model which generates values for pressure, density, temperature, and winds from surface level to orbital altitudes. This program can generate altitude profiles of atmospheric parameters along any simulated trajectory through the atmosphere. The program was developed for design applications in the Space Shuttle program, such as the simulation of external tank re-entry trajectories. Other potential applications are global circulation and diffusion studies; also the generation of profiles for comparison with other atmospheric measurement techniques such as satellite measured temperature profiles and infrasonic measurement of wind profiles. GRAM-88 is the latest version of the software GRAM. The software GRAM-88 contains a number of changes that have improved the model statistics, in particular, the small scale density perturbation statistics. It also corrected a low latitude grid problem as well as the SCIDAT data base. Furthermore, GRAM-88 now uses the U.S. Standard Atmosphere 1976 as a comparison standard rather than the US62 used in other versions. The program is an amalgamation of two empirical atmospheric models for the low (25km) and the high (90km) atmosphere, with a newly developed latitude-longitude dependent model for the middle atmosphere. The Jacchia (1970) model simulates the high atmospheric region above 115km. The Jacchia program sections are in separate subroutines so that other thermosphericexospheric models could easily be adapted if required for special applications. The improved code eliminated the calculation of geostrophic winds above 125 km altitude from the model. The atmospheric region between 30km and 90km is simulated by a latitude-longitude dependent empirical model modification of the latitude dependent empirical model of Groves (1971). A fairing technique between 90km and 115km accomplished a smooth transition between the modified Groves values and the Jacchia values. Below 25km the atmospheric parameters are computed by the 4-D worldwide atmospheric model of Spiegler and Fowler (1972). This data set is not included. GRAM-88 incorporates a hydrostatic/gas law check in the 0-30 km altitude range to flag and change any bad data points. Between 5km and 30km, an interpolation scheme is used between the 4-D results and the modified Groves values. The output parameters consist of components for: (1) latitude, longitude, and altitude dependent monthly and annual means, (2) quasi-biennial oscillations (QBO), and (3) random perturbations to partially simulate the variability due to synoptic, diurnal, planetary wave, and gravity wave variations. Quasi-biennial and random variation perturbations are computed from parameters determined by various empirical studies and are added to the monthly mean values. The GRAM-88 program is for batch execution on the IBM 3084. It is written in STANDARD FORTRAN 77 under the MVS/XA operating system. The IBM DISPLA graphics routines are necessary for graphical output. The program was developed in 1988.
Empirical flow parameters - a tool for hydraulic model validity assessment : [summary].
DOT National Transportation Integrated Search
2013-10-01
Hydraulic modeling assembles models based on generalizations of parameter values from textbooks, professional literature, computer program documentation, and engineering experience. Actual measurements adjacent to the model location are seldom availa...
Multi-scale comparison of source parameter estimation using empirical Green's function approach
NASA Astrophysics Data System (ADS)
Chen, X.; Cheng, Y.
2015-12-01
Analysis of earthquake source parameters requires correction of path effect, site response, and instrument responses. Empirical Green's function (EGF) method is one of the most effective methods in removing path effects and station responses by taking the spectral ratio between a larger and smaller event. Traditional EGF method requires identifying suitable event pairs, and analyze each event individually. This allows high quality estimations for strictly selected events, however, the quantity of resolvable source parameters is limited, which challenges the interpretation of spatial-temporal coherency. On the other hand, methods that exploit the redundancy of event-station pairs are proposed, which utilize the stacking technique to obtain systematic source parameter estimations for a large quantity of events at the same time. This allows us to examine large quantity of events systematically, facilitating analysis of spatial-temporal patterns, and scaling relationship. However, it is unclear how much resolution is scarified during this process. In addition to the empirical Green's function calculation, choice of model parameters and fitting methods also lead to biases. Here, using two regional focused arrays, the OBS array in the Mendocino region, and the borehole array in the Salton Sea geothermal field, I compare the results from the large scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods to systematically compare the results within completely different tectonic environment, in order to quantify the consistency and inconsistency in source parameter estimations, and the associated problems.
Impact detection and analysis/health monitoring system for composites
NASA Astrophysics Data System (ADS)
Child, James E.; Kumar, Amrita; Beard, Shawn; Qing, Peter; Paslay, Don G.
2006-05-01
This manuscript includes information from test evaluations and development of a smart event detection system for use in monitoring composite rocket motor cases for damaging impacts. The primary purpose of the system as a sentry for case impact event logging is accomplished through; implementation of a passive network of miniaturized piezoelectric sensors, logger with pre-determined force threshold levels, and analysis software. Empirical approaches to structural characterizations and network calibrations along with implementation techniques were successfully evaluated, testing was performed on both unloaded (less propellants) as well as loaded rocket motors with the cylindrical areas being of primary focus. The logged test impact data with known physical network parameters provided for impact location as well as force determination, typically within 3 inches of actual impact location using a 4 foot network grid and force accuracy within 25%of an actual impact force. The simplistic empirical characterization approach along with the robust / flexible sensor grids and battery operated portable logger show promise of a system that can increase confidence in composite integrity for both new assets progressing through manufacturing processes as well as existing assets that may be in storage or transportation.
Long-term Trends and Variability of Eddy Activities in the South China Sea
NASA Astrophysics Data System (ADS)
Zhang, M.; von Storch, H.
2017-12-01
For constructing empirical downscaling models and projecting possible future states of eddy activities in the South China Sea (SCS), long-term statistical characteristics of the SCS eddy are needed. We use a daily global eddy-resolving model product named STORM covering the period of 1950-2010. This simulation has employed the MPI-OM model with a mean horizontal resolution of 10km and been driven by the NCEP reanalysis-1 data set. An eddy detection and tracking algorithm operating on the gridded sea surface height anomaly (SSHA) fields was developed. A set of parameters for the criteria in the SCS are determined through sensitivity tests. Our method detected more than 6000 eddy tracks in the South China Sea. For all of them, eddy diameters, track length, eddy intensity, eddy lifetime and eddy frequency were determined. The long-term trends and variability of those properties also has been derived. Most of the eddies propagate westward. Nearly 100 eddies travel longer than 1000km, and over 800 eddies have a lifespan of more than 2 months. Furthermore, for building the statistical empirical model, the relationship between the SCS eddy statistics and the large-scale atmospheric and oceanic phenomena has been investigated.
NASA Astrophysics Data System (ADS)
Chaney, Nathaniel W.; Herman, Jonathan D.; Ek, Michael B.; Wood, Eric F.
2016-11-01
With their origins in numerical weather prediction and climate modeling, land surface models aim to accurately partition the surface energy balance. An overlooked challenge in these schemes is the role of model parameter uncertainty, particularly at unmonitored sites. This study provides global parameter estimates for the Noah land surface model using 85 eddy covariance sites in the global FLUXNET network. The at-site parameters are first calibrated using a Latin Hypercube-based ensemble of the most sensitive parameters, determined by the Sobol method, to be the minimum stomatal resistance (rs,min), the Zilitinkevich empirical constant (Czil), and the bare soil evaporation exponent (fxexp). Calibration leads to an increase in the mean Kling-Gupta Efficiency performance metric from 0.54 to 0.71. These calibrated parameter sets are then related to local environmental characteristics using the Extra-Trees machine learning algorithm. The fitted Extra-Trees model is used to map the optimal parameter sets over the globe at a 5 km spatial resolution. The leave-one-out cross validation of the mapped parameters using the Noah land surface model suggests that there is the potential to skillfully relate calibrated model parameter sets to local environmental characteristics. The results demonstrate the potential to use FLUXNET to tune the parameterizations of surface fluxes in land surface models and to provide improved parameter estimates over the globe.
Comparison of the WSA-ENLIL model with three CME cone types
NASA Astrophysics Data System (ADS)
Jang, Soojeong; Moon, Y.; Na, H.
2013-07-01
We have made a comparison of the CME-associated shock propagation based on the WSA-ENLIL model with three cone types using 29 halo CMEs from 2001 to 2002. These halo CMEs have cone model parameters as well as their associated interplanetary (IP) shocks. For this study we consider three different cone types (an asymmetric cone model, an ice-cream cone model and an elliptical cone model) to determine 3-D CME parameters (radial velocity, angular width and source location), which are the input values of the WSA-ENLIL model. The mean absolute error (MAE) of the arrival times for the asymmetric cone model is 10.6 hours, which is about 1 hour smaller than those of the other models. Their ensemble average of MAE is 9.5 hours. However, this value is still larger than that (8.7 hours) of the empirical model of Kim et al. (2007). We will compare their IP shock velocities and densities with those from ACE in-situ measurements and discuss them in terms of the prediction of geomagnetic storms.Abstract (2,250 Maximum Characters): We have made a comparison of the CME-associated shock propagation based on the WSA-ENLIL model with three cone types using 29 halo CMEs from 2001 to 2002. These halo CMEs have cone model parameters as well as their associated interplanetary (IP) shocks. For this study we consider three different cone types (an asymmetric cone model, an ice-cream cone model and an elliptical cone model) to determine 3-D CME parameters (radial velocity, angular width and source location), which are the input values of the WSA-ENLIL model. The mean absolute error (MAE) of the arrival times for the asymmetric cone model is 10.6 hours, which is about 1 hour smaller than those of the other models. Their ensemble average of MAE is 9.5 hours. However, this value is still larger than that (8.7 hours) of the empirical model of Kim et al. (2007). We will compare their IP shock velocities and densities with those from ACE in-situ measurements and discuss them in terms of the prediction of geomagnetic storms.
Westine, Carl D; Spybrook, Jessaca; Taylor, Joseph A
2013-12-01
Prior research has focused primarily on empirically estimating design parameters for cluster-randomized trials (CRTs) of mathematics and reading achievement. Little is known about how design parameters compare across other educational outcomes. This article presents empirical estimates of design parameters that can be used to appropriately power CRTs in science education and compares them to estimates using mathematics and reading. Estimates of intraclass correlations (ICCs) are computed for unconditional two-level (students in schools) and three-level (students in schools in districts) hierarchical linear models of science achievement. Relevant student- and school-level pretest and demographic covariates are then considered, and estimates of variance explained are computed. Subjects: Five consecutive years of Texas student-level data for Grades 5, 8, 10, and 11. Science, mathematics, and reading achievement raw scores as measured by the Texas Assessment of Knowledge and Skills. Results: Findings show that ICCs in science range from .172 to .196 across grades and are generally higher than comparable statistics in mathematics, .163-.172, and reading, .099-.156. When available, a 1-year lagged student-level science pretest explains the most variability in the outcome. The 1-year lagged school-level science pretest is the best alternative in the absence of a 1-year lagged student-level science pretest. Science educational researchers should utilize design parameters derived from science achievement outcomes. © The Author(s) 2014.
Effect on Gaseous Film Cooling of Coolant Injection Through Angled Slots and Normal Holes
NASA Technical Reports Server (NTRS)
Papell, S. Stephen
1960-01-01
A study was made to determine the effect of coolant injection angularity on gaseous film-cooling effectiveness. In the correlation of experimental data an effective injection angle was defined by a vector summation of the coolant and mainstream gas flows. The cosine of this angle was used as a parameter to empirically develop a corrective term to qualify a correlating equation presented in Technical Note D-130 that was limited to tangential injection of the coolant. Data were also obtained for coolant injection through rows of holes normal to the test plate. The slot correlating equation was adapted to fit these data by the definition of an effective slot height. An additional corrective term was then determined to correlate these data.
Estimating Finite Rate of Population Increase for Sharks Based on Vital Parameters
Liu, Kwang-Ming; Chin, Chien-Pang; Chen, Chun-Hui; Chang, Jui-Han
2015-01-01
The vital parameter data for 62 stocks, covering 38 species, collected from the literature, including parameters of age, growth, and reproduction, were log-transformed and analyzed using multivariate analyses. Three groups were identified and empirical equations were developed for each to describe the relationships between the predicted finite rates of population increase (λ’) and the vital parameters, maximum age (Tmax), age at maturity (Tm), annual fecundity (f/Rc)), size at birth (Lb), size at maturity (Lm), and asymptotic length (L∞). Group (1) included species with slow growth rates (0.034 yr-1 < k < 0.103 yr-1) and extended longevity (26 yr < Tmax < 81 yr), e.g., shortfin mako Isurus oxyrinchus, dusky shark Carcharhinus obscurus, etc.; Group (2) included species with fast growth rates (0.103 yr-1 < k < 0.358 yr-1) and short longevity (9 yr < Tmax < 26 yr), e.g., starspotted smoothhound Mustelus manazo, gray smoothhound M. californicus, etc.; Group (3) included late maturing species (Lm/L∞ ≧ 0.75) with moderate longevity (Tmax < 29 yr), e.g., pelagic thresher Alopias pelagicus, sevengill shark Notorynchus cepedianus. The empirical equation for all data pooled was also developed. The λ’ values estimated by these empirical equations showed good agreement with those calculated using conventional demographic analysis. The predictability was further validated by an independent data set of three species. The empirical equations developed in this study not only reduce the uncertainties in estimation but also account for the difference in life history among groups. This method therefore provides an efficient and effective approach to the implementation of precautionary shark management measures. PMID:26576058
Inference of directional selection and mutation parameters assuming equilibrium.
Vogl, Claus; Bergman, Juraj
2015-12-01
In a classical study, Wright (1931) proposed a model for the evolution of a biallelic locus under the influence of mutation, directional selection and drift. He derived the equilibrium distribution of the allelic proportion conditional on the scaled mutation rate, the mutation bias and the scaled strength of directional selection. The equilibrium distribution can be used for inference of these parameters with genome-wide datasets of "site frequency spectra" (SFS). Assuming that the scaled mutation rate is low, Wright's model can be approximated by a boundary-mutation model, where mutations are introduced into the population exclusively from sites fixed for the preferred or unpreferred allelic states. With the boundary-mutation model, inference can be partitioned: (i) the shape of the SFS distribution within the polymorphic region is determined by random drift and directional selection, but not by the mutation parameters, such that inference of the selection parameter relies exclusively on the polymorphic sites in the SFS; (ii) the mutation parameters can be inferred from the amount of polymorphic and monomorphic preferred and unpreferred alleles, conditional on the selection parameter. Herein, we derive maximum likelihood estimators for the mutation and selection parameters in equilibrium and apply the method to simulated SFS data as well as empirical data from a Madagascar population of Drosophila simulans. Copyright © 2015 Elsevier Inc. All rights reserved.
Lacey, Ronald E; Faulkner, William Brock
2015-07-01
This work applied a propagation of uncertainty method to typical total suspended particulate (TSP) sampling apparatus in order to estimate the overall measurement uncertainty. The objectives of this study were to estimate the uncertainty for three TSP samplers, develop an uncertainty budget, and determine the sensitivity of the total uncertainty to environmental parameters. The samplers evaluated were the TAMU High Volume TSP Sampler at a nominal volumetric flow rate of 1.42 m3 min(-1) (50 CFM), the TAMU Low Volume TSP Sampler at a nominal volumetric flow rate of 17 L min(-1) (0.6 CFM) and the EPA TSP Sampler at the nominal volumetric flow rates of 1.1 and 1.7 m3 min(-1) (39 and 60 CFM). Under nominal operating conditions the overall measurement uncertainty was found to vary from 6.1x10(-6) g m(-3) to 18.0x10(-6) g m(-3), which represented an uncertainty of 1.7% to 5.2% of the measurement. Analysis of the uncertainty budget determined that three of the instrument parameters contributed significantly to the overall uncertainty: the uncertainty in the pressure drop measurement across the orifice meter during both calibration and testing and the uncertainty of the airflow standard used during calibration of the orifice meter. Five environmental parameters occurring during field measurements were considered for their effect on overall uncertainty: ambient TSP concentration, volumetric airflow rate, ambient temperature, ambient pressure, and ambient relative humidity. Of these, only ambient TSP concentration and volumetric airflow rate were found to have a strong effect on the overall uncertainty. The technique described in this paper can be applied to other measurement systems and is especially useful where there are no methods available to generate these values empirically. This work addresses measurement uncertainty of TSP samplers used in ambient conditions. Estimation of uncertainty in gravimetric measurements is of particular interest, since as ambient particulate matter (PM) concentrations approach regulatory limits, the uncertainty of the measurement is essential in determining the sample size and the probability of type II errors in hypothesis testing. This is an important factor in determining if ambient PM concentrations exceed regulatory limits. The technique described in this paper can be applied to other measurement systems and is especially useful where there are no methods available to generate these values empirically.
Controls on the variability of net infiltration to desert sandstone
Heilweil, Victor M.; McKinney, Tim S.; Zhdanov, Michael S.; Watt, Dennis E.
2007-01-01
As populations grow in arid climates and desert bedrock aquifers are increasingly targeted for future development, understanding and quantifying the spatial variability of net infiltration becomes critically important for accurately inventorying water resources and mapping contamination vulnerability. This paper presents a conceptual model of net infiltration to desert sandstone and then develops an empirical equation for its spatial quantification at the watershed scale using linear least squares inversion methods for evaluating controlling parameters (independent variables) based on estimated net infiltration rates (dependent variables). Net infiltration rates used for this regression analysis were calculated from environmental tracers in boreholes and more than 3000 linear meters of vadose zone excavations in an upland basin in southwestern Utah underlain by Navajo sandstone. Soil coarseness, distance to upgradient outcrop, and topographic slope were shown to be the primary physical parameters controlling the spatial variability of net infiltration. Although the method should be transferable to other desert sandstone settings for determining the relative spatial distribution of net infiltration, further study is needed to evaluate the effects of other potential parameters such as slope aspect, outcrop parameters, and climate on absolute net infiltration rates.
Determination of soil degradation from flooding for estimating ecosystem services in Slovakia
NASA Astrophysics Data System (ADS)
Hlavcova, Kamila; Szolgay, Jan; Karabova, Beata; Kohnova, Silvia
2015-04-01
Floods as natural hazards are related to soil health, land-use and land management. They not only represent threats on their own, but can also be triggered, controlled and amplified by interactions with other soil threats and soil degradation processes. Among the many direct impacts of flooding on soil health, including soil texture, structure, changes in the soil's chemical properties, deterioration of soil aggregation and water holding capacity, etc., are soil erosion, mudflows, depositions of sediment and debris. Flooding is initiated by a combination of predispositive and triggering factors and apart from climate drivers it is related to the physiographic conditions of the land, state of the soil, land use and land management. Due to the diversity and complexity of their potential interactions, diverse methodologies and approaches are needed for describing a particular type of event in a specific environment, especially in ungauged sites. In engineering studies and also in many rainfall-runoff models, the SCS-CN method has remained widely applied for soil and land use-based estimations of direct runoff and flooding potential. The SCS-CN method is an empirical rainfall-runoff model developed by the USDA Natural Resources Conservation Service (formerly called the Soil Conservation Service or SCS). The runoff curve number (CN) is based on the hydrological soil characteristics, land use, land management and antecedent saturation conditions of soil. Since the method and curve numbers were derived on the basis of an empirical analysis of rainfall-runoff events from small catchments and hillslope plots monitored by the USDA, the use of the method for the conditions of Slovakia raises uncertainty and can cause inaccurate results in determining direct runoff. The objective of the study presented (also within the framework of the EU-FP7 RECARE Project) was to develop the SCS - CN methodology for the flood conditions in Slovakia (and especially for the RECARE pilot site of Myjava), with an emphasis on the determination of soil degradation from flooding for estimating ecosystem services. The parameters of the SCS-CN methodology were regionalised empirically based on actual rainfall and discharge measurements. Since there has been no appropriate methodology provided for the regionalisation of SCS-CN method parameters in Slovakia, such as runoff curve numbers and initial abstraction coefficients (λ), the work presented is important for the correct application of the SCS-CN method in our conditions.
Heinonen, Johannes P M; Palmer, Stephen C F; Redpath, Steve M; Travis, Justin M J
2014-01-01
Individual-based models have gained popularity in ecology, and enable simultaneous incorporation of spatial explicitness and population dynamic processes to understand spatio-temporal patterns of populations. We introduce an individual-based model for understanding and predicting spatial hen harrier (Circus cyaneus) population dynamics in Great Britain. The model uses a landscape with habitat, prey and game management indices. The hen harrier population was initialised according to empirical census estimates for 1988/89 and simulated until 2030, and predictions for 1998, 2004 and 2010 were compared to empirical census estimates for respective years. The model produced a good qualitative match to overall trends between 1989 and 2010. Parameter explorations revealed relatively high elasticity in particular to demographic parameters such as juvenile male mortality. This highlights the need for robust parameter estimates from empirical research. There are clearly challenges for replication of real-world population trends, but this model provides a useful tool for increasing understanding of drivers of hen harrier dynamics and focusing research efforts in order to inform conflict management decisions.
Heinonen, Johannes P. M.; Palmer, Stephen C. F.; Redpath, Steve M.; Travis, Justin M. J.
2014-01-01
Individual-based models have gained popularity in ecology, and enable simultaneous incorporation of spatial explicitness and population dynamic processes to understand spatio-temporal patterns of populations. We introduce an individual-based model for understanding and predicting spatial hen harrier (Circus cyaneus) population dynamics in Great Britain. The model uses a landscape with habitat, prey and game management indices. The hen harrier population was initialised according to empirical census estimates for 1988/89 and simulated until 2030, and predictions for 1998, 2004 and 2010 were compared to empirical census estimates for respective years. The model produced a good qualitative match to overall trends between 1989 and 2010. Parameter explorations revealed relatively high elasticity in particular to demographic parameters such as juvenile male mortality. This highlights the need for robust parameter estimates from empirical research. There are clearly challenges for replication of real-world population trends, but this model provides a useful tool for increasing understanding of drivers of hen harrier dynamics and focusing research efforts in order to inform conflict management decisions. PMID:25405860
Monthly hydroclimatology of the continental United States
NASA Astrophysics Data System (ADS)
Petersen, Thomas; Devineni, Naresh; Sankarasubramanian, A.
2018-04-01
Physical/semi-empirical models that do not require any calibration are of paramount need for estimating hydrological fluxes for ungauged sites. We develop semi-empirical models for estimating the mean and variance of the monthly streamflow based on Taylor Series approximation of a lumped physically based water balance model. The proposed models require mean and variance of monthly precipitation and potential evapotranspiration, co-variability of precipitation and potential evapotranspiration and regionally calibrated catchment retention sensitivity, atmospheric moisture uptake sensitivity, groundwater-partitioning factor, and the maximum soil moisture holding capacity parameters. Estimates of mean and variance of monthly streamflow using the semi-empirical equations are compared with the observed estimates for 1373 catchments in the continental United States. Analyses show that the proposed models explain the spatial variability in monthly moments for basins in lower elevations. A regionalization of parameters for each water resources region show good agreement between observed moments and model estimated moments during January, February, March and April for mean and all months except May and June for variance. Thus, the proposed relationships could be employed for understanding and estimating the monthly hydroclimatology of ungauged basins using regional parameters.
Testing a new Free Core Nutation empirical model
NASA Astrophysics Data System (ADS)
Belda, Santiago; Ferrándiz, José M.; Heinkelmann, Robert; Nilsson, Tobias; Schuh, Harald
2016-03-01
The Free Core Nutation (FCN) is a free mode of the Earth's rotation caused by the different material characteristics of the Earth's core and mantle. This causes the rotational axes of those layers to slightly diverge from each other, resulting in a wobble of the Earth's rotation axis comparable to nutations. In this paper we focus on estimating empirical FCN models using the observed nutations derived from the VLBI sessions between 1993 and 2013. Assuming a fixed value for the oscillation period, the time-variable amplitudes and phases are estimated by means of multiple sliding window analyses. The effects of using different a priori Earth Rotation Parameters (ERP) in the derivation of models are also addressed. The optimal choice of the fundamental parameters of the model, namely the window width and step-size of its shift, is searched by performing a thorough experimental analysis using real data. The former analyses lead to the derivation of a model with a temporal resolution higher than the one used in the models currently available, with a sliding window reduced to 400 days and a day-by-day shift. It is shown that this new model increases the accuracy of the modeling of the observed Earth's rotation. Besides, empirical models determined from USNO Finals as a priori ERP present a slightly lower Weighted Root Mean Square (WRMS) of residuals than IERS 08 C04 along the whole period of VLBI observations, according to our computations. The model is also validated through comparisons with other recognized models. The level of agreement among them is satisfactory. Let us remark that our estimates give rise to the lowest residuals and seem to reproduce the FCN signal in more detail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sathaye, Jayant A.
2000-04-01
Integrated assessment (IA) modeling of climate policy is increasingly global in nature, with models incorporating regional disaggregation. The existing empirical basis for IA modeling, however, largely arises from research on industrialized economies. Given the growing importance of developing countries in determining long-term global energy and carbon emissions trends, filling this gap with improved statistical information on developing countries' energy and carbon-emissions characteristics is an important priority for enhancing IA modeling. Earlier research at LBNL on this topic has focused on assembling and analyzing statistical data on productivity trends and technological change in the energy-intensive manufacturing sectors of five developing countries,more » India, Brazil, Mexico, Indonesia, and South Korea. The proposed work will extend this analysis to the agriculture and electric power sectors in India, South Korea, and two other developing countries. They will also examine the impact of alternative model specifications on estimates of productivity growth and technological change for each of the three sectors, and estimate the contribution of various capital inputs--imported vs. indigenous, rigid vs. malleable-- in contributing to productivity growth and technological change. The project has already produced a data resource on the manufacturing sector which is being shared with IA modelers. This will be extended to the agriculture and electric power sectors, which would also be made accessible to IA modeling groups seeking to enhance the empirical descriptions of developing country characteristics. The project will entail basic statistical and econometric analysis of productivity and energy trends in these developing country sectors, with parameter estimates also made available to modeling groups. The parameter estimates will be developed using alternative model specifications that could be directly utilized by the existing IAMs for the manufacturing, agriculture, and electric power sectors.« less
Grassland productivity in response to nutrient additions and herbivory is scale-dependent
Baldwin, Douglas C.; Naithani, Kusum J.
2016-01-01
Vegetation response to nutrient addition can vary across space, yet studies that explicitly incorporate spatial pattern into experimental approaches are rare. To explore whether there are unique spatial scales (grains) at which grass response to nutrients and herbivory is best expressed, we imposed a large (∼3.75 ha) experiment in a South African coastal grassland ecosystem. In two of six 60 × 60 m grassland plots, we imposed a scaled sampling design in which fertilizer was added in replicated sub-plots (1 × 1 m, 2 × 2 m, and 4 × 4 m). The remaining plots either received no additions or were fertilized evenly across the entire area. Three of the six plots were fenced to exclude herbivory. We calculated empirical semivariograms for all plots one year following nutrient additions to determine whether the scale of grass response (biomass and nutrient concentrations) corresponded to the scale of the sub-plot additions and compared these results to reference plots (unfertilized or unscaled) and to plots with and without herbivory. We compared empirical semivariogram parameters to parameters from semivariograms derived from a set of simulated landscapes (neutral models). Empirical semivariograms showed spatial structure in plots that received multi-scaled nutrient additions, particularly at the 2 × 2 m grain. The level of biomass response was predicted by foliar P concentration and, to a lesser extent, N, with the treatment effect of herbivory having a minimal influence. Neutral models confirmed the length scale of the biomass response and indicated few differences due to herbivory. Overall, we conclude that interpretation of nutrient limitation in grasslands is dependent on the grain used to measure grass response and that herbivory had a secondary effect. PMID:27920956
NASA Astrophysics Data System (ADS)
Heuer, B.; Plenefisch, T.; Seidl, D.; Klinge, K.
Investigations on the interdependence of different source parameters are an impor- tant task to get more insight into the mechanics and dynamics of earthquake rup- ture, to model source processes and to make predictions for ground motion at the surface. The interdependencies, providing so-called scaling relations, have often been investigated for large earthquakes. However, they are not commonly determined for micro-earthquakes and swarm-earthquakes, especially for those of the Vogtland/NW- Bohemia region. For the most recent swarm in the Vogtland/NW-Bohemia, which took place between August and December 2000 near Novy Kostel (Czech Republic), we systematically determine the most important source parameters such as energy E0, seismic moment M0, local magnitude ML, fault length L, corner frequency fc and rise time r and build their interdependencies. The swarm of 2000 is well suited for such investigations since it covers a large magnitude interval (1.5 ML 3.7) and there are also observations in the near-field at several stations. In the present paper we mostly concentrate on two near-field stations with hypocentral distances between 11 and 13 km, namely WERN (Wernitzgrün) and SBG (Schönberg). Our data processing includes restitution to true ground displacement and rotation into the ray-based prin- cipal co-ordinate system, which we determine by the covariance matrix of the P- and S-displacement, respectively. Data preparation, determination of the distinct source parameters as well as statistical interpretation of the results will be exemplary pre- sented. The results will be discussed with respect to temporal variations in the swarm activity (the swarm consists of eight distinct sub-episodes) and already existing focal mechanisms.
Strongly enhanced 1/f - noise level in {kappa}-(BEDT-TTF){sub 2}X salts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandenburg, J.; Muller, J.; Wirth, S.
2010-01-01
Fluctuation spectroscopy has been used as an investigative tool to understand the scattering mechanisms of carriers and their low-frequency dynamics in quasi-two-dimensional organic conductors ?-(BEDT-TTF)2X. We report on the very high noise level in these systems as determined from Hooge's empirical law to quantify 1/f-type noise in solids. The value of the Hooge parameter ?H, i.e. the normalized noise level, of 105-107 is several orders of magnitude higher than values of ?Hnot, vert, similar10-2-10-3 typically found in homogeneous metals and semiconductors.
NASA Astrophysics Data System (ADS)
Karpushin, P. A.; Popov, Yu B.; Popova, A. I.; Popova, K. Yu; Krasnenko, N. P.; Lavrinenko, A. V.
2017-11-01
In this paper, the probabilities of faultless operation of aerologic stations are analyzed, the hypothesis of normality of the empirical data required for using the Kalman filter algorithms is tested, and the spatial correlation functions of distributions of meteorological parameters are determined. The results of a statistical analysis of two-term (0, 12 GMT) radiosonde observations of the temperature and wind velocity components at some preset altitude ranges in the troposphere in 2001-2016 are presented. These data can be used in mathematical modeling of physical processes in the atmosphere.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chremos, Alexandros, E-mail: achremos@imperial.ac.uk; Nikoubashman, Arash, E-mail: arashn@princeton.edu; Panagiotopoulos, Athanassios Z.
In this contribution, we develop a coarse-graining methodology for mapping specific block copolymer systems to bead-spring particle-based models. We map the constituent Kuhn segments to Lennard-Jones particles, and establish a semi-empirical correlation between the experimentally determined Flory-Huggins parameter χ and the interaction of the model potential. For these purposes, we have performed an extensive set of isobaric–isothermal Monte Carlo simulations of binary mixtures of Lennard-Jones particles with the same size but with asymmetric energetic parameters. The phase behavior of these monomeric mixtures is then extended to chains with finite sizes through theoretical considerations. Such a top-down coarse-graining approach is importantmore » from a computational point of view, since many characteristic features of block copolymer systems are on time and length scales which are still inaccessible through fully atomistic simulations. We demonstrate the applicability of our method for generating parameters by reproducing the morphology diagram of a specific diblock copolymer, namely, poly(styrene-b-methyl methacrylate), which has been extensively studied in experiments.« less
High fidelity studies of exploding foil initiator bridges, Part 1: Experimental method
NASA Astrophysics Data System (ADS)
Bowden, Mike; Neal, William
2017-01-01
Simulations of high voltage detonators, such as Exploding Bridgewire (EBW) and Exploding Foil Initiators (EFI), have historically been simple, often empirical, one-dimensional models capable of predicting parameters such as current, voltage and in the case of EFIs, flyer velocity. Correspondingly, experimental methods have in general been limited to the same parameters. With the advent of complex, first principles magnetohydrodynamic codes such as ALEGRA and ALE-MHD, it is now possible to simulate these components in three dimensions, predicting a much greater range of parameters than before. A significant improvement in experimental capability was therefore required to ensure these simulations could be adequately validated. In this first paper of a three part study, the experimental method for determining the current, voltage, flyer velocity and multi-dimensional profile of detonator components is presented. This improved capability, along with high fidelity simulations, offer an opportunity to gain a greater understanding of the processes behind the functioning of EBW and EFI detonators.
Tephra Fallout Hazard Assessment for VEI5 Plinian Eruption at Kuju Volcano, Japan, Using TEPHRA2
NASA Astrophysics Data System (ADS)
Tsuji, Tomohiro; Ikeda, Michiharu; Kishimoto, Hiroshi; Fujita, Koji; Nishizaka, Naoki; Onishi, Kozo
2017-06-01
Tephra fallout has a potential impact on engineered structures and systems at nuclear power plants. We provide the first report estimating potential accumulations of tephra fallout as big as VEI5 eruption from Kuju Volcano and calculated hazard curves at the Ikata Power Plant, using the TEPHRA2 computer program. We reconstructed the eruptive parameters of Kj-P1 tephra fallout deposit based on geological survey and literature review. A series of parameter studies were carried out to determine the best values of empirical parameters, such as diffusion coefficient and the fall time threshold. Based on such a reconstruction, we represent probabilistic analyses which assess the variation in meteorological condition, using wind profiles extracted from a 22 year long wind dataset. The obtained hazard curves and probability maps of tephra fallout associated to a Plinian eruption were used to discuss the exceeding probability at the site and the implications of such a severe eruption scenario.
Simulation of Thematic Mapper performance as a function of sensor scanning parameters
NASA Technical Reports Server (NTRS)
Johnson, R. H.; Shah, N. J.; Schmidt, N. F.
1975-01-01
The investigation and results of the Thematic Mapper Instrument Performance Study are described. The Thematic Mapper is the advanced multispectral scanner initially planned for the Earth Observation Satellite and now planned for LANDSAT D. The use of existing digital airborne scanner data obtained with the Modular Multispectral Scanner (M2S) at Bendix provided an opportunity to simulate the effects of variation of design parameters of the Thematic Mapper. Analysis and processing of this data on the Bendix Multispectral Data Analysis System were used to empirically determine categorization performance on data generated with variations of the sampling period and scan overlap parameters of the Thematic Mapper. The Bendix M2S data, with a 2.5 milliradian instantaneous field of view and a spatial resolution (pixel size) of 10-m from 13,000 ft altitude, allowed a direct simulation of Thematic Mapper data with a 30-m resolution. The flight data chosen were obtained on 30 June 1973 over agricultural test sites in Indiana.
NASA Astrophysics Data System (ADS)
Termini, Donatella
2013-04-01
Recent catastrophic events due to intense rainfalls have mobilized large amount of sediments causing extensive damages in vast areas. These events have highlighted how debris-flows runout estimations are of crucial importance to delineate the potentially hazardous areas and to make reliable assessment of the level of risk of the territory. Especially in recent years, several researches have been conducted in order to define predicitive models. But, existing runout estimation methods need input parameters that can be difficult to estimate. Recent experimental researches have also allowed the assessment of the physics of the debris flows. But, the major part of the experimental studies analyze the basic kinematic conditions which determine the phenomenon evolution. Experimental program has been recently conducted at the Hydraulic laboratory of the Department of Civil, Environmental, Aerospatial and of Materials (DICAM) - University of Palermo (Italy). The experiments, carried out in a laboratory flume appositely constructed, were planned in order to evaluate the influence of different geometrical parameters (such as the slope and the geometrical characteristics of the confluences to the main channel) on the propagation phenomenon of the debris flow and its deposition. Thus, the aim of the present work is to give a contribution to defining input parameters in runout estimation by numerical modeling. The propagation phenomenon is analyzed for different concentrations of solid materials. Particular attention is devoted to the identification of the stopping distance of the debris flow and of the involved parameters (volume, angle of depositions, type of material) in the empirical predictive equations available in literature (Rickenmanm, 1999; Bethurst et al. 1997). Bethurst J.C., Burton A., Ward T.J. 1997. Debris flow run-out and landslide sediment delivery model tests. Journal of hydraulic Engineering, ASCE, 123(5), 419-429 Rickenmann D. 1999. Empirical relationships fro debris flow. Natural hazards, 19, pp. 47-77
NASA Astrophysics Data System (ADS)
Balthazar, Vincent; Vanacker, Veerle; Lambin, Eric F.
2012-08-01
A topographic correction of optical remote sensing data is necessary to improve the quality of quantitative forest cover change analyses in mountainous terrain. The implementation of semi-empirical correction methods requires the calibration of model parameters that are empirically defined. This study develops a method to improve the performance of topographic corrections for forest cover change detection in mountainous terrain through an iterative tuning method of model parameters based on a systematic evaluation of the performance of the correction. The latter was based on: (i) the general matching of reflectances between sunlit and shaded slopes and (ii) the occurrence of abnormal reflectance values, qualified as statistical outliers, in very low illuminated areas. The method was tested on Landsat ETM+ data for rough (Ecuadorian Andes) and very rough mountainous terrain (Bhutan Himalayas). Compared to a reference level (no topographic correction), the ATCOR3 semi-empirical correction method resulted in a considerable reduction of dissimilarities between reflectance values of forested sites in different topographic orientations. Our results indicate that optimal parameter combinations are depending on the site, sun elevation and azimuth and spectral conditions. We demonstrate that the results of relatively simple topographic correction methods can be greatly improved through a feedback loop between parameter tuning and evaluation of the performance of the correction model.
Adaptive non-linear control for cancer therapy through a Fokker-Planck observer.
Shakeri, Ehsan; Latif-Shabgahi, Gholamreza; Esmaeili Abharian, Amir
2018-04-01
In recent years, many efforts have been made to present optimal strategies for cancer therapy through the mathematical modelling of tumour-cell population dynamics and optimal control theory. In many cases, therapy effect is included in the drift term of the stochastic Gompertz model. By fitting the model with empirical data, the parameters of therapy function are estimated. The reported research works have not presented any algorithm to determine the optimal parameters of therapy function. In this study, a logarithmic therapy function is entered in the drift term of the Gompertz model. Using the proposed control algorithm, the therapy function parameters are predicted and adaptively adjusted. To control the growth of tumour-cell population, its moments must be manipulated. This study employs the probability density function (PDF) control approach because of its ability to control all the process moments. A Fokker-Planck-based non-linear stochastic observer will be used to determine the PDF of the process. A cost function based on the difference between a predefined desired PDF and PDF of tumour-cell population is defined. Using the proposed algorithm, the therapy function parameters are adjusted in such a manner that the cost function is minimised. The existence of an optimal therapy function is also proved. The numerical results are finally given to demonstrate the effectiveness of the proposed method.
Jet Aeroacoustics: Noise Generation Mechanism and Prediction
NASA Technical Reports Server (NTRS)
Tam, Christopher
1998-01-01
This report covers the third year research effort of the project. The research work focussed on the fine scale mixing noise of both subsonic and supersonic jets and the effects of nozzle geometry and tabs on subsonic jet noise. In publication 1, a new semi-empirical theory of jet mixing noise from fine scale turbulence is developed. By an analogy to gas kinetic theory, it is shown that the source of noise is related to the time fluctuations of the turbulence kinetic theory. On starting with the Reynolds Averaged Navier-Stokes equations, a formula for the radiated noise is derived. An empirical model of the space-time correlation function of the turbulence kinetic energy is adopted. The form of the model is in good agreement with the space-time two-point velocity correlation function measured by Davies and coworkers. The parameters of the correlation are related to the parameters of the k-epsilon turbulence model. Thus the theory is self-contained. Extensive comparisons between the computed noise spectrum of the theory and experimental measured have been carried out. The parameters include jet Mach number from 0.3 to 2.0 and temperature ratio from 1.0 to 4.8. Excellent agreements are found in the spectrum shape, noise intensity and directivity. It is envisaged that the theory would supercede all semi-empirical and totally empirical jet noise prediction methods in current use.
Mathematical modeling of synthetic unit hydrograph case study: Citarum watershed
NASA Astrophysics Data System (ADS)
Islahuddin, Muhammad; Sukrainingtyas, Adiska L. A.; Kusuma, M. Syahril B.; Soewono, Edy
2015-09-01
Deriving unit hydrograph is very important in analyzing watershed's hydrologic response of a rainfall event. In most cases, hourly measures of stream flow data needed in deriving unit hydrograph are not always available. Hence, one needs to develop methods for deriving unit hydrograph for ungagged watershed. Methods that have evolved are based on theoretical or empirical formulas relating hydrograph peak discharge and timing to watershed characteristics. These are usually referred to Synthetic Unit Hydrograph. In this paper, a gamma probability density function and its variant are used as mathematical approximations of a unit hydrograph for Citarum Watershed. The model is adjusted with real field condition by translation and scaling. Optimal parameters are determined by using Particle Swarm Optimization method with weighted objective function. With these models, a synthetic unit hydrograph can be developed and hydrologic parameters can be well predicted.
NASA Astrophysics Data System (ADS)
Wu, Xian-Qian; Wang, Xi; Wei, Yan-Peng; Song, Hong-Wei; Huang, Chen-Guang
2012-06-01
Shot peening is a widely used surface treatment method by generating compressive residual stress near the surface of metallic materials to increase fatigue life and resistance to corrosion fatigue, cracking, etc. Compressive residual stress and dent profile are important factors to evaluate the effectiveness of shot peening process. In this paper, the influence of dimensionless parameters on maximum compressive residual stress and maximum depth of the dent were investigated. Firstly, dimensionless relations of processing parameters that affect the maximum compressive residual stress and the maximum depth of the dent were deduced by dimensional analysis method. Secondly, the influence of each dimensionless parameter on dimensionless variables was investigated by the finite element method. Furthermore, related empirical formulas were given for each dimensionless parameter based on the simulation results. Finally, comparison was made and good agreement was found between the simulation results and the empirical formula, which shows that a useful approach is provided in this paper for analyzing the influence of each individual parameter.
Liu, Xiaodong; Baoyin, Hexi; Marchis, Franck
In this study, the hierarchical stability of the seven known large size ratio triple asteroids is investigated. The effect of the solar gravity and primary's J 2 are considered. The force function is expanded in terms of mass ratios based on the Hill's approximation and the large size ratio property. The empirical stability parameters are used to examine the hierarchical stability of the triple asteroids. It is found that the all the known large size ratio triple asteroid systems are hierarchically stable. This study provides useful information for future evolutions of the triple asteroids.
Precise determination of time to reach viral load set point after acute HIV-1 infection.
Huang, Xiaojie; Chen, Hui; Li, Wei; Li, Haiying; Jin, Xia; Perelson, Alan S; Fox, Zoe; Zhang, Tong; Xu, Xiaoning; Wu, Hao
2012-12-01
The HIV viral load set point has long been used as a prognostic marker of disease progression and more recently as an end-point parameter in HIV vaccine clinical trials. The definition of set point, however, is variable. Moreover, the earliest time at which the set point is reached after the onset of infection has never been clearly defined. In this study, we obtained sequential plasma viral load data from 60 acutely HIV-infected Chinese patients among a cohort of men who have sex with men, mathematically determined viral load set point levels, and estimated time to attain set point after infection. We also compared the results derived from our models and that obtained from an empirical method. With novel uncomplicated mathematic model, we discovered that set points may vary from 21 to 119 days dependent on the patients' initial viral load trajectory. The viral load set points were 4.28 ± 0.86 and 4.25 ± 0.87 log10 copies per milliliter (P = 0.08), respectively, as determined by our model and an empirical method, suggesting an excellent agreement between the old and new methods. We provide a novel method to estimate viral load set point at the very early stage of HIV infection. Application of this model can accurately and reliably determine the set point, thus providing a new tool for physicians to better monitor early intervention strategies in acutely infected patients and scientists to rationally design preventative vaccine studies.
ERIC Educational Resources Information Center
Fan, Xitao
This paper empirically and systematically assessed the performance of bootstrap resampling procedure as it was applied to a regression model. Parameter estimates from Monte Carlo experiments (repeated sampling from population) and bootstrap experiments (repeated resampling from one original bootstrap sample) were generated and compared. Sample…
Empirical Histograms in Item Response Theory with Ordinal Data
ERIC Educational Resources Information Center
Woods, Carol M.
2007-01-01
The purpose of this research is to describe, test, and illustrate a new implementation of the empirical histogram (EH) method for ordinal items. The EH method involves the estimation of item response model parameters simultaneously with the approximation of the distribution of the random latent variable (theta) as a histogram. Software for the EH…
Extended Analysis of Empirical Citations with Skinner's "Verbal Behavior": 1984-2004
ERIC Educational Resources Information Center
Dixon, Mark R.; Small, Stacey L.; Rosales, Rocio
2007-01-01
The present paper comments on and extends the citation analysis of verbal operant publications based on Skinner's "Verbal Behavior" (1957) by Dymond, O'Hora, Whelan, and O'Donovan (2006). Variations in population parameters were evaluated for only those studies that Dymond et al. categorized as empirical. Preliminary results indicate that the…
Simms, Laura E.; Engebretson, Mark J.; Pilipenko, Viacheslav; ...
2016-04-07
The daily maximum relativistic electron flux at geostationary orbit can be predicted well with a set of daily averaged predictor variables including previous day's flux, seed electron flux, solar wind velocity and number density, AE index, IMF Bz, Dst, and ULF and VLF wave power. As predictor variables are intercorrelated, we used multiple regression analyses to determine which are the most predictive of flux when other variables are controlled. Empirical models produced from regressions of flux on measured predictors from 1 day previous were reasonably effective at predicting novel observations. Adding previous flux to the parameter set improves the predictionmore » of the peak of the increases but delays its anticipation of an event. Previous day's solar wind number density and velocity, AE index, and ULF wave activity are the most significant explanatory variables; however, the AE index, measuring substorm processes, shows a negative correlation with flux when other parameters are controlled. This may be due to the triggering of electromagnetic ion cyclotron waves by substorms that cause electron precipitation. VLF waves show lower, but significant, influence. The combined effect of ULF and VLF waves shows a synergistic interaction, where each increases the influence of the other on flux enhancement. Correlations between observations and predictions for this 1 day lag model ranged from 0.71 to 0.89 (average: 0.78). Furthermore, a path analysis of correlations between predictors suggests that solar wind and IMF parameters affect flux through intermediate processes such as ring current ( Dst), AE, and wave activity.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simms, Laura E.; Engebretson, Mark J.; Pilipenko, Viacheslav
The daily maximum relativistic electron flux at geostationary orbit can be predicted well with a set of daily averaged predictor variables including previous day's flux, seed electron flux, solar wind velocity and number density, AE index, IMF Bz, Dst, and ULF and VLF wave power. As predictor variables are intercorrelated, we used multiple regression analyses to determine which are the most predictive of flux when other variables are controlled. Empirical models produced from regressions of flux on measured predictors from 1 day previous were reasonably effective at predicting novel observations. Adding previous flux to the parameter set improves the predictionmore » of the peak of the increases but delays its anticipation of an event. Previous day's solar wind number density and velocity, AE index, and ULF wave activity are the most significant explanatory variables; however, the AE index, measuring substorm processes, shows a negative correlation with flux when other parameters are controlled. This may be due to the triggering of electromagnetic ion cyclotron waves by substorms that cause electron precipitation. VLF waves show lower, but significant, influence. The combined effect of ULF and VLF waves shows a synergistic interaction, where each increases the influence of the other on flux enhancement. Correlations between observations and predictions for this 1 day lag model ranged from 0.71 to 0.89 (average: 0.78). Furthermore, a path analysis of correlations between predictors suggests that solar wind and IMF parameters affect flux through intermediate processes such as ring current ( Dst), AE, and wave activity.« less
Fluorescence Imaging Study of Transition in Underexpanded Free Jets
NASA Technical Reports Server (NTRS)
Wilkes, Jennifer A.; Danehy, Paul M.; Nowak, Robert J.
2005-01-01
Planar laser-induced fluorescence (PLIF) is demonstrated to be a valuable tool for studying the onset of transition to turbulence. For this study, we have used PLIF of nitric oxide (NO) to image underexpanded axisymmetric free jets issuing into a low-pressure chamber through a smooth converging nozzle with a sonic orifice. Flows were studied over a range of Reynolds numbers and nozzle-exit-to-ambient pressure ratios with the aim of empirically determining criteria governing the onset of turbulence. We have developed an image processing technique, involving calculation of the standard deviation of the intensity in PLIF images, in order to aid in the identification of turbulence. We have used the resulting images to identify laminar, transitional and turbulent flow regimes. Jet scaling parameters were used to define a rescaled Reynolds number that incorporates the influence of a varying pressure ratio. An empirical correlation was found between transition length and this rescaled Reynolds number for highly underexpanded jets.
Empirical algorithms to estimate water column pH in the Southern Ocean
NASA Astrophysics Data System (ADS)
Williams, N. L.; Juranek, L. W.; Johnson, K. S.; Feely, R. A.; Riser, S. C.; Talley, L. D.; Russell, J. L.; Sarmiento, J. L.; Wanninkhof, R.
2016-04-01
Empirical algorithms are developed using high-quality GO-SHIP hydrographic measurements of commonly measured parameters (temperature, salinity, pressure, nitrate, and oxygen) that estimate pH in the Pacific sector of the Southern Ocean. The coefficients of determination, R2, are 0.98 for pH from nitrate (pHN) and 0.97 for pH from oxygen (pHOx) with RMS errors of 0.010 and 0.008, respectively. These algorithms are applied to Southern Ocean Carbon and Climate Observations and Modeling (SOCCOM) biogeochemical profiling floats, which include novel sensors (pH, nitrate, oxygen, fluorescence, and backscatter). These algorithms are used to estimate pH on floats with no pH sensors and to validate and adjust pH sensor data from floats with pH sensors. The adjusted float data provide, for the first time, seasonal cycles in surface pH on weekly resolution that range from 0.05 to 0.08 on weekly resolution for the Pacific sector of the Southern Ocean.
NASA Astrophysics Data System (ADS)
Fetisova, Yu. A.; Ermolenko, B. V.; Ermolenko, G. V.; Kiseleva, S. V.
2017-04-01
We studied the information basis for the assessment of wind power potential on the territory of Russia. We described the methodology to determine the parameters of the Weibull function, which reflects the density of distribution of probabilities of wind flow speeds at a defined basic height above the surface of the earth using the available data on the average speed at this height and its repetition by gradations. The application of the least square method for determining these parameters, unlike the use of graphical methods, allows performing a statistical assessment of the results of approximation of empirical histograms by the Weibull formula. On the basis of the computer-aided analysis of the statistical data, it was shown that, at a fixed point where the wind speed changes at different heights, the range of parameter variation of the Weibull distribution curve is relatively small, the sensitivity of the function to parameter changes is quite low, and the influence of changes on the shape of speed distribution curves is negligible. Taking this into consideration, we proposed and mathematically verified the methodology of determining the speed parameters of the Weibull function at other heights using the parameter computations for this function at a basic height, which is known or defined by the average speed of wind flow, or the roughness coefficient of the geological substrate. We gave examples of practical application of the suggested methodology in the development of the Atlas of Renewable Energy Resources in Russia in conditions of deficiency of source meteorological data. The proposed methodology, to some extent, may solve the problem related to the lack of information on the vertical profile of repeatability of the wind flow speeds in the presence of a wide assortment of wind turbines with different ranges of wind-wheel axis heights and various performance characteristics in the global market; as a result, this methodology can become a powerful tool for effective selection of equipment in the process of designing a power supply system in a certain location.
Model improvements and validation of TerraSAR-X precise orbit determination
NASA Astrophysics Data System (ADS)
Hackel, S.; Montenbruck, O.; Steigenberger, P.; Balss, U.; Gisinger, C.; Eineder, M.
2017-05-01
The radar imaging satellite mission TerraSAR-X requires precisely determined satellite orbits for validating geodetic remote sensing techniques. Since the achieved quality of the operationally derived, reduced-dynamic (RD) orbit solutions limits the capabilities of the synthetic aperture radar (SAR) validation, an effort is made to improve the estimated orbit solutions. This paper discusses the benefits of refined dynamical models on orbit accuracy as well as estimated empirical accelerations and compares different dynamic models in a RD orbit determination. Modeling aspects discussed in the paper include the use of a macro-model for drag and radiation pressure computation, the use of high-quality atmospheric density and wind models as well as the benefit of high-fidelity gravity and ocean tide models. The Sun-synchronous dusk-dawn orbit geometry of TerraSAR-X results in a particular high correlation of solar radiation pressure modeling and estimated normal-direction positions. Furthermore, this mission offers a unique suite of independent sensors for orbit validation. Several parameters serve as quality indicators for the estimated satellite orbit solutions. These include the magnitude of the estimated empirical accelerations, satellite laser ranging (SLR) residuals, and SLR-based orbit corrections. Moreover, the radargrammetric distance measurements of the SAR instrument are selected for assessing the quality of the orbit solutions and compared to the SLR analysis. The use of high-fidelity satellite dynamics models in the RD approach is shown to clearly improve the orbit quality compared to simplified models and loosely constrained empirical accelerations. The estimated empirical accelerations are substantially reduced by 30% in tangential direction when working with the refined dynamical models. Likewise the SLR residuals are reduced from -3 ± 17 to 2 ± 13 mm, and the SLR-derived normal-direction position corrections are reduced from 15 to 6 mm, obtained from the 2012-2014 period. The radar range bias is reduced from -10.3 to -6.1 mm with the updated orbit solutions, which coincides with the reduced standard deviation of the SLR residuals. The improvements are mainly driven by the satellite macro-model for the purpose of solar radiation pressure modeling, improved atmospheric density models, and the use of state-of-the-art gravity field models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iorio, L., E-mail: lorenzo.iorio@libero.it
2011-09-15
The subject of this paper is the empirically determined anomalous secular increases of the astronomical unit, of the order of some cm yr{sup -1}, and of the eccentricity of the lunar orbit, of the order of 10{sup -12} yr{sup -1}. The aim is to find an empirical explanation of both anomalies as far as their orders of magnitude are concerned. The methods employed are working out perturbatively with the Gauss equations the secular effects on the semi-major axis a and the eccentricity e of a test particle orbiting a central body acted upon by a small anomalous radial acceleration Amore » proportional to the radial velocity v{sub r} of the particle-body relative motion. The results show that non-vanishing secular variations
Fateen, Seif-Eddeen K; Khalil, Menna M; Elnabawy, Ahmed O
2013-03-01
Peng-Robinson equation of state is widely used with the classical van der Waals mixing rules to predict vapor liquid equilibria for systems containing hydrocarbons and related compounds. This model requires good values of the binary interaction parameter kij . In this work, we developed a semi-empirical correlation for kij partly based on the Huron-Vidal mixing rules. We obtained values for the adjustable parameters of the developed formula for over 60 binary systems and over 10 categories of components. The predictions of the new equation system were slightly better than the constant-kij model in most cases, except for 10 systems whose predictions were considerably improved with the new correlation.
Modeling dynamic beta-gamma polymorphic transition in Tin
NASA Astrophysics Data System (ADS)
Chauvin, Camille; Montheillet, Frank; Petit, Jacques; CEA Gramat Collaboration; EMSE Collaboration
2015-06-01
Solid-solid phase transitions in metals have been studied by shock waves techniques for many decades. Recent experiments have investigated the transition during isentropic compression experiments and shock-wave compression and have highlighted the strong influence of the loading rate on the transition. Complementary data obtained with velocity and temperature measurements around the polymorphic transition beta-gamma of Tin on gas gun experiments have displayed the importance of the kinetics of the transition. But, even though this phenomenon is known, modeling the kinetic remains complex and based on empirical formulations. A multiphase EOS is available in our 1D Lagrangian code Unidim. We propose to present the influence of various kinetic laws (either empirical or involving nucleation and growth mechanisms) and their parameters (Gibbs free energy, temperature, pressure) on the transformation rate. We compare experimental and calculated velocities and temperature profiles and we underline the effects of the empirical parameters of these models.
An empirical-statistical model for laser cladding of Ti-6Al-4V powder on Ti-6Al-4V substrate
NASA Astrophysics Data System (ADS)
Nabhani, Mohammad; Razavi, Reza Shoja; Barekat, Masoud
2018-03-01
In this article, Ti-6Al-4V powder alloy was directly deposited on Ti-6Al-4V substrate using laser cladding process. In this process, some key parameters such as laser power (P), laser scanning rate (V) and powder feeding rate (F) play important roles. Using linear regression analysis, this paper develops the empirical-statistical relation between these key parameters and geometrical characteristics of single clad tracks (i.e. clad height, clad width, penetration depth, wetting angle, and dilution) as a combined parameter (PαVβFγ). The results indicated that the clad width linearly depended on PV-1/3 and powder feeding rate had no effect on it. The dilution controlled by a combined parameter as VF-1/2 and laser power was a dispensable factor. However, laser power was the dominant factor for the clad height, penetration depth, and wetting angle so that they were proportional to PV-1F1/4, PVF-1/8, and P3/4V-1F-1/4, respectively. Based on the results of correlation coefficient (R > 0.9) and analysis of residuals, it was confirmed that these empirical-statistical relations were in good agreement with the measured values of single clad tracks. Finally, these relations led to the design of a processing map that can predict the geometrical characteristics of the single clad tracks based on the key parameters.
Smsynth: AN Imagery Synthesis System for Soil Moisture Retrieval
NASA Astrophysics Data System (ADS)
Cao, Y.; Xu, L.; Peng, J.
2018-04-01
Soil moisture (SM) is a important variable in various research areas, such as weather and climate forecasting, agriculture, drought and flood monitoring and prediction, and human health. An ongoing challenge in estimating SM via synthetic aperture radar (SAR) is the development of the retrieval SM methods, especially the empirical models needs as training samples a lot of measurements of SM and soil roughness parameters which are very difficult to acquire. As such, it is difficult to develop empirical models using realistic SAR imagery and it is necessary to develop methods to synthesis SAR imagery. To tackle this issue, a SAR imagery synthesis system based on the SM named SMSynth is presented, which can simulate radar signals that are realistic as far as possible to the real SAR imagery. In SMSynth, SAR backscatter coefficients for each soil type are simulated via the Oh model under the Bayesian framework, where the spatial correlation is modeled by the Markov random field (MRF) model. The backscattering coefficients simulated based on the designed soil parameters and sensor parameters are added into the Bayesian framework through the data likelihood where the soil parameters and sensor parameters are set as realistic as possible to the circumstances on the ground and in the validity range of the Oh model. In this way, a complete and coherent Bayesian probabilistic framework is established. Experimental results show that SMSynth is capable of generating realistic SAR images that suit the needs of a large amount of training samples of empirical models.
Empirical estimation of school siting parameter towards improving children's safety
NASA Astrophysics Data System (ADS)
Aziz, I. S.; Yusoff, Z. M.; Rasam, A. R. A.; Rahman, A. N. N. A.; Omar, D.
2014-02-01
Distance from school to home is a key determination in ensuring the safety of hildren. School siting parameters are made to make sure that a particular school is located in a safe environment. School siting parameters are made by Department of Town and Country Planning Malaysia (DTCP) and latest review was on June 2012. These school siting parameters are crucially important as they can affect the safety, school reputation, and not to mention the perception of the pupil and parents of the school. There have been many studies to review school siting parameters since these change in conjunction with this ever-changing world. In this study, the focus is the impact of school siting parameter on people with low income that live in the urban area, specifically in Johor Bahru, Malaysia. In achieving that, this study will use two methods which are on site and off site. The on site method is to give questionnaires to people and off site is to use Geographic Information System (GIS) and Statistical Product and Service Solutions (SPSS), to analyse the results obtained from the questionnaire. The output is a maps of suitable safe distance from school to house. The results of this study will be useful to people with low income as their children tend to walk to school rather than use transportation.
Empirical Likelihood in Nonignorable Covariate-Missing Data Problems.
Xie, Yanmei; Zhang, Biao
2017-04-20
Missing covariate data occurs often in regression analysis, which frequently arises in the health and social sciences as well as in survey sampling. We study methods for the analysis of a nonignorable covariate-missing data problem in an assumed conditional mean function when some covariates are completely observed but other covariates are missing for some subjects. We adopt the semiparametric perspective of Bartlett et al. (Improving upon the efficiency of complete case analysis when covariates are MNAR. Biostatistics 2014;15:719-30) on regression analyses with nonignorable missing covariates, in which they have introduced the use of two working models, the working probability model of missingness and the working conditional score model. In this paper, we study an empirical likelihood approach to nonignorable covariate-missing data problems with the objective of effectively utilizing the two working models in the analysis of covariate-missing data. We propose a unified approach to constructing a system of unbiased estimating equations, where there are more equations than unknown parameters of interest. One useful feature of these unbiased estimating equations is that they naturally incorporate the incomplete data into the data analysis, making it possible to seek efficient estimation of the parameter of interest even when the working regression function is not specified to be the optimal regression function. We apply the general methodology of empirical likelihood to optimally combine these unbiased estimating equations. We propose three maximum empirical likelihood estimators of the underlying regression parameters and compare their efficiencies with other existing competitors. We present a simulation study to compare the finite-sample performance of various methods with respect to bias, efficiency, and robustness to model misspecification. The proposed empirical likelihood method is also illustrated by an analysis of a data set from the US National Health and Nutrition Examination Survey (NHANES).
Topography and geology site effects from the intensity prediction model (ShakeMap) for Austria
NASA Astrophysics Data System (ADS)
del Puy Papí Isaba, María; Jia, Yan; Weginger, Stefan
2017-04-01
The seismicity in Austria can be categorized as moderated. Despite the fact that the hazard seems to be rather low, earthquakes can cause great damage and losses, specially in densely populated and industrialized areas. It is well known, that equations which predict intensity as a function of magnitude and distance, among other parameters, are useful tool for hazard and risk assessment. Therefore, this study aims to determine an empirical model of the ground shaking intensities (ShakeMap) of a series of earthquakes occurred in Austria between 1000 and 2014. Furthermore, the obtained empirical model will lead to further interpretation of both, contemporary and historical earthquakes. A total of 285 events, which epicenters were located in Austria, and a sum of 22.739 reported macreoseismic data points from Austria and adjoining countries, were used. These events are enclosed in the period 1000-2014 and characterized by having a local magnitude greater than 3. In the first state of the model development, the data was careful selected, e.g. solely intensities equal or greater than III were used. In a second state the data was adjusted to the selected empirical model. Finally, geology and topography corrections were obtained by means of the model residuals in order to derive intensity-based site amplification effects.
A genetic-algorithm approach for assessing the liquefaction potential of sandy soils
NASA Astrophysics Data System (ADS)
Sen, G.; Akyol, E.
2010-04-01
The determination of liquefaction potential is required to take into account a large number of parameters, which creates a complex nonlinear structure of the liquefaction phenomenon. The conventional methods rely on simple statistical and empirical relations or charts. However, they cannot characterise these complexities. Genetic algorithms are suited to solve these types of problems. A genetic algorithm-based model has been developed to determine the liquefaction potential by confirming Cone Penetration Test datasets derived from case studies of sandy soils. Software has been developed that uses genetic algorithms for the parameter selection and assessment of liquefaction potential. Then several estimation functions for the assessment of a Liquefaction Index have been generated from the dataset. The generated Liquefaction Index estimation functions were evaluated by assessing the training and test data. The suggested formulation estimates the liquefaction occurrence with significant accuracy. Besides, the parametric study on the liquefaction index curves shows a good relation with the physical behaviour. The total number of misestimated cases was only 7.8% for the proposed method, which is quite low when compared to another commonly used method.
Welding current and melting rate in GMAW of aluminium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pandey, S.; Rao, U.R.K.; Aghakhani, M.
1996-12-31
Studies on GMAW of aluminium and its alloy 5083, revealed that the welding current and melting rate were affected by any change in wire feed rate, arc voltage, nozzle to plate distance, welding speed and torch angle. Empirical models have been presented to determine accurately the welding current and melting rate for any set of these parameters. These results can be utilized for determining accurately the heat input into the workpiece from which reliable predictions can be made about the mechanical and the metallurgical properties of a welded joint. The analysis of the model also helps in providing a vitalmore » information about the static V-I characteristics of the welding power source. The models were developed using a two-level fractional factorial design. The adequacy of the model was tested by the use of analysis of variance technique and the significance of the coefficients was tested by the student`s t test. The estimated and observed values of the welding current and melting rate have been shown on a scatter diagram and the interaction effects of different parameters involved have been presented in graphical forms.« less
Microgravity Geyser and Flow Field Prediction
NASA Technical Reports Server (NTRS)
Hochstein, J. I.; Marchetta, J. G.; Thornton, R. J.
2006-01-01
Modeling and prediction of flow fields and geyser formation in microgravity cryogenic propellant tanks was investigated. A computational simulation was used to reproduce the test matrix of experimental results performed by other investigators, as well as to model the flows in a larger tank. An underprediction of geyser height by the model led to a sensitivity study to determine if variations in surface tension coefficient, contact angle, or jet pipe turbulence significantly influence the simulations. It was determined that computational geyser height is not sensitive to slight variations in any of these items. An existing empirical correlation based on dimensionless parameters was re-examined in an effort to improve the accuracy of geyser prediction. This resulted in the proposal for a re-formulation of two dimensionless parameters used in the correlation; the non-dimensional geyser height and the Bond number. It was concluded that the new non-dimensional geyser height shows little promise. Although further data will be required to make a definite judgement, the reformulation of the Bond number provided correlations that are more accurate and appear to be more general than the previously established correlation.
A new simple local muscle recovery model and its theoretical and experimental validation.
Ma, Liang; Zhang, Wei; Wu, Su; Zhang, Zhanwu
2015-01-01
This study was conducted to provide theoretical and experimental validation of a local muscle recovery model. Muscle recovery has been modeled in different empirical and theoretical approaches to determine work-rest allowance for musculoskeletal disorder (MSD) prevention. However, time-related parameters and individual attributes have not been sufficiently considered in conventional approaches. A new muscle recovery model was proposed by integrating time-related task parameters and individual attributes. Theoretically, this muscle recovery model was compared to other theoretical models mathematically. Experimentally, a total of 20 subjects participated in the experimental validation. Hand grip force recovery and shoulder joint strength recovery were measured after a fatiguing operation. The recovery profile was fitted by using the recovery model, and individual recovery rates were calculated as well after fitting. Good fitting values (r(2) > .8) were found for all the subjects. Significant differences in recovery rates were found among different muscle groups (p < .05). The theoretical muscle recovery model was primarily validated by characterization of the recovery process after fatiguing operation. The determined recovery rate may be useful to represent individual recovery attribute.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jing; Guan, Huaiqun; Solberg, Timothy
2011-07-15
Purpose: A statistical projection restoration algorithm based on the penalized weighted least-squares (PWLS) criterion can substantially improve the image quality of low-dose CBCT images. The performance of PWLS is largely dependent on the choice of the penalty parameter. Previously, the penalty parameter was chosen empirically by trial and error. In this work, the authors developed an inverse technique to calculate the penalty parameter in PWLS for noise suppression of low-dose CBCT in image guided radiotherapy (IGRT). Methods: In IGRT, a daily CBCT is acquired for the same patient during a treatment course. In this work, the authors acquired the CBCTmore » with a high-mAs protocol for the first session and then a lower mAs protocol for the subsequent sessions. The high-mAs projections served as the goal (ideal) toward, which the low-mAs projections were to be smoothed by minimizing the PWLS objective function. The penalty parameter was determined through an inverse calculation of the derivative of the objective function incorporating both the high and low-mAs projections. Then the parameter obtained can be used for PWLS to smooth the noise in low-dose projections. CBCT projections for a CatPhan 600 and an anthropomorphic head phantom, as well as for a brain patient, were used to evaluate the performance of the proposed technique. Results: The penalty parameter in PWLS was obtained for each CBCT projection using the proposed strategy. The noise in the low-dose CBCT images reconstructed from the smoothed projections was greatly suppressed. Image quality in PWLS-processed low-dose CBCT was comparable to its corresponding high-dose CBCT. Conclusions: A technique was proposed to estimate the penalty parameter for PWLS algorithm. It provides an objective and efficient way to obtain the penalty parameter for image restoration algorithms that require predefined smoothing parameters.« less
Estimation of Melting Points of Organics.
Yalkowsky, Samuel H; Alantary, Doaa
2018-05-01
Unified physicochemical property estimation relationships is a system of empirical and theoretical relationships that relate 20 physicochemical properties of organic molecules to each other and to chemical structure. Melting point is a key parameter in the unified physicochemical property estimation relationships scheme because it is a determinant of several other properties including vapor pressure, and solubility. This review describes the first-principals calculation of the melting points of organic compounds from structure. The calculation is based on the fact that the melting point, T m , is equal to the ratio of the heat of melting, ΔH m , to the entropy of melting, ΔS m . The heat of melting is shown to be an additive constitutive property. However, the entropy of melting is not entirely group additive. It is primarily dependent on molecular geometry, including parameters which reflect the degree of restriction of molecular motion in the crystal to that of the liquid. Symmetry, eccentricity, chirality, flexibility, and hydrogen bonding, each affect molecular freedom in different ways and thus make different contributions to the total entropy of fusion. The relationships of these entropy determining parameters to chemical structure are used to develop a reasonably accurate means of predicting the melting points over 2000 compounds. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Empirical correlations of the performance of vapor-anode PX-series AMTEC cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, L.; Merrill, J.M.; Mayberry, C.
Power systems based on AMTEC technology will be used for future NASA missions, including a Pluto-Express (PX) or Europa mission planned for approximately year 2004. AMTEC technology may also be used as an alternative to photovoltaic based power systems for future Air Force missions. An extensive development program of Alkali-Metal Thermal-to-Electric Conversion (AMTEC) technology has been underway at the Vehicle Technologies Branch of the Air Force Research Laboratory (AFRL) in Albuquerque, New Mexico since 1992. Under this program, numerical modeling and experimental investigations of the performance of the various multi-BASE tube, vapor-anode AMTEC cells have been and are being performed.more » Vacuum testing of AMTEC cells at AFRL determines the effects of changing the hot and cold end temperatures, T{sub hot} and T{sub cold}, and applied external load, R{sub ext}, on the cell electric power output, current-voltage characteristics, and conversion efficiency. Test results have traditionally been used to provide feedback to cell designers, and to validate numerical models. The current work utilizes the test data to develop empirical correlations for cell output performance under various working conditions. Because the empirical correlations are developed directly from the experimental data, uncertainties arising from material properties that must be used in numerical modeling can be avoided. Empirical correlations of recent vapor-anode PX-series AMTEC cells have been developed. Based on AMTEC theory and the experimental data, the cell output power (as well as voltage and current) was correlated as a function of three parameters (T{sub hot}, T{sub cold}, and R{sub ext}) for a given cell. Correlations were developed for different cells (PX-3C, PX-3A, PX-G3, and PX-5A), and were in good agreement with experimental data for these cells. Use of these correlations can greatly reduce the testing required to determine electrical performance of a given type of AMTEC cell over a wide range of operating conditions.« less
Selection of fire spread model for Russian fire behavior prediction system
Alexandra V. Volokitina; Kevin C. Ryan; Tatiana M. Sofronova; Mark A. Sofronov
2010-01-01
Mathematical modeling of fire behavior prediction is only possible if the models are supplied with an information database that provides spatially explicit input parameters for modeled area. Mathematical models can be of three kinds: 1) physical; 2) empirical; and 3) quasi-empirical (Sullivan, 2009). Physical models (Grishin, 1992) are of academic interest only because...
ERIC Educational Resources Information Center
Fierro, Catriel; Ostrovsky, Ana Elisa; Di Doménico, María Cristina
2018-01-01
This study is an empirical analysis of the field's current state in Argentinian universities. Bibliometric parameters were used to retrieve the total listed texts (N = 797) of eight undergraduate history courses' syllabi from Argentina's most populated public university psychology programs. Then, professors in charge of the selected courses (N =…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakodynskaya, I.K.; Neverov, A.A; Ryabov, A.D.
1986-07-01
The rate of the reaction of di-mu-chlorobis(acetanilidato-2C, 0) dipalladium(II) with styrene leading to 2-acetaminostilbene was found in 11 organic solvents. In all media, the reaction has second-order kinetics. The free energy, enthalpy and entropy of activation were determined in each solvent. The data for the solubility of the starting Pd(II) complex were used to determine the free energy for the transfer of the ground state of this reaction from a standard solvent (heptane) to the other solvents. The analogous transfer functions were calculated for the transition state. The correlation of the transfer functions of the starting and transition states ofmore » this reaction with empirical solvent parameters was examined.« less
Nakai, Yoichi; Hidaka, Hiroshi; Watanabe, Naoki; Kojima, Takao M
2016-06-14
We measured equilibrium constants for H3O(+)(H2O)n-1 + H2O↔H3O(+)(H2O)n (n = 4-9) reactions taking place in an ion drift tube with various applied electric fields at gas temperatures of 238-330 K. The zero-field reaction equilibrium constants were determined by extrapolation of those obtained at non-zero electric fields. From the zero-field reaction equilibrium constants, the standard enthalpy and entropy changes, ΔHn,n-1 (0) and ΔSn,n-1 (0), of stepwise association for n = 4-8 were derived and were in reasonable agreement with those measured in previous studies. We also examined the electric field dependence of the reaction equilibrium constants at non-zero electric fields for n = 4-8. An effective temperature for the reaction equilibrium constants at non-zero electric field was empirically obtained using a parameter describing the electric field dependence of the reaction equilibrium constants. Furthermore, the size dependence of the parameter was thought to reflect the evolution of the hydrogen-bond structure of H3O(+)(H2O)n with the cluster size. The reflection of structural information in the electric field dependence of the reaction equilibria is particularly noteworthy.
An Extension of the Partial Credit Model with an Application to the Measurement of Change.
ERIC Educational Resources Information Center
Fischer, Gerhard H.; Ponocny, Ivo
1994-01-01
An extension to the partial credit model, the linear partial credit model, is considered under the assumption of a certain linear decomposition of the item x category parameters into basic parameters. A conditional maximum likelihood algorithm for estimating basic parameters is presented and illustrated with simulation and an empirical study. (SLD)
USDA-ARS?s Scientific Manuscript database
Multi-angle remote sensing has been proved useful for mapping vegetation community types in desert regions. Based on Multi-angle Imaging Spectro-Radiometer (MISR) multi-angular images, this study compares roles played by Bidirectional Reflectance Distribution Function (BRDF) model parameters with th...
Sharp, Jonathan D; Byrne, Robert H; Liu, Xuewu; Feely, Richard A; Cuyler, Erin E; Wanninkhof, Rik; Alin, Simone R
2017-08-15
This work describes an improved algorithm for spectrophotometric determinations of seawater carbonate ion concentrations ([CO 3 2- ] spec ) derived from observations of ultraviolet absorbance spectra in lead-enriched seawater. Quality-control assessments of [CO 3 2- ] spec data obtained on two NOAA research cruises (2012 and 2016) revealed a substantial intercruise difference in average Δ[CO 3 2- ] (the difference between a sample's [CO 3 2- ] spec value and the corresponding [CO 3 2- ] value calculated from paired measurements of pH and dissolved inorganic carbon). Follow-up investigation determined that this discordance was due to the use of two different spectrophotometers, even though both had been properly calibrated. Here we present an essential methodological refinement to correct [CO 3 2- ] spec absorbance data for small but significant instrumental differences. After applying the correction (which, notably, is not necessary for pH determinations from sulfonephthalein dye absorbances) to the shipboard absorbance data, we fit the combined-cruise data set to produce empirically updated parameters for use in processing future (and historical) [CO 3 2- ] spec absorbance measurements. With the new procedure, the average Δ[CO 3 2- ] offset between the two aforementioned cruises was reduced from 3.7 μmol kg -1 to 0.7 μmol kg -1 , which is well within the standard deviation of the measurements (1.9 μmol kg -1 ). We also introduce an empirical model to calculate in situ carbonate ion concentrations from [CO 3 2- ] spec . We demonstrate that these in situ values can be used to determine calcium carbonate saturation states that are in good agreement with those determined by more laborious and expensive conventional methods.
Low-Order Modeling of Dynamic Stall on Airfoils in Incompressible Flow
NASA Astrophysics Data System (ADS)
Narsipur, Shreyas
Unsteady aerodynamics has been a topic of research since the late 1930's and has increased in popularity among researchers studying dynamic stall in helicopters, insect/bird flight, micro air vehicles, wind-turbine aerodynamics, and ow-energy harvesting devices. Several experimental and computational studies have helped researchers gain a good understanding of the unsteady ow phenomena, but have proved to be expensive and time-intensive for rapid design and analysis purposes. Since the early 1970's, the push to develop low-order models to solve unsteady ow problems has resulted in several semi-empirical models capable of effectively analyzing unsteady aerodynamics in a fraction of the time required by high-order methods. However, due to the various complexities associated with time-dependent flows, several empirical constants and curve fits derived from existing experimental and computational results are required by the semi-empirical models to be an effective analysis tool. The aim of the current work is to develop a low-order model capable of simulating incompressible dynamic-stall type ow problems with a focus on accurately modeling the unsteady ow physics with the aim of reducing empirical dependencies. The lumped-vortex-element (LVE) algorithm is used as the baseline unsteady inviscid model to which augmentations are applied to model unsteady viscous effects. The current research is divided into two phases. The first phase focused on augmentations aimed at modeling pure unsteady trailing-edge boundary-layer separation and stall without leading-edge vortex (LEV) formation. The second phase is targeted at including LEV shedding capabilities to the LVE algorithm and combining with the trailing-edge separation model from phase one to realize a holistic, optimized, and robust low-order dynamic stall model. In phase one, initial augmentations to theory were focused on modeling the effects of steady trailing-edge separation by implementing a non-linear decambering flap to model the effect of the separated boundary-layer. Unsteady RANS results for several pitch and plunge motions showed that the differences in aerodynamic loads between steady and unsteady flows can be attributed to the boundary-layer convection lag, which can be modeled by choosing an appropriate value of the time lag parameter, tau2. In order to provide appropriate viscous corrections to inviscid unsteady calculations, the non-linear decambering flap is applied with a time lag determined by the tau2 value, which was found to be independent of motion kinematics for a given airfoil and Reynolds number. The predictions of the aerodynamic loads, unsteady stall, hysteresis loops, and ow reattachment from the low-order model agree well with CFD and experimental results, both for individual cases and for trends between motions. The model was also found to perform as well as existing semi-empirical models while using only a single empirically defined parameter. Inclusion of LEV shedding capabilities and combining the resulting algorithm with phase one's trailing-edge separation model was the primary objective of phase two. Computational results at low and high Reynolds numbers were used to analyze the ow morphology of the LEV to identify the common surface signature associated with LEV initiation at both low and high Reynolds numbers and relate it to the critical leading-edge suction parameter (LESP ) to control the initiation and termination of LEV shedding in the low-order model. The critical LESP, like the tau2 parameter, was found to be independent of motion kinematics for a given airfoil and Reynolds number. Results from the final low-order model compared excellently with CFD and experimental solutions, both in terms of aerodynamic loads and vortex ow pattern predictions. Overall, the final combined dynamic stall model that resulted from the current research was successful in accurately modeling the physics of unsteady ow thereby helping restrict the number of empirical coefficients to just two variables while successfully modeling the aerodynamic forces and ow patterns in a simple and precise manner.
Exchangeability, extreme returns and Value-at-Risk forecasts
NASA Astrophysics Data System (ADS)
Huang, Chun-Kai; North, Delia; Zewotir, Temesgen
2017-07-01
In this paper, we propose a new approach to extreme value modelling for the forecasting of Value-at-Risk (VaR). In particular, the block maxima and the peaks-over-threshold methods are generalised to exchangeable random sequences. This caters for the dependencies, such as serial autocorrelation, of financial returns observed empirically. In addition, this approach allows for parameter variations within each VaR estimation window. Empirical prior distributions of the extreme value parameters are attained by using resampling procedures. We compare the results of our VaR forecasts to that of the unconditional extreme value theory (EVT) approach and the conditional GARCH-EVT model for robust conclusions.
NASA Astrophysics Data System (ADS)
Houdebine, E. R.; Mullan, D. J.; Paletou, F.; Gebran, M.
2016-05-01
The reliable determination of rotation-activity correlations (RACs) depends on precise measurements of the following stellar parameters: T eff, parallax, radius, metallicity, and rotational speed v sin I. In this paper, our goal is to focus on the determination of these parameters for a sample of K and M dwarfs. In a future paper (Paper II), we will combine our rotational data with activity data in order to construct RACs. Here, we report on a determination of effective temperatures based on the (R-I) C color from the calibrations of Mann et al. and Kenyon & Hartmann for four samples of late-K, dM2, dM3, and dM4 stars. We also determine stellar parameters (T eff, log(g), and [M/H]) using the principal component analysis-based inversion technique for a sample of 105 late-K dwarfs. We compile all effective temperatures from the literature for this sample. We determine empirical radius-[M/H] correlations in our stellar samples. This allows us to propose new effective temperatures, stellar radii, and metallicities for a large sample of 612 late-K and M dwarfs. Our mean radii agree well with those of Boyajian et al. We analyze HARPS and SOPHIE spectra of 105 late-K dwarfs, and we have detected v sin I in 92 stars. In combination with our previous v sin I measurements in M and K dwarfs, we now derive P/sin I measures for a sample of 418 K and M dwarfs. We investigate the distributions of P/sin I, and we show that they are different from one spectral subtype to another at a 99.9% confidence level. Based on observations available at Observatoire de Haute Provence and the European Southern Observatory databases and on Hipparcos parallax measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Houdebine, E. R.; Mullan, D. J.; Paletou, F.
The reliable determination of rotation–activity correlations (RACs) depends on precise measurements of the following stellar parameters: T {sub eff}, parallax, radius, metallicity, and rotational speed v sin i . In this paper, our goal is to focus on the determination of these parameters for a sample of K and M dwarfs. In a future paper (Paper II), we will combine our rotational data with activity data in order to construct RACs. Here, we report on a determination of effective temperatures based on the ( R – I ){sub C} color from the calibrations of Mann et al. and Kenyon andmore » Hartmann for four samples of late-K, dM2, dM3, and dM4 stars. We also determine stellar parameters ( T {sub eff}, log( g ), and [M/H]) using the principal component analysis–based inversion technique for a sample of 105 late-K dwarfs. We compile all effective temperatures from the literature for this sample. We determine empirical radius–[M/H] correlations in our stellar samples. This allows us to propose new effective temperatures, stellar radii, and metallicities for a large sample of 612 late-K and M dwarfs. Our mean radii agree well with those of Boyajian et al. We analyze HARPS and SOPHIE spectra of 105 late-K dwarfs, and we have detected v sin i in 92 stars. In combination with our previous v sin i measurements in M and K dwarfs, we now derive P /sin i measures for a sample of 418 K and M dwarfs. We investigate the distributions of P /sin i , and we show that they are different from one spectral subtype to another at a 99.9% confidence level.« less
Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses
Lanfear, Robert; Hua, Xia; Warren, Dan L.
2016-01-01
Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
NASA Astrophysics Data System (ADS)
Bentley, S. N.; Watt, C. E. J.; Owens, M. J.; Rae, I. J.
2018-04-01
Ultralow frequency (ULF) waves in the magnetosphere are involved in the energization and transport of radiation belt particles and are strongly driven by the external solar wind. However, the interdependency of solar wind parameters and the variety of solar wind-magnetosphere coupling processes make it difficult to distinguish the effect of individual processes and to predict magnetospheric wave power using solar wind properties. We examine 15 years of dayside ground-based measurements at a single representative frequency (2.5 mHz) and a single magnetic latitude (corresponding to L ˜ 6.6RE). We determine the relative contribution to ULF wave power from instantaneous nonderived solar wind parameters, accounting for their interdependencies. The most influential parameters for ground-based ULF wave power are solar wind speed vsw, southward interplanetary magnetic field component Bz<0, and summed power in number density perturbations δNp. Together, the subordinate parameters Bz and δNp still account for significant amounts of power. We suggest that these three parameters correspond to driving by the Kelvin-Helmholtz instability, formation, and/or propagation of flux transfer events and density perturbations from solar wind structures sweeping past the Earth. We anticipate that this new parameter reduction will aid comparisons of ULF generation mechanisms between magnetospheric sectors and will enable more sophisticated empirical models predicting magnetospheric ULF power using external solar wind driving parameters.
NASA Astrophysics Data System (ADS)
Purba, H.; Musu, J. T.; Diria, S. A.; Permono, W.; Sadjati, O.; Sopandi, I.; Ruzi, F.
2018-03-01
Well logging data provide many geological information and its trends resemble nonlinear or non-stationary signals. As long well log data recorded, there will be external factors can interfere or influence its signal resolution. A sensitive signal analysis is required to improve the accuracy of logging interpretation which it becomes an important thing to determine sequence stratigraphy. Complete Ensemble Empirical Mode Decomposition (CEEMD) is one of nonlinear and non-stationary signal analysis method which decomposes complex signal into a series of intrinsic mode function (IMF). Gamma Ray and Spontaneous Potential well log parameters decomposed into IMF-1 up to IMF-10 and each of its combination and correlation makes physical meaning identification. It identifies the stratigraphy and cycle sequence and provides an effective signal treatment method for sequence interface. This method was applied to BRK- 30 and BRK-13 well logging data. The result shows that the combination of IMF-5, IMF-6, and IMF-7 pattern represent short-term and middle-term while IMF-9 and IMF-10 represent the long-term sedimentation which describe distal front and delta front facies, and inter-distributary mouth bar facies, respectively. Thus, CEEMD clearly can determine the different sedimentary layer interface and better identification of the cycle of stratigraphic base level.
NASA Astrophysics Data System (ADS)
Civale, John; Ter Haar, Gail; Rivens, Ian; Bamber, Jeff
2005-09-01
Currently, the intensity to be used in our clinical HIFU treatments is calculated from the acoustic path lengths in different tissues measured on diagnostic ultrasound images of the patient in the treatment position, and published values of ultrasound attenuation coefficients. This yields an approximate value for the acoustic power at the transducer required to give a stipulated focal intensity in situ. Estimation methods for the actual acoustic attenuation have been investigated in large parts of the tissue path overlying the target volume from the backscattered ultrasound signal for each patient (backscatter attenuation estimation: BAE). Several methods have been investigated. The backscattered echo information acquired from an Acuson scanner has been used to compute the diffraction-corrected attenuation coefficient at each frequency using two methods: a substitution method and an inverse diffraction filtering process. A homogeneous sponge phantom was used to validate the techniques. The use of BAE to determine the correct HIFU exposure parameters for lesioning has been tested in ex vivo liver. HIFU lesions created with a 1.7-MHz therapy transducer have been studied using a semiautomated image processing technique. The reproducibility of lesion size for given in situ intensities determined using BAE and empirical techniques has been compared.
Forecasting of dissolved oxygen in the Guanting reservoir using an optimized NGBM (1,1) model.
An, Yan; Zou, Zhihong; Zhao, Yanfei
2015-03-01
An optimized nonlinear grey Bernoulli model was proposed by using a particle swarm optimization algorithm to solve the parameter optimization problem. In addition, each item in the first-order accumulated generating sequence was set in turn as an initial condition to determine which alternative would yield the highest forecasting accuracy. To test the forecasting performance, the optimized models with different initial conditions were then used to simulate dissolved oxygen concentrations in the Guanting reservoir inlet and outlet (China). The empirical results show that the optimized model can remarkably improve forecasting accuracy, and the particle swarm optimization technique is a good tool to solve parameter optimization problems. What's more, the optimized model with an initial condition that performs well in in-sample simulation may not do as well as in out-of-sample forecasting. Copyright © 2015. Published by Elsevier B.V.
Rheological properties of simulated debris flows in the laboratory environment
Ling, Chi-Hai; Chen, Cheng-lung; Jan, Chyan-Deng; ,
1990-01-01
Steady debris flows with or without a snout are simulated in a 'conveyor-belt' flume using dry glass spheres of a uniform size, 5 or 14 mm in diameter, and their rheological properties described quantitatively in constants in a generalized viscoplastic fluid (GVF) model. Close agreement of the measured velocity profiles with the theoretical ones obtained from the GVF model strongly supports the validity of a GVF model based on the continuum-mechanics approach. Further comparisons of the measured and theoretical velocity profiles along with empirical relations among the shear stress, the normal stress, and the shear rate developed from the 'ring-shear' apparatus determine the values of the rheological parameters in the GVF model, namely the flow-behavior index, the consistency index, and the cross-consistency index. Critical issues in the evaluation of such rheological parameters using the conveyor-belt flume and the ring-shear apparatus are thus addressed in this study.
An extended Zel'dovich model for the halo mass function
NASA Astrophysics Data System (ADS)
Lim, Seunghwan; Lee, Jounghun
2013-01-01
A new way to construct a fitting formula for the halo mass function is presented. Our formula is expressed as a solution to the modified Jedamzik matrix equation that automatically satisfies the normalization constraint. The characteristic parameters expressed in terms of the linear shear eigenvalues are empirically determined by fitting the analytic formula to the numerical results from the high-resolution N-body simulation and found to be independent of scale, redshift and background cosmology. Our fitting formula with the best-fit parameters is shown to work excellently in the wide mass-range at various redshifts: The ratio of the analytic formula to the N-body results departs from unity by up to 10% and 5% over 1011 <= M/(h-1Msolar) <= 5 × 1015 at z = 0,0.5 and 1 for the FoF-halo and SO-halo cases, respectively.
Denis-Alpizar, Otoniel; Bemish, Raymond J; Meuwly, Markus
2017-03-21
Vibrational energy relaxation (VER) of diatomics following collisions with the surrounding medium is an important elementary process for modeling high-temperature gas flow. VER is characterized by two parameters: the vibrational relaxation time τ vib and the state relaxation rates. Here the vibrational relaxation of CO(ν=0←ν=1) in Ar is considered for validating a computational approach to determine the vibrational relaxation time parameter (pτ vib ) using an accurate, fully dimensional potential energy surface. For lower temperatures, comparison with experimental data shows very good agreement whereas at higher temperatures (up to 25 000 K), comparisons with an empirically modified model due to Park confirm its validity for CO in Ar. Additionally, the calculations provide insight into the importance of Δν>1 transitions that are ignored in typical applications of the Landau-Teller framework.
NASA Astrophysics Data System (ADS)
Rizvi, Zarghaam Haider; Shrestha, Dinesh; Sattari, Amir S.; Wuttke, Frank
2018-02-01
Macroscopic parameters such as effective thermal conductivity (ETC) is an important parameter which is affected by micro and meso level behaviour of particulate materials, and has been extensively examined in the past decades. In this paper, a new lattice based numerical model is developed to predict the ETC of sand and modified high thermal backfill material for energy transportation used for underground power cables. 2D and 3D simulations are performed to analyse and detect differences resulting from model simplification. The thermal conductivity of the granular mixture is determined numerically considering the volume and the shape of the each constituting portion. The new numerical method is validated with transient needle measurements and the existing theoretical and semi empirical models for thermal conductivity prediction sand and the modified backfill material for dry condition. The numerical prediction and the measured values are in agreement to a large extent.
A charge optimized many-body potential for titanium nitride (TiN).
Cheng, Y-T; Liang, T; Martinez, J A; Phillpot, S R; Sinnott, S B
2014-07-02
This work presents a new empirical, variable charge potential for TiN systems in the charge-optimized many-body potential framework. The potential parameters were determined by fitting them to experimental data for the enthalpy of formation, lattice parameters, and elastic constants of rocksalt structured TiN. The potential does a good job of describing the fundamental physical properties (defect formation and surface energies) of TiN relative to the predictions of first-principles calculations. This potential is used in classical molecular dynamics simulations to examine the interface of fcc-Ti(0 0 1)/TiN(0 0 1) and to characterize the adsorption of oxygen atoms and molecules on the TiN(0 0 1) surface. The results indicate that the potential is well suited to model TiN thin films and to explore the chemistry associated with their oxidation.
Sensor data validation and reconstruction. Phase 1: System architecture study
NASA Technical Reports Server (NTRS)
1991-01-01
The sensor validation and data reconstruction task reviewed relevant literature and selected applicable validation and reconstruction techniques for further study; analyzed the selected techniques and emphasized those which could be used for both validation and reconstruction; analyzed Space Shuttle Main Engine (SSME) hot fire test data to determine statistical and physical relationships between various parameters; developed statistical and empirical correlations between parameters to perform validation and reconstruction tasks, using a computer aided engineering (CAE) package; and conceptually designed an expert system based knowledge fusion tool, which allows the user to relate diverse types of information when validating sensor data. The host hardware for the system is intended to be a Sun SPARCstation, but could be any RISC workstation with a UNIX operating system and a windowing/graphics system such as Motif or Dataviews. The information fusion tool is intended to be developed using the NEXPERT Object expert system shell, and the C programming language.
Specific prognostic factors for secondary pancreatic infection in severe acute pancreatitis.
Armengol-Carrasco, M; Oller, B; Escudero, L E; Roca, J; Gener, J; Rodríguez, N; del Moral, P; Moreno, P
1999-01-01
The aim of the present study was to investigate whether there are specific prognostic factors to predict the development of secondary pancreatic infection (SPI) in severe acute pancreatitis in order to perform a computed tomography-fine needle aspiration with bacteriological sampling at the right moment and confirm the diagnosis. Twenty-five clinical and laboratory parameters were determined sequentially in 150 patients with severe acute pancreatitis (SAP) and univariate, and multivariate regression analyses were done looking for correlation with the development of SPI. Only APACHE II score and C-reactive protein levels were related to the development of SPI in the multivariate analysis. A regression equation was designed using these two parameters, and empiric cut-off points defined the subgroup of patients at high risk of developing secondary pancreatic infection. The results showed that it is possible to predict SPI during SAP allowing bacteriological confirmation and early treatment of this severe condition.
Six-hourly time series of horizontal troposphere gradients in VLBI analyis
NASA Astrophysics Data System (ADS)
Landskron, Daniel; Hofmeister, Armin; Mayer, David; Böhm, Johannes
2016-04-01
Consideration of horizontal gradients is indispensable for high-precision VLBI and GNSS analysis. As a rule of thumb, all observations below 15 degrees elevation need to be corrected for the influence of azimuthal asymmetry on the delay times, which is mainly a product of the non-spherical shape of the atmosphere and ever-changing weather conditions. Based on the well-known gradient estimation model by Chen and Herring (1997), we developed an augmented gradient model with additional parameters which are determined from ray-traced delays for the complete history of VLBI observations. As input to the ray-tracer, we used operational and re-analysis data from the European Centre for Medium-Range Weather Forecasts. Finally, we applied those a priori gradient parameters to VLBI analysis along with other empirical gradient models and assessed their impact on baseline length repeatabilities as well as on celestial and terrestrial reference frames.
Tepekule, Burcu; Uecker, Hildegard; Derungs, Isabel; Frenoy, Antoine; Bonhoeffer, Sebastian
2017-09-01
Multiple treatment strategies are available for empiric antibiotic therapy in hospitals, but neither clinical studies nor theoretical investigations have yielded a clear picture when which strategy is optimal and why. Extending earlier work of others and us, we present a mathematical model capturing treatment strategies using two drugs, i.e the multi-drug therapies referred to as cycling, mixing, and combination therapy, as well as monotherapy with either drug. We randomly sample a large parameter space to determine the conditions determining success or failure of these strategies. We find that combination therapy tends to outperform the other treatment strategies. By using linear discriminant analysis and particle swarm optimization, we find that the most important parameters determining success or failure of combination therapy relative to the other treatment strategies are the de novo rate of emergence of double resistance in patients infected with sensitive bacteria and the fitness costs associated with double resistance. The rate at which double resistance is imported into the hospital via patients admitted from the outside community has little influence, as all treatment strategies are affected equally. The parameter sets for which combination therapy fails tend to fall into areas with low biological plausibility as they are characterised by very high rates of de novo emergence of resistance to both drugs compared to a single drug, and the cost of double resistance is considerably smaller than the sum of the costs of single resistance.
Nordtvedt, K L
1972-12-15
I have reviewed the historical and contemporary experiments that guide us in choosing a post-Newtonian, relativistic gravitational theory. The foundation experiments essentially constrain gravitation theory to be a metric theory in which matter couples solely to one gravitational field, the metric field, although other cosmological gravitational fields may exist. The metric field for any metric theory can be specified (for the solar system, for our present purposes) by a series of potential terms with several parameters. A variety of experiments specify (or put limits on) the numerical values of the seven parameters in the post-Newtonian metric field, and other such experiments have been planned. The empirical results, to date, yield values of the parameters that are consistent with the predictions of Einstein's general relativity.
Determinants of Crime in Virginia: An Empirical Analysis
ERIC Educational Resources Information Center
Ali, Abdiweli M.; Peek, Willam
2009-01-01
This paper is an empirical analysis of the determinants of crime in Virginia. Over a dozen explanatory variables that current literature suggests as important determinants of crime are collected. The data is from 1970 to 2000. These include economic, fiscal, demographic, political, and social variables. The regression results indicate that crime…
In Search of the Largest Possible Tsunami: An Example Following the 2011 Japan Tsunami
NASA Astrophysics Data System (ADS)
Geist, E. L.; Parsons, T.
2012-12-01
Many tsunami hazard assessments focus on estimating the largest possible tsunami: i.e., the worst-case scenario. This is typically performed by examining historic and prehistoric tsunami data or by estimating the largest source that can produce a tsunami. We demonstrate that worst-case assessments derived from tsunami and tsunami-source catalogs are greatly affected by sampling bias. Both tsunami and tsunami sources are well represented by a Pareto distribution. It is intuitive to assume that there is some limiting size (i.e., runup or seismic moment) for which a Pareto distribution is truncated or tapered. Likelihood methods are used to determine whether a limiting size can be determined from existing catalogs. Results from synthetic catalogs indicate that several observations near the limiting size are needed for accurate parameter estimation. Accordingly, the catalog length needed to empirically determine the limiting size is dependent on the difference between the limiting size and the observation threshold, with larger catalog lengths needed for larger limiting-threshold size differences. Most, if not all, tsunami catalogs and regional tsunami source catalogs are of insufficient length to determine the upper bound on tsunami runup. As an example, estimates of the empirical tsunami runup distribution are obtained from the Miyako tide gauge station in Japan, which recorded the 2011 Tohoku-oki tsunami as the largest tsunami among 51 other events. Parameter estimation using a tapered Pareto distribution is made both with and without the Tohoku-oki event. The catalog without the 2011 event appears to have a low limiting tsunami runup. However, this is an artifact of undersampling. Including the 2011 event, the catalog conforms more to a pure Pareto distribution with no confidence in estimating a limiting runup. Estimating the size distribution of regional tsunami sources is subject to the same sampling bias. Physical attenuation mechanisms such as wave breaking likely limit the maximum tsunami runup at a particular site. However, historic and prehistoric data alone cannot determine the upper bound on tsunami runup. Because of problems endemic to sampling Pareto distributions of tsunamis and their sources, we recommend that tsunami hazard assessment be based on a specific design probability of exceedance following a pure Pareto distribution, rather than attempting to determine the worst-case scenario.
Theoretical Foundation of Zisman's Empirical Equation for Wetting of Liquids on Solid Surfaces
ERIC Educational Resources Information Center
Zhu, Ruzeng; Cui, Shuwen; Wang, Xiaosong
2010-01-01
Theories of wetting of liquids on solid surfaces under the condition that van der Waals force is dominant are briefly reviewed. We show theoretically that Zisman's empirical equation for wetting of liquids on solid surfaces is a linear approximation of the Young-van der Waals equation in the wetting region, and we express the two parameters in…
Some special features of Wigner’s mass formula for nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nurmukhamedov, A. M., E-mail: fattah52@mail.ru
2014-12-15
Experimental data on anomalous values of the empirical function b(A) in Wigner’s mass formula are presented, the application of Student’s t criterion in experimentally proving the restoration of Wigner’s SU(4) symmetry in nuclei is validated, and a physical interpretation of the basic parameter of the empirical function a(A) in Wigner’s mass formula is given.
On the Time Evolution of Gamma-Ray Burst Pulses: A Self-Consistent Description.
Ryde; Svensson
2000-01-20
For the first time, the consequences of combining two well-established empirical relations that describe different aspects of the spectral evolution of observed gamma-ray burst (GRB) pulses are explored. These empirical relations are (1) the hardness-intensity correlation and (2) the hardness-photon fluence correlation. From these we find a self-consistent, quantitative, and compact description for the temporal evolution of pulse decay phases within a GRB light curve. In particular, we show that in the case in which the two empirical relations are both valid, the instantaneous photon flux (intensity) must behave as 1&solm0;&parl0;1+t&solm0;tau&parr0;, where tau is a time constant that can be expressed in terms of the parameters of the two empirical relations. The time evolution is fully defined by two initial constants and two parameters. We study a complete sample of 83 bright GRB pulses observed by the Compton Gamma-Ray Observatory and identify a major subgroup of GRB pulses ( approximately 45%) which satisfy the spectral-temporal behavior described above. In particular, the decay phase follows a reciprocal law in time. It is unclear what physics causes such a decay phase.
Velocity lag of solid particles in oscillating gases and in gases passing through normal shock waves
NASA Technical Reports Server (NTRS)
Maxwell, B. R.; Seasholtz, R. G.
1974-01-01
The velocity lag of micrometer size spherical particles is theoretically determined for gas particle mixtures passing through a stationary normal shock wave and also for particles embedded in an oscillating gas flow. The particle sizes and densities chosen are those considered important for laser Doppler velocimeter applications. The governing equations for each flow system are formulated. The deviation from Stokes flow caused by inertial, compressibility, and rarefaction effects is accounted for in both flow systems by use of an empirical drag coefficient. Graphical results are presented which characterize particle tracking as a function of system parameters.
On the phase behavior of hard aspherical particles
NASA Astrophysics Data System (ADS)
Miller, William L.; Cacciuto, Angelo
2010-12-01
We use numerical simulations to understand how random deviations from the ideal spherical shape affect the ability of hard particles to form fcc crystalline structures. Using a system of hard spheres as a reference, we determine the fluid-solid coexistence pressures of both shape-polydisperse and monodisperse systems of aspherical hard particles. We find that when particles are sufficiently isotropic, the coexistence pressure can be predicted from a linear relation involving the product of two simple geometric parameters characterizing the asphericity of the particles. Finally, our results allow us to gain direct insight into the crystallizability limits of these systems by rationalizing empirical data obtained for analogous monodisperse systems.
The self-propulsion of a helix in granular matter
NASA Astrophysics Data System (ADS)
Valdes, Rogelio; Angeles, Veronica; de La Calleja, Elsa; Zenit, Roberto
2017-11-01
The effect of the shape of helicoidal on the displacement of magnetic robots in granular media is studied experimentally. We quantify the influences of three main parameters of the shape of the helicoidal swimmers: body diameter, step, and the angle. We compare the experimental measurements with an empirically modified resistive force theory prediction that accounts for the static friction coefficient of the particles of the granular material, leading to good agreement. Comparisons are also made with the granular resistive force theory proposed by Goldman and collaborators. We found an optimal helix angle to produce movement and determined a relationship between the swimmer size and speed.
Dolan, Paul; Tsuchiya, Aki
2009-01-01
The literature on income distribution has attempted to evaluate different degrees of inequality using a social welfare function (SWF) approach. However, it has largely ignored the source of such inequalities, and has thus failed to consider different degrees of inequity. The literature on egalitarianism has addressed issues of equity, largely in relation to individual responsibility. This paper builds upon these two literatures, and introduces individual responsibility into the SWF. Results from a small-scale study of people's preferences in relation to the distribution of health benefits are presented to illustrate how the parameter values of a SWF might be determined.
76 FR 50952 - Proposed Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-17
... Suttons Bay, Village of Empire. Lake Michigan Entire shoreline within +583 +584 Village of Suttons Bay..., MI 49682. Village of Empire Maps are available for inspection at the Empire Village Office, 11518 South LaCore Street, Empire, MI 49630. Village of Suttons Bay Maps are available for inspection at the...
Biomolecular Force Field Parameterization via Atoms-in-Molecule Electron Density Partitioning.
Cole, Daniel J; Vilseck, Jonah Z; Tirado-Rives, Julian; Payne, Mike C; Jorgensen, William L
2016-05-10
Molecular mechanics force fields, which are commonly used in biomolecular modeling and computer-aided drug design, typically treat nonbonded interactions using a limited library of empirical parameters that are developed for small molecules. This approach does not account for polarization in larger molecules or proteins, and the parametrization process is labor-intensive. Using linear-scaling density functional theory and atoms-in-molecule electron density partitioning, environment-specific charges and Lennard-Jones parameters are derived directly from quantum mechanical calculations for use in biomolecular modeling of organic and biomolecular systems. The proposed methods significantly reduce the number of empirical parameters needed to construct molecular mechanics force fields, naturally include polarization effects in charge and Lennard-Jones parameters, and scale well to systems comprised of thousands of atoms, including proteins. The feasibility and benefits of this approach are demonstrated by computing free energies of hydration, properties of pure liquids, and the relative binding free energies of indole and benzofuran to the L99A mutant of T4 lysozyme.
Tamjidy, Mehran; Baharudin, B. T. Hang Tuah; Paslar, Shahla; Matori, Khamirul Amin; Sulaiman, Shamsuddin; Fadaeifard, Firouz
2017-01-01
The development of Friction Stir Welding (FSW) has provided an alternative approach for producing high-quality welds, in a fast and reliable manner. This study focuses on the mechanical properties of the dissimilar friction stir welding of AA6061-T6 and AA7075-T6 aluminum alloys. The FSW process parameters such as tool rotational speed, tool traverse speed, tilt angle, and tool offset influence the mechanical properties of the friction stir welded joints significantly. A mathematical regression model is developed to determine the empirical relationship between the FSW process parameters and mechanical properties, and the results are validated. In order to obtain the optimal values of process parameters that simultaneously optimize the ultimate tensile strength, elongation, and minimum hardness in the heat affected zone (HAZ), a metaheuristic, multi objective algorithm based on biogeography based optimization is proposed. The Pareto optimal frontiers for triple and dual objective functions are obtained and the best optimal solution is selected through using two different decision making techniques, technique for order of preference by similarity to ideal solution (TOPSIS) and Shannon’s entropy. PMID:28772893
NASA Astrophysics Data System (ADS)
Wang, Miqi; Zhou, Zehua; Wu, Lintao; Ding, Ying; Xu, Feilong; Wang, Zehua
2018-04-01
A new compound Fe-W-C powder for reactive plasma cladding was fabricated by precursor carbonization process using sucrose as a precursor. The application of quadratic general rotary unitized design was highlighted to develop a mathematical model to predict and accomplish the desired surface hardness of plasma-cladded coating. The microstructure and microhardness of the coating with optimal parameters were also investigated. According to the developed empirical model, the optimal process parameters were determined as follows: 1.4 for C/W atomic ratio, 20 wt.% for W content, 130 A for scanning current and 100 mm/min (1.67 mm/s) for scanning rate. The confidence level of the model was 99% according to the results of the F-test and lack-of-fit test. Microstructural study showed that the dendritic structure was comprised of a mechanical mixture of α-Fe and carbides, while the interdendritic structure was a eutectic of α-Fe and carbides in the composite coating with optimal parameters. WC phase generation can be confirmed from the XRD pattern. Due to good preparation parameters, the average microhardness of cladded coating can reach 1120 HV0.1, which was four times the substrate microhardness.
Tamjidy, Mehran; Baharudin, B T Hang Tuah; Paslar, Shahla; Matori, Khamirul Amin; Sulaiman, Shamsuddin; Fadaeifard, Firouz
2017-05-15
The development of Friction Stir Welding (FSW) has provided an alternative approach for producing high-quality welds, in a fast and reliable manner. This study focuses on the mechanical properties of the dissimilar friction stir welding of AA6061-T6 and AA7075-T6 aluminum alloys. The FSW process parameters such as tool rotational speed, tool traverse speed, tilt angle, and tool offset influence the mechanical properties of the friction stir welded joints significantly. A mathematical regression model is developed to determine the empirical relationship between the FSW process parameters and mechanical properties, and the results are validated. In order to obtain the optimal values of process parameters that simultaneously optimize the ultimate tensile strength, elongation, and minimum hardness in the heat affected zone (HAZ), a metaheuristic, multi objective algorithm based on biogeography based optimization is proposed. The Pareto optimal frontiers for triple and dual objective functions are obtained and the best optimal solution is selected through using two different decision making techniques, technique for order of preference by similarity to ideal solution (TOPSIS) and Shannon's entropy.
Empirically based device modeling of bulk heterojunction organic photovoltaics
NASA Astrophysics Data System (ADS)
Pierre, Adrien; Lu, Shaofeng; Howard, Ian A.; Facchetti, Antonio; Arias, Ana Claudia
2013-04-01
We develop an empirically based optoelectronic model to accurately simulate the photocurrent in organic photovoltaic (OPV) devices with novel materials including bulk heterojunction OPV devices based on a new low band gap dithienothiophene-DPP donor polymer, P(TBT-DPP), blended with PC70BM at various donor-acceptor weight ratios and solvent compositions. Our devices exhibit power conversion efficiencies ranging from 1.8% to 4.7% at AM 1.5G. Electron and hole mobilities are determined using space-charge limited current measurements. Bimolecular recombination coefficients are both analytically calculated using slowest-carrier limited Langevin recombination and measured using an electro-optical pump-probe technique. Exciton quenching efficiencies in the donor and acceptor domains are determined from photoluminescence spectroscopy. In addition, dielectric and optical constants are experimentally determined. The photocurrent and its bias-dependence that we simulate using the optoelectronic model we develop, which takes into account these physically measured parameters, shows less than 7% error with respect to the experimental photocurrent (when both experimentally and semi-analytically determined recombination coefficient is used). Free carrier generation and recombination rates of the photocurrent are modeled as a function of the position in the active layer at various applied biases. These results show that while free carrier generation is maximized in the center of the device, free carrier recombination is most dominant near the electrodes even in high performance devices. Such knowledge of carrier activity is essential for the optimization of the active layer by enhancing light trapping and minimizing recombination. Our simulation program is intended to be freely distributed for use in laboratories fabricating OPV devices.
The Derivation of Sink Functions of Wheat Organs using the GREENLAB Model
Kang, Mengzhen; Evers, Jochem B.; Vos, Jan; de Reffye, Philippe
2008-01-01
Background and Aims In traditional crop growth models assimilate production and partitioning are described with empirical equations. In the GREENLAB functional–structural model, however, allocation of carbon to different kinds of organs depends on the number and relative sink strengths of growing organs present in the crop architecture. The aim of this study is to generate sink functions of wheat (Triticum aestivum) organs by calibrating the GREENLAB model using a dedicated data set, consisting of time series on the mass of individual organs (the ‘target data’). Methods An experiment was conducted on spring wheat (Triticum aestivum, ‘Minaret’), in a growth chamber from, 2004 to, 2005. Four harvests were made of six plants each to determine the size and mass of individual organs, including the root system, leaf blades, sheaths, internodes and ears of the main stem and different tillers. Leaf status (appearance, expansion, maturity and death) of these 24 plants was recorded. With the structures and mass of organs of four individual sample plants, the GREENLAB model was calibrated using a non-linear least-square-root fitting method, the aim of which was to minimize the difference in mass of the organs between measured data and model output, and to provide the parameter values of the model (the sink strengths of organs of each type, age and tiller order, and two empirical parameters linked to biomass production). Key Results and Conclusions The masses of all measured organs from one plant from each harvest were fitted simultaneously. With estimated parameters for sink and source functions, the model predicted the mass and size of individual organs at each position of the wheat structure in a mechanistic way. In addition, there was close agreement between experimentally observed and simulated values of leaf area index. PMID:18045794
NASA Astrophysics Data System (ADS)
Asgari, Jamal; Mohammadloo, Tannaz H.; Amiri-Simkooei, Ali Reza
2015-09-01
GNSS kinematic techniques are capable of providing precise coordinates in extremely short observation time-span. These methods usually determine the coordinates of an unknown station with respect to a reference one. To enhance the precision, accuracy, reliability and integrity of the estimated unknown parameters, GNSS kinematic equations are to be augmented by possible constraints. Such constraints could be derived from the geometric relation of the receiver positions in motion. This contribution presents the formulation of the constrained kinematic global navigation satellite systems positioning. Constraints effectively restrict the definition domain of the unknown parameters from the three-dimensional space to a subspace defined by the equation of motion. To test the concept of the constrained kinematic positioning method, the equation of a circle is employed as a constraint. A device capable of moving on a circle was made and the observations from 11 positions on the circle were analyzed. Relative positioning was conducted by considering the center of the circle as the reference station. The equation of the receiver's motion was rewritten in the ECEF coordinates system. A special attention is drawn onto how a constraint is applied to kinematic positioning. Implementing the constraint in the positioning process provides much more precise results compared to the unconstrained case. This has been verified based on the results obtained from the covariance matrix of the estimated parameters and the empirical results using kinematic positioning samples as well. The theoretical standard deviations of the horizontal components are reduced by a factor ranging from 1.24 to 2.64. The improvement on the empirical standard deviation of the horizontal components ranges from 1.08 to 2.2.
NASA Astrophysics Data System (ADS)
Zheng, WeiKang; Kelly, Patrick L.; Filippenko, Alexei V.
2018-05-01
We examine the relationship between three parameters of Type Ia supernovae (SNe Ia): peak magnitude, rise time, and photospheric velocity at the time of peak brightness. The peak magnitude is corrected for extinction using an estimate determined from MLCS2k2 fitting. The rise time is measured from the well-observed B-band light curve with the first detection at least 1 mag fainter than the peak magnitude, and the photospheric velocity is measured from the strong absorption feature of Si II λ6355 at the time of peak brightness. We model the relationship among these three parameters using an expanding fireball with two assumptions: (a) the optical emission is approximately that of a blackbody, and (b) the photospheric temperatures of all SNe Ia are the same at the time of peak brightness. We compare the precision of the distance residuals inferred using this physically motivated model against those from the empirical Phillips relation and the MLCS2k2 method for 47 low-redshift SNe Ia (0.005 < z < 0.04) and find comparable scatter. However, SNe Ia in our sample with higher velocities are inferred to be intrinsically fainter. Eliminating the high-velocity SNe and applying a more stringent extinction cut to obtain a “low-v golden sample” of 22 SNe, we obtain significantly reduced scatter of 0.108 ± 0.018 mag in the new relation, better than those of the Phillips relation and the MLCS2k2 method. For 250 km s‑1 of residual peculiar motions, we find 68% and 95% upper limits on the intrinsic scatter of 0.07 and 0.10 mag, respectively.
Structure of S-shaped growth in innovation diffusion
NASA Astrophysics Data System (ADS)
Shimogawa, Shinsuke; Shinno, Miyuki; Saito, Hiroshi
2012-05-01
A basic question on innovation diffusion is why the growth curve of the adopter population in a large society is often S shaped. From macroscopic, microscopic, and mesoscopic viewpoints, the growth of the adopter population is observed as the growth curve, individual adoptions, and differences among individual adoptions, respectively. The S shape can be explained if an empirical model of the growth curve can be deduced from models of microscopic and mesoscopic structures. However, even the structure of growth curve has not been revealed yet because long-term extrapolations by proposed models of S-shaped curves are unstable and it has been very difficult to predict the long-term growth and final adopter population. This paper studies the S-shaped growth from the viewpoint of social regularities. Simple methods to analyze power laws enable us to extract the structure of the growth curve directly from the growth data of recent basic telecommunication services. This empirical model of growth curve is singular at the inflection point and a logarithmic function of time after this point, which explains the unstable extrapolations obtained using previously proposed models and the difficulty in predicting the final adopter population. Because the empirical S curve can be expressed in terms of two power laws of the regularity found in social performances of individuals, we propose the hypothesis that the S shape represents the heterogeneity of the adopter population, and the heterogeneity parameter is distributed under the regularity in social performances of individuals. This hypothesis is so powerful as to yield models of microscopic and mesoscopic structures. In the microscopic model, each potential adopter adopts the innovation when the information accumulated by the learning about the innovation exceeds a threshold. The accumulation rate of information is heterogeneous among the adopter population, whereas the threshold is a constant, which is the opposite of previously proposed models. In the mesoscopic model, flows of innovation information incoming to individuals are organized as dimorphic and partially clustered. These microscopic and mesoscopic models yield the empirical model of the S curve and explain the S shape as representing the regularities of information flows generated through a social self-organization. To demonstrate the validity and importance of the hypothesis, the models of three level structures are applied to reveal the mechanism determining and differentiating diffusion speeds. The empirical model of S curves implies that the coefficient of variation of the flow rates determines the diffusion speed for later adopters. Based on this property, a model describing the inside of information flow clusters can be given, which provides a formula interconnecting the diffusion speed, cluster populations, and a network topological parameter of the flow clusters. For two recent basic telecommunication services in Japan, the formula represents the variety of speeds in different areas and enables us to explain speed gaps between urban and rural areas and between the two services. Furthermore, the formula provides a method to estimate the final adopter population.
NASA Technical Reports Server (NTRS)
Dewan, Mohammad W.; Huggett, Daniel J.; Liao, T. Warren; Wahab, Muhammad A.; Okeil, Ayman M.
2015-01-01
Friction-stir-welding (FSW) is a solid-state joining process where joint properties are dependent on welding process parameters. In the current study three critical process parameters including spindle speed (??), plunge force (????), and welding speed (??) are considered key factors in the determination of ultimate tensile strength (UTS) of welded aluminum alloy joints. A total of 73 weld schedules were welded and tensile properties were subsequently obtained experimentally. It is observed that all three process parameters have direct influence on UTS of the welded joints. Utilizing experimental data, an optimized adaptive neuro-fuzzy inference system (ANFIS) model has been developed to predict UTS of FSW joints. A total of 1200 models were developed by varying the number of membership functions (MFs), type of MFs, and combination of four input variables (??,??,????,??????) utilizing a MATLAB platform. Note EFI denotes an empirical force index derived from the three process parameters. For comparison, optimized artificial neural network (ANN) models were also developed to predict UTS from FSW process parameters. By comparing ANFIS and ANN predicted results, it was found that optimized ANFIS models provide better results than ANN. This newly developed best ANFIS model could be utilized for prediction of UTS of FSW joints.
NASA Astrophysics Data System (ADS)
Bora, Sanjay; Scherbaum, Frank; Kuehn, Nicolas; Stafford, Peter; Edwards, Benjamin
2016-04-01
The current practice of deriving empirical ground motion prediction equations (GMPEs) involves using ground motions recorded at multiple sites. However, in applications like site-specific (e.g., critical facility) hazard ground motions obtained from the GMPEs are need to be adjusted/corrected to a particular site/site-condition under investigation. This study presents a complete framework for developing a response spectral GMPE, within which the issue of adjustment of ground motions is addressed in a manner consistent with the linear system framework. The present approach is a two-step process in which the first step consists of deriving two separate empirical models, one for Fourier amplitude spectra (FAS) and the other for a random vibration theory (RVT) optimized duration (Drvto) of ground motion. In the second step the two models are combined within the RVT framework to obtain full response spectral amplitudes. Additionally, the framework also involves a stochastic model based extrapolation of individual Fourier spectra to extend the useable frequency limit of the empirically derived FAS model. The stochastic model parameters were determined by inverting the Fourier spectral data using an approach similar to the one as described in Edwards and Faeh (2013). Comparison of median predicted response spectra from present approach with those from other regional GMPEs indicates that the present approach can also be used as a stand-alone model. The dataset used for the presented analysis is a subset of the recently compiled database RESORCE-2012 across Europe, the Middle East and the Mediterranean region.
NASA Astrophysics Data System (ADS)
Rosso, M.; Sesenna, R.; Magni, L.; Demurtas, L.; Uras, G.
2009-04-01
Debris flows represents serious hazards in mountainous regions. For engineers it is important to know the quantitative analysis of the flow in terms of volumes, velocities and front height, and it is significant to predict possible triggering and deposition areas. In order to predict flow and deposition behaviour, debris flows traditionally have been regarded as homogenous fluids and bulk flow behaviour that was considered to be controlled by the rheological properties of the matrix. Flow mixtures with a considerable fraction of fines particles typically show a viscoplastic flow behaviour but due to the high variability of the material composition, complex physical interactions on the particle scale and time dependent effects, no generally applicable models are at time capable to cover the full range of all possible flow types. A first category of models, mostly of academic origin, uses a rigorous methodological approach, directed to describe to the phenomenon characterizing all the main parameters that regulate the origin and the propagation of the debris flow, with detail attention to rheology. A second category, which are referred mainly to the commercial environment, has as first objective the versatility and the simplicity of use, introducing theoretical simplifications in the definition of the rheology and in the propagation of the debris flow. The physical variables connected to the rheology are often difficult to determine and involve complex procedures of calibration of the model or long and expensive campaigns of measure, whose application can turn out not suitable to the engineering environment. The rheological parameters of the debris are however to the base of the codes of calculation mainly used in commerce. The necessary data to the implementation of the model refer mainly to the dynamic viscosity, to the shear stress, to the volumetric mass and to the volumetric concentration, that are linked variables. Through the application of various bidimensional and monodimensional commercial models for the simulation of debris flow, in particular because of the reconstruction of famous and expected events in the river basin of the Comboè torrent (Aosta Valley, Italy), it has been possible to reach careful consideration about the calibration of the rheological parameters and the sensitivity of simulation models, specifically about the variability of them. The geomechanical and volumetric characteristics of the sediment at the bottom of the debris could produce uncertainties in model implementation, above all in not exclusively cinematic models, mostly influenced by the rheological parameters. The parameter that mainly influences the final result of the applied numerical models is the volumetric solid concentration that is variable in space and time during the debris flow propagation. In fact rheological parameters are described by a power equation of volumetric concentration. The potentiality and the suitability of a numerical code in the engineering environmental application have to be consider not referring only to the quality and amount of results, but also to the sensibility regarding the parameters variability that are bases of the inner ruotines of the program. Therefore, a suitable model will have to be sensitive to the variability of parameters that the customer can calculate with greater precision. On the other side, it will have to be sufficiently stable to the variation of those parameters that the customer cannot define univocally, but only by range of variation. One of the models utilized for the simulation of debris flow on the Comboè Torrent has been demonstrated as an heavy influenced example by small variation of rheological parameters. Consequently, in spite of the possibility to lead accurate procedures of back-analysis about a recent intense event, it has been found a difficulty in the calibration of the concentration for new expected events. That involved an extreme variability of the final results. In order to achieve more accuracy in the numerical simulation, the rheological parameters were estimated by an implicit way, proceeding to their determination through the application of simple numerical iteration. In fact they can link the obtained values of velocity and hydraulic levels. Literature formulations were used in order to determine rheological parameters. The parameters and ? wowere correlated to velocity and to empirical parameters that have small range of variability. This approach allows to produce a control of input parameters in the calculation models, comparing the obtained base result (velocity and water surface elevation). The implementation of numerical models for engineering profession must be carried out in order that aleatory variables, that are difficult to determine, do not involve an extreme variability of the final result. However, it's a good manner to proceed to the determination of interested variables by means of empirical formulations and through the comparison between different simplified models, including in the analysis pure cinematic models.
Accurate Temperature Feedback Control for MRI-Guided, Phased Array HICU Endocavitary Therapy
NASA Astrophysics Data System (ADS)
Salomir, Rares; Rata, Mihaela; Cadis, Daniela; Lafon, Cyril; Chapelon, Jean Yves; Cotton, François; Bonmartin, Alain; Cathignol, Dominique
2007-05-01
Effective treatment of malignant tumours demands well controlled energy deposition in the region of interest. Generally, two major steps must be fulfilled: 1. pre-operative optimal planning of the thermal dosimetry and 2. per-operative active spatial-and-temporal control of the delivered thermal dose. The second issue is made possible by using fast MR thermometry data and adjusting on line the sonication parameters. This approach is addressed here in the particular case of the ultrasound therapy for endocavitary tumours (oesophagus, colon or rectum) with phased array cylindrical applicators of High Intensity Contact Ultrasound (HICU). Two specific methodological objectives have been defined for this study: 1. to implement a robust and effective temperature controller for the specific geometry of endocavitary HICU and 2. to determine the stability (ie convergence) domain of the controller with respect to possible errors affecting the empirical parameters of the underlying physical model. Experimental setup included a Philips 1.5T clinical MR scanner and a cylindrical phased array transducer (64 elements) driven by a computer-controlled multi-channel generator. Performance of the temperature controller was tested ex vivo on fresh meat samples with planar and slightly focused beams, for a temperature elevation range from 10°C to 30°C. During the steady state regime, typical error of the temperature mean value was inferior to 1%, while the typical standard deviation of the temperature was inferior to 2% (relative to the targeted temperature elevation). Further, the empirical parameters of the physical model have been deliberately set to erroneous values and the impact on the controller stability was evaluated. Excellent tolerance of the controller was demonstrated, as this one failed to performed stable feedback only in the extreme case of a strong underestimation for the ultrasound absorption parameter by a factor of 4 or more.
NASA Astrophysics Data System (ADS)
Heinkelmann, Robert; Dick, Galina; Nilsson, Tobias; Soja, Benedikt; Wickert, Jens; Zus, Florian; Schuh, Harald
2015-04-01
Observations from space-geodetic techniques are nowadays increasingly used to derive atmospheric information for various commercial and scientific applications. A prominent example is the operational use of GNSS data to improve global and regional weather forecasts, which was started in 2006. Atmosphere gradients describe the azimuthal asymmetry of zenith delays. Estimates of geodetic and other parameters significantly improve when atmosphere gradients are determined in addition. Here we assess the capability of several space geodetic techniques (GNSS, VLBI, DORIS) to determine atmosphere gradients of refractivity. For this purpose we implement and compare various strategies for gradient estimation, such as different values for the temporal resolution and the corresponding parameter constraints. Applying least squares estimation the gradients are usually deterministically modelled as constants or piece-wise linear functions. In our study we compare this approach with a stochastic approach modelling atmosphere gradients as random walk processes and applying a Kalman Filter for parameter estimation. The gradients, derived from space geodetic techniques are verified by comparison with those derived from Numerical Weather Models (NWM). These model data were generated using raytracing calculations based on European Centre for Medium-Range Weather Forecast (ECMWF) and National Centers for Environmental Prediction (NCEP) analyses with different spatial resolutions. The investigation of the differences between the ECMWF and NCEP gradients hereby in addition allow for an empirical assessment of the quality of model gradients and how suitable the NWM data are for verification. CONT14 (2014-05-06 until 2014-05-20) is the youngest two week long continuous VLBI campaign carried out by IVS (International VLBI Service for Geodesy and Astrometry). It presents the state-of-the-art VLBI performance in terms of number of stations and number of observations and presents thus an excellent test period for comparisons with other space geodetic techniques. During the VLBI campaign CONT14 the HOBART12 and HOBART26 (Hobart, Tasmania, Australia) VLBI antennas were involved that co-locate with each other. The investigation of the gradient estimate differences from these co-located antennas allows for a valuable empirical quality assessment. Another quality criterion for gradient estimates are the differences of parameters at the borders of adjacent 24h-sessions. Both are investigated in our study.
Process-based soil erodibility estimation for empirical water erosion models
USDA-ARS?s Scientific Manuscript database
A variety of modeling technologies exist for water erosion prediction each with specific parameters. It is of interest to scrutinize parameters of a particular model from the point of their compatibility with dataset of other models. In this research, functional relationships between soil erodibilit...
Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values that is value of the physical and chemical constants that govern reactivity. Although empirical structure activity relationships have been developed th...
Dlubac, Katherine; Knight, Rosemary; Song, Yi-Qiao; Bachman, Nate; Grau, Ben; Cannia, Jim; Williams, John
2013-01-01
Hydraulic conductivity (K) is one of the most important parameters of interest in groundwater applications because it quantifies the ease with which water can flow through an aquifer material. Hydraulic conductivity is typically measured by conducting aquifer tests or wellbore flow (WBF) logging. Of interest in our research is the use of proton nuclear magnetic resonance (NMR) logging to obtain information about water-filled porosity and pore space geometry, the combination of which can be used to estimate K. In this study, we acquired a suite of advanced geophysical logs, aquifer tests, WBF logs, and sidewall cores at the field site in Lexington, Nebraska, which is underlain by the High Plains aquifer. We first used two empirical equations developed for petroleum applications to predict K from NMR logging data: the Schlumberger Doll Research equation (KSDR) and the Timur-Coates equation (KT-C), with the standard empirical constants determined for consolidated materials. We upscaled our NMR-derived K estimates to the scale of the WBF-logging K(KWBF-logging) estimates for comparison. All the upscaled KT-C estimates were within an order of magnitude of KWBF-logging and all of the upscaled KSDR estimates were within 2 orders of magnitude of KWBF-logging. We optimized the fit between the upscaled NMR-derived K and KWBF-logging estimates to determine a set of site-specific empirical constants for the unconsolidated materials at our field site. We conclude that reliable estimates of K can be obtained from NMR logging data, thus providing an alternate method for obtaining estimates of K at high levels of vertical resolution.
Pinisetty, D; Huang, C; Dong, Q; Tiersch, T R; Devireddy, R V
2005-06-01
This study reports the subzero water transport characteristics (and empirically determined optimal rates for freezing) of sperm cells of live-bearing fishes of the genus Xiphophorus, specifically those of the southern platyfish Xiphophorus maculatus. These fishes are valuable models for biomedical research and are commercially raised as ornamental fish for use in aquariums. Water transport during freezing of X. maculatus sperm cell suspensions was obtained using a shape-independent differential scanning calorimeter technique in the presence of extracellular ice at a cooling rate of 20 degrees C/min in three different media: (1) Hanks' balanced salt solution (HBSS) without cryoprotective agents (CPAs); (2) HBSS with 14% (v/v) glycerol, and (3) HBSS with 10% (v/v) dimethyl sulfoxide (DMSO). The sperm cell was modeled as a cylinder with a length of 52.35 microm and a diameter of 0.66 microm with an osmotically inactive cell volume (Vb) of 0.6 V0, where V0 is the isotonic or initial cell volume. This translates to a surface area, SA to initial water volume, WV ratio of 15.15 microm(-1). By fitting a model of water transport to the experimentally determined volumetric shrinkage data, the best fit membrane permeability parameters (reference membrane permeability to water at 0 degrees C, Lpg or Lpg [cpa] and the activation energy, E(Lp) or E(Lp) [cpa]) were found to range from: Lpg or Lpg [cpa] = 0.0053-0.0093 microm/minatm; E(Lp) or E(Lp) [cpa] = 9.79-29.00 kcal/mol. By incorporating these membrane permeability parameters in a recently developed generic optimal cooling rate equation (optimal cooling rate, [Formula: see text] where the units of B(opt) are degrees C/min, E(Lp) or E(Lp) [cpa] are kcal/mol, L(pg) or L(pg) [cpa] are microm/minatm and SA/WV are microm(-1)), we determined the optimal rates of freezing X. maculatus sperm cells to be 28 degrees C/min (in HBSS), 47 degrees C/min (in HBSS+14% glycerol) and 36 degrees C/min (in HBSS+10% DMSO). Preliminary empirical experiments suggest that the optimal rate of freezing X. maculatus sperm in the presence of 14% glycerol to be approximately 25 degrees C/min. Possible reasons for the observed discrepancy between the theoretically predicted and experimentally determined optimal rates of freezing X. maculatus sperm cells are discussed.
NASA Astrophysics Data System (ADS)
Jang, S.; Moon, Y.; Na, H.
2012-12-01
We have made a comparison of CME-associated shock arrival times at the earth based on the WSA-ENLIL model with three cone models using 29 halo CMEs from 2001 to 2002. These halo CMEs have cone model parameters from Michalek et al. (2007) as well as their associated interplanetary (IP) shocks. For this study we consider three different cone models (an asymmetric cone model, an ice-cream cone model and an elliptical cone model) to determine CME cone parameters (radial velocity, angular width and source location), which are used for input parameters of the WSA-ENLIL model. The mean absolute error (MAE) of the arrival times for the elliptical cone model is 10 hours, which is about 2 hours smaller than those of the other models. However, this value is still larger than that (8.7 hours) of an empirical model by Kim et al. (2007). We are investigating several possibilities on relatively large errors of the WSA-ENLIL cone model, which may be caused by CME-CME interaction, background solar wind speed, and/or CME density enhancement.
Optimal Energy Consumption Analysis of Natural Gas Pipeline
Liu, Enbin; Li, Changjun; Yang, Yi
2014-01-01
There are many compressor stations along long-distance natural gas pipelines. Natural gas can be transported using different boot programs and import pressures, combined with temperature control parameters. Moreover, different transport methods have correspondingly different energy consumptions. At present, the operating parameters of many pipelines are determined empirically by dispatchers, resulting in high energy consumption. This practice does not abide by energy reduction policies. Therefore, based on a full understanding of the actual needs of pipeline companies, we introduce production unit consumption indicators to establish an objective function for achieving the goal of lowering energy consumption. By using a dynamic programming method for solving the model and preparing calculation software, we can ensure that the solution process is quick and efficient. Using established optimization methods, we analyzed the energy savings for the XQ gas pipeline. By optimizing the boot program, the import station pressure, and the temperature parameters, we achieved the optimal energy consumption. By comparison with the measured energy consumption, the pipeline now has the potential to reduce energy consumption by 11 to 16 percent. PMID:24955410
Collective behaviour in vertebrates: a sensory perspective
Collignon, Bertrand; Fernández-Juricic, Esteban
2016-01-01
Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases, these models make biologically unrealistic assumptions in terms of the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models, we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and constrain unrealistically their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models in terms of choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616
Labrague, Leodoro J; McEnroe-Petitte, Denise M
2016-04-01
The aim of this study was to determine the influence of music on anxiety levels and physiologic parameters in women undergoing gynecologic surgery. This study employed a pre- and posttest experimental design with nonrandom assignment. Ninety-seven women undergoing gynecologic surgery were included in the study, where 49 were allocated to the control group (nonmusic group) and 48 were assigned to the experimental group (music group). Preoperative anxiety was measured using the State Trait Anxiety Inventory (STAI) while noninvasive instruments were used in measuring the patients' physiologic parameters (blood pressure [BP], pulse [P], and respiration [R]) at two time periods. Women allocated in the experimental group had lower STAI scores (t = 17.41, p < .05), systolic (t = 6.45, p < .05) and diastolic (t = 2.80, p < .006) BP, and P rate (PR; t = 7.32, p < .05) than in the control group. This study provides empirical evidence to support the use of music during the preoperative period in reducing anxiety and unpleasant symptoms in women undergoing gynecologic surgery. © The Author(s) 2014.
DIF Testing with an Empirical-Histogram Approximation of the Latent Density for Each Group
ERIC Educational Resources Information Center
Woods, Carol M.
2011-01-01
This research introduces, illustrates, and tests a variation of IRT-LR-DIF, called EH-DIF-2, in which the latent density for each group is estimated simultaneously with the item parameters as an empirical histogram (EH). IRT-LR-DIF is used to evaluate the degree to which items have different measurement properties for one group of people versus…
NASA Astrophysics Data System (ADS)
Shibata, Kenichiro; Adhiperdana, Billy G.; Ito, Makoto
2018-01-01
Reconstructions of the dimensions and hydrological features of ancient fluvial channels, such as bankfull depth, bankfull width, and water discharges, have used empirical equations developed from compiled data-sets, mainly from modern meandering rivers, in various tectonic and climatic settings. However, the application of the proposed empirical equations to an ancient fluvial succession should be carefully examined with respect to the tectonic and climatic settings of the objective deposits. In this study, we developed empirical relationships among the mean bankfull channel depth, bankfull channel depth, drainage area, bankfull channel width, mean discharge, and bankfull discharge using data from 24 observation sites of modern gravelly rivers in the Kanto region, central Japan. Some of the equations among these parameters are different from those proposed by previous studies. The discrepancies are considered to reflect tectonic and climatic settings of the present river systems, which are characterized by relatively steeper valley slope, active supply of volcaniclastic sediments, and seasonal precipitation in the Kanto region. The empirical relationships derived from the present study can be applied to modern and ancient gravelly fluvial channels with multiple and alternate bars, developed in convergent margin settings under a temperate climatic condition. The developed empirical equations were applied to a transgressive gravelly fluvial succession of the Paleogene Iwaki Formation, Northeast Japan as a case study. Stratigraphic thicknesses of bar deposits were used for estimation of the bankfull channel depth. In addition, some other geomorphological and hydrological parameters were calculated using the empirical equations developed by the present study. The results indicate that the Iwaki Formation fluvial deposits were formed by a fluvial system that was represented by the dimensions and discharges of channels similar to those of the middle to lower reaches of the modern Kuji River, northern Kanto region. In addition, no distinct temporal changes in paleochannel dimensions and discharges were observed in an overall transgressive Iwaki Formation fluvial system. This implies that a rise in relative sea level did not affect the paleochannel dimensions within a sequence stratigraphic framework.
Zuthi, M F R; Ngo, H H; Guo, W S; Nghiem, L D; Hai, F I; Xia, S Q; Zhang, Z Q; Li, J X
2015-08-01
This study investigates the influence of key biomass parameters on specific oxygen uptake rate (SOUR) in a sponge submerged membrane bioreactor (SSMBR) to develop mathematical models of biomass viability. Extra-cellular polymeric substances (EPS) were considered as a lumped parameter of bound EPS (bEPS) and soluble microbial products (SMP). Statistical analyses of experimental results indicate that the bEPS, SMP, mixed liquor suspended solids and volatile suspended solids (MLSS and MLVSS) have functional relationships with SOUR and their relative influence on SOUR was in the order of EPS>bEPS>SMP>MLVSS/MLSS. Based on correlations among biomass parameters and SOUR, two independent empirical models of biomass viability were developed. The models were validated using results of the SSMBR. However, further validation of the models for different operating conditions is suggested. Copyright © 2015 Elsevier Ltd. All rights reserved.
Entangled Parametric Hierarchies: Problems for an Overspecified Universal Grammar
Boeckx, Cedric; Leivada, Evelina
2013-01-01
This study addresses the feasibility of the classical notion of parameter in linguistic theory from the perspective of parametric hierarchies. A novel program-based analysis is implemented in order to show certain empirical problems related to these hierarchies. The program was developed on the basis of an enriched data base spanning 23 contemporary and 5 ancient languages. The empirical issues uncovered cast doubt on classical parametric models of language acquisition as well as on the conceptualization of an overspecified Universal Grammar that has parameters among its primitives. Pinpointing these issues leads to the proposal that (i) the (bio)logical problem of language acquisition does not amount to a process of triggering innately pre-wired values of parameters and (ii) it paves the way for viewing language, epigenetic (‘parametric’) variation as an externalization-related epiphenomenon, whose learning component may be more important than what sometimes is assumed. PMID:24019867
Auxiliary Parameter MCMC for Exponential Random Graph Models
NASA Astrophysics Data System (ADS)
Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro
2016-11-01
Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the developments of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
NASA Astrophysics Data System (ADS)
Hatch, Courtney D.; Greenaway, Ann L.; Christie, Matthew J.; Baltrusaitis, Jonas
2014-04-01
Fresh mineral aerosol has recently been found to be effective cloud condensation nuclei (CCN) and contribute to the number of cloud droplets in the atmosphere due to the effect of water adsorption on CCN activation. The work described here uses experimental water adsorption measurements on Na-montmorillonite and illite clay to determine empirical adsorption parameters that can be used in a recently derived theoretical framework (Frenkel-Halsey-Hill Activation Theory, FHH-AT) that accounts for the effect of water adsorption on CCN activation. Upon fitting the Frenkel-Halsey-Hill (FHH) adsorption model to water adsorption measurements, we find FHH adsorption parameters, AFHH and BFHH, to be 98 ± 22 and 1.79 ± 0.11 for montmorillonite and 75 ± 17 and 1.77 ± 0.11 for illite, respectively. The AFHH and BFHH values obtained from water adsorption measurements differ from values reported previously determined by applying FHH-AT to CCN activation measurements. Differences in FHH adsorption parameters were attributed to different methods used to obtain them and the hydratable nature of the clays. FHH adsorption parameters determined from water adsorption measurements were then used to calculate the critical super-saturation (sc) for CCN activation using FHH-AT. The relationship between sc and the dry particle diameter (Ddry) gave CCN activation curve exponents (xFHH) of -0.61 and -0.64 for montmorillonite and illite, respectively. The xFHH values were slightly lower than reported previously for mineral aerosol. The lower exponent suggests that the CCN activity of hydratable clays is less sensitive to changes in Ddry and the hygroscopicity parameter exhibits a broader variability with Ddry compared to more soluble aerosols. Despite the differences in AFHH, BFHH and xFHH, the FHH-AT derived CCN activities of montmorillonite and illite are quite similar to each other and in excellent agreement with experimental CCN measurements resulting from wet-generated clay aerosol. This study illustrates that FHH-AT using adsorption parameters constrained by water adsorption is a simple, valid method for predicting CCN activation of fresh clay minerals and provides parameters that can be used in atmospheric models to study the effect of mineral dust aerosol on cloud formation and climate.
Form drag in rivers due to small-scale natural topographic features: 1. Regular sequences
Kean, J.W.; Smith, J.D.
2006-01-01
Small-scale topographic features are commonly found on the boundaries of natural rivers, streams, and floodplains. A simple method for determining the form drag on these features is presented, and the results of this model are compared to laboratory measurements. The roughness elements are modeled as Gaussian-shaped features defined in terms of three parameters: a protrusion height, H; a streamwise length scale, ??; and a spacing between crests, ??. This shape is shown to be a good approximation to a wide variety of natural topographic bank features. The form drag on an individual roughness element embedded in a series of identical elements is determined using the drag coefficient of the individual element and a reference velocity that includes the effects of roughness elements further upstream. In addition to calculating the drag on each element, the model determines the spatially averaged total stress, skin friction stress, and roughness height of the boundary. The effects of bank roughness on patterns of velocity and boundary shear stress are determined by combining the form drag model with a channel flow model. The combined model shows that drag on small-scale topographic features substantially alters the near-bank flow field. These methods can be used to improve predictions of flow resistance in rivers and to form the basis for fully predictive (no empirically adjusted parameters) channel flow models. They also provide a foundation for calculating the near-bank boundary shear stress fields necessary for determining rates of sediment transport and lateral erosion.
Thermal Conductivity of Metallic Uranium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hin, Celine
This project has developed a modeling and simulation approaches to predict the thermal conductivity of metallic fuels and their alloys. We focus on two methods. The first method has been developed by the team at the University of Wisconsin Madison. They developed a practical and general modeling approach for thermal conductivity of metals and metal alloys that integrates ab-initio and semi-empirical physics-based models to maximize the strengths of both techniques. The second method has been developed by the team at Virginia Tech. This approach consists of a determining the thermal conductivity using only ab-initio methods without any fitting parameters. Bothmore » methods were complementary. The models incorporated both phonon and electron contributions. Good agreement with experimental data over a wide temperature range were found. The models also provided insight into the different physical factors that govern the thermal conductivity under different temperatures. The models were general enough to incorporate more complex effects like additional alloying species, defects, transmutation products and noble gas bubbles to predict the behavior of complex metallic alloys like U-alloy fuel systems under burnup. 3 Introduction Thermal conductivity is an important thermal physical property affecting the performance and efficiency of metallic fuels [1]. Some experimental measurement of thermal conductivity and its correlation with composition and temperature from empirical fitting are available for U, Zr and their alloys with Pu and other minor actinides. However, as reviewed in by Kim, Cho and Sohn [2], due to the difficulty in doing experiments on actinide materials, thermal conductivities of metallic fuels have only been measured at limited alloy compositions and temperatures, some of them even being negative and unphysical. Furthermore, the correlations developed so far are empirical in nature and may not be accurate when used for prediction at conditions far from those used in the original fitting. Moreover, as fuels burn up in the reactor and fission products are built up, thermal conductivity is also significantly changed [3]. Unfortunately, fundamental understanding of the effect of fission products is also currently lacking. In this project, we probe thermal conductivity of metallic fuels with ab initio calculations, a theoretical tool with the potential to yield better accuracy and predictive power than empirical fitting. This work will both complement experimental data by determining thermal conductivity in wider composition and temperature ranges than is available experimentally, and also develop mechanistic understanding to guide better design of metallic fuels in the future. So far, we focused on α-U perfect crystal, the ground-state phase of U metal. We focus on two methods. The first method has been developed by the team at the University of Wisconsin Madison. They developed a practical and general modeling approach for thermal conductivity of metals and metal alloys that integrates ab-initio and semi-empirical physics-based models to maximize the strengths of both techniques. The second method has been developed by the team at Virginia Tech. This approach consists of a determining the thermal conductivity using only ab-initio methods without any fitting parameters. Both methods were complementary and very helpful to understand the physics behind the thermal conductivity in metallic uranium and other materials with similar characteristics. In Section I, the combined model developed at UWM is explained. In Section II, the ab-initio method developed at VT is described along with the uranium pseudo-potential and its validation. Section III is devoted to the work done by Jianguo Yu at INL. Finally, we will present the performance of the project in terms of milestones, publications, and presentations.« less
Sensitivity Analysis of Empirical Results on Civil War Onset
ERIC Educational Resources Information Center
Hegre, Havard; Sambanis, Nicholas
2006-01-01
In the literature on civil war onset, several empirical results are not robust or replicable across studies. Studies use different definitions of civil war and analyze different time periods, so readers cannot easily determine if differences in empirical results are due to those factors or if most empirical results are just not robust. The authors…
Option price and market instability
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Yu, Miao
2017-04-01
An option pricing formula, for which the price of an option depends on both the value of the underlying security as well as the velocity of the security, has been proposed in Baaquie and Yang (2014). The FX (foreign exchange) options price was empirically studied in Baaquie et al., (2014), and it was found that the model in general provides an excellent fit for all strike prices with a fixed model parameters-unlike the Black-Scholes option price Hull and White (1987) that requires the empirically determined implied volatility surface to fit the option data. The option price proposed in Baaquie and Cao Yang (2014) did not fit the data during the crisis of 2007-2008. We make a hypothesis that the failure of the option price to fit data is an indication of the market's large deviation from its near equilibrium behavior due to the market's instability. Furthermore, our indicator of market's instability is shown to be more accurate than the option's observed volatility. The market prices of the FX option for various currencies are studied in the light of our hypothesis.
[Aluminum mobilization models of forest yellow earth in South China].
Xin, Yan; Zhao, Yu; Duan, Lei
2009-07-15
For the application of acidification models in predicting effects of acid deposition and formulating control strategy in China, it is important selecting regionally applicable models of soil aluminum mobilization and determining their parameters. Based on the long-term monitoring results of soil water chemistry from four forested watersheds in South China, the applicability of a range of equilibriums describing aluminum mobilization was evaluated. The tested equilibriums included those for gibbsite, jurbanite, kaolinite, imogolite, and SOM-Al: Results show that the gibbsite equilibrium commonly used in several acidification models is not suitable for the typical forest soil in South China, while the modified empirical gibbsite equation is applicable with pK = - 2.40, a = 1.65 (for upper layer) and pK = - 2.82, a = 1.66 (for lower layers) at only pH > or = 4. Comparing with the empirical gibbsite equation, the other equilibriums do not perform better. It can also be seen that pAl varies slightly with pH decreases at pH < 4, which is unexplainable by any of these suggested equilibriums.
Detection of Operator Performance Breakdown as an Automation Triggering Mechanism
NASA Technical Reports Server (NTRS)
Yoo, Hyo-Sang; Lee, Paul U.; Landry, Steven J.
2015-01-01
Performance breakdown (PB) has been anecdotally described as a state where the human operator "loses control of context" and "cannot maintain required task performance." Preventing such a decline in performance is critical to assure the safety and reliability of human-integrated systems, and therefore PB could be useful as a point at which automation can be applied to support human performance. However, PB has never been scientifically defined or empirically demonstrated. Moreover, there is no validated objective way of detecting such a state or the transition to that state. The purpose of this work is: 1) to empirically demonstrate a PB state, and 2) to develop an objective way of detecting such a state. This paper defines PB and proposes an objective method for its detection. A human-in-the-loop study was conducted: 1) to demonstrate PB by increasing workload until the subject reported being in a state of PB, and 2) to identify possible parameters of a detection method for objectively identifying the subjectively-reported PB point, and 3) to determine if the parameters are idiosyncratic to an individual/context or are more generally applicable. In the experiment, fifteen participants were asked to manage three concurrent tasks (one primary and two secondary) for 18 minutes. The difficulty of the primary task was manipulated over time to induce PB while the difficulty of the secondary tasks remained static. The participants' task performance data was collected. Three hypotheses were constructed: 1) increasing workload will induce subjectively-identified PB, 2) there exists criteria that identifies the threshold parameters that best matches the subjectively-identified PB point, and 3) the criteria for choosing the threshold parameters is consistent across individuals. The results show that increasing workload can induce subjectively-identified PB, although it might not be generalizable-only 12 out of 15 participants declared PB. The PB detection method based on signal detection analysis was applied to the performance data and the results showed that PB can be identified using the method, particularly when the values of the parameters for the detection method were calibrated individually.
Wang, Huifang; Xiao, Bo; Wang, Mingyu; Shao, Ming'an
2013-01-01
Soil water retention parameters are critical to quantify flow and solute transport in vadose zone, while the presence of rock fragments remarkably increases their variability. Therefore a novel method for determining water retention parameters of soil-gravel mixtures is required. The procedure to generate such a model is based firstly on the determination of the quantitative relationship between the content of rock fragments and the effective saturation of soil-gravel mixtures, and then on the integration of this relationship with former analytical equations of water retention curves (WRCs). In order to find such relationships, laboratory experiments were conducted to determine WRCs of soil-gravel mixtures obtained with a clay loam soil mixed with shale clasts or pebbles in three size groups with various gravel contents. Data showed that the effective saturation of the soil-gravel mixtures with the same kind of gravels within one size group had a linear relation with gravel contents, and had a power relation with the bulk density of samples at any pressure head. Revised formulas for water retention properties of the soil-gravel mixtures are proposed to establish the water retention curved surface models of the power-linear functions and power functions. The analysis of the parameters obtained by regression and validation of the empirical models showed that they were acceptable by using either the measured data of separate gravel size group or those of all the three gravel size groups having a large size range. Furthermore, the regression parameters of the curved surfaces for the soil-gravel mixtures with a large range of gravel content could be determined from the water retention data of the soil-gravel mixtures with two representative gravel contents or bulk densities. Such revised water retention models are potentially applicable in regional or large scale field investigations of significantly heterogeneous media, where various gravel sizes and different gravel contents are present.
Wang, Huifang; Xiao, Bo; Wang, Mingyu; Shao, Ming'an
2013-01-01
Soil water retention parameters are critical to quantify flow and solute transport in vadose zone, while the presence of rock fragments remarkably increases their variability. Therefore a novel method for determining water retention parameters of soil-gravel mixtures is required. The procedure to generate such a model is based firstly on the determination of the quantitative relationship between the content of rock fragments and the effective saturation of soil-gravel mixtures, and then on the integration of this relationship with former analytical equations of water retention curves (WRCs). In order to find such relationships, laboratory experiments were conducted to determine WRCs of soil-gravel mixtures obtained with a clay loam soil mixed with shale clasts or pebbles in three size groups with various gravel contents. Data showed that the effective saturation of the soil-gravel mixtures with the same kind of gravels within one size group had a linear relation with gravel contents, and had a power relation with the bulk density of samples at any pressure head. Revised formulas for water retention properties of the soil-gravel mixtures are proposed to establish the water retention curved surface models of the power-linear functions and power functions. The analysis of the parameters obtained by regression and validation of the empirical models showed that they were acceptable by using either the measured data of separate gravel size group or those of all the three gravel size groups having a large size range. Furthermore, the regression parameters of the curved surfaces for the soil-gravel mixtures with a large range of gravel content could be determined from the water retention data of the soil-gravel mixtures with two representative gravel contents or bulk densities. Such revised water retention models are potentially applicable in regional or large scale field investigations of significantly heterogeneous media, where various gravel sizes and different gravel contents are present. PMID:23555040
Synergistic effects in threshold models on networks.
Juul, Jonas S; Porter, Mason A
2018-01-01
Network structure can have a significant impact on the propagation of diseases, memes, and information on social networks. Different types of spreading processes (and other dynamical processes) are affected by network architecture in different ways, and it is important to develop tractable models of spreading processes on networks to explore such issues. In this paper, we incorporate the idea of synergy into a two-state ("active" or "passive") threshold model of social influence on networks. Our model's update rule is deterministic, and the influence of each meme-carrying (i.e., active) neighbor can-depending on a parameter-either be enhanced or inhibited by an amount that depends on the number of active neighbors of a node. Such a synergistic system models social behavior in which the willingness to adopt either accelerates or saturates in a way that depends on the number of neighbors who have adopted that behavior. We illustrate that our model's synergy parameter has a crucial effect on system dynamics, as it determines whether degree-k nodes are possible or impossible to activate. We simulate synergistic meme spreading on both random-graph models and networks constructed from empirical data. Using a heterogeneous mean-field approximation, which we derive under the assumption that a network is locally tree-like, we are able to determine which synergy-parameter values allow degree-k nodes to be activated for many networks and for a broad family of synergistic models.
NASA Astrophysics Data System (ADS)
Açıkgöz, Muhammed; Rudowicz, Czesław; Gnutek, Paweł
2017-11-01
Theoretical investigations are carried out to determine the temperature dependence of the local structural parameters of Cr3+ and Mn2+ ions doped into RAl3(BO3)4 (RAB, R = Y, Eu, Tm) crystals. The zero-field splitting (ZFS) parameters (ZFSPs) obtained from the spin Hamiltonian (SH) analysis of EMR (EPR) spectra serve for fine-tuning the theoretically predicted ZFSPs obtained using the semi-empirical superposition model (SPM). The SPM analysis enables to determine the local structure changes around Cr3+ and Mn2+ centers in RAB crystals and explain the observed temperature dependence of the ZFSPs. The local monoclinic C2 site symmetry of all Al sites in YAB necessitates consideration of one non-zero monoclinic ZFSP (in the Stevens notation, b21) for Cr3+ ions. However, the experimental second-rank ZFSPs (D =b20 , E = 1 / 3b22) were expressed in a nominal principal axis system. To provide additional insight into low symmetry aspects, the distortions (ligand's distances ΔRi and angular distortions Δθi) have been varied while preserving monoclinic site symmetry, in such way as to obtain the calculated values (D, E) close to the experimental ones, while keeping b21 close to zero. This procedure yields good matching of the calculated ZFSPs and the experimental ones, and enables determination of the corresponding local distortions. The present results may be useful in future studies aimed at technological applications of the Huntite-type borates with the formula RM3(BO3)4. The model parameters determined here may be utilized for ZFSP calculations for Cr3+ and Mn2+ ions at octahedral sites in single-molecule magnets and single-chain magnets.
A new UK fission yield evaluation UKFY3.7
NASA Astrophysics Data System (ADS)
Mills, Robert William
2017-09-01
The JEFF neutron induced and spontaneous fission product yield evaluation is currently unchanged from JEFF-3.1.1, also known by its UK designation UKFY3.6A. It is based upon experimental data combined with empirically fitted mass, charge and isomeric state models which are then adjusted within the experimental and model uncertainties to conform to the physical constraints of the fission process. A new evaluation has been prepared for JEFF, called UKFY3.7, that incorporates new experimental data and replaces the current empirical models (multi-Gaussian fits of mass distribution and Wahl Zp model for charge distribution combined with parameter extrapolation), with predictions from GEF. The GEF model has the advantage that one set of parameters allows the prediction of many different fissioning nuclides at different excitation energies unlike previous models where each fissioning nuclide at a specific excitation energy had to be fitted individually to the relevant experimental data. The new UKFY3.7 evaluation, submitted for testing as part of JEFF-3.3, is described alongside initial results of testing. In addition, initial ideas for future developments allowing inclusion of new measurements types and changing from any neutron spectrum type to true neutron energy dependence are discussed. Also, a method is proposed to propagate uncertainties of fission product yields based upon the experimental data that underlies the fission yield evaluation. The covariance terms being determined from the evaluated cumulative and independent yields combined with the experimental uncertainties on the cumulative yield measurements.
Testing the Goodwin growth-cycle macroeconomic dynamics in Brazil
NASA Astrophysics Data System (ADS)
Moura, N. J.; Ribeiro, Marcelo B.
2013-05-01
This paper discusses the empirical validity of Goodwin’s (1967) macroeconomic model of growth with cycles by assuming that the individual income distribution of the Brazilian society is described by the Gompertz-Pareto distribution (GPD). This is formed by the combination of the Gompertz curve, representing the overwhelming majority of the population (˜99%), with the Pareto power law, representing the tiny richest part (˜1%). In line with Goodwin’s original model, we identify the Gompertzian part with the workers and the Paretian component with the class of capitalists. Since the GPD parameters are obtained for each year and the Goodwin macroeconomics is a time evolving model, we use previously determined, and further extended here, Brazilian GPD parameters, as well as unemployment data, to study the time evolution of these quantities in Brazil from 1981 to 2009 by means of the Goodwin dynamics. This is done in the original Goodwin model and an extension advanced by Desai et al. (2006). As far as Brazilian data is concerned, our results show partial qualitative and quantitative agreement with both models in the studied time period, although the original one provides better data fit. Nevertheless, both models fall short of a good empirical agreement as they predict single center cycles which were not found in the data. We discuss the specific points where the Goodwin dynamics must be improved in order to provide a more realistic representation of the dynamics of economic systems.
NASA Astrophysics Data System (ADS)
Takahashi, Takuya; Sugiura, Junnnosuke; Nagayama, Kuniaki
2002-05-01
To investigate the role hydration plays in the electrostatic interactions of proteins, the time-averaged electrostatic potential of the B1 domain of protein G in an aqueous solution was calculated with full atomic molecular dynamics simulations that explicitly considers every atom (i.e., an all atom model). This all atom calculated potential was compared with the potential obtained from an electrostatic continuum model calculation. In both cases, the charge-screening effect was fairly well formulated with an effective relative dielectric constant which increased linearly with increasing charge-charge distance. This simulated linear dependence agrees with the experimentally determined linear relation proposed by Pickersgill. Cut-off approximations for Coulomb interactions failed to reproduce this linear relation. Correlation between the all atom model and the continuum models was found to be better than the respective correlation calculated for linear fitting to the two models. This confirms that the continuum model is better at treating the complicated shapes of protein conformations than the simple linear fitting empirical model. We have tried a sigmoid fitting empirical model in addition to the linear one. When weights of all data were treated equally, the sigmoid model, which requires two fitting parameters, fits results of both the all atom and the continuum models less accurately than the linear model which requires only one fitting parameter. When potential values are chosen as weighting factors, the fitting error of the sigmoid model became smaller, and the slope of both linear fitting curves became smaller. This suggests the screening effect of an aqueous medium within a short range, where potential values are relatively large, is smaller than that expected from the linear fitting curve whose slope is almost 4. To investigate the linear increase of the effective relative dielectric constant, the Poisson equation of a low-dielectric sphere in a high-dielectric medium was solved and charges distributed near the molecular surface were indicated as leading to the apparent linearity.
Semertzidou, P; Piliposian, G T; Appleby, P G
2016-08-01
The residence time of (210)Pb created in the atmosphere by the decay of gaseous (222)Rn is a key parameter controlling its distribution and fallout onto the landscape. These in turn are key parameters governing the use of this natural radionuclide for dating and interpreting environmental records stored in natural archives such as lake sediments. One of the principal methods for estimating the atmospheric residence time is through measurements of the activities of the daughter radionuclides (210)Bi and (210)Po, and in particular the (210)Bi/(210)Pb and (210)Po/(210)Pb activity ratios. Calculations used in early empirical studies assumed that these were governed by a simple series of equilibrium equations. This approach does however have two failings; it takes no account of the effect of global circulation on spatial variations in the activity ratios, and no allowance is made for the impact of transport processes across the tropopause. This paper presents a simple model for calculating the distributions of (210)Pb, (210)Bi and (210)Po at northern mid-latitudes (30°-65°N), a region containing almost all the available empirical data. By comparing modelled (210)Bi/(210)Pb activity ratios with empirical data a best estimate for the tropospheric residence time of around 10 days is obtained. This is significantly longer than earlier estimates of between 4 and 7 days. The process whereby (210)Pb is transported into the stratosphere when tropospheric concentrations are high and returned from it when they are low, significantly increases the effective residence time in the atmosphere as a whole. The effect of this is to significantly enhance the long range transport of (210)Pb from its source locations. The impact is illustrated by calculations showing the distribution of (210)Pb fallout versus longitude at northern mid-latitudes. Copyright © 2016 Elsevier Ltd. All rights reserved.
An Empirical Spectroscopic Database for Acetylene in the Regions of 5850-9415 CM^{-1}
NASA Astrophysics Data System (ADS)
Campargue, Alain; Lyulin, Oleg
2017-06-01
Six studies have been recently devoted to a systematic analysis of the high-resolution near infrared absorption spectrum of acetylene recorded by Cavity Ring Down spectroscopy (CRDS) in Grenoble and by Fourier-transform spectroscopy (FTS) in Brussels and Hefei. On the basis of these works, in the present contribution, we construct an empirical database for acetylene in the 5850 - 9415 \\wn region excluding the 6341-7000 \\wn interval corresponding to the very strong νb{1}+ νb{3} manifold. The database gathers and extends information included in our CRDS and FTS studies. In particular, the intensities of about 1700 lines measured by CRDS in the 7244-7920 \\wn are reported for the first time together with those of several bands of ^{12}C^{13}CH_{2} present in natural isotopic abundance in the acetylene sample. The Herman-Wallis coefficients of most of the bands are derived from a fit of the measured intensity values. A recommended line list is provided with positions calculated using empirical spectroscopic parameters of the lower and upper energy vibrational levels and intensities calculated using the derived Herman-Wallis coefficients. This approach allows completing the experimental list by adding missing lines and improving poorly determined positions and intensities. As a result the constructed line list includes a total of 10973 lines belonging to 146 bands of ^{12}C_{2}H_{2} and 29 bands of ^{12}C^{13}CH_{2}. For comparison the HITRAN2012 database in the same region includes 869 lines of 14 bands, all belonging to ^{12}C_{2}H_{2}. Our weakest lines have an intensity on the order of 10^{-29} cm/molecule,about three orders of magnitude smaller than the HITRAN intensity cut off. Line profile parameters are added to the line list which is provided in HITRAN format. The comparison to the HITRAN2012 line list or to results obtained using the global effective operator approach is discussed in terms of completeness and accuracy.
Labyrinth Seal Flutter Analysis and Test Validation in Support of Robust Rocket Engine Design
NASA Technical Reports Server (NTRS)
El-Aini, Yehia; Park, John; Frady, Greg; Nesman, Tom
2010-01-01
High energy-density turbomachines, like the SSME turbopumps, utilize labyrinth seals, also referred to as knife-edge seals, to control leakage flow. The pressure drop for such seals is order of magnitude higher than comparable jet engine seals. This is aggravated by the requirement of tight clearances resulting in possible unfavorable fluid-structure interaction of the seal system (seal flutter). To demonstrate these characteristics, a benchmark case of a High Pressure Oxygen Turbopump (HPOTP) outlet Labyrinth seal was studied in detail. First, an analytical assessment of the seal stability was conducted using a Pratt & Whitney legacy seal flutter code. Sensitivity parameters including pressure drop, rotor-to-stator running clearances and cavity volumes were examined and modeling strategies established. Second, a concurrent experimental investigation was undertaken to validate the stability of the seal at the equivalent operating conditions of the pump. Actual pump hardware was used to construct the test rig, also referred to as the (Flutter Rig). The flutter rig did not include rotational effects or temperature. However, the use of Hydrogen gas at high inlet pressure provided good representation of the critical parameters affecting flutter especially the speed of sound. The flutter code predictions showed consistent trends in good agreement with the experimental data. The rig test program produced a stability threshold empirical parameter that separated operation with and without flutter. This empirical parameter was used to establish the seal build clearances to avoid flutter while providing the required cooling flow metering. The calibrated flutter code along with the empirical flutter parameter was used to redesign the baseline seal resulting in a flutter-free robust configuration. Provisions for incorporation of mechanical damping devices were introduced in the redesigned seal to ensure added robustness
Development of a new family of normalized modulus reduction and material damping curves
NASA Astrophysics Data System (ADS)
Darendeli, Mehmet Baris
2001-12-01
As part of various research projects [including the SRS (Savannah River Site) Project AA891070, EPRI (Electric Power Research Institute) Project 3302, and ROSRINE (Resolution of Site Response Issues from the Northridge Earthquake) Project], numerous geotechnical sites were drilled and sampled. Intact soil samples over a depth range of several hundred meters were recovered from 20 of these sites. These soil samples were tested in the laboratory at The University of Texas at Austin (UTA) to characterize the materials dynamically. The presence of a database accumulated from testing these intact specimens motivated a re-evaluation of empirical curves employed in the state of practice. The weaknesses of empirical curves reported in the literature were identified and the necessity of developing an improved set of empirical curves was recognized. This study focused on developing the empirical framework that can be used to generate normalized modulus reduction and material damping curves. This framework is composed of simple equations, which incorporate the key parameters that control nonlinear soil behavior. The data collected over the past decade at The University of Texas at Austin are statistically analyzed using First-order, Second-moment Bayesian Method (FSBM). The effects of various parameters (such as confining pressure and soil plasticity) on dynamic soil properties are evaluated and quantified within this framework. One of the most important aspects of this study is estimating not only the mean values of the empirical curves but also estimating the uncertainty associated with these values. This study provides the opportunity to handle uncertainty in the empirical estimates of dynamic soil properties within the probabilistic seismic hazard analysis framework. A refinement in site-specific probabilistic seismic hazard assessment is expected to materialize in the near future by incorporating the results of this study into state of practice.
Tomczewski, Andrzej
2014-01-01
The paper presents the issues of a wind turbine-flywheel energy storage system (WT-FESS) operation under real conditions. Stochastic changes of wind energy in time cause significant fluctuations of the system output power and as a result have a negative impact on the quality of the generated electrical energy. In the author's opinion it is possible to reduce the aforementioned effects by using an energy storage of an appropriate type and capacity. It was assumed that based on the technical parameters of a wind turbine-energy storage system and its geographical location one can determine the boundary capacity of the storage, which helps prevent power cuts to the grid at the assumed probability. Flywheel energy storage was selected due to its characteristics and technical parameters. The storage capacity was determined based on an empirical relationship using the results of the proposed statistical and energetic analysis of the measured wind velocity courses. A detailed algorithm of the WT-FESS with the power grid system was developed, eliminating short-term breaks in the turbine operation and periods when the wind turbine power was below the assumed level.
2014-01-01
The paper presents the issues of a wind turbine-flywheel energy storage system (WT-FESS) operation under real conditions. Stochastic changes of wind energy in time cause significant fluctuations of the system output power and as a result have a negative impact on the quality of the generated electrical energy. In the author's opinion it is possible to reduce the aforementioned effects by using an energy storage of an appropriate type and capacity. It was assumed that based on the technical parameters of a wind turbine-energy storage system and its geographical location one can determine the boundary capacity of the storage, which helps prevent power cuts to the grid at the assumed probability. Flywheel energy storage was selected due to its characteristics and technical parameters. The storage capacity was determined based on an empirical relationship using the results of the proposed statistical and energetic analysis of the measured wind velocity courses. A detailed algorithm of the WT-FESS with the power grid system was developed, eliminating short-term breaks in the turbine operation and periods when the wind turbine power was below the assumed level. PMID:25215326
Thermal conductivity of silicon using reverse non-equilibrium molecular dynamics
NASA Astrophysics Data System (ADS)
El-Genk, Mohamed S.; Talaat, Khaled; Cowen, Benjamin J.
2018-05-01
Simulations are performed using the reverse non-equilibrium molecular dynamics (rNEMD) method and the Stillinger-Weber (SW) potential to determine the input parameters for achieving ±1% convergence of the calculated thermal conductivity of silicon. These parameters are then used to investigate the effects of the interatomic potentials of SW, Tersoff II, Environment Dependent Interatomic Potential (EDIP), Second Nearest Neighbor, Modified Embedded-Atom Method (MEAM), and Highly Optimized Empirical Potential MEAM on determining the bulk thermal conductivity as a function of temperature (400-1000 K). At temperatures > 400 K, data collection and swap periods of 15 ns and 150 fs, system size ≥6 × 6 UC2 and system lengths ≥192 UC are adequate for ±1% convergence with all potentials, regardless of the time step size (0.1-0.5 fs). This is also true at 400 K, except for the SW potential, which requires a data collection period ≥30 ns. The calculated bulk thermal conductivities using the rNEMD method and the EDIP potential are close to, but lower than experimental values. The 10% difference at 400 K increases gradually to 20% at 1000 K.
Cano-García, Angel E.; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe
2012-01-01
In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative to measures depth only using the information extracted from pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining the depth information. PMID:22778608
Cano-García, Angel E; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe
2012-01-01
In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative to measures depth only using the information extracted from pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining the depth information.
Calculation and Identification of the Aerodynamic Parameters for Small-Scaled Fixed-Wing UAVs.
Shen, Jieliang; Su, Yan; Liang, Qing; Zhu, Xinhua
2018-01-13
The establishment of the Aircraft Dynamic Model(ADM) constitutes the prerequisite for the design of the navigation and control system, but the aerodynamic parameters in the model could not be readily obtained especially for small-scaled fixed-wing UAVs. In this paper, the procedure of computing the aerodynamic parameters is developed. All the longitudinal and lateral aerodynamic derivatives are firstly calculated through semi-empirical method based on the aerodynamics, rather than the wind tunnel tests or fluid dynamics software analysis. Secondly, the residuals of each derivative are proposed to be identified or estimated further via Extended Kalman Filter(EKF), with the observations of the attitude and velocity from the airborne integrated navigation system. Meanwhile, the observability of the targeted parameters is analyzed and strengthened through multiple maneuvers. Based on a small-scaled fixed-wing aircraft driven by propeller, the airborne sensors are chosen and the model of the actuators are constructed. Then, real flight tests are implemented to verify the calculation and identification process. Test results tell the rationality of the semi-empirical method and show the improvement of accuracy of ADM after the compensation of the parameters.
Calculation and Identification of the Aerodynamic Parameters for Small-Scaled Fixed-Wing UAVs
Shen, Jieliang; Su, Yan; Liang, Qing; Zhu, Xinhua
2018-01-01
The establishment of the Aircraft Dynamic Model (ADM) constitutes the prerequisite for the design of the navigation and control system, but the aerodynamic parameters in the model could not be readily obtained especially for small-scaled fixed-wing UAVs. In this paper, the procedure of computing the aerodynamic parameters is developed. All the longitudinal and lateral aerodynamic derivatives are firstly calculated through semi-empirical method based on the aerodynamics, rather than the wind tunnel tests or fluid dynamics software analysis. Secondly, the residuals of each derivative are proposed to be identified or estimated further via Extended Kalman Filter (EKF), with the observations of the attitude and velocity from the airborne integrated navigation system. Meanwhile, the observability of the targeted parameters is analyzed and strengthened through multiple maneuvers. Based on a small-scaled fixed-wing aircraft driven by propeller, the airborne sensors are chosen and the model of the actuators are constructed. Then, real flight tests are implemented to verify the calculation and identification process. Test results tell the rationality of the semi-empirical method and show the improvement of accuracy of ADM after the compensation of the parameters. PMID:29342856
Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald
2018-10-01
In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, the sensitivity of the influence of the probability distribution function shape (empirical distribution functions and fitted lognormal probability functions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter as well as the role of interacting parameters are studied. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for input parameters suggest that a lognormal approximation is possibly not always an adequate representation of the aleatory uncertainty of a radioecological parameter. Concerning the comparison of the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the latter here is described using uncertain moments (mean, variance) while the distribution itself represents the aleatory uncertainty of the parameter. From the results obtained, the solution space of second-order Monte Carlo simulation is much larger than that from first-order Monte Carlo simulation. Therefore, the influence of epistemic uncertainty of a radioecological parameter on the output result is much larger than that one caused by its aleatory uncertainty. Parameter interactions are only of significant influence in the upper percentiles of the distribution of results as well as only in the region of the upper percentiles of the model parameters. Copyright © 2018 Elsevier Ltd. All rights reserved.
Modeling multivariate time series on manifolds with skew radial basis functions.
Jamshidi, Arta A; Kirby, Michael J
2011-01-01
We present an approach for constructing nonlinear empirical mappings from high-dimensional domains to multivariate ranges. We employ radial basis functions and skew radial basis functions for constructing a model using data that are potentially scattered or sparse. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by a statistical hypothesis test that accounts for correlation in the multivariate range variables. The test is applied on training and validation data and reveals nonstatistical or geometric structure when it fails. At each step, the added function is fit to data contained in a spatiotemporally defined local region to determine the parameters--in particular, the scale of the local model. The scale of the function is determined by the zero crossings of the autocorrelation function of the residuals. The model parameters and the number of basis functions are determined automatically from the given data, and there is no need to initialize any ad hoc parameters save for the selection of the skew radial basis functions. Compactly supported skew radial basis functions are employed to improve model accuracy, order, and convergence properties. The extension of the algorithm to higher-dimensional ranges produces reduced-order models by exploiting the existence of correlation in the range variable data. Structure is tested not just in a single time series but between all pairs of time series. We illustrate the new methodologies using several illustrative problems, including modeling data on manifolds and the prediction of chaotic time series.
NASA Astrophysics Data System (ADS)
Bochet, Esther; García-Fayos, Patricio; José Molina, Maria; Moreno de las Heras, Mariano; Espigares, Tíscar; Nicolau, Jose Manuel; Monleon, Vicente
2017-04-01
Theoretical models predict that drylands are particularly prone to suffer critical transitions with abrupt non-linear changes in their structure and functions as a result of the existing complex interactions between climatic fluctuations and human disturbances. However, so far, few studies provide empirical data to validate these models. We aim at determining how holm oak (Quercus ilex) woodlands undergo changes in their functions in response to human disturbance along an aridity gradient (from semi-arid to sub-humid conditions), in eastern Spain. For that purpose, we used (a) remote-sensing estimations of precipitation-use-efficiency (PUE) from enhanced vegetation index (EVI) observations performed in 231x231 m plots of the Moderate Resolution Imaging Spectroradiometer (MODIS); (b) biological and chemical soil parameter determinations (extracellular soil enzyme activity, soil respiration, nutrient cycling processes) from soil sampled in the same plots; (c) vegetation parameter determinations (ratio of functional groups) from vegetation surveys performed in the same plots. We analyzed and compared the shape of the functional change (in terms of PUE and soil and vegetation parameters) in response to human disturbance intensity for our holm oak sites along the aridity gradient. Overall, our results evidenced important differences in the shape of the functional change in response to human disturbance between climatic conditions. Semi-arid areas experienced a more accelerated non-linear decrease with an increasing disturbance intensity than sub-humid ones. The proportion of functional groups (herbaceous vs. woody cover) played a relevant role in the shape of the functional response of the holm oak sites to human disturbance.
Seismic‐wave attenuation determined from tectonic tremor in multiple subduction zones
Yabe, Suguru; Baltay, Annemarie S.; Ide, Satoshi; Beroza, Gregory C.
2014-01-01
Tectonic tremor provides a new source of observations that can be used to constrain the seismic attenuation parameter for ground‐motion prediction and hazard mapping. Traditionally, recorded earthquakes of magnitude ∼3–8 are used to develop ground‐motion prediction equations; however, typical earthquake records may be sparse in areas of high hazard. In this study, we constrain the distance decay of seismic waves using measurements of the amplitude decay of tectonic tremor, which is plentiful in some regions. Tectonic tremor occurs in the frequency band of interest for ground‐motion prediction (i.e., ∼2–8 Hz) and is located on the subducting plate interface, at the lower boundary of where future large earthquakes are expected. We empirically fit the distance decay of peak ground velocity from tremor to determine the attenuation parameter in four subduction zones: Nankai, Japan; Cascadia, United States–Canada; Jalisco, Mexico; and southern Chile. With the large amount of data available from tremor, we show that in the upper plate, the lower crust is less attenuating than the upper crust. We apply the same analysis to intraslab events in Nankai and show the possibility that waves traveling from deeper intraslab events experience more attenuation than those from the shallower tremor due to ray paths that pass through the subducting and highly attenuating oceanic crust. This suggests that high pore‐fluid pressure is present in the tremor source region. These differences imply that the attenuation parameter determined from intraslab earthquakes may underestimate ground motion for future large earthquakes on the plate interface.
A parametric approach to irregular fatigue prediction
NASA Technical Reports Server (NTRS)
Erismann, T. H.
1972-01-01
A parametric approach to irregular fatigue protection is presented. The method proposed consists of two parts: empirical determination of certain characteristics of a material by means of a relatively small number of well-defined standard tests, and arithmetical application of the results obtained to arbitrary loading histories. The following groups of parameters are thus taken into account: (1) the variations of the mean stress, (2) the interaction of these variations and the superposed oscillating stresses, (3) the spectrum of the oscillating-stress amplitudes, and (4) the sequence of the oscillating-stress amplitudes. It is pointed out that only experimental verification can throw sufficient light upon possibilities and limitations of this (or any other) prediction method.
Toward an improvement over Kerner-Klenov-Wolf three-phase cellular automaton model.
Jiang, Rui; Wu, Qing-Song
2005-12-01
The Kerner-Klenov-Wolf (KKW) three-phase cellular automaton model has a nonrealistic velocity of the upstream front in widening synchronized flow pattern which separates synchronized flow downstream and free flow upstream. This paper presents an improved model, which is a combination of the initial KKW model and a modified Nagel-Schreckenberg (MNS) model. In the improved KKW model, a parameter is introduced to determine the vehicle moves according to the MNS model or the initial KKW model. The improved KKW model can not only simulate the empirical observations as the initial KKW model, but also overcome the nonrealistic velocity problem. The mechanism of the improvement is discussed.
Influence of Context on Item Parameters in Forced-Choice Personality Assessments
ERIC Educational Resources Information Center
Lin, Yin; Brown, Anna
2017-01-01
A fundamental assumption in computerized adaptive testing is that item parameters are invariant with respect to context--items surrounding the administered item. This assumption, however, may not hold in forced-choice (FC) assessments, where explicit comparisons are made between items included in the same block. We empirically examined the…
Psychological Stress and the Human Immune System: A Meta-Analytic Study of 30 Years of Inquiry
ERIC Educational Resources Information Center
Segerstrom, Suzanne C.; Miller, Gregory E.
2004-01-01
The present report meta-analyzes more than 300 empirical articles describing a relationship between psychological stress and parameters of the immune system in human participants. Acute stressors (lasting minutes) were associated with potentially adaptive upregulation of some parameters of natural immunity and downregulation of some functions of…
USDA-ARS?s Scientific Manuscript database
Empirical and mechanistic modeling indicate that aerially transmitted pathogens follow a power law, resulting in dispersive epidemic waves. The spread parameter (b) of the power law model, which defines the distance travelled by the epidemic wave front, has been found to be approximately 2 for sever...
Some Empirical Evidence for Latent Trait Model Selection.
ERIC Educational Resources Information Center
Hutten, Leah R.
The results of this study suggest that for purposes of estimating ability by latent trait methods, the Rasch model compares favorably with the three-parameter logistic model. Using estimated parameters to make predictions about 25 actual number-correct score distributions with samples of 1,000 cases each, those predicted by the Rasch model fit the…
Dynamic Bayesian wavelet transform: New methodology for extraction of repetitive transients
NASA Astrophysics Data System (ADS)
Wang, Dong; Tsui, Kwok-Leung
2017-05-01
Thanks to some recent research works, dynamic Bayesian wavelet transform as new methodology for extraction of repetitive transients is proposed in this short communication to reveal fault signatures hidden in rotating machine. The main idea of the dynamic Bayesian wavelet transform is to iteratively estimate posterior parameters of wavelet transform via artificial observations and dynamic Bayesian inference. First, a prior wavelet parameter distribution can be established by one of many fast detection algorithms, such as the fast kurtogram, the improved kurtogram, the enhanced kurtogram, the sparsogram, the infogram, continuous wavelet transform, discrete wavelet transform, wavelet packets, multiwavelets, empirical wavelet transform, empirical mode decomposition, local mean decomposition, etc.. Second, artificial observations can be constructed based on one of many metrics, such as kurtosis, the sparsity measurement, entropy, approximate entropy, the smoothness index, a synthesized criterion, etc., which are able to quantify repetitive transients. Finally, given artificial observations, the prior wavelet parameter distribution can be posteriorly updated over iterations by using dynamic Bayesian inference. More importantly, the proposed new methodology can be extended to establish the optimal parameters required by many other signal processing methods for extraction of repetitive transients.
Study of a generalized birks formula for the scintillation response of a CaMoO4 crystal
NASA Astrophysics Data System (ADS)
Lee, J. Y.; Kim, H. J.; Kang, Sang Jun; Lee, M. H.
2017-12-01
We have investigated the scintillation characteristics of CaMoO4 (CMO) crystals by using a gamma source and various internal alpha sources. A 137Cs source with 662-keV gamma-rays was used for the gamma-quanta light yield calibration. Internal radioactive contaminations provided alpha particles with different energies from 5.41 to 7.88 MeV. We developed a C++ program based on the ROOT package for the fitting of parameters in a generalized Birks semi-empirical formula by combining the experimental and the simulation data. Results for the fitted Birks parameters are k b1 = 3.3 × 10 -3 (g/MeVcm2) for the 1st parameter and k b2 = 7.9 × 10 -5 (g/MeVcm2)2 for the 2nd parameter. The χ2/n.d.f. (Number of Degree of Freedom) is calculated as 0.1/4. We were able to estimate the 238U and 234U contaminations in a CMO crystal by using the generalized Birks semi-empirical formula.
NASA Astrophysics Data System (ADS)
Sethuramalingam, Prabhu; Vinayagam, Babu Kupusamy
2016-07-01
Carbon nanotube mixed grinding wheel is used in the grinding process to analyze the surface characteristics of AISI D2 tool steel material. Till now no work has been carried out using carbon nanotube based grinding wheel. Carbon nanotube based grinding wheel has excellent thermal conductivity and good mechanical properties which are used to improve the surface finish of the workpiece. In the present study, the multi response optimization of process parameters like surface roughness and metal removal rate of grinding process of single wall carbon nanotube (CNT) in mixed cutting fluids is undertaken using orthogonal array with grey relational analysis. Experiments are performed with designated grinding conditions obtained using the L9 orthogonal array. Based on the results of the grey relational analysis, a set of optimum grinding parameters is obtained. Using the analysis of variance approach the significant machining parameters are found. Empirical model for the prediction of output parameters has been developed using regression analysis and the results are compared empirically, for conditions of with and without CNT grinding wheel in grinding process.
NASA Technical Reports Server (NTRS)
Wilson, R. M.; Reichmann, E. J.; Teuber, D. L.
1984-01-01
An empirical method is developed to predict certain parameters of future solar activity cycles. Sunspot cycle statistics are examined, and curve fitting and linear regression analysis techniques are utilized.
VBA: A Probabilistic Treatment of Nonlinear Models for Neurobiological and Behavioural Data
Daunizeau, Jean; Adam, Vincent; Rigoux, Lionel
2014-01-01
This work is in line with an on-going effort tending toward a computational (quantitative and refutable) understanding of human neuro-cognitive processes. Many sophisticated models for behavioural and neurobiological data have flourished during the past decade. Most of these models are partly unspecified (i.e. they have unknown parameters) and nonlinear. This makes them difficult to peer with a formal statistical data analysis framework. In turn, this compromises the reproducibility of model-based empirical studies. This work exposes a software toolbox that provides generic, efficient and robust probabilistic solutions to the three problems of model-based analysis of empirical data: (i) data simulation, (ii) parameter estimation/model selection, and (iii) experimental design optimization. PMID:24465198
NASA Astrophysics Data System (ADS)
Maier, Andrea; Baur, Oliver
2016-03-01
We present results for Precise Orbit Determination (POD) of the Lunar Reconnaissance Orbiter (LRO) based on two-way Doppler range-rates over a time span of ~13 months (January 3, 2011 to February 9, 2012). Different orbital arc lengths and various sets of empirical parameters were tested to seek optimal parametrization. An overlap analysis covering three months of Doppler data shows that the most precise orbits are obtained using an arc length of 2.5 days and estimating arc-wise constant empirical accelerations in along track direction. The overlap analysis over the entire investigated time span of 13 months indicates an orbital precision of 13.79 m, 14.17 m, and 1.28 m in along track, cross track, and radial direction, respectively, with 21.32 m in total position. We compare our orbits to the official science orbits released by the US National Aeronautics and Space Administration (NASA). The differences amount to 9.50 m, 6.98 m, and 1.50 m in along track, cross track, and radial direction, respectively, as well as 12.71 m in total position. Based on the reconstructed LRO orbits, we estimated lunar gravity field coefficients up to spherical harmonic degree and order 60. The results are compared to gravity field solutions derived from data collected by other lunar missions.
Offermann, Lesa R; He, John Z; Mank, Nicholas J; Booth, William T; Chruszcz, Maksymilian
2014-03-01
The production of macromolecular crystals suitable for structural analysis is one of the most important and limiting steps in the structure determination process. Often, preliminary crystallization trials are performed using hundreds of empirically selected conditions. Carboxylic acids and/or their salts are one of the most popular components of these empirically derived crystallization conditions. Our findings indicate that almost 40 % of entries deposited to the Protein Data Bank (PDB) reporting crystallization conditions contain at least one carboxylic acid. In order to analyze the role of carboxylic acids in macromolecular crystallization, a large-scale analysis of the successful crystallization experiments reported to the PDB was performed. The PDB is currently the largest source of crystallization data, however it is not easily searchable. These complications are due to a combination of a free text format, which is used to capture information on the crystallization experiments, and the inconsistent naming of chemicals used in crystallization experiments. Despite these difficulties, our approach allows for the extraction of over 47,000 crystallization conditions from the PDB. Initially, the selected conditions were investigated to determine which carboxylic acids or their salts are most often present in crystallization solutions. From this group, selected sets of crystallization conditions were analyzed in detail, assessing parameters such as concentration, pH, and precipitant used. Our findings will lead to the design of new crystallization screens focused around carboxylic acids.
The V-band Empirical Mass-luminosity Relation for Main Sequence Stars
NASA Astrophysics Data System (ADS)
Xia, Fang; Fu, Yan-Ning
2010-07-01
Stellar mass is an indispensable parameter in the studies of stellar physics and stellar dynamics. On the one hand, the most reliable way to determine the stellar dynamical mass is via orbital determinations of binaries. On the other hand, however, most stellar masses have to be estimated by using the mass luminosity relation (MLR). Therefore, it is important to obtain the empirical MLR through fitting the data of stellar dynamical mass and luminosity. The effect of metallicity can make this relation disperse in the V-band, but studies show that this is mainly limited to the case when the stellar mass is less than 0.6M⊙ Recently, many relevant data have been accumulated for main sequence stars with larger masses, which make it possible to significantly improve the corresponding MLR. Using a fitting method which can reasonably assign weights to the observational data including two quantities with different dimensions, we obtain a V-band MLR based on the dynamical masses and luminosities of 203 main sequence stars. In comparison with the previous work, the improved MLR is statistically significant, and the relative error of mass estimation reaches about 5%. Therefore, our MLR is useful not only in the studies of statistical nature, but also in the studies of concrete stellar systems, such as the long-term dynamical study and the short-term positioning study of a specific multiple star system.
The V Band Empirical Mass-Luminosity Relation for Main Sequence Stars
NASA Astrophysics Data System (ADS)
Xia, F.; Fu, Y. N.
2010-01-01
Stellar mass is an indispensable parameter in the studies of stellar physics and stellar dynamics. On the one hand, the most reliable way to determine the stellar dynamical mass is via orbital determination of binaries. On the other hand, however, most stellar masses have to be estimated by using the mass-luminosity relation (MLR). Therefore, it is important to obtain the empirical MLR through fitting the data of stellar dynamical mass and luminosity. The effect of metallicity can make this relation disperse in the V-band, but studies show that this is mainly limited to the case when the stellar mass is less than 0.6M⊙. Recently, many relevant data have been accumulated for main sequence stars with larger mass, which make it possible to significantly improve the corresponding MLR. Using a fitting method which can reasonably assign weight to the observational data including two quantities with different dimensions, we obtain a V-band MLR based on the dynamical masses and luminosities of 203 main sequence stars. Compared with the previous work, the improved MLR is statistically significant, and the relative error of mass estimation reaches about 5%. Therefore, our MLR is useful not only in studies of statistical nature, but also in studies of concrete stellar systems, such as the long-term dynamical study and the short-term positioning study of a specific multiple star system.
Variability of rainfall over small areas
NASA Technical Reports Server (NTRS)
Runnels, R. C.
1983-01-01
A preliminary investigation was made to determine estimates of the number of raingauges needed in order to measure the variability of rainfall in time and space over small areas (approximately 40 sq miles). The literature on rainfall variability was examined and the types of empirical relationships used to relate rainfall variations to meteorological and catchment-area characteristics were considered. Relations between the coefficient of variation and areal-mean rainfall and area have been used by several investigators. These parameters seemed reasonable ones to use in any future study of rainfall variations. From a knowledge of an appropriate coefficient of variation (determined by the above-mentioned relations) the number rain gauges needed for the precise determination of areal-mean rainfall may be calculated by statistical estimation theory. The number gauges needed to measure the coefficient of variation over a 40 sq miles area, with varying degrees of error, was found to range from 264 (10% error, mean precipitation = 0.1 in) to about 2 (100% error, mean precipitation = 0.1 in).
Probability bounds analysis for nonlinear population ecology models.
Enszer, Joshua A; Andrei Măceș, D; Stadtherr, Mark A
2015-09-01
Mathematical models in population ecology often involve parameters that are empirically determined and inherently uncertain, with probability distributions for the uncertainties not known precisely. Propagating such imprecise uncertainties rigorously through a model to determine their effect on model outputs can be a challenging problem. We illustrate here a method for the direct propagation of uncertainties represented by probability bounds though nonlinear, continuous-time, dynamic models in population ecology. This makes it possible to determine rigorous bounds on the probability that some specified outcome for a population is achieved, which can be a core problem in ecosystem modeling for risk assessment and management. Results can be obtained at a computational cost that is considerably less than that required by statistical sampling methods such as Monte Carlo analysis. The method is demonstrated using three example systems, with focus on a model of an experimental aquatic food web subject to the effects of contamination by ionic liquids, a new class of potentially important industrial chemicals. Copyright © 2015. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Pujos, Cyril; Regnier, Nicolas; Mousseau, Pierre; Defaye, Guy; Jarny, Yvon
2007-05-01
Simulation quality is determined by the knowledge of the parameters of the model. Yet the rheological models for polymer are often not very accurate, since the viscosity measurements are made under approximations as homogeneous temperature and empirical corrections as Bagley one. Furthermore rheological behaviors are often traduced by mathematical laws as the Cross or the Carreau-Yasuda ones, whose parameters are fitted from viscosity values, obtained with corrected experimental data, and not appropriate for each polymer. To correct these defaults, a table-like rheological model is proposed. This choice makes easier the estimation of model parameters, since each parameter has the same order of magnitude. As the mathematical shape of the model is not imposed, the estimation process is appropriate for each polymer. The proposed method consists in minimizing the quadratic norm of the difference between calculated variables and measured data. In this study an extrusion die is simulated, in order to provide us temperature along the extrusion channel, pressure and flow references. These data allow to characterize thermal transfers and flow phenomena, in which the viscosity is implied. Furthermore the different natures of data allow to estimate viscosity for a large range of shear rates. The estimated rheological model improves the agreement between measurements and simulation: for numerical cases, the error on the flow becomes less than 0.1% for non-Newtonian rheology. This method couples measurements and simulation, constitutes a very accurate mean of rheology determination, and allows to improve the prediction abilities of the model.
Neuro-genetic system for optimization of GMI samples sensitivity.
Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E
2016-03-01
Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample, when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms. So, the search for the set of parameters that optimizes the samples sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Cheong, Chin Wen
2008-02-01
This article investigated the influences of structural breaks on the fractionally integrated time-varying volatility model in the Malaysian stock markets which included the Kuala Lumpur composite index and four major sectoral indices. A fractionally integrated time-varying volatility model combined with sudden changes is developed to study the possibility of structural change in the empirical data sets. Our empirical results showed substantial reduction in fractional differencing parameters after the inclusion of structural change during the Asian financial and currency crises. Moreover, the fractionally integrated model with sudden change in volatility performed better in the estimation and specification evaluations.
Galindo-Romero, Marta; Lippert, Tristan; Gavrilov, Alexander
2015-12-01
This paper presents an empirical linear equation to predict peak pressure level of anthropogenic impulsive signals based on its correlation with the sound exposure level. The regression coefficients are shown to be weakly dependent on the environmental characteristics but governed by the source type and parameters. The equation can be applied to values of the sound exposure level predicted with a numerical model, which provides a significant improvement in the prediction of the peak pressure level. Part I presents the analysis for airgun arrays signals, and Part II considers the application of the empirical equation to offshore impact piling noise.
NASA Astrophysics Data System (ADS)
Mia, Mozammel; Al Bashir, Mahmood; Dhar, Nikhil Ranjan
2016-10-01
Hard turning is increasingly employed in machining, lately, to replace time-consuming conventional turning followed by grinding process. An excessive amount of tool wear in hard turning is one of the main hurdles to be overcome. Many researchers have developed tool wear model, but most of them developed it for a particular work-tool-environment combination. No aggregate model is developed that can be used to predict the amount of principal flank wear for specific machining time. An empirical model of principal flank wear (VB) has been developed for the different hardness of workpiece (HRC40, HRC48 and HRC56) while turning by coated carbide insert with different configurations (SNMM and SNMG) under both dry and high pressure coolant conditions. Unlike other developed model, this model includes the use of dummy variables along with the base empirical equation to entail the effect of any changes in the input conditions on the response. The base empirical equation for principal flank wear is formulated adopting the Exponential Associate Function using the experimental results. The coefficient of dummy variable reflects the shifting of the response from one set of machining condition to another set of machining condition which is determined by simple linear regression. The independent cutting parameters (speed, rate, depth of cut) are kept constant while formulating and analyzing this model. The developed model is validated with different sets of machining responses in turning hardened medium carbon steel by coated carbide inserts. For any particular set, the model can be used to predict the amount of principal flank wear for specific machining time. Since the predicted results exhibit good resemblance with experimental data and the average percentage error is <10 %, this model can be used to predict the principal flank wear for stated conditions.
Karimi, Leila; Ghassemi, Abbas
2016-07-01
Among the different technologies developed for desalination, the electrodialysis/electrodialysis reversal (ED/EDR) process is one of the most promising for treating brackish water with low salinity when there is high risk of scaling. Multiple researchers have investigated ED/EDR to optimize the process, determine the effects of operating parameters, and develop theoretical/empirical models. Previously published empirical/theoretical models have evaluated the effect of the hydraulic conditions of the ED/EDR on the limiting current density using dimensionless numbers. The reason for previous studies' emphasis on limiting current density is twofold: 1) to maximize ion removal, most ED/EDR systems are operated close to limiting current conditions if there is not a scaling potential in the concentrate chamber due to a high concentration of less-soluble salts; and 2) for modeling the ED/EDR system with dimensionless numbers, it is more accurate and convenient to use limiting current density, where the boundary layer's characteristics are known at constant electrical conditions. To improve knowledge of ED/EDR systems, ED/EDR models should be also developed for the Ohmic region, where operation reduces energy consumption, facilitates targeted ion removal, and prolongs membrane life compared to limiting current conditions. In this paper, theoretical/empirical models were developed for ED/EDR performance in a wide range of operating conditions. The presented ion removal and selectivity models were developed for the removal of monovalent ions and divalent ions utilizing the dominant dimensionless numbers obtained from laboratory scale electrodialysis experiments. At any system scale, these models can predict ED/EDR performance in terms of monovalent and divalent ion removal. Copyright © 2016 Elsevier Ltd. All rights reserved.
Box-wing model approach for solar radiation pressure modelling in a multi-GNSS scenario
NASA Astrophysics Data System (ADS)
Tobias, Guillermo; Jesús García, Adrián
2016-04-01
The solar radiation pressure force is the largest orbital perturbation after the gravitational effects and the major error source affecting GNSS satellites. A wide range of approaches have been developed over the years for the modelling of this non gravitational effect as part of the orbit determination process. These approaches are commonly divided into empirical, semi-analytical and analytical, where their main difference relies on the amount of knowledge of a-priori physical information about the properties of the satellites (materials and geometry) and their attitude. It has been shown in the past that the pre-launch analytical models fail to achieve the desired accuracy mainly due to difficulties in the extrapolation of the in-orbit optical and thermic properties, the perturbations in the nominal attitude law and the aging of the satellite's surfaces, whereas empirical models' accuracies strongly depend on the amount of tracking data used for deriving the models, and whose performances are reduced as the area to mass ratio of the GNSS satellites increases, as it happens for the upcoming constellations such as BeiDou and Galileo. This paper proposes to use basic box-wing model for Galileo complemented with empirical parameters, based on the limited available information about the Galileo satellite's geometry. The satellite is modelled as a box, representing the satellite bus, and a wing representing the solar panel. The performance of the model will be assessed for GPS, GLONASS and Galileo constellations. The results of the proposed approach have been analyzed over a one year period. In order to assess the results two different SRP models have been used. Firstly, the proposed box-wing model and secondly, the new CODE empirical model, ECOM2. The orbit performances of both models are assessed using Satellite Laser Ranging (SLR) measurements, together with the evaluation of the orbit prediction accuracy. This comparison shows the advantages and disadvantages of taking the physical interactions between satellite and solar radiation into account in an empirical model with respect to a pure empirical model.
NASA Astrophysics Data System (ADS)
Reyer, D.; Philipp, S. L.
2014-09-01
Information about geomechanical and physical rock properties, particularly uniaxial compressive strength (UCS), are needed for geomechanical model development and updating with logging-while-drilling methods to minimise costs and risks of the drilling process. The following parameters with importance at different stages of geothermal exploitation and drilling are presented for typical sedimentary and volcanic rocks of the Northwest German Basin (NWGB): physical (P wave velocities, porosity, and bulk and grain density) and geomechanical parameters (UCS, static Young's modulus, destruction work and indirect tensile strength both perpendicular and parallel to bedding) for 35 rock samples from quarries and 14 core samples of sandstones and carbonate rocks. With regression analyses (linear- and non-linear) empirical relations are developed to predict UCS values from all other parameters. Analyses focus on sedimentary rocks and were repeated separately for clastic rock samples or carbonate rock samples as well as for outcrop samples or core samples. Empirical relations have high statistical significance for Young's modulus, tensile strength and destruction work; for physical properties, there is a wider scatter of data and prediction of UCS is less precise. For most relations, properties of core samples plot within the scatter of outcrop samples and lie within the 90% prediction bands of developed regression functions. The results indicate the applicability of empirical relations that are based on outcrop data on questions related to drilling operations when the database contains a sufficient number of samples with varying rock properties. The presented equations may help to predict UCS values for sedimentary rocks at depth, and thus develop suitable geomechanical models for the adaptation of the drilling strategy on rock mechanical conditions in the NWGB.
Empirical Determination of Competence Areas to Computer Science Education
ERIC Educational Resources Information Center
Zendler, Andreas; Klaudt, Dieter; Seitz, Cornelia
2014-01-01
The authors discuss empirically determined competence areas to K-12 computer science education, emphasizing the cognitive level of competence. The results of a questionnaire with 120 professors of computer science serve as a database. By using multi-dimensional scaling and cluster analysis, four competence areas to computer science education…
Procedures for Empirical Determination of En-Route Criterion Levels.
ERIC Educational Resources Information Center
Moncrief, Michael H.
En-route Criterion Levels (ECLs) are defined as decision rules for predicting pupil readiness to advance through an instructional sequence. This study investigated the validity of present ELCs in an individualized mathematics program and tested procedures for empirically determining optimal ECLs. Retest scores and subsequent progress were…
Clusters of Colleges and Universities: An Empirically Determined System.
ERIC Educational Resources Information Center
Korb, Roslyn
A technique for classifying higher education institutions was developed in order to identify homogenous subsets of institutions and to compare an institution with its empirically determined peers. The majority of the data were obtained from a 4-year longitudinal file that merged the finance, faculty, enrollment, and institutional characteristics…
Fung, Monica; Kim, Jane; Marty, Francisco M; Schwarzinger, Michaël; Koo, Sophia
2015-01-01
Invasive fungal disease (IFD) causes significant morbidity and mortality in hematologic malignancy patients with high-risk febrile neutropenia (FN). These patients therefore often receive empirical antifungal therapy. Diagnostic test-guided pre-emptive antifungal therapy has been evaluated as an alternative treatment strategy in these patients. We conducted an electronic search for literature comparing empirical versus pre-emptive antifungal strategies in FN among adult hematologic malignancy patients. We systematically reviewed 9 studies, including randomized-controlled trials, cohort studies, and feasibility studies. Random and fixed-effect models were used to generate pooled relative risk estimates of IFD detection, IFD-related mortality, overall mortality, and rates and duration of antifungal therapy. Heterogeneity was measured via Cochran's Q test, I2 statistic, and between study τ2. Incorporating these parameters and direct costs of drugs and diagnostic testing, we constructed a comparative costing model for the two strategies. We conducted probabilistic sensitivity analysis on pooled estimates and one-way sensitivity analyses on other key parameters with uncertain estimates. Nine published studies met inclusion criteria. Compared to empirical antifungal therapy, pre-emptive strategies were associated with significantly lower antifungal exposure (RR 0.48, 95% CI 0.27-0.85) and duration without an increase in IFD-related mortality (RR 0.82, 95% CI 0.36-1.87) or overall mortality (RR 0.95, 95% CI 0.46-1.99). The pre-emptive strategy cost $324 less (95% credible interval -$291.88 to $418.65 pre-emptive compared to empirical) than the empirical approach per FN episode. However, the cost difference was influenced by relatively small changes in costs of antifungal therapy and diagnostic testing. Compared to empirical antifungal therapy, pre-emptive antifungal therapy in patients with high-risk FN may decrease antifungal use without increasing mortality. We demonstrate a state of economic equipoise between empirical and diagnostic-directed pre-emptive antifungal treatment strategies, influenced by small changes in cost of antifungal therapy and diagnostic testing, in the current literature. This work emphasizes the need for optimization of existing fungal diagnostic strategies, development of more efficient diagnostic strategies, and less toxic and more cost-effective antifungals.
NASA Technical Reports Server (NTRS)
Forbes, G. S.; Pielke, R. A.
1985-01-01
Various empirical and statistical weather-forecasting studies which utilize stratification by weather regime are described. Objective classification was used to determine weather regime in some studies. In other cases the weather pattern was determined on the basis of a parameter representing the physical and dynamical processes relevant to the anticipated mesoscale phenomena, such as low level moisture convergence and convective precipitation, or the Froude number and the occurrence of cold-air damming. For mesoscale phenomena already in existence, new forecasting techniques were developed. The use of cloud models in operational forecasting is discussed. Models to calculate the spatial scales of forcings and resultant response for mesoscale systems are presented. The use of these models to represent the climatologically most prevalent systems, and to perform case-by-case simulations is reviewed. Operational implementation of mesoscale data into weather forecasts, using both actual simulation output and method-output statistics is discussed.
BOND: A quantum of solace for nebular abundance determinations
NASA Astrophysics Data System (ADS)
Vale Asari, N.; Stasińska, G.; Morisset, C.; Cid Fernandes, R.
2017-11-01
The abundances of chemical elements other than hydrogen and helium in a galaxy are the fossil record of its star formation history. Empirical relations such as mass-metallicity relation are thus seen as guides for studies on the history and chemical evolution of galaxies. Those relations usually rely on nebular metallicities measured with strong-line methods, which assume that H II regions are a one- (or at most two-) parameter family where the oxygen abundance is the driving quantity. Nature is however much more complex than that, and metallicities from strong lines may be strongly biased. We have developed the method BOND (Bayesian Oxygen and Nitrogen abundance Determinations) to simultaneously derive oxygen and nitrogen abundances in giant H II regions by comparing strong and semi-strong observed emission lines to a carefully-defined, finely-meshed grid of photoionization models. Our code and results are public and available at http://bond.ufsc.br.
Oscillator strengths and branching fractions of 4d75p-4d75s Rh II transitions
NASA Astrophysics Data System (ADS)
Bouazza, Safa
2017-01-01
This work reports semi-empirical determination of oscillator strengths, transition probabilities and branching fractions for Rh II 4d75p-4d75s transitions in a wide wavelength range. The angular coefficients of the transition matrix, beforehand obtained in pure SL coupling with help of Racah algebra are transformed into intermediate coupling using eigenvector amplitudes of these two configuration levels determined for this purpose; The transition integral was treated as free parameter in the least squares fit to experimental oscillator strength (gf) values found in literature. The extracted value: <4d75s|r1|4d75p> =2.7426 ± 0.0007 is slightly smaller than that computed by means of ab-initio method. Subsequently to oscillator strength evaluations, transition probabilities and branching fractions were deduced and compared to those obtained experimentally or through another approach like pseudo-relativistic Hartree-Fock model including core-polarization effects.
NASA Astrophysics Data System (ADS)
da Silva, Wilton Pereira; Nunes, Jarderlany Sousa; Gomes, Josivanda Palmeira; de Araújo, Auryclennedy Calou; e Silva, Cleide M. D. P. S.
2018-05-01
Anthocyanin extraction kinetics was described for jambolan fruits. The spherical granules obtained were dried at 40 °C and the average radius of the sphere equivalent to the granules was determined. Solid-solvent ratio was fixed at 1:20 and temperature at 35 °C. A mixture of ethyl alcohol and hydrochloric acid (85:15) was used as solvent. Experiments were conducted with the following stirring frequencies: 0, 50, 100 and 150 rpm. Two diffusion models were used to describe the extraction process. The first one used an analytical solution, with boundary condition of the first kind. The second one used a numerical solution, with boundary condition of the third kind. The second model was the most adequate, and its results were used to determine empirical equations relating the process parameters with the stirring frequency, allowing to simulate new extraction kinetics.
Rapidly Measuring the Speed of Unconscious Learning: Amnesics Learn Quickly and Happy People Slowly
Dienes, Zoltan; Baddeley, Roland J.; Jansari, Ashok
2012-01-01
Background We introduce a method for quickly determining the rate of implicit learning. Methodology/Principal Findings The task involves making a binary prediction for a probabilistic sequence over 10 minutes; from this it is possible to determine the influence of events of a different number of trials in the past on the current decision. This profile directly reflects the learning rate parameter of a large class of learning algorithms including the delta and Rescorla-Wagner rules. To illustrate the use of the method, we compare a person with amnesia with normal controls and we compare people with induced happy and sad moods. Conclusions/Significance Learning on the task is likely both associative and implicit. We argue theoretically and demonstrate empirically that both amnesia and also transient negative moods can be associated with an especially large learning rate: People with amnesia can learn quickly and happy people slowly. PMID:22457759
NASA Astrophysics Data System (ADS)
Bozorgzadeh, Nezam; Yanagimura, Yoko; Harrison, John P.
2017-12-01
The Hoek-Brown empirical strength criterion for intact rock is widely used as the basis for estimating the strength of rock masses. Estimations of the intact rock H-B parameters, namely the empirical constant m and the uniaxial compressive strength σc, are commonly obtained by fitting the criterion to triaxial strength data sets of small sample size. This paper investigates how such small sample sizes affect the uncertainty associated with the H-B parameter estimations. We use Monte Carlo (MC) simulation to generate data sets of different sizes and different combinations of H-B parameters, and then investigate the uncertainty in H-B parameters estimated from these limited data sets. We show that the uncertainties depend not only on the level of variability but also on the particular combination of parameters being investigated. As particular combinations of H-B parameters can informally be considered to represent specific rock types, we discuss that as the minimum number of required samples depends on rock type it should correspond to some acceptable level of uncertainty in the estimations. Also, a comparison of the results from our analysis with actual rock strength data shows that the probability of obtaining reliable strength parameter estimations using small samples may be very low. We further discuss the impact of this on ongoing implementation of reliability-based design protocols and conclude with suggestions for improvements in this respect.
Wilczura-Wachnik, Hanna; Jónsdóttir, Svava Osk
2003-04-01
A method for calculating interaction parameters traditionally used in phase-equilibrium computations in low-molecular systems has been extended for the prediction of solvent activities of aromatic polymer solutions (polystyrene+methylcyclohexane). Using ethylbenzene as a model compound for the repeating unit of the polymer, the intermolecular interaction energies between the solvent molecule and the polymer were simulated. The semiempirical quantum chemical method AM1, and a method for sampling relevant internal orientations for a pair of molecules developed previously were used. Interaction energies are determined for three molecular pairs, the solvent and the model molecule, two solvent molecules and two model molecules, and used to calculated UNIQUAC interaction parameters, a(ij) and a(ji). Using these parameters, the solvent activities of the polystyrene 90,000 amu+methylcyclohexane system, and the total vapor pressures of the methylcyclohexane+ethylbenzene system were calculated. The latter system was compared to experimental data, giving qualitative agreement. Figure Solvent activities for the methylcylcohexane(1)+polystyrene(2) system at 316 K. Parameters aij (blue line) obtained with the AM1 method; parameters aij (pink line) from VLE data for the ethylbenzene+methylcyclohexane system. The abscissa is the polymer weight fraction defined as y2(x1)=(1mx1)M2/[x1M1+(1mx1)M2], where x1 is the solvent mole fraction and Mi are the molecular weights of the components.
Apparent cosmic acceleration from Type Ia supernovae
NASA Astrophysics Data System (ADS)
Dam, Lawrence H.; Heinesen, Asta; Wiltshire, David L.
2017-11-01
Parameters that quantify the acceleration of cosmic expansion are conventionally determined within the standard Friedmann-Lemaître-Robertson-Walker (FLRW) model, which fixes spatial curvature to be homogeneous. Generic averages of Einstein's equations in inhomogeneous cosmology lead to models with non-rigidly evolving average spatial curvature, and different parametrizations of apparent cosmic acceleration. The timescape cosmology is a viable example of such a model without dark energy. Using the largest available supernova data set, the JLA catalogue, we find that the timescape model fits the luminosity distance-redshift data with a likelihood that is statistically indistinguishable from the standard spatially flat Λ cold dark matter cosmology by Bayesian comparison. In the timescape case cosmic acceleration is non-zero but has a marginal amplitude, with best-fitting apparent deceleration parameter, q_{0}=-0.043^{+0.004}_{-0.000}. Systematic issues regarding standardization of supernova light curves are analysed. Cuts of data at the statistical homogeneity scale affect light-curve parameter fits independent of cosmology. A cosmological model dependence of empirical changes to the mean colour parameter is also found. Irrespective of which model ultimately fits better, we argue that as a competitive model with a non-FLRW expansion history, the timescape model may prove a useful diagnostic tool for disentangling selection effects and astrophysical systematics from the underlying expansion history.
NASA Astrophysics Data System (ADS)
Perrier, C.; Breysacher, J.; Rauw, G.
2009-09-01
Aims: We present a technique to determine the orbital and physical parameters of eclipsing eccentric Wolf-Rayet + O-star binaries, where one eclipse is produced by the absorption of the O-star light by the stellar wind of the W-R star. Methods: Our method is based on the use of the empirical moments of the light curve that are integral transforms evaluated from the observed light curves. The optical depth along the line of sight and the limb darkening of the W-R star are modelled by simple mathematical functions, and we derive analytical expressions for the moments of the light curve as a function of the orbital parameters and the key parameters of the transparency and limb-darkening functions. These analytical expressions are then inverted in order to derive the values of the orbital inclination, the stellar radii, the fractional luminosities, and the parameters of the wind transparency and limb-darkening laws. Results: The method is applied to the SMC W-R eclipsing binary HD 5980, a remarkable object that underwent an LBV-like event in August 1994. The analysis refers to the pre-outburst observational data. A synthetic light curve based on the elements derived for the system allows a quality assessment of the results obtained.
[Crop geometry identification based on inversion of semiempirical BRDF models].
Huang, Wen-jiang; Wang, Jin-di; Mu, Xi-han; Wang, Ji-hua; Liu, Liang-yun; Liu, Qiang; Niu, Zheng
2007-10-01
Investigations have been made on identification of erective and horizontal varieties by bidirectional canopy reflected spectrum and semi-empirical bidirectional reflectance distribution function (BRDF) models. The qualitative effect of leaf area index (LAI) and average leaf angle (ALA) on crop canopy reflected spectrum was studied. The structure parameter sensitive index (SPEI) based on the weight for the volumetric kernel (fvol), the weight for the geometric kernel (fgeo), and the weight for constant corresponding to isotropic reflectance (fiso), was defined in the present study for crop geometry identification. However, the weights associated with the kernels of semi-empirical BRDF model do not have a direct relationship with measurable biophysical parameters. Therefore, efforts have focused on trying to find the relation between these semi-empirical BRDF kernel weights and various vegetation structures. SPEI was proved to be more sensitive to identify crop geometry structures than structural scattering index (SSI) and normalized difference f-index (NDFI), SPEI could be used to distinguish erective and horizontal geometry varieties. So, it is feasible to identify horizontal and erective varieties of wheat by bidirectional canopy reflected spectrum.
Kabore, Achille; Biritwum, Nana-Kwadwo; Downs, Philip W.; Soares Magalhaes, Ricardo J.; Zhang, Yaobi; Ottesen, Eric A.
2013-01-01
Background Mapping the distribution of schistosomiasis is essential to determine where control programs should operate, but because it is impractical to assess infection prevalence in every potentially endemic community, model-based geostatistics (MBG) is increasingly being used to predict prevalence and determine intervention strategies. Methodology/Principal Findings To assess the accuracy of MBG predictions for Schistosoma haematobium infection in Ghana, school surveys were evaluated at 79 sites to yield empiric prevalence values that could be compared with values derived from recently published MBG predictions. Based on these findings schools were categorized according to WHO guidelines so that practical implications of any differences could be determined. Using the mean predicted values alone, 21 of the 25 empirically determined ‘high-risk’ schools requiring yearly praziquantel would have been undertreated and almost 20% of the remaining schools would have been treated despite empirically-determined absence of infection – translating into 28% of the children in the 79 schools being undertreated and 12% receiving treatment in the absence of any demonstrated need. Conclusions/Significance Using the current predictive map for Ghana as a spatial decision support tool by aggregating prevalence estimates to the district level was clearly not adequate for guiding the national program, but the alternative of assessing each school in potentially endemic areas of Ghana or elsewhere is not at all feasible; modelling must be a tool complementary to empiric assessments. Thus for practical usefulness, predictive risk mapping should not be thought of as a one-time exercise but must, as in the current study, be an iterative process that incorporates empiric testing and model refining to create updated versions that meet the needs of disease control operational managers. PMID:23505584
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1978-01-01
This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
NASA Technical Reports Server (NTRS)
Peters, B. C., Jr.; Walker, H. F.
1976-01-01
The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.
Ahluwalia, Arti; De Rossi, Danilo; Giusto, Giuseppe; Chen, Oren; Papper, Vladislav; Likhtenshtein, Gertz I
2002-06-15
A fluorescent-photochrome method of quantifying the orientation and surface density of solid phase antibodies is described. The method is based on measurements of quenching and rates of cis-trans photoisomerization and photodestruction of a stilbene-labeled hapten by a quencher in solution. These experimental parameters enable a quantitative description of the order of binding sites of antibodies immobilized on a surface and can be used to characterize the microviscosity and steric hindrance in the vicinity of the binding site. Furthermore, a theoretical method for the determination of the depth of immersion of the fluorescent label in a two-phase system was developed. The model exploits the concept of dynamic interactions and is based on the empirical dependence of parameters of static exchange interactions on distances between exchangeable centers. In the present work, anti-dinitrophenyl (DNP) antibodies and stilbene-labeled DNP were used to investigate three different protein immobilization methods: physical adsorption, covalent binding, and the Langmuir-Blodgett technique. Copyright 2002 Elsevier Science (USA).
Prediction of Backbreak in Open-Pit Blasting Operations Using the Machine Learning Method
NASA Astrophysics Data System (ADS)
Khandelwal, Manoj; Monjezi, M.
2013-03-01
Backbreak is an undesirable phenomenon in blasting operations. It can cause instability of mine walls, falling down of machinery, improper fragmentation, reduced efficiency of drilling, etc. The existence of various effective parameters and their unknown relationships are the main reasons for inaccuracy of the empirical models. Presently, the application of new approaches such as artificial intelligence is highly recommended. In this paper, an attempt has been made to predict backbreak in blasting operations of Soungun iron mine, Iran, incorporating rock properties and blast design parameters using the support vector machine (SVM) method. To investigate the suitability of this approach, the predictions by SVM have been compared with multivariate regression analysis (MVRA). The coefficient of determination (CoD) and the mean absolute error (MAE) were taken as performance measures. It was found that the CoD between measured and predicted backbreak was 0.987 and 0.89 by SVM and MVRA, respectively, whereas the MAE was 0.29 and 1.07 by SVM and MVRA, respectively.
Zhai, Xiaochun; Wu, Songhua; Liu, Bingyi
2017-06-12
Four field experiments based on Pulsed Coherent Doppler Lidar with different surface roughness have been carried out in 2013-2015 to study the turbulent wind field in the vicinity of operating wind turbine in the onshore and offshore wind parks. The turbulence characteristics in ambient atmosphere and wake area was analyzed using transverse structure function based on Plane Position Indicator scanning mode. An automatic wake processing procedure was developed to determine the wake velocity deficit by considering the effect of ambient velocity disturbance and wake meandering with the mean wind direction. It is found that the turbine wake obviously enhances the atmospheric turbulence mixing, and the difference in the correlation of turbulence parameters under different surface roughness is significant. The dependence of wake parameters including the wake velocity deficit and wake length on wind velocity and turbulence intensity are analyzed and compared with other studies, which validates the empirical model and simulation of a turbine wake for various atmosphere conditions.
Analysis of airborne MAIS imaging spectrometric data for mineral exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Jinnian; Zheng Lanfen; Tong Qingxi
1996-11-01
The high spectral resolution imaging spectrometric system made quantitative analysis and mapping of surface composition possible. The key issue will be the quantitative approach for analysis of surface parameters for imaging spectrometer data. This paper describes the methods and the stages of quantitative analysis. (1) Extracting surface reflectance from imaging spectrometer image. Lab. and inflight field measurements are conducted for calibration of imaging spectrometer data, and the atmospheric correction has also been used to obtain ground reflectance by using empirical line method and radiation transfer modeling. (2) Determining quantitative relationship between absorption band parameters from the imaging spectrometer data andmore » chemical composition of minerals. (3) Spectral comparison between the spectra of spectral library and the spectra derived from the imagery. The wavelet analysis-based spectrum-matching techniques for quantitative analysis of imaging spectrometer data has beer, developed. Airborne MAIS imaging spectrometer data were used for analysis and the analysis results have been applied to the mineral and petroleum exploration in Tarim Basin area china. 8 refs., 8 figs.« less
The impulsive hard X-rays from solar flares
NASA Technical Reports Server (NTRS)
Leach, J.
1984-01-01
A technique for determining the physical arrangement of a solar flare during the impulsive phase was developed based upon a nonthermal model interpretation of the emitted hard X-rays. Accurate values are obtained for the flare parameters, including those which describe the magnetic field structure and the beaming of the energetic electrons, parameters which have hitherto been mostly inaccessible. The X-ray intensity height structure can be described readily with a single expression based upon a semi-empirical fit to the results from many models. Results show that the degree of linear polarization of the X-rays from a flaring loop does not exceed 25 percent and can easily and naturally be as low as the polarization expected from a thermal model. This is a highly significant result in that it supersedes those based upon less thorough calculations of the electron beam dynamics and requires that a reevaluation of hopes of using polarization measurements to discriminate between categories of flare models.
Transient photoresponse in amorphous In-Ga-Zn-O thin films under stretched exponential analysis
NASA Astrophysics Data System (ADS)
Luo, Jiajun; Adler, Alexander U.; Mason, Thomas O.; Bruce Buchholz, D.; Chang, R. P. H.; Grayson, M.
2013-04-01
We investigated transient photoresponse and Hall effect in amorphous In-Ga-Zn-O thin films and observed a stretched exponential response which allows characterization of the activation energy spectrum with only three fit parameters. Measurements of as-grown films and 350 K annealed films were conducted at room temperature by recording conductivity, carrier density, and mobility over day-long time scales, both under illumination and in the dark. Hall measurements verify approximately constant mobility, even as the photoinduced carrier density changes by orders of magnitude. The transient photoconductivity data fit well to a stretched exponential during both illumination and dark relaxation, but with slower response in the dark. The inverse Laplace transforms of these stretched exponentials yield the density of activation energies responsible for transient photoconductivity. An empirical equation is introduced, which determines the linewidth of the activation energy band from the stretched exponential parameter β. Dry annealing at 350 K is observed to slow the transient photoresponse.
Non-LTE analysis of the Ofpe/WN9 star HDE 269227 (R84)
NASA Technical Reports Server (NTRS)
Schmutz, Werner; Leitherer, Claus; Hubeny, Ivan; Vogel, Manfred; Hamann, Wolf-Rainer
1991-01-01
The paper presents the results of a spectral analysis of the Ofpe/WN9 star HD 269227 (R84), which assumes a spherically expanding atmosphere to find solutions for equations of radiative transfer. The spectra of hydrogen and helium were predicted with a non-LTE model. Six stellar parameters were determined for R84. The shape of the velocity law is empirically found, since it can be probed from the terminal velocity of the wind. The six stellar parameters are further employed in a hydrodynamic model where stellar wind is assumed to be directed by radiation pressure, duplicating the mass-loss rate and the terminal wind velocity. The velocity laws found by computation and analysis are found to agree, supporting the theory of radiation-driven stellar wind. R84 is surmised to be a post-red supergiant which lost half of its initial mass, possibly during the red-supergiant phase. This mass loss is also suggested by its spectroscopic similarity to S Doradus.
Seven-panel solar wing deployment and on-orbit maneuvering analyses
NASA Astrophysics Data System (ADS)
Hwang, Earl
2005-05-01
BSS developed a new generation high power (~20kW) solar array to meet the customer demands. The high power solar array had the north and south solar wings of which designs were identical. Each side of the solar wing consists of three main conventional solar panels and the four-side panel swing-out new design. The fully deployed solar array surface area is 966 ft2. It was a quite challenging task to define the solar array's optimum design parameters and deployment scheme for such a huge solar array's successful deployment and on-orbit maneuvering. Hence, a deployable seven-flex-panel solar wing nonlinear math model and a fully deployed solar array/bus-payload math model were developed with the Dynamic Analysis and Design System (DADS) program codes utilizing the inherited and empirical data. Performing extensive parametric analyses with the math model, the optimum design parameters and the orbit maneuvering /deployment schemes were determined to meet all the design requirements, and for the successful solar wing deployment on-orbit.
Weiss, M; Stedtler, C; Roberts, M S
1997-09-01
The dispersion model with mixed boundary conditions uses a single parameter, the dispersion number, to describe the hepatic elimination of xenobiotics and endogenous substances. An implicit a priori assumption of the model is that the transit time density of intravascular indicators is approximately by an inverse Gaussian distribution. This approximation is limited in that the model poorly describes the tail part of the hepatic outflow curves of vascular indicators. A sum of two inverse Gaussian functions is proposed as an alternative, more flexible empirical model for transit time densities of vascular references. This model suggests that a more accurate description of the tail portion of vascular reference curves yields an elimination rate constant (or intrinsic clearance) which is 40% less than predicted by the dispersion model with mixed boundary conditions. The results emphasize the need to accurately describe outflow curves in using them as a basis for determining pharmacokinetic parameters using hepatic elimination models.
Studies of the air plasma spraying of zirconia powder
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varacalle, D.J. Jr.; Wilson, G.C.; Crawmer, D.E.
As part of an investigation of the dynamics that occur in the air plasma spray process, an experimental and analytical study has been accomplished for the deposition of yttria-stabilized zirconia powder using argon-hydrogen and argon-helium working gases. Numerical models of the plasma dynamics and the related plasma-particle interaction are presented. The analytical studies were conducted to determine the parameter space for the empirical studies. Experiments were then conducted using a Box statistical design-of-experiment approach. A substantial range of plasma processing conditions and their effect on the resultant coating is presented. The coatings were characterized by hardness tests and optical metallographymore » (i.e., image analysis). Coating qualities are discussed with respect to hardness, porosity, surface roughness, deposition efficiency, and microstructure. Attributes of the coatings are correlated with the changes in operating parameters. An optimized coating design predicted by the SDE analysis and verified by the calculations is also presented.« less
da Costa, Leonardo Moreira; Carneiro, José Walkimar de Mesquita; Romeiro, Gilberto Alves; Paes, Lilian Weitzel Coelho
2011-02-01
The affinity of the Ca(2+) ion for a set of substituted carbonyl ligands was analyzed with both the DFT (B3LYP/6-31+G(d)) and semi-empirical (PM6) methods. Two types of ligands were studied: a set of monosubstituted [O=CH(R)] and a set of disubstituted ligands [O=C(R)(2)] (R=H, F, Cl, Br, OH, OCH(3), CH(3), CN, NH(2) and NO(2)), with R either directly bound to the carbonyl carbon atom or to the para position of a phenyl ring. The interaction energy was calculated to quantify the affinity of the Ca(2+) cation for the ligands. Geometric and electronic parameters were correlated with the intensity of the metal-ligand interaction. The electronic nature of the substituent is the main parameter that determines the interaction energy. Donor groups make the interaction energy more negative (stabilizing the complex formed), while acceptor groups make the interaction energy less negative (destabilizing the complex formed).
Reproducibility in Psychological Science: When Do Psychological Phenomena Exist?
Iso-Ahola, Seppo E.
2017-01-01
Scientific evidence has recently been used to assert that certain psychological phenomena do not exist. Such claims, however, cannot be made because (1) scientific method itself is seriously limited (i.e., it can never prove a negative); (2) non-existence of phenomena would require a complete absence of both logical (theoretical) and empirical support; even if empirical support is weak, logical and theoretical support can be strong; (3) statistical data are only one piece of evidence and cannot be used to reduce psychological phenomena to statistical phenomena; and (4) psychological phenomena vary across time, situations and persons. The human mind is unreproducible from one situation to another. Psychological phenomena are not particles that can decisively be tested and discovered. Therefore, a declaration that a phenomenon is not real is not only theoretically and empirically unjustified but runs counter to the propositional and provisional nature of scientific knowledge. There are only “temporary winners” and no “final truths” in scientific knowledge. Psychology is a science of subtleties in human affect, cognition and behavior. Its phenomena fluctuate with conditions and may sometimes be difficult to detect and reproduce empirically. When strictly applied, reproducibility is an overstated and even questionable concept in psychological science. Furthermore, statistical measures (e.g., effect size) are poor indicators of the theoretical importance and relevance of phenomena (cf. “deliberate practice” vs. “talent” in expert performance), not to mention whether phenomena are real or unreal. To better understand psychological phenomena, their theoretical and empirical properties should be examined via multiple parameters and criteria. Ten such parameters are suggested. PMID:28626435
2013-06-01
or indicators are used as long range memory measurements. Hurst and Holder exponents are the most important and popular parameters. Traditionally...the relation between two important parameters, the Hurst exponent (measurement of global long range memory) and the Entropy (measurement of...empirical results and future study. II. BACKGROUND We recall briey the mathematical and statistical definitions and properties of the Hurst exponents
A microwave method for measuring moisture content, density, and grain angle of wood
W. L. James; Y.-H. Yen; R. J. King
1985-01-01
The attenuation, phase shift and depolarization of a polarized 4.81-gigahertz wave as it is transmitted through a wood specimen can provide estimates of the moisture content (MC), density, and grain angle of the specimen. Calibrations are empirical, and computations are complicated, with considerable interaction between parameters. Measured dielectric parameters,...
Estimation of Boreal Forest Biomass Using Spaceborne SAR Systems
NASA Technical Reports Server (NTRS)
Saatchi, Sassan; Moghaddam, Mahta
1995-01-01
In this paper, we report on the use of a semiempirical algorithm derived from a two layer radar backscatter model for forest canopies. The model stratifies the forest canopy into crown and stem layers, separates the structural and biometric attributes of the canopy. The structural parameters are estimated by training the model with polarimetric SAR (synthetic aperture radar) data acquired over homogeneous stands with known above ground biomass. Given the structural parameters, the semi-empirical algorithm has four remaining parameters, crown biomass, stem biomass, surface soil moisture, and surface rms height that can be estimated by at least four independent SAR measurements. The algorithm has been used to generate biomass maps over the entire images acquired by JPL AIRSAR and SIR-C SAR systems. The semi-empirical algorithms are then modified to be used by single frequency radar systems such as ERS-1, JERS-1, and Radarsat. The accuracy. of biomass estimation from single channel radars is compared with the case when the channels are used together in synergism or in a polarimetric system.
An empirical study of scanner system parameters
NASA Technical Reports Server (NTRS)
Landgrebe, D.; Biehl, L.; Simmons, W.
1976-01-01
The selection of the current combination of parametric values (instantaneous field of view, number and location of spectral bands, signal-to-noise ratio, etc.) of a multispectral scanner is a complex problem due to the strong interrelationship these parameters have with one another. The study was done with the proposed scanner known as Thematic Mapper in mind. Since an adequate theoretical procedure for this problem has apparently not yet been devised, an empirical simulation approach was used with candidate parameter values selected by the heuristic means. The results obtained using a conventional maximum likelihood pixel classifier suggest that although the classification accuracy declines slightly as the IFOV is decreased this is more than made up by an improved mensuration accuracy. Further, the use of a classifier involving both spatial and spectral features shows a very substantial tendency to resist degradation as the signal-to-noise ratio is decreased. And finally, further evidence is provided of the importance of having at least one spectral band in each of the major available portions of the optical spectrum.
Seven-parameter statistical model for BRDF in the UV band.
Bai, Lu; Wu, Zhensen; Zou, Xiren; Cao, Yunhua
2012-05-21
A new semi-empirical seven-parameter BRDF model is developed in the UV band using experimentally measured data. The model is based on the five-parameter model of Wu and the fourteen-parameter model of Renhorn and Boreman. Surface scatter, bulk scatter and retro-reflection scatter are considered. An optimizing modeling method, the artificial immune network genetic algorithm, is used to fit the BRDF measurement data over a wide range of incident angles. The calculation time and accuracy of the five- and seven-parameter models are compared. After fixing the seven parameters, the model can well describe scattering data in the UV band.
ERIC Educational Resources Information Center
Baehr, Melany E.
1984-01-01
An empirical procedure to determine areas of required development for personnel in three management hierarchies (line, professional, and sales) involves a job analysis of nine key positions in these hierarchies, determination of learning needs for each job function, and development of program curricula for each need. (SK)
A Bayesian approach to parameter and reliability estimation in the Poisson distribution.
NASA Technical Reports Server (NTRS)
Canavos, G. C.
1972-01-01
For life testing procedures, a Bayesian analysis is developed with respect to a random intensity parameter in the Poisson distribution. Bayes estimators are derived for the Poisson parameter and the reliability function based on uniform and gamma prior distributions of that parameter. A Monte Carlo procedure is implemented to make possible an empirical mean-squared error comparison between Bayes and existing minimum variance unbiased, as well as maximum likelihood, estimators. As expected, the Bayes estimators have mean-squared errors that are appreciably smaller than those of the other two.
Equivalent crystal theory of alloys
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Ferrante, John
1991-01-01
Equivalent Crystal Theory (ECT) is a new, semi-empirical approach to calculating the energetics of a solid with defects. The theory has successfully reproduced surface energies in metals and semiconductors. The theory of binary alloys to date, both with first-principles and semi-empirical models, has not been very successful in predicting the energetics of alloys. This procedure is used to predict the heats of formation, cohesive energy, and lattice parameter of binary alloys of Cu, Ni, Al, Ag, Au, Pd, and Pt as functions of composition. The procedure accurately reproduces the heats of formation versus composition curves for a variety of binary alloys. The results are then compared with other approaches such as the embedded atom and lattice parameters of alloys from pure metal properties more accurately than Vegard's law is presented.
Semi-empirical "leaky-bucket" model of laser-driven x-ray cavities
NASA Astrophysics Data System (ADS)
Moody, J. D.; Landen, O. L.; Divol, L.; LePape, S.; Michel, P.; Town, R. P. J.; Hall, G.; Widmann, K.; Moore, A.
2017-04-01
A semi-empirical analytical model is shown to approximately describe the energy balance in a laser-driven x-ray cavity, such as a hohlraum, for general laser pulse-shapes. Agreement between the model and measurements relies on two scalar parameters, one characterizes the efficiency of x-ray generation for a given laser power and the other represents a characteristic power-loss rate. These parameters, once obtained through estimation or optimization for a particular hohlraum design, can be used to predict either the x-ray flux or the coupled laser power time-history in terms of other quantities for similar hohlraum designs. The value of the model is that it can be used as an approximate "first-look" at hohlraum energy balance prior to a more detailed radiation hydrodynamic modeling.
What is the danger of the anomaly zone for empirical phylogenetics?
Huang, Huateng; Knowles, L Lacey
2009-10-01
The increasing number of observations of gene trees with discordant topologies in phylogenetic studies has raised awareness about the problems of incongruence between species trees and gene trees. Moreover, theoretical treatments focusing on the impact of coalescent variance on phylogenetic study have also identified situations where the most probable gene trees are ones that do not match the underlying species tree (i.e., anomalous gene trees [AGTs]). However, although the theoretical proof of the existence of AGTs is alarming, the actual risk that AGTs pose to empirical phylogenetic study is far from clear. Establishing the conditions (i.e., the branch lengths in a species tree) for which AGTs are possible does not address the critical issue of how prevalent they might be. Furthermore, theoretical characterization of the species trees for which AGTs may pose a problem (i.e., the anomaly zone or the species histories for which AGTs are theoretically possible) is based on consideration of just one source of variance that contributes to species tree and gene tree discord-gene lineage coalescence. Yet, empirical data contain another important stochastic component-mutational variance. Estimated gene trees will differ from the underlying gene trees (i.e., the actual genealogy) because of the random process of mutation. Here, we take a simulation approach to investigate the prevalence of AGTs, among estimated gene trees, thereby characterizing the boundaries of the anomaly zone taking into account both coalescent and mutational variances. We also determine the frequency of realized AGTs, which is critical to putting the theoretical work on AGTs into a realistic biological context. Two salient results emerge from this investigation. First, our results show that mutational variance can indeed expand the parameter space (i.e., the relative branch lengths in a species tree) where AGTs might be observed in empirical data. By exploring the underlying cause for the expanded anomaly zone, we identify aspects of empirical data relevant to avoiding the problems that AGTs pose for species tree inference from multilocus data. Second, for the empirical species histories where AGTs are possible, unresolved trees-not AGTs-predominate the pool of estimated gene trees. This result suggests that the risk of AGTs, while they exist in theory, may rarely be realized in practice. By considering the biological realities of both mutational and coalescent variances, the study has refined, and redefined, what the actual challenges are for empirical phylogenetic study of recently diverged taxa that have speciated rapidly-AGTs themselves are unlikely to pose a significant danger to empirical phylogenetic study.
NASA Astrophysics Data System (ADS)
Badawy, B.; Fletcher, C. G.
2017-12-01
The parameterization of snow processes in land surface models is an important source of uncertainty in climate simulations. Quantifying the importance of snow-related parameters, and their uncertainties, may therefore lead to better understanding and quantification of uncertainty within integrated earth system models. However, quantifying the uncertainty arising from parameterized snow processes is challenging due to the high-dimensional parameter space, poor observational constraints, and parameter interaction. In this study, we investigate the sensitivity of the land simulation to uncertainty in snow microphysical parameters in the Canadian LAnd Surface Scheme (CLASS) using an uncertainty quantification (UQ) approach. A set of training cases (n=400) from CLASS is used to sample each parameter across its full range of empirical uncertainty, as determined from available observations and expert elicitation. A statistical learning model using support vector regression (SVR) is then constructed from the training data (CLASS output variables) to efficiently emulate the dynamical CLASS simulations over a much larger (n=220) set of cases. This approach is used to constrain the plausible range for each parameter using a skill score, and to identify the parameters with largest influence on the land simulation in CLASS at global and regional scales, using a random forest (RF) permutation importance algorithm. Preliminary sensitivity tests indicate that snow albedo refreshment threshold and the limiting snow depth, below which bare patches begin to appear, have the highest impact on snow output variables. The results also show a considerable reduction of the plausible ranges of the parameters values and hence reducing their uncertainty ranges, which can lead to a significant reduction of the model uncertainty. The implementation and results of this study will be presented and discussed in details.
Models of compacted fine-grained soils used as mineral liner for solid waste
NASA Astrophysics Data System (ADS)
Sivrikaya, Osman
2008-02-01
To prevent the leakage of pollutant liquids into groundwater and sublayers, the compacted fine-grained soils are commonly utilized as mineral liners or a sealing system constructed under municipal solid waste and other containment hazardous materials. This study presents the correlation equations of the compaction parameters required for construction of a mineral liner system. The determination of the characteristic compaction parameters, maximum dry unit weight ( γ dmax) and optimum water content ( w opt) requires considerable time and great effort. In this study, empirical models are described and examined to find which of the index properties correlate well with the compaction characteristics for estimating γ dmax and w opt of fine-grained soils at the standard compactive effort. The compaction data are correlated with different combinations of gravel content ( G), sand content ( S), fine-grained content (FC = clay + silt), plasticity index ( I p), liquid limit ( w L) and plastic limit ( w P) by performing multilinear regression (MLR) analyses. The obtained correlations with statistical parameters are presented and compared with the previous studies. It is found that the maximum dry unit weight and optimum water content have a considerably good correlation with plastic limit in comparison with liquid limit and plasticity index.
A Comparison of Seyfert 1 and 2 Host Galaxies
NASA Astrophysics Data System (ADS)
De Robertis, M.; Virani, S.
2000-12-01
Wide-field, R-band CCD data of 15 Seyfert 1 and 15 Seyfert 2 galaxies taken from the CfA survey were analysed in order to compare the properties of their host galaxies. As well, B-band images for a subset of 12 Seyfert 1s and 7 Seyfert 2s were acquired and analysed in the same way. A robust technique for decomposing the three components---nucleus, bulge and disk---was developed in order determine the structural parameters for each galaxy. In effect, the nuclear contribution was removed empirically by using a spatially nearby, high signal-to-noise ratio point source as a template. Profile fits to the bulge+disk ignored data within three seeing disks of the nucleus. Of the many parameters that were compared between Seyfert 1s and 2s, only two distributions differed at greater than the 95% confidence level for the K-S test: the magnitude of the nuclear component, and the radial color gradient outside the nucleus. The former is expected. The latter could be consistent with some proposed evolutionary models. There is some suggestion that other parameters may differ, but at a lower confidence level.
Characterization of the settling process for wastewater from a combined sewer system.
Piro, P; Carbone, M; Penna, N; Marsalek, J
2011-12-15
Among the methods used for determining the parameters necessary for design of wastewater settling tanks, settling column tests are used most commonly, because of their simplicity and low costs. These tests partly mimic the actual settling processes and allow the evaluation of total suspended solids (TSS) removal by settling. Wastewater samples collected from the Liguori Channel (LC) catchment in Cosenza (Italy) were subject to settling column tests, which yielded iso-removal curves for both dry and wet-weather flow conditions. Such curves were approximated well by the newly proposed power law function containing two empirical parameters, a and b, the first of which is the particle settling velocity and the second one is a flocculation factor accounting for deviations from discrete particle settling. This power law function was tested for both the LC catchment and literature data and yielded a very good fit, with correlation coefficient values (R(2)) ranging from 0.93 to 0.99. Finally, variations in the settling tank TSS removal efficiencies with parameters a and b were also analyzed and provided insight for settling tank design. Copyright © 2011 Elsevier Ltd. All rights reserved.
Validation of Storm Water Management Model Storm Control Measures Modules
NASA Astrophysics Data System (ADS)
Simon, M. A.; Platz, M. C.
2017-12-01
EPA's Storm Water Management Model (SWMM) is a computational code heavily relied upon by industry for the simulation of wastewater and stormwater infrastructure performance. Many municipalities are relying on SWMM results to design multi-billion-dollar, multi-decade infrastructure upgrades. Since the 1970's, EPA and others have developed five major releases, the most recent ones containing storm control measures modules for green infrastructure. The main objective of this study was to quantify the accuracy with which SWMM v5.1.10 simulates the hydrologic activity of previously monitored low impact developments. Model performance was evaluated with a mathematical comparison of outflow hydrographs and total outflow volumes, using empirical data and a multi-event, multi-objective calibration method. The calibration methodology utilized PEST++ Version 3, a parameter estimation tool, which aided in the selection of unmeasured hydrologic parameters. From the validation study and sensitivity analysis, several model improvements were identified to advance SWMM LID Module performance for permeable pavements, infiltration units and green roofs, and these were performed and reported herein. Overall, it was determined that SWMM can successfully simulate low impact development controls given accurate model confirmation, parameter measurement, and model calibration.
Resistance formulas in hydraulics-based models for routing debris flows
Chen, Cheng-lung; Ling, Chi-Hai
1997-01-01
The one-dimensional, cross-section-averaged flow equations formulated for routing debris flows down a narrow valley are identical to those for clear-water flow, except for the differences in the values of the flow parameters, such as the momentum (or energy) correction factor, resistance coefficient, and friction slope. Though these flow parameters for debris flow in channels with cross-sections of arbitrary geometric shape can only be determined empirically, the theoretical values of such parameters for debris flow in wide channels exist. This paper aims to derive the theoretical resistance coefficient and friction slope for debris flow in wide channels using a rheological model for highly-concentrated, rapidly-sheared granular flows, such as the generalized viscoplastic fluid (GVF) model. Formulating such resistance coefficient or friction slope is equivalent to developing a generally applicable resistance formula for routing debris flows. Inclusion of a nonuniform term in the expression of the resistance formula proves useful in removing the customary assumption that the spatially varied resistance at any section is equal to what would take place with the same rate of flow passing the same section under conditions of uniformity. This in effect implies an improvement in the accuracy of unsteady debris-flow computation.
Boer, H M T; Butler, S T; Stötzel, C; Te Pas, M F W; Veerkamp, R F; Woelders, H
2017-11-01
A recently developed mechanistic mathematical model of the bovine estrous cycle was parameterized to fit empirical data sets collected during one estrous cycle of 31 individual cows, with the main objective to further validate the model. The a priori criteria for validation were (1) the resulting model can simulate the measured data correctly (i.e. goodness of fit), and (2) this is achieved without needing extreme, probably non-physiological parameter values. We used a least squares optimization procedure to identify parameter configurations for the mathematical model to fit the empirical in vivo measurements of follicle and corpus luteum sizes, and the plasma concentrations of progesterone, estradiol, FSH and LH for each cow. The model was capable of accommodating normal variation in estrous cycle characteristics of individual cows. With the parameter sets estimated for the individual cows, the model behavior changed for 21 cows, with improved fit of the simulated output curves for 18 of these 21 cows. Moreover, the number of follicular waves was predicted correctly for 18 of the 25 two-wave and three-wave cows, without extreme parameter value changes. Estimation of specific parameters confirmed results of previous model simulations indicating that parameters involved in luteolytic signaling are very important for regulation of general estrous cycle characteristics, and are likely responsible for differences in estrous cycle characteristics between cows.
A BRDF statistical model applying to space target materials modeling
NASA Astrophysics Data System (ADS)
Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen
2017-10-01
In order to solve the problem of poor effect in modeling the large density BRDF measured data with five-parameter semi-empirical model, a refined statistical model of BRDF which is suitable for multi-class space target material modeling were proposed. The refined model improved the Torrance-Sparrow model while having the modeling advantages of five-parameter model. Compared with the existing empirical model, the model contains six simple parameters, which can approximate the roughness distribution of the material surface, can approximate the intensity of the Fresnel reflectance phenomenon and the attenuation of the reflected light's brightness with the azimuth angle changes. The model is able to achieve parameter inversion quickly with no extra loss of accuracy. The genetic algorithm was used to invert the parameters of 11 different samples in the space target commonly used materials, and the fitting errors of all materials were below 6%, which were much lower than those of five-parameter model. The effect of the refined model is verified by comparing the fitting results of the three samples at different incident zenith angles in 0° azimuth angle. Finally, the three-dimensional modeling visualizations of these samples in the upper hemisphere space was given, in which the strength of the optical scattering of different materials could be clearly shown. It proved the good describing ability of the refined model at the material characterization as well.
Development of a new model for short period ocean tidal variations of Earth rotation
NASA Astrophysics Data System (ADS)
Schuh, Harald
2015-08-01
Within project SPOT (Short Period Ocean Tidal variations in Earth rotation) we develop a new high frequency Earth rotation model based on empirical ocean tide models. The main purpose of the SPOT model is its application to space geodetic observations such as GNSS and VLBI.We consider an empirical ocean tide model, which does not require hydrodynamic ocean modeling to determine ocean tidal angular momentum. We use here the EOT11a model of Savcenko & Bosch (2012), which is extended for some additional minor tides (e.g. M1, J1, T2). As empirical tidal models do not provide ocean tidal currents, which are re- quired for the computation of oceanic relative angular momentum, we implement an approach first published by Ray (2001) to estimate ocean tidal current veloci- ties for all tides considered in the extended EOT11a model. The approach itself is tested by application to tidal heights from hydrodynamic ocean tide models, which also provide tidal current velocities. Based on the tidal heights and the associated current velocities the oceanic tidal angular momentum (OTAM) is calculated.For the computation of the related short period variation of Earth rotation, we have re-examined the Euler-Liouville equation for an elastic Earth model with a liquid core. The focus here is on the consistent calculation of the elastic Love num- bers and associated Earth model parameters, which are considered in the Euler- Liouville equation for diurnal and sub-diurnal periods in the frequency domain.
On the Deduction of Galactic Abundances with Evolutionary Neural Networks
NASA Astrophysics Data System (ADS)
Taylor, M.; Diaz, A. I.
2007-12-01
A growing number of indicators are now being used with some confidence to measure the metallicity(Z) of photoionisation regions in planetary nebulae, galactic HII regions(GHIIRs), extra-galactic HII regions(EGHIIRs) and HII galaxies(HIIGs). However, a universal indicator valid also at high metallicities has yet to be found. Here, we report on a new artificial intelligence-based approach to determine metallicity indicators that shows promise for the provision of improved empirical fits. The method hinges on the application of an evolutionary neural network to observational emission line data. The network's DNA, encoded in its architecture, weights and neuron transfer functions, is evolved using a genetic algorithm. Furthermore, selection, operating on a set of 10 distinct neuron transfer functions, means that the empirical relation encoded in the network solution architecture is in functional rather than numerical form. Thus the network solutions provide an equation for the metallicity in terms of line ratios without a priori assumptions. Tapping into the mathematical power offered by this approach, we applied the network to detailed observations of both nebula and auroral emission lines from 0.33μ m-1μ m for a sample of 96 HII-type regions and we were able to obtain an empirical relation between Z and S_{23} with a dispersion of only 0.16 dex. We show how the method can be used to identify new diagnostics as well as the nonlinear relationship supposed to exist between the metallicity Z, ionisation parameter U and effective (or equivalent) temperature T*.
NASA Astrophysics Data System (ADS)
Fatkullin, M. N.; Solodovnikov, G. K.; Trubitsyn, V. M.
2004-01-01
The results of developing the empirical model of parameters of radio signals propagating in the inhomogeneous ionosphere at middle and high latitudes are presented. As the initial data we took the homogeneous data obtained as a result of observations carried out at the Antarctic ``Molodezhnaya'' station by the method of continuous transmission probing of the ionosphere by signals of the satellite radionavigation ``Transit'' system at coherent frequencies of 150 and 400 MHz. The data relate to the summer season period in the Southern hemisphere of the Earth in 1988-1989 during high (F > 160) activity of the Sun. The behavior of the following statistical characteristics of radio signal parameters was analyzed: (a) the interval of correlation of fluctuations of amplitudes at a frequency of 150 MHz (τkA) (b) the interval of correlation of fluctuations of the difference phase (τkϕ) and (c) the parameter characterizing frequency spectra of amplitude (PA) and phase (Pϕ) fluctuations. A third-degree polynomial was used for modeling of propagation parameters. For all above indicated propagation parameters, the coefficients of the third-degree polynomial were calculated as a function of local time and magnetic activity. The results of calculations are tabulated.
Path integral for equities: Dynamic correlation and empirical analysis
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Cao, Yang; Lau, Ada; Tang, Pan
2012-02-01
This paper develops a model to describe the unequal time correlation between rate of returns of different stocks. A non-trivial fourth order derivative Lagrangian is defined to provide an unequal time propagator, which can be fitted to the market data. A calibration algorithm is designed to find the empirical parameters for this model and different de-noising methods are used to capture the signals concealed in the rate of return. The detailed results of this Gaussian model show that the different stocks can have strong correlation and the empirical unequal time correlator can be described by the model's propagator. This preliminary study provides a novel model for the correlator of different instruments at different times.
Empirical Observations on the Sensitivity of Hot Cathode Ionization Type Vacuum Gages
NASA Technical Reports Server (NTRS)
Summers, R. L.
1969-01-01
A study of empirical methods of predicting tile relative sensitivities of hot cathode ionization gages is presented. Using previously published gage sensitivities, several rules for predicting relative sensitivity are tested. The relative sensitivity to different gases is shown to be invariant with gage type, in the linear range of gage operation. The total ionization cross section, molecular and molar polarizability, and refractive index are demonstrated to be useful parameters for predicting relative gage sensitivity. Using data from the literature, the probable error of predictions of relative gage sensitivity based on these molecular properties is found to be about 10 percent. A comprehensive table of predicted relative sensitivities, based on empirical methods, is presented.
Karakurt, Deniz Guven; Demirsoy, Ugur; Corapcioglu, Funda; Oncel, Selim; Karadogan, Meriban; Arisoy, Emin Sami
2014-08-01
Determination of risk of severe bacterial infection complication in children with cancer is important to diminish the cost of hospitalization and therapy. In this study, children with cancer (leukemia excluded) were evaluated for risk of severe infection complication, success of therapy and the relation between clinical and inflammatory parameters during neutropenic fever attacks. Children who fulfilled the criteria of neutropenic fever with cancer were enrolled in the study. During admission, together with clinical and laboratory parameters; interleukin-6, interleukin-8, soluble tumor necrosis factor receptor II, and soluble interleukin 2 reseptor ve procalcitonin levels were detected. Empirical therapy was started with piperacillin/tazobactam and relation between the inflammatory cytokine levels and therapy response parameters were evaluated. The study population included 31 children and 50 neutropenic attacks were studied. In 48% of the attacks, absolute neutrophile count was >100/mm(3) and infectious agents were shown microbiologically in 12% of the attacks. In the study group with piperacillin/tazobactam monotherapy, the success rate without modification was 58%. In the therapy modified group mean duration of fever, antibiotherapy and hospitalization were significantly longer than the group without modification. Inflammatory cytokines' levels during admission (interleukin-6, interleukin-8, soluble tumor necrosis factor reseptor II) were higher in patients with fever >3 days and in multiple regression analysis, it has been shown that they have a determinative role on fever control time. Other cytokines did not show any significant relationship with risk of severe bacterial infection complication and success of therapy.
Bringing Science to Bear: An Empirical Assessment of the Comprehensive Soldier Fitness Program
ERIC Educational Resources Information Center
Lester, Paul B.; McBride, Sharon; Bliese, Paul D.; Adler, Amy B.
2011-01-01
This article outlines the U.S. Army's effort to empirically validate and assess the Comprehensive Soldier Fitness (CSF) program. The empirical assessment includes four major components. First, the CSF scientific staff is currently conducting a longitudinal study to determine if the Master Resilience Training program and the Comprehensive…
Net-infiltration map of the Navajo Sandstone outcrop area in western Washington County, Utah
Heilweil, Victor M.; McKinney, Tim S.
2007-01-01
As populations grow in the arid southwestern United States and desert bedrock aquifers are increasingly targeted for future development, understanding and quantifying the spatial variability of net infiltration and recharge becomes critically important for inventorying groundwater resources and mapping contamination vulnerability. A Geographic Information System (GIS)-based model utilizing readily available soils, topographic, precipitation, and outcrop data has been developed for predicting net infiltration to exposed and soil-covered areas of the Navajo Sandstone outcrop of southwestern Utah. The Navajo Sandstone is an important regional bedrock aquifer. The GIS model determines the net-infiltration percentage of precipitation by using an empirical equation. This relation is derived from least squares linear regression between three surficial parameters (soil coarseness, topographic slope, and downgradient distance from outcrop) and the percentage of estimated net infiltration based on environmental tracer data from excavations and boreholes at Sand Hollow Reservoir in the southeastern part of the study area.Processed GIS raster layers are applied as parameters in the empirical equation for determining net infiltration for soil-covered areas as a percentage of precipitation. This net-infiltration percentage is multiplied by average annual Parameter-elevation Regressions on Independent Slopes Model (PRISM) precipitation data to obtain an infiltration rate for each model cell. Additionally, net infiltration on exposed outcrop areas is set to 10 percent of precipitation on the basis of borehole net-infiltration estimates. Soils and outcrop net-infiltration rates are merged to form a final map.Areas of low, medium, and high potential for ground-water recharge have been identified, and estimates of net infiltration range from 0.1 to 66 millimeters per year (mm/yr). Estimated net-infiltration rates of less than 10 mm/yr are considered low, rates of 10 to 50 mm/yr are considered medium, and rates of more than 50 mm/yr are considered high. A comparison of estimated net-infiltration rates (determined from tritium data) to predicted rates (determined from GIS methods) at 12 sites in Sand Hollow and at Anderson Junction indicates an average difference of about 50 percent. Two of the predicted values were lower, five were higher, and five were within the estimated range. While such uncertainty is relatively small compared with the three order-of-magnitude range in predicted net-infiltration rates, the net-infiltration map is best suited for evaluating relative spatial distribution rather than for precise quantification of recharge to the Navajo aquifer at specific locations. An important potential use for this map is land-use zoning for protecting high net-infiltration parts of the aquifer from potential surface contamination.
Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan
2016-01-01
This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability.
Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan
2016-01-01
This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability. PMID:26941699
Spillover effects in epidemiology: parameters, study designs and methodological considerations
Benjamin-Chung, Jade; Arnold, Benjamin F; Berger, David; Luby, Stephen P; Miguel, Edward; Colford Jr, John M; Hubbard, Alan E
2018-01-01
Abstract Many public health interventions provide benefits that extend beyond their direct recipients and impact people in close physical or social proximity who did not directly receive the intervention themselves. A classic example of this phenomenon is the herd protection provided by many vaccines. If these ‘spillover effects’ (i.e. ‘herd effects’) are present in the same direction as the effects on the intended recipients, studies that only estimate direct effects on recipients will likely underestimate the full public health benefits of the intervention. Causal inference assumptions for spillover parameters have been articulated in the vaccine literature, but many studies measuring spillovers of other types of public health interventions have not drawn upon that literature. In conjunction with a systematic review we conducted of spillovers of public health interventions delivered in low- and middle-income countries, we classified the most widely used spillover parameters reported in the empirical literature into a standard notation. General classes of spillover parameters include: cluster-level spillovers; spillovers conditional on treatment or outcome density, distance or the number of treated social network links; and vaccine efficacy parameters related to spillovers. We draw on high quality empirical examples to illustrate each of these parameters. We describe study designs to estimate spillovers and assumptions required to make causal inferences about spillovers. We aim to advance and encourage methods for spillover estimation and reporting by standardizing spillover parameter nomenclature and articulating the causal inference assumptions required to estimate spillovers. PMID:29106568
NASA Astrophysics Data System (ADS)
Li, Zongchao; Chen, Xueliang; Gao, Mengtan; Jiang, Han; Li, Tiefei
2017-03-01
Earthquake engineering parameters are very important in the engineering field, especially engineering anti-seismic design and earthquake disaster prevention. In this study, we focus on simulating earthquake engineering parameters by the empirical Green's function method. The simulated earthquake (MJMA6.5) occurred in Kyushu, Japan, 1997. Horizontal ground motion is separated as fault parallel and fault normal, in order to assess characteristics of two new direction components. Broadband frequency range of ground motion simulation is from 0.1 to 20 Hz. Through comparing observed parameters and synthetic parameters, we analyzed distribution characteristics of earthquake engineering parameters. From the comparison, the simulated waveform has high similarity with the observed waveform. We found the following. (1) Near-field PGA attenuates radically all around with strip radiation patterns in fault parallel while radiation patterns of fault normal is circular; PGV has a good similarity between observed record and synthetic record, but has different distribution characteristic in different components. (2) Rupture direction and terrain have a large influence on 90 % significant duration. (3) Arias Intensity is attenuating with increasing epicenter distance. Observed values have a high similarity with synthetic values. (4) Predominant period is very different in the part of Kyushu in fault normal. It is affected greatly by site conditions. (5) Most parameters have good reference values where the hypo-central is less than 35 km. (6) The GOF values of all these parameters are generally higher than 45 which means a good result according to Olsen's classification criterion. Not all parameters can fit well. Given these synthetic ground motion parameters, seismic hazard analysis can be performed and earthquake disaster analysis can be conducted in future urban planning.
Thermodynamic properties for applications in chemical industry via classical force fields.
Guevara-Carrion, Gabriela; Hasse, Hans; Vrabec, Jadran
2012-01-01
Thermodynamic properties of fluids are of key importance for the chemical industry. Presently, the fluid property models used in process design and optimization are mostly equations of state or G (E) models, which are parameterized using experimental data. Molecular modeling and simulation based on classical force fields is a promising alternative route, which in many cases reasonably complements the well established methods. This chapter gives an introduction to the state-of-the-art in this field regarding molecular models, simulation methods, and tools. Attention is given to the way modeling and simulation on the scale of molecular force fields interact with other scales, which is mainly by parameter inheritance. Parameters for molecular force fields are determined both bottom-up from quantum chemistry and top-down from experimental data. Commonly used functional forms for describing the intra- and intermolecular interactions are presented. Several approaches for ab initio to empirical force field parameterization are discussed. Some transferable force field families, which are frequently used in chemical engineering applications, are described. Furthermore, some examples of force fields that were parameterized for specific molecules are given. Molecular dynamics and Monte Carlo methods for the calculation of transport properties and vapor-liquid equilibria are introduced. Two case studies are presented. First, using liquid ammonia as an example, the capabilities of semi-empirical force fields, parameterized on the basis of quantum chemical information and experimental data, are discussed with respect to thermodynamic properties that are relevant for the chemical industry. Second, the ability of molecular simulation methods to describe accurately vapor-liquid equilibrium properties of binary mixtures containing CO(2) is shown.
ERIC Educational Resources Information Center
Justice, Laura; Logan, Jessica; Kaderavek, Joan; Schmitt, Mary Beth; Tompkins, Virginia; Bartlett, Christopher
2015-01-01
The purpose of this study was to empirically determine whether specific profiles characterize preschool-aged children with language impairment (LI) with respect to their early literacy skills (print awareness, name-writing ability, phonological awareness, alphabet knowledge); the primary interest was to determine if one or more profiles suggested…
ERIC Educational Resources Information Center
Charters, Margaret; And Others
The primary objective of the Syracuse project was to make an empirical determination of the effectiveness of a competency-based (CB) distributive education program by comparing student achievement in three of its major components with similar traditionally organized courses at Syracuse, Buffalo, and Baruch. The three components were retailing,…
Aliabadi, Mohsen; Golmohammadi, Rostam; Mansoorizadeh, Muharram
2014-03-01
It is highly important to analyze the acoustic properties of workrooms in order to identify best noise control measures from the standpoint of noise exposure limits. Due to the fact that sound pressure is dependent upon environments, it cannot be a suitable parameter for determining the share of workroom acoustic characteristics in producing noise pollution. This paper aims to empirically analyze noise source characteristics and acoustic properties of noisy embroidery workrooms based on special parameters. In this regard, reverberation time as the special room acoustic parameter in 30 workrooms was measured based on ISO 3382-2. Sound power quantity of embroidery machines was also determined based on ISO 9614-3. Multiple linear regression was employed for predicting reverberation time based on acoustic features of the workrooms using MATLAB software. The results showed that the measured reverberation times in most of the workrooms were approximately within the ranges recommended by ISO 11690-1. Similarity between reverberation time values calculated by the Sabine formula and measured values was relatively poor (R (2) = 0.39). This can be due to the inaccurate estimation of the acoustic influence of furniture and formula preconditions. Therefore, this value cannot be considered representative of an actual acoustic room. However, the prediction performance of the regression method with root mean square error (RMSE) = 0.23 s and R (2) = 0.69 is relatively acceptable. Because the sound power of the embroidery machines was relatively high, these sources get the highest priority when it comes to applying noise controls. Finally, an objective approach for the determination of the share of workroom acoustic characteristics in producing noise could facilitate the identification of cost-effective noise controls.
Surface correlations of hydrodynamic drag for transitionally rough engineering surfaces
NASA Astrophysics Data System (ADS)
Thakkar, Manan; Busse, Angela; Sandham, Neil
2017-02-01
Rough surfaces are usually characterised by a single equivalent sand-grain roughness height scale that typically needs to be determined from laboratory experiments. Recently, this method has been complemented by a direct numerical simulation approach, whereby representative surfaces can be scanned and the roughness effects computed over a range of Reynolds number. This development raises the prospect over the coming years of having enough data for different types of rough surfaces to be able to relate surface characteristics to roughness effects, such as the roughness function that quantifies the downward displacement of the logarithmic law of the wall. In the present contribution, we use simulation data for 17 irregular surfaces at the same friction Reynolds number, for which they are in the transitionally rough regime. All surfaces are scaled to the same physical roughness height. Mean streamwise velocity profiles show a wide range of roughness function values, while the velocity defect profiles show a good collapse. Profile peaks of the turbulent kinetic energy also vary depending on the surface. We then consider which surface properties are important and how new properties can be incorporated into an empirical model, the accuracy of which can then be tested. Optimised models with several roughness parameters are systematically developed for the roughness function and profile peak turbulent kinetic energy. In determining the roughness function, besides the known parameters of solidity (or frontal area ratio) and skewness, it is shown that the streamwise correlation length and the root-mean-square roughness height are also significant. The peak turbulent kinetic energy is determined by the skewness and root-mean-square roughness height, along with the mean forward-facing surface angle and spanwise effective slope. The results suggest feasibility of relating rough-wall flow properties (throughout the range from hydrodynamically smooth to fully rough) to surface parameters.
Nicholas J. Glidden; Martha E. Lee
2007-01-01
Precision is crucial to campsite monitoring programs. Yet, little empirical research has ever been published on the level of precision of this type of monitoring programs. The purpose of this study was to evaluate the level of agreement between observers of campsite impacts using a multi-parameter campsite monitoring program. Thirteen trained observers assessed 16...
Rafal Podlaski; Francis A. Roesch
2013-01-01
Study assessed the usefulness of various methods for choosing the initial values for the numerical procedures for estimating the parameters of mixture distributions and analysed variety of mixture models to approximate empirical diameter at breast height (dbh) distributions. Two-component mixtures of either the Weibull distribution or the gamma distribution were...
ERIC Educational Resources Information Center
Costrell, Robert M.; McGee, Josh B.
2009-01-01
In this paper, we present an analysis of the Arkansas Teacher Retirement System (ATRS) pension plan and an empirical investigation of the behavioral response to that plan, as well as to a possible reform plan. We begin by describing the plan parameters and discussing the incentives these parameters create. We then estimate the effect of pension…
Quantitative Rheological Model Selection
NASA Astrophysics Data System (ADS)
Freund, Jonathan; Ewoldt, Randy
2014-11-01
The more parameters in a rheological the better it will reproduce available data, though this does not mean that it is necessarily a better justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to data against both the number of model parameters and their a priori uncertainty. The penalty depends upon prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their respective parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a Multi-mode Maxwell description of PVA-Borax. We also quantify relative merits of the Maxwell model relative to powerlaw fits and purely empirical fits for PVA-Borax, a viscoelastic liquid, and gluten.
The Mathematics of Psychotherapy: A Nonlinear Model of Change Dynamics.
Schiepek, Gunter; Aas, Benjamin; Viol, Kathrin
2016-07-01
Psychotherapy is a dynamic process produced by a complex system of interacting variables. Even though there are qualitative models of such systems the link between structure and function, between network and network dynamics is still missing. The aim of this study is to realize these links. The proposed model is composed of five state variables (P: problem severity, S: success and therapeutic progress, M: motivation to change, E: emotions, I: insight and new perspectives) interconnected by 16 functions. The shape of each function is modified by four parameters (a: capability to form a trustful working alliance, c: mentalization and emotion regulation, r: behavioral resources and skills, m: self-efficacy and reward expectation). Psychologically, the parameters play the role of competencies or traits, which translate into the concept of control parameters in synergetics. The qualitative model was transferred into five coupled, deterministic, nonlinear difference equations generating the dynamics of each variable as a function of other variables. The mathematical model is able to reproduce important features of psychotherapy processes. Examples of parameter-dependent bifurcation diagrams are given. Beyond the illustrated similarities between simulated and empirical dynamics, the model has to be further developed, systematically tested by simulated experiments, and compared to empirical data.