40 CFR 80.49 - Fuels to be used in augmenting the complex emission model through vehicle testing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... shall be within the blending tolerances defined in this paragraph (a)(4) relative to the values... be within the blending tolerances defined in this paragraph (c) relative to the values specified in... “candidate” level of the parameter shall refer to the most extreme value of the parameter, relative to...
Finding Top-k Unexplained Activities in Video
2012-03-09
parameters that define an UAP instance affect the running time by varying the values of each parameter while keeping the others fixed to a default value. Runtime of Top-k TUA. Table 1 reports the values we considered for each parameter along with the corresponding default value.

Parameter    Values                   Default value
k            1, 2, 5, All             All
τ            0.4, 0.6, 0.8            0.6
L            160, 200, 240, 280       200
# worlds     7E+04, 4E+05, 2E+07      2E+07

TABLE 1: Parameter values used in
Principles of parametric estimation in modeling language competition
Zhang, Menghan; Gong, Tao
2013-01-01
It is generally difficult to define reasonable parameters and interpret their values in mathematical models of social phenomena. Rather than directly fitting abstract parameters against empirical data, we should define some concrete parameters to denote the sociocultural factors relevant for particular phenomena, and compute the values of these parameters based upon the corresponding empirical data. Taking the example of modeling studies of language competition, we propose a language diffusion principle and two language inheritance principles to compute two critical parameters, namely the impacts and inheritance rates of competing languages, in our language competition model derived from the Lotka–Volterra competition model in evolutionary biology. These principles assign explicit sociolinguistic meanings to those parameters and calculate their values from the relevant data of population censuses and language surveys. Using four examples of language competition, we illustrate that our language competition model with thus-estimated parameter values can reliably replicate and predict the dynamics of language competition, and it is especially useful in cases lacking direct competition data. PMID:23716678
Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.
2000-01-01
This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations and is fully compatible with the newer Fortran 95. The code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. 
The program is designed in a modular fashion that is intended to support inclusion of new capabilities.
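The modified Gauss-Newton step at the heart of such a Parameter-Estimation Process is compact enough to sketch. Below is a minimal numpy illustration of the unmodified iteration (MODFLOW-2000 adds damping and other modifications not shown here); `residuals` and `jacobian` are hypothetical user-supplied callables returning observed-minus-simulated values and the sensitivity matrix:

```python
import numpy as np

def gauss_newton(residuals, jacobian, p0, weights, n_iter=20, tol=1e-8):
    """Minimize the weighted least-squares objective S(p) = r(p)^T W r(p)
    with a plain Gauss-Newton iteration."""
    p = np.asarray(p0, dtype=float)
    W = np.diag(weights)
    for _ in range(n_iter):
        r = residuals(p)          # observed minus simulated values
        J = jacobian(p)           # sensitivities d(simulated)/d(parameter)
        # Normal equations: (J^T W J) dp = J^T W r
        dp = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        p += dp
        if np.linalg.norm(dp) < tol * (1 + np.linalg.norm(p)):
            break
    return p
```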
Reliability-Based Design Optimization of a Composite Airframe Component
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Coroneos, Rula; Patnaik, Surya N.
2011-01-01
A stochastic design optimization (SDO) methodology has been developed to design airframe structural components made of metallic and composite materials. The design method accommodates uncertainties in load, strength, and material properties that are defined by distribution functions with mean values and standard deviations. A response parameter, such as a failure mode, thereby becomes a function of reliability. The primitive variables, like thermomechanical loads, material properties, and failure theories, as well as variables like the depth of a beam or the thickness of a membrane, are considered random parameters with specified distribution functions defined by mean values and standard deviations.
NASA Astrophysics Data System (ADS)
Dubolazov, O. V.; Ushenko, V. O.; Trifoniuk, L.; Ushenko, Yu. O.; Zhytaryuk, V. G.; Prydiy, O. G.; Grytsyuk, M.; Kushnerik, L.; Meglinskiy, I.
2017-09-01
A new technique of Mueller-matrix mapping of the polycrystalline structure of histological sections of biological tissues is suggested. Algorithms for reconstructing the distributions of linear and circular birefringence parameters of prostate histological sections are found. The interconnections between such distributions and the linear and circular birefringence parameters of prostate tissue histological sections are defined. Comparative investigations of the coordinate distributions of phase anisotropy parameters formed by fibrillar networks of prostate tissues in different pathological states (adenoma and carcinoma) are performed. The values and ranges of change of the statistical parameters (1st-4th order moments) of the coordinate distributions of linear and circular birefringence are defined. Objective criteria for differentiating benign and malignant conditions are determined.
Min and Max Exponential Extreme Interval Values and Statistics
ERIC Educational Resources Information Center
Jance, Marsha; Thomopoulos, Nick
2009-01-01
The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g_a is defined as a…
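For the simplest of these statistics, the expected values have closed forms: the minimum of n independent Exponential(1) variables is itself Exponential(n), and the expected maximum is the n-th harmonic number. A small sketch (a Monte-Carlo check of those two classical facts, not the paper's interval tables):

```python
import numpy as np

n = 10                      # observation (sample) size
# For X1..Xn iid Exponential(rate = 1):
e_min = 1.0 / n             # min is Exponential(n), so E[min] = 1/n
e_max = sum(1.0 / k for k in range(1, n + 1))   # E[max] = H_n (harmonic number)

# Monte-Carlo check
x = np.random.default_rng(0).exponential(1.0, size=(200_000, n))
print(e_min, x.min(axis=1).mean())   # ~0.100
print(e_max, x.max(axis=1).mean())   # ~2.929
```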
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal estimates of parameter values range from about 0.01 times to about 140 times the reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters; and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.
Anderman, E.R.; Hill, M.C.
2000-01-01
This report documents the Hydrogeologic-Unit Flow (HUF) Package for the groundwater modeling computer program MODFLOW-2000. The HUF Package is an alternative internal flow package that allows the vertical geometry of the system hydrogeology to be defined explicitly within the model using hydrogeologic units that can be different than the definition of the model layers. The HUF Package works with all the processes of MODFLOW-2000. For the Ground-Water Flow Process, the HUF Package calculates effective hydraulic properties for the model layers based on the hydraulic properties of the hydrogeologic units, which are defined by the user using parameters. The hydraulic properties are used to calculate the conductance coefficients and other terms needed to solve the ground-water flow equation. The sensitivity of the model to the parameters defined within the HUF Package input file can be calculated using the Sensitivity Process, using observations defined with the Observation Process. Optimal values of the parameters can be estimated by using the Parameter-Estimation Process. The HUF Package is nearly identical to the Layer-Property Flow (LPF) Package, the major difference being the definition of the vertical geometry of the system hydrogeology. Use of the HUF Package is illustrated in two test cases, which also serve to verify the performance of the package by showing that the Parameter-Estimation Process produces the true parameter values when exact observations are used.
NASA Astrophysics Data System (ADS)
Wells, J. R.; Kim, J. B.
2011-12-01
Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not do a full exploration of the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from published literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, "trial-and-error" methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and the system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from the published literature and, where those were not available, using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration. We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further, we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts in NEP, C storage, ET, and runoff, and thereby to identify a highly important source of DGVM uncertainty.
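A minimal sketch of the simulated-annealing search pattern described above; the `score` callable is a hypothetical stand-in for running BIOMAP and scoring the resulting vegetation map against a reference, and the move and cooling rules are illustrative, not the authors':

```python
import numpy as np

def simulated_annealing(score, bounds, n_steps=5000, t0=1.0, seed=0):
    """Maximize score(p) over a box of parameter bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    p = rng.uniform(lo, hi)
    s = score(p)
    best_p, best_s = p.copy(), s
    for k in range(n_steps):
        t = t0 * (1.0 - k / n_steps)                       # linear cooling schedule
        q = np.clip(p + rng.normal(0.0, 0.05, p.shape) * (hi - lo), lo, hi)
        sq = score(q)
        # Accept improvements always; accept worse moves with Boltzmann probability
        if sq >= s or rng.random() < np.exp((sq - s) / max(t, 1e-12)):
            p, s = q, sq
            if s > best_s:
                best_p, best_s = p.copy(), s
    return best_p, best_s
```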
Azimuthally invariant Mueller-matrix mapping of biological optically anisotropic network
NASA Astrophysics Data System (ADS)
Ushenko, Yu. O.; Vanchuliak, O.; Bodnar, G. B.; Ushenko, V. O.; Grytsyuk, M.; Pavlyukovich, N.; Pavlyukovich, O. V.; Antonyuk, O.
2017-09-01
A new technique of Mueller-matrix mapping of the polycrystalline structure of histological sections of biological tissues is suggested. Algorithms for reconstructing the distributions of linear and circular dichroism parameters of histological sections of liver tissue of mice with different degrees of severity of diabetes are found. The interconnections between such distributions and the linear and circular dichroism parameters of mouse liver tissue histological sections are defined. Comparative investigations of the coordinate distributions of amplitude anisotropy parameters formed by liver tissue with varying severity of diabetes (10 days and 24 days) are performed. The values and ranges of change of the statistical parameters (1st-4th order moments) of the coordinate distributions of linear and circular dichroism are defined. Objective criteria for differentiating the degree of severity of diabetes are determined.
Real Time Correction of Aircraft Flight Configuration
NASA Technical Reports Server (NTRS)
Schipper, John F. (Inventor)
2009-01-01
Method and system for monitoring and analyzing, in real time, the variation with time of an aircraft flight parameter. A time-dependent recovery band, defined by first and second recovery band boundaries that are spaced apart at at least one time point, is constructed for a selected flight parameter and for a selected recovery time interval length Δt(FP;rec). A flight parameter, having a value FP(t = t_p) at a time t = t_p, is likely to be able to recover to a reference flight parameter value FP(t';ref), lying in a band of reference flight parameter values FP(t';ref;CB), within a time interval given by t_p ≤ t' ≤ t_p + Δt(FP;rec), if (or only if) the flight parameter value lies between the first and second recovery band boundary traces.
Wada, Yumiko; Furuse, Tamio; Yamada, Ikuko; Masuya, Hiroshi; Kushida, Tomoko; Shibukawa, Yoko; Nakai, Yuji; Kobayashi, Kimio; Kaneda, Hideki; Gondo, Yoichi; Noda, Tetsuo; Shiroishi, Toshihiko; Wakana, Shigeharu
2010-01-01
To establish the cutoff values for screening ENU-induced behavioral mutations, normal variations in mouse behavioral data were examined in home-cage activity (HA), open-field (OF), and passive-avoidance (PA) tests. We defined the normal range as one that included more than 95% of the normal control values. The cutoffs were defined to identify outliers yielding values that deviated from the normal by less than 5% for C57BL/6J, DBA/2J, DBF1, and N2 (DXDB) progenies. Cutoff values for G1-phenodeviant (DBF1) identification were defined based on values over +/- 3.0 SD from the mean of DBF1 for all parameters assessed in the HA and OF tests. For the PA test, the cutoff values were defined based on whether the mice met the learning criterion during the 2nd (at a shock intensity of 0.3 mA) or the 3rd (at a shock intensity of 0.15 mA) retention test. For several parameters, the lower outliers were undetectable, as the calculated cutoffs were negative values. Based on the cutoff criteria, we identified 275 behavioral phenodeviants among 2,646 G1 progeny. Of these, 64 were crossed with wild-type DBA/2J individuals, and the phenotype transmission was examined in the G2 progeny using the cutoffs defined for N2 mice. In the G2 mice, we identified 15 novel dominant mutants exhibiting behavioral abnormalities, including hyperactivity in the HA or OF tests, hypoactivity in the OF test, and PA deficits. Genetic and detailed behavioral analysis of these ENU-induced mutants will provide novel insights into the molecular mechanisms underlying behavior.
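A sketch of the ±3 SD cutoff rule described above, using hypothetical control data; as the abstract notes, the lower cutoff can come out negative for skewed parameters, making lower outliers undetectable:

```python
import numpy as np

def cutoffs(control_values, k=3.0):
    """Outlier cutoffs at mean +/- k*SD of the control distribution."""
    m = np.mean(control_values)
    s = np.std(control_values, ddof=1)
    return m - k * s, m + k * s

rng = np.random.default_rng(1)
dbf1 = rng.normal(100.0, 15.0, 200)        # hypothetical DBF1 control measurements
lo, hi = cutoffs(dbf1)
is_phenodeviant = (dbf1 < lo) | (dbf1 > hi)
print(lo, hi, is_phenodeviant.sum())
```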
The shape parameter and its modification for defining coastal profiles
NASA Astrophysics Data System (ADS)
Türker, Umut; Kabdaşli, M. Sedat
2009-03-01
The shape parameter is important for the theoretical description of sandy coastal profiles. This parameter has previously been defined as a function of the sediment-settling velocity. However, the settling velocity cannot be characterized over a wide range of sediment grains, which in turn limits the calculation of the shape parameter over a wide range. This paper provides a simpler and faster analytical equation to describe the shape parameter. The validity of the equation has been tested and compared with previously estimated values given in both graphical and tabular forms. The results of this study indicate that the analytical solution of the shape parameter improves the usability of profile definition over graphical solutions, predicting better results both in the surf zone and offshore.
NASA Astrophysics Data System (ADS)
Vasić, M.; Radojević, Z.
2017-08-01
One of the main disadvantages of the recently reported method for setting up the drying regime based on the theory of moisture migration during drying lies in the fact that it is based on a large number of isothermal experiments. In addition, each isothermal experiment requires the use of different drying air parameters. The main goal of this paper was to find a way to reduce the number of isothermal experiments without affecting the quality of the previously proposed calculation method. The first task was to define the lower and upper input levels, as well as the output, of the "black box" to be used in the Box-Wilkinson orthogonal multi-factorial experimental design. Three inputs (drying air temperature, humidity and velocity) were used within the experimental design. The output parameter of the model represents the time interval between any two chosen characteristic points on the Deff-t curve. The second task was to calculate the output parameter for each planned experiment. The final output of the model is an equation which can predict the time interval between any two chosen characteristic points as a function of the drying air parameters. This equation is valid for any value of the drying air parameters within the area bounded by the lower and upper limiting values.
Recommended Parameter Values for GENII Modeling of Radionuclides in Routine Air and Water Releases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, Sandra F.; Arimescu, Carmen; Napier, Bruce A.
The GENII v2 code is used to estimate dose to individuals or populations from the release of radioactive materials into air or water. Numerous parameter values are required as input to this code. User-defined parameters cover the spectrum from chemical data, meteorological data, agricultural data, and behavioral data. This document is a summary of parameter values that reflect conditions in the United States. Reasonable regional and age-dependent data are summarized. Data availability and quality vary. The set of parameters described addresses scenarios for chronic air emissions or chronic releases to public waterways. Considerations for the special tritium and carbon-14 models are briefly addressed. GENII v2.10.0 is the current software version that this document supports.
NASA Technical Reports Server (NTRS)
Subramanyam, Guru; VanKeuls, Fred W.; Miranda, Felix A.; Canedy, Chadwick L.; Aggarwal, Sanjeev; Venkatesan, Thirumalai; Ramesh, Ramamoorthy
2000-01-01
The correlation of electric field with critical design parameters such as the insertion loss, frequency tunability, return loss, and bandwidth of conductor/ferroelectric/dielectric microstrip tunable K-band microwave filters is discussed in this work. This work is based primarily on barium strontium titanate (BSTO) ferroelectric thin film based tunable microstrip filters for room-temperature applications. Two new parameters, which we believe will simplify the evaluation of ferroelectric thin films for tunable microwave filters, are defined. The first of these, called the sensitivity parameter, is defined as the incremental change in center frequency with incremental change in maximum applied electric field (EPEAK) in the filter. The other, the loss parameter, is defined as the incremental or decremental change in insertion loss of the filter with incremental change in maximum applied electric field. At room temperature, the Au/BSTO/LAO microstrip filters exhibited a sensitivity parameter value between 15 and 5 MHz/cm/kV. The loss parameter varied for different bias configurations used for electrically tuning the filter. The loss parameter varied from 0.05 to 0.01 dB/cm/kV at room temperature.
NASA Astrophysics Data System (ADS)
Syafriyono, S.; Caesario, D.; Swastika, A.; Adlan, Q.; Syafri, I.; Abdurrokhim, A.; Mardiana, U.; Mohamad, F.; Alfadli, M. K.; Sari, V. M.
2018-03-01
Rock physical parameter values (Vp and Vs) are one of the fundamental aspects of reservoir characterization, serving as a tool to detect rock heterogeneity. Their response depends on several reservoir conditions such as lithology, pressure and reservoir fluids. The values of Vp and Vs are controlled by grain contact and contact stiffness, are a function of clay mineral content and porosity, and are also affected by mineral composition. The study of the Vp and Vs response within sandstone and its relationship with petrographic parameters has become important to define the anisotropy of the reservoir characteristics distribution, and could give a better understanding of the local diagenesis that influences clastic reservoir properties. Petrographic analysis and Vp-Vs calculation were carried out on 12 core samples obtained by hand-drilling of the outcrop in the Sukabumi area, West Java, as a part of the Bayah Formation. Data processing and interpretation of the sedimentary vertical succession show that this outcrop comprises 3 major sandstone layers indicating a fluvial depositional environment. As stated before, there are 4 petrographic parameters (sorting, roundness, clay mineral content, and grain contact) which are responsible for the differences in shear wave and compressional wave values in this outcrop. Lithology with poor sorting and well-rounded grains has a lower Vp value than well-sorted lithology with poorly rounded (sub-angular) grains. For samples with high clay content, the Vp value ranges from 1681 to 2000 m/s, and it can reach 2190 to 2714 m/s in low clay content samples, even though the presence of clay minerals cannot be defined as either matrix or cement. All samples have sutured grain contacts indicating the telogenesis regime, whereas facies shows no relationship with the Vp and Vs values because different facies types show similar petrographic parameters after diagenesis.
Parameter estimation applied to Nimbus 6 wide-angle longwave radiation measurements
NASA Technical Reports Server (NTRS)
Green, R. N.; Smith, G. L.
1978-01-01
A parameter estimation technique was used to analyze the August 1975 Nimbus 6 Earth radiation budget data to demonstrate the concept of deconvolution. The longwave radiation field at the top of the atmosphere is defined from satellite data by a fifth degree and fifth order spherical harmonic representation. The variations of the major features of the radiation field are defined by analyzing the data separately for each two-day duty cycle. A table of coefficient values for each spherical harmonic representation is given along with global mean, gradients, degree variances, and contour plots. In addition, the entire data set is analyzed to define the monthly average radiation field.
2010-01-01
Long-standing clinical and immunologic monitoring and integral evaluation of immune homeostasis (through a generalized parameter) in personnel of a center for liquidation of liquid-fuel rockets demonstrated diagnostically reliable immunity parameters that make it possible to forecast changes in the workers' health state. The authors defined boundary values of the generalized parameter used to form risk groups for the formation of specific disease entities.
T2 values of articular cartilage in clinically relevant subregions of the asymptomatic knee.
Surowiec, Rachel K; Lucas, Erin P; Fitzcharles, Eric K; Petre, Benjamin M; Dornan, Grant J; Giphart, J Erik; LaPrade, Robert F; Ho, Charles P
2014-06-01
In order for T2 mapping to become more clinically applicable, reproducible subregions and standardized T2 parameters must be defined. This study sought to: (1) define clinically relevant subregions of knee cartilage using bone landmarks identifiable on both MR images and during arthroscopy and (2) determine healthy T2 values and T2 texture parameters within these subregions. Twenty-five asymptomatic volunteers (age 18-35) were evaluated with a sagittal T2 mapping sequence. Manual segmentation was performed by three raters, and cartilage was divided into twenty-one subregions modified from the International Cartilage Repair Society Articular Cartilage Mapping System. Mean T2 values and texture parameters (entropy, variance, contrast, homogeneity) were recorded for each subregion, and inter-rater and intra-rater reliability was assessed. The central regions of the condyles had significantly higher T2 values than the posterior regions (P < 0.05) and higher variance than the posterior region on the medial side (P < 0.001). The central trochlea had significantly greater T2 values than the anterior and posterior condyles. The central lateral plateau had lower T2 values, lower variance, higher homogeneity, and lower contrast than nearly all subregions in the tibia. The central patellar regions had higher entropy than the superior and inferior regions (each P ≤ 0.001). Repeatability was good to excellent for all subregions. Significant differences in mean T2 values and texture parameters were found between subregions in this carefully selected asymptomatic population, which suggest that there is normal variation of T2 values within the knee joint. The clinically relevant subregions were found to be robust as demonstrated by the overall high repeatability.
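The texture parameters named above are commonly computed from a gray-level co-occurrence matrix; a simplified numpy sketch of that computation (the quantization level, offset, and feature definitions here are illustrative assumptions, not the study's exact protocol):

```python
import numpy as np

def t2_texture(t2_map, levels=32):
    """Gray-level co-occurrence features (horizontal neighbors) of a T2 map."""
    # Quantize the T2 values into discrete gray levels
    q = np.floor((t2_map - t2_map.min()) / (np.ptp(t2_map) + 1e-12)
                 * (levels - 1)).astype(int)
    i, j = q[:, :-1].ravel(), q[:, 1:].ravel()   # horizontally adjacent pairs
    P = np.zeros((levels, levels))
    np.add.at(P, (i, j), 1.0)
    P /= P.sum()                                  # joint probability matrix
    ii, jj = np.indices(P.shape)
    mu = (ii * P).sum()
    return {
        "entropy":     -(P[P > 0] * np.log2(P[P > 0])).sum(),
        "variance":    ((ii - mu) ** 2 * P).sum(),
        "contrast":    ((ii - jj) ** 2 * P).sum(),
        "homogeneity": (P / (1.0 + np.abs(ii - jj))).sum(),
    }
```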
The brilliant blue FCF ion-molecular forms in solutions according to the spectrophotometry data
NASA Astrophysics Data System (ADS)
Chebotarev, A. N.; Bevziuk, K. V.; Snigur, D. V.; Bazel, Ya. R.
2017-10-01
The acid-base properties of brilliant blue FCF in aqueous solutions have been studied and its ionization constants have been determined by tristimulus colorimetry and spectrophotometry methods. A scheme of the acid-base equilibrium of the dye has been proposed and a diagram of the distribution of its ionic-molecular forms has been built. It has been established that the dominant form of the dye is the electroneutral form, whose molar absorptivity (ε625 = 0.97 × 10^5) increases with the increase of the dielectric permittivity of the solvent. It has been shown that the replacement of polar solvents by less polar ones causes a bathochromic shift of the maximum of the absorption band of the dye, the value of which correlates with the value of the Hansen parameter. Tautomerization constants have been determined in a number of solvents and associated with the value of the Dimroth-Reichardt parameter.
Black hole complementarity in gravity's rainbow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gim, Yongwan; Kim, Wontae, E-mail: yongwan89@sogang.ac.kr, E-mail: wtkim@sogang.ac.kr
2015-05-01
To see how gravity's rainbow works for black hole complementarity, we evaluate the energy required for duplication of information in the context of black hole complementarity by calculating the critical value of the rainbow parameter in a certain class of rainbow Schwarzschild black holes. The resultant energy can be written as a well-defined limit for the vanishing rainbow parameter, which characterizes the deformation of the relativistic dispersion relation in the freely falling frame. It shows that the duplication of information in quantum mechanics could not be allowed below a certain critical value of the rainbow parameter; however, it might be possible above the critical value, so that a consistent formulation in our model requires additional constraints or other resolutions for the latter case.
DD3MAT - a code for yield criteria anisotropy parameters identification.
NASA Astrophysics Data System (ADS)
Barros, P. D.; Carvalho, P. D.; Alves, J. L.; Oliveira, M. C.; Menezes, L. F.
2016-08-01
This work presents the main strategies and algorithms adopted in the DD3MAT in-house code, specifically developed for identifying anisotropy parameters. The algorithm adopted is based on the minimization of an error function, using a downhill simplex method. The set of experimental values can include yield stresses and r-values obtained from in-plane tension for different angles with the rolling direction (RD), the yield stress and r-value obtained for the biaxial stress state, and yield stresses from shear tests, also performed for different angles to the RD. All these values can be defined for a specific value of plastic work. Moreover, it can also include the yield stresses obtained from in-plane compression tests. The anisotropy parameters are identified for an AA2090-T3 aluminium alloy, highlighting the importance of user intervention to improve the numerical fit.
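A sketch of the identification loop in miniature, using scipy's Nelder-Mead (a downhill simplex implementation) in place of the in-house solver; `predict`, `measured`, and `weights` are hypothetical stand-ins for the yield-criterion evaluation and the experimental data set described above:

```python
import numpy as np
from scipy.optimize import minimize

def error(p, predict, measured, weights):
    """Weighted squared mismatch between measured yield stresses / r-values
    and those predicted by the yield criterion for parameter set p."""
    return np.sum(weights * (predict(p) - measured) ** 2)

def identify(predict, measured, weights, p0):
    # Downhill simplex minimization of the error function
    res = minimize(error, p0, args=(predict, measured, weights),
                   method="Nelder-Mead",
                   options={"xatol": 1e-6, "fatol": 1e-9})
    return res.x
```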
Methods for Combining Payload Parameter Variations with Input Environment
NASA Technical Reports Server (NTRS)
Merchant, D. H.; Straayer, J. W.
1975-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the methods are also presented.
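A worked miniature of the largest-load-in-a-mission idea: if a single load peak has CDF F(x) and occurs N times independently per mission, the mission maximum has CDF F(x)^N, and a design limit load is a chosen percentile of that distribution. All numbers below are hypothetical:

```python
from scipy.stats import norm

mu, sigma, N = 100.0, 10.0, 1000   # per-peak load statistics, peaks per mission
p_design = 0.99                    # design probability for the limit load

# Invert F(x)**N = p_design  =>  x = F^{-1}(p_design**(1/N))
design_limit_load = norm.ppf(p_design ** (1.0 / N), loc=mu, scale=sigma)
print(design_limit_load)           # ~142.6, i.e. about mu + 4.3*sigma here
```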
Moisture parameters and fungal communities associated with gypsum drywall in buildings.
Dedesko, Sandra; Siegel, Jeffrey A
2015-12-08
Uncontrolled excess moisture in buildings is a common problem that can lead to changes in fungal communities. In buildings, moisture parameters can be classified by location and include assessments of moisture in the air, at a surface, or within a material. These parameters are not equivalent in dynamic indoor environments, which makes moisture-induced fungal growth in buildings a complex occurrence. In order to determine the circumstances that lead to such growth, it is essential to have a thorough understanding of in situ moisture measurement, the influence of building factors on moisture parameters, and the levels of these moisture parameters that lead to indoor fungal growth. Currently, there are disagreements in the literature on this topic. A literature review was conducted specifically on moisture-induced fungal growth on gypsum drywall. This review revealed that there is no consistent measurement approach used to characterize moisture in laboratory and field studies, with relative humidity measurements being most common. Additionally, many studies identify a critical moisture value, below which fungal growth will not occur. The values defined by relative humidity encompassed the largest range, while those defined by moisture content exhibited the highest variation. Critical values defined by equilibrium relative humidity were most consistent, and this is likely due to equilibrium relative humidity being the most relevant moisture parameter to microbial growth, since it is a reasonable measure of moisture available at surfaces, where fungi often proliferate. Several sources concur that surface moisture, particularly liquid water, is the prominent factor influencing microbial changes and that moisture in the air and within a material are of lesser importance. However, even if surface moisture is assessed, a single critical moisture level to prevent fungal growth cannot be defined, due to a number of factors, including variations in fungal genera and/or species, temperature, and nutrient availability. Despite these complexities, meaningful measurements can still be made to inform fungal growth by making localised, long-term, and continuous measurements of surface moisture. Such an approach will capture variations in a material's surface moisture, which could provide insight on a number of conditions that could lead to fungal proliferation.
Case studies on optimization problems in MATLAB and COMSOL multiphysics by means of the livelink
NASA Astrophysics Data System (ADS)
Ozana, Stepan; Pies, Martin; Docekal, Tomas
2016-06-01
LiveLink for COMSOL is a tool that integrates COMSOL Multiphysics with MATLAB to extend one's modeling with scripting programming in the MATLAB environment. It allows the user to utilize the full power of MATLAB and its toolboxes in preprocessing, model manipulation, and post-processing. First, the head script launches COMSOL with MATLAB, defines the initial values of all parameters, refers to the objective function J defined in the objective-function script, and creates and runs the defined optimization task. Once the task is launched, the COMSOL model is called in the iteration loop (from the MATLAB environment through the API interface), changing the defined optimization parameters so that the objective function is minimized, using the fmincon function to find a local or global minimum of a constrained linear or nonlinear multivariable function. Once the minimum is found, it returns an exit flag, terminates the optimization, and returns the optimized values of the parameters. The cooperation with MATLAB via LiveLink enhances a powerful computational environment with complex multiphysics simulations. The paper introduces the use of LiveLink for COMSOL in chosen case studies in the field of technical cybernetics and bioengineering.
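The head-script pattern (define parameters, wrap the model call in an objective, hand it to a constrained minimizer) translates directly; a Python sketch with a cheap analytic surrogate standing in for the COMSOL call, and scipy's L-BFGS-B playing the role of fmincon:

```python
import numpy as np
from scipy.optimize import minimize

def run_simulation(params):
    """Hypothetical stand-in for the COMSOL model call made through the
    LiveLink API; a cheap analytic surrogate so the sketch runs."""
    x, y = params
    return (x - 1.2) ** 2 + 3.0 * (y + 0.4) ** 2

def objective(params):
    # Each evaluation re-runs the model with the updated parameters,
    # as the head script does inside the iteration loop.
    return run_simulation(params)

res = minimize(objective, x0=np.array([0.0, 0.0]),
               bounds=[(-2.0, 2.0), (-2.0, 2.0)],   # constrained search box
               method="L-BFGS-B")
print(res.x, res.fun, res.status)                   # optimum, value, exit flag
```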
MMA, A Computer Code for Multi-Model Analysis
Poeter, Eileen P.; Hill, Mary C.
2007-01-01
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and the system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will be well served by the default methods provided. To use the default methods, the only required input for MMA is a list of directories where the files for the alternate models are located. Evaluation and development of model-analysis methods are active areas of research. To facilitate exploration and innovation, MMA allows the user broad discretion to define alternatives to the default procedures. For example, MMA allows the user to (a) rank models based on model criteria defined using a wide range of provided and user-defined statistics in addition to the default AIC, AICc, BIC, and KIC criteria, (b) create their own criteria using model measures available from the code, and (c) define how each model criterion is used to calculate related posterior model probabilities. The default model criteria rate models based on model fit to observations, the number of observations and estimated parameters, and, for KIC, the Fisher information matrix. In addition, MMA allows the analysis to include an evaluation of estimated parameter values. This is accomplished by allowing the user to define unreasonable estimated parameter values or relative estimated parameter values. An example of the latter is that it may be expected that one parameter value will be less than another, as might be the case if two parameters represented the hydraulic conductivity of distinct materials such as fine and coarse sand. Models with parameter values that violate the user-defined conditions are excluded from further consideration by MMA.
Ground-water models are used as examples in this report, but MMA can be used to evaluate any set of models for which the required files have been produced. MMA needs to read files from a separate directory for each alternative model considered. The needed files are produced when using the Sensitivity-Analysis or Parameter-Estimation mode of UCODE_2005, or, possibly, the equivalent capability of another program. MMA is constructed using
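Whatever criterion is chosen, the posterior model probabilities reported by such methods follow from the criterion differences across models; a minimal sketch of the standard Akaike-weight formula (the same form applies to AICc, BIC, or KIC values), with hypothetical inputs:

```python
import numpy as np

def akaike_weights(aic):
    """Posterior model probabilities from information-criterion values:
    w_i = exp(-0.5 * dAIC_i) / sum_j exp(-0.5 * dAIC_j)."""
    d = np.asarray(aic, dtype=float) - np.min(aic)
    w = np.exp(-0.5 * d)
    return w / w.sum()

print(akaike_weights([212.4, 214.1, 219.8]))   # hypothetical AICs for 3 models
```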
Computational tools for fitting the Hill equation to dose-response curves.
Gadagkar, Sudhindra R; Call, Gerald B
2015-01-01
Many biological response curves commonly assume a sigmoidal shape that can be approximated well by means of the 4-parameter nonlinear logistic equation, also called the Hill equation. However, estimation of the Hill equation parameters requires access to commercial software or the ability to write computer code. Here we present two user-friendly and freely available computer programs to fit the Hill equation - a Solver-based Microsoft Excel template and a stand-alone GUI-based "point and click" program, called HEPB. Both computer programs use the iterative method to estimate two of the Hill equation parameters (EC50 and the Hill slope), while constraining the values of the other two parameters (the minimum and maximum asymptotes of the response variable) to fit the Hill equation to the data. In addition, HEPB draws the prediction band at a user-defined confidence level, and determines the EC50 value for each of the limits of this band to give boundary values that help objectively delineate sensitive, normal and resistant responses to the drug being tested. Both programs were tested by analyzing twelve datasets that varied widely in data values, sample size and slope, and were found to yield estimates of the Hill equation parameters that were essentially identical to those provided by commercial software such as GraphPad Prism and nls, the statistical package in the programming language R. The Excel template provides a means to estimate the parameters of the Hill equation and plot the regression line in a familiar Microsoft Office environment. HEPB, in addition to providing the above results, also computes the prediction band for the data at a user-defined level of confidence, and determines objective cut-off values to distinguish among response types (sensitive, normal and resistant). Both programs are found to yield estimated values that are essentially the same as those from standard software such as GraphPad Prism and the R-based nls. Furthermore, HEPB also has the option to simulate 500 response values based on the range of values of the dose variable in the original data and the fit of the Hill equation to that data.
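The 4-parameter Hill equation itself, and a fit of all four parameters with scipy (HEPB, by contrast, constrains the two asymptotes and iterates only on EC50 and the slope); the dose-response data here are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(x, bottom, top, ec50, slope):
    # 4-parameter logistic (Hill) equation
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** slope)

dose = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])    # hypothetical data
resp = np.array([2.1, 4.8, 14.9, 38.2, 71.5, 91.0, 97.8])

p0 = [resp.min(), resp.max(), 0.5, 1.0]                     # starting guesses
(bottom, top, ec50, slope), cov = curve_fit(hill, dose, resp, p0=p0)
print(ec50, slope)
```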
An Extreme-Value Approach to Anomaly Vulnerability Identification
NASA Technical Reports Server (NTRS)
Everett, Chris; Maggio, Gaspare; Groen, Frank
2010-01-01
The objective of this paper is to present a method for importance analysis in parametric probabilistic modeling where the result of interest is the identification of potential engineering vulnerabilities associated with postulated anomalies in system behavior. In the context of Accident Precursor Analysis (APA), under which this method has been developed, these vulnerabilities, designated as anomaly vulnerabilities, are conditions that produce high risk in the presence of anomalous system behavior. The method defines a parameter-specific Parameter Vulnerability Importance measure (PVI), which identifies anomaly risk-model parameter values that indicate the potential presence of anomaly vulnerabilities, and allows them to be prioritized for further investigation. This entails analyzing each uncertain risk-model parameter over its credible range of values to determine where it produces the maximum risk. A parameter that produces high system risk for a particular range of values suggests that the system is vulnerable to the modeled anomalous conditions, if indeed the true parameter value lies in that range. Thus, PVI analysis provides a means of identifying and prioritizing anomaly-related engineering issues that at the very least warrant improved understanding to reduce uncertainty, such that true vulnerabilities may be identified and proper corrective actions taken.
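A sketch of the parameter sweep the PVI measure implies: vary each uncertain parameter across its credible range with the others held at nominal values, and record where the risk peaks. `risk_model` and all values are hypothetical:

```python
import numpy as np

def pvi_scan(risk_model, nominal, credible_ranges, n_grid=50):
    """For each parameter, sweep its credible range (others at nominal
    values) and record the maximum risk and where it occurs; high maxima
    flag candidate anomaly vulnerabilities."""
    out = {}
    for name, (lo, hi) in credible_ranges.items():
        grid = np.linspace(lo, hi, n_grid)
        risks = [risk_model(dict(nominal, **{name: v})) for v in grid]
        k = int(np.argmax(risks))
        out[name] = (grid[k], risks[k])    # (worst-case value, risk there)
    return out
```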
Renard Penna, Raphaele; Cancel-Tassin, Geraldine; Comperat, Eva; Mozer, Pierre; Léon, Priscilla; Varinot, Justine; Roupret, Morgan; Bitker, Marc-Olivier; Lucidarme, Olivier; Cussenot, Olivier
2016-10-01
To evaluate the use of multiparametric MRI (mp MRI) parameters to predict prostate cancer aggressiveness, as defined by pathological Gleason score or molecular markers, in a cohort of patients with a Gleason score of 6 at biopsy. Sixty-seven men treated by radical prostatectomy (RP) for low-grade (Gleason 6) cancer on biopsy, with mp MRI performed before biopsy, were selected. The cell cycle progression (CCP) score assessed by the Prolaris test and Ki-67/PTEN expression assessed by immunohistochemistry were quantified on the RP specimens. 49.25% of the cancers were undergraded on biopsy compared to the RP specimens. Apparent diffusion coefficient (ADC) < 0.80 × 10^-3 mm^2/s (P value 0.003), Likert score >4 (P value 0.003) and PSA density >0.15 ng/ml/cc (P value 0.035) were significantly associated with a higher RP Gleason score. Regarding molecular markers of aggressiveness, ADC < 0.80 × 10^-3 mm^2/s and Likert score >4 were also significantly associated with positive staining for Ki-67 (P value 0.039 and 0.01, respectively). No association was found between any analyzed MRI or clinical parameter and the CCP score. A decreasing ADC value is a stronger indicator of aggressive prostate cancer, as defined by molecular markers or postsurgical histology, than biopsy characteristics.
Code of Federal Regulations, 2012 CFR
2012-07-01
... selected for initial performance testing and defined within a group of similar emission units in accordance... similar air pollution control device applied to each similar emission unit within a defined group using... emission units within group “k”; Pi = Daily average parametric monitoring parameter value corresponding to...
Code of Federal Regulations, 2014 CFR
2014-07-01
... selected for initial performance testing and defined within a group of similar emission units in accordance... similar air pollution control device applied to each similar emission unit within a defined group using... emission units within group “k”; Pi = Daily average parametric monitoring parameter value corresponding to...
Classification of materials using nuclear magnetic resonance dispersion and/or x-ray absorption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Espy, Michelle A.; Matlashov, Andrei N.; Schultz, Larry J.
Methods for determining the identity of a substance are provided. A classification parameter set is defined to allow identification of substances that previously could not be identified, or to allow identification of substances with a higher degree of confidence. The classification parameter set may include at least one of relative nuclear susceptibility (RNS) or an x-ray linear attenuation coefficient (LAC). RNS represents the density of hydrogen nuclei present in a substance relative to the density of hydrogen nuclei present in water. The extended classification parameter set may include T1, T2, and/or T1ρ as well as at least one additional classification parameter comprising one of RNS or LAC. Values obtained for the additional classification parameters, as well as values obtained for T1, T2, and T1ρ, can be compared to known classification parameter values to determine whether a particular substance is a known material.
Zhou, Qian-Jun; Zheng, Zhi-Chun; Zhu, Yong-Qiao; Lu, Pei-Ji; Huang, Jia; Ye, Jian-Ding; Zhang, Jie; Lu, Shun; Luo, Qing-Quan
2017-05-01
To investigate the potential value of CT parameters to differentiate ground-glass nodules between noninvasive adenocarcinoma and invasive pulmonary adenocarcinoma (IPA) as defined by the IASLC/ATS/ERS classification. We retrospectively reviewed 211 patients with pathologically proven stage 0-IA lung adenocarcinoma appearing as subsolid nodules, from January 2012 to January 2013, including 137 pure ground-glass nodules (pGGNs) and 74 part-solid nodules (PSNs). Pathological data were classified under the 2011 IASLC/ATS/ERS classification. Both quantitative and qualitative CT parameters were used to determine tumor invasiveness between noninvasive adenocarcinomas and IPAs. There were 154 noninvasive adenocarcinomas and 57 IPAs. In pGGNs, CT size and area, one-dimensional mean CT value and bubble lucency were significantly different between noninvasive adenocarcinomas and IPAs on univariate analysis. Multivariate regression and ROC analysis revealed that CT size and one-dimensional mean CT value were predictive of noninvasive adenocarcinomas compared to IPAs. The optimal cutoff values were 13.60 mm (sensitivity, 75.0%; specificity, 99.6%) and -583.60 HU (sensitivity, 68.8%; specificity, 66.9%). In PSNs, there were significant differences in CT size and area, solid component area, solid proportion, one-dimensional mean and maximum CT value, and three-dimensional (3D) mean CT value between noninvasive adenocarcinomas and IPAs on univariate analysis. Multivariate and ROC analysis showed that CT size and 3D mean CT value were significant differentiators. The optimal cutoff values were 19.64 mm (sensitivity, 53.7%; specificity, 93.9%) and -571.63 HU (sensitivity, 85.4%; specificity, 75.8%). For pGGNs, CT size and one-dimensional mean CT value are determinants of tumor invasiveness. For PSNs, tumor invasiveness can be predicted by CT size and 3D mean CT value.
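Optimal cutoffs like those above are typically read off the ROC curve; a generic sketch using the Youden index (sensitivity + specificity - 1), with hypothetical data rather than the study's patients:

```python
import numpy as np

def youden_cutoff(values, is_ipa):
    """Pick the threshold maximizing sensitivity + specificity - 1."""
    values = np.asarray(values, dtype=float)
    is_ipa = np.asarray(is_ipa, dtype=bool)
    best_t, best_j = None, -np.inf
    for t in np.unique(values):
        pred = values >= t                     # call "invasive" at/above threshold
        sens = (pred & is_ipa).sum() / is_ipa.sum()
        spec = (~pred & ~is_ipa).sum() / (~is_ipa).sum()
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

size_mm = [8.2, 11.5, 12.9, 14.1, 16.0, 19.7, 21.3, 9.9]   # hypothetical CT sizes
invasive = [0, 0, 0, 1, 1, 1, 1, 0]
print(youden_cutoff(size_mm, invasive))
```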
Impact of orbit modeling on DORIS station position and Earth rotation estimates
NASA Astrophysics Data System (ADS)
Štěpánek, Petr; Rodriguez-Solano, Carlos Javier; Hugentobler, Urs; Filler, Vratislav
2014-04-01
The high precision of estimated station coordinates and Earth rotation parameters (ERP) obtained from satellite geodetic techniques is based on the precise determination of the satellite orbit. This paper focuses on the analysis of the impact of different orbit parameterizations on the accuracy of station coordinates and ERPs derived from DORIS observations. In a series of experiments, DORIS data from the complete year 2011 were processed with different orbit model settings. First, the impact of precise modeling of the non-conservative forces on geodetic parameters was compared with results obtained with an empirical-stochastic modeling approach. Second, the temporal spacing of drag scaling parameters was tested. Third, the impact of estimating once-per-revolution harmonic accelerations in the cross-track direction was analyzed. And fourth, two different approaches to solar radiation pressure (SRP) handling were compared, namely adjusting the SRP scaling parameter or fixing it to pre-defined values. Our analyses confirm that the empirical-stochastic orbit modeling approach, which does not require satellite attitude information and macro models, yields accuracy comparable to the dynamical model that employs precise non-conservative force modeling for most of the monitored station parameters. However, the dynamical orbit model leads to a reduction of the RMS values for the estimated rotation pole coordinates by 17% for the x-pole and 12% for the y-pole. The experiments show that adjusting atmospheric drag scaling parameters every 30 min is appropriate for DORIS solutions. Moreover, it was shown that the adjustment of a cross-track once-per-revolution empirical parameter increases the RMS of the estimated Earth rotation pole coordinates. With recent data, however, it was not possible to confirm the previously known high annual variation in the estimated geocenter z-translation series, or its mitigation by fixing the SRP parameters to pre-defined values.
Two statistics for evaluating parameter identifiability and error reduction
Doherty, John; Hunt, Randall J.
2009-01-01
Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero; and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics.
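A numpy sketch of the identifiability statistic as defined above: SVD of the weighted sensitivity matrix, then the direction cosine between each parameter axis and its projection onto the solution space spanned by the first k right singular vectors:

```python
import numpy as np

def identifiability(J, w, k):
    """Parameter identifiability from the weighted sensitivity matrix.
    J[m, p] = d(simulated value m)/d(parameter p), w = observation weights,
    k = dimension of the calibration solution space."""
    Jw = np.sqrt(w)[:, None] * J
    _, _, Vt = np.linalg.svd(Jw, full_matrices=False)
    V = Vt.T                              # columns are right singular vectors
    # Direction cosine between each unit parameter vector and its projection
    # onto the space spanned by the first k right singular vectors:
    return np.sqrt((V[:, :k] ** 2).sum(axis=1))   # values in [0, 1]
```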
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.
2004-01-01
Modern engineering design practices are tending more toward the treatment of design parameters as random variables as opposed to fixed, or deterministic, values. The probabilistic design approach attempts to account for the uncertainty in design parameters by representing them as a distribution of values rather than as a single value. The motivations for this effort include preventing excessive overdesign as well as assessing and assuring reliability, both of which are important for aerospace applications. However, the determination of the probability distribution is a fundamental problem in reliability analysis. A random variable is often defined by the parameters of the theoretical distribution function that gives the best fit to experimental data. In many cases the distribution must be assumed from very limited information or data. Often the types of information that are available or reasonably estimated are the minimum, maximum, and most likely values of the design parameter. For these situations the beta distribution model is very convenient because the parameters that define the distribution can be easily determined from these three pieces of information. Widely used in the field of operations research, the beta model is very flexible and is also useful for estimating the mean and standard deviation of a random variable given only the aforementioned three values. However, an assumption is required to determine the four parameters of the beta distribution from only these three pieces of information (some of the more common distributions, like the normal, lognormal, gamma, and Weibull distributions, have two or three parameters). The conventional method assumes that the standard deviation is a certain fraction of the range. The beta parameters are then determined by solving a set of equations simultaneously. A new method developed in-house at the NASA Glenn Research Center assumes a value for one of the beta shape parameters based on an analogy with the normal distribution (ref.1). This new approach allows for a very simple and direct algebraic solution without restricting the standard deviation. The beta parameters obtained by the new method are comparable to the conventional method (and identical when the distribution is symmetrical). However, the proposed method generally produces a less peaked distribution with a slightly larger standard deviation (up to 7 percent) than the conventional method in cases where the distribution is asymmetric or skewed. The beta distribution model has now been implemented into the Fast Probability Integration (FPI) module used in the NESSUS computer code for probabilistic analyses of structures (ref. 2).
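For comparison with the two approaches described above, the widely used PERT assumption is a third way to pin a beta distribution to (minimum, most likely, maximum); this sketch is that variant, not the NASA Glenn method:

```python
from scipy import stats

def pert_beta(lo, mode, hi, lam=4.0):
    """Beta distribution from (min, most likely, max) under the PERT
    assumption, which fixes the combined shape weight at lam + 2."""
    alpha = 1.0 + lam * (mode - lo) / (hi - lo)
    beta = 1.0 + lam * (hi - mode) / (hi - lo)
    return stats.beta(alpha, beta, loc=lo, scale=hi - lo)

d = pert_beta(8.0, 10.0, 15.0)     # hypothetical design parameter
print(d.mean(), d.std())           # mean = (lo + 4*mode + hi)/6 = 10.5 here
```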
An Examination of Two Procedures for Identifying Consequential Item Parameter Drift
ERIC Educational Resources Information Center
Wells, Craig S.; Hambleton, Ronald K.; Kirkpatrick, Robert; Meng, Yu
2014-01-01
The purpose of the present study was to develop and evaluate two procedures for flagging consequential item parameter drift (IPD) in an operational testing program. The first procedure was based on flagging items that exhibit a meaningful magnitude of IPD using a critical value that was defined to represent barely tolerable IPD. The second procedure…
NASA Technical Reports Server (NTRS)
Dudkin, V. E.; Kovalev, E. E.; Nefedov, N. A.; Antonchik, V. A.; Bogdanov, S. D.; Kosmach, V. F.; Likhachev, A. YU.; Benton, E. V.; Crawford, H. J.
1995-01-01
A method is proposed for finding the dependence of mean multiplicities of secondaries on the nucleus-collision impact parameter from the data on the total interaction ensemble. The impact parameter has been shown to completely define the mean characteristics of an individual interaction event. A difference has been found between experimental results and the data calculated in terms of the cascade-evaporation model at impact-parameter values below 3 fm.
Transformation of Galilean satellite parameters to J2000
NASA Astrophysics Data System (ADS)
Lieske, J. H.
1998-09-01
The so-called galsat software has the capability of computing Earth-equatorial coordinates of Jupiter's Galilean satellites in an arbitrary reference frame, not just that of B1950. The 50 parameters which define the theory of motion of the Galilean satellites (Lieske 1977, Astron. Astrophys. 56, 333-352) could also be transformed in a manner such that the same galsat computer program can be employed to compute rectangular coordinates with their values being in the J2000 system. One of the input parameters (ε_27) is related to the obliquity of the ecliptic and its value is normally zero in the B1950 frame. If that parameter is changed from 0 to -0.0002771, and if other input parameters are changed in a prescribed manner, then the same galsat software can be employed to produce ephemerides on the J2000 system for any of the ephemerides which employ the galsat parameters, such as those of Arlot (1982), Vasundhara (1994) and Lieske. In this paper we present the parameters whose values must be altered in order for the software to produce coordinates directly in the J2000 system.
NASA Astrophysics Data System (ADS)
Nikolaev, A. V.; Alymenko, N. I.; Kamenskikh, A. A.; Alymenko, D. N.; Nikolaev, V. A.; Petrov, A. I.
2017-10-01
The article reports measurements of air parameters and volume flow in the shafts and on the surface of mine BKPRU-2 (Berezniki potash plant and mine 2, «Uralkali» PJSC), collected in normal operation mode and for several hours after shutdown of the main mine fan (GVU). The tests establish that thermal pressure between the mine shafts acts continuously, regardless of the operation mode of the GVU or of other draught sources. It was also found that the depth of the mine shafts has no impact on the thermal pressure value: for the same difference in shaft collar elevations and the same outside-air parameters, shafts of different depths develop thermal pressure of the same value. The value of the general mine natural draught, defined as the algebraic sum of the thermal pressure values between the shafts, depends only on the differences in temperature and pressure between the outside air and the air at the shaft bottoms, provided the air handling system (unit heaters, air conditioning) is shut down.
Monitoring and analysis of data in cyberspace
NASA Technical Reports Server (NTRS)
Schwuttke, Ursula M. (Inventor); Angelino, Robert (Inventor)
2001-01-01
Information from monitored systems is displayed in three dimensional cyberspace representations defining a virtual universe having three dimensions. Fixed and dynamic data parameter outputs from the monitored systems are visually represented as graphic objects that are positioned in the virtual universe based on relationships to the system and to the data parameter categories. Attributes and values of the data parameters are indicated by manipulating properties of the graphic object such as position, color, shape, and motion.
Monte Carlo Solution to Find Input Parameters in Systems Design Problems
NASA Astrophysics Data System (ADS)
Arsham, Hossein
2013-06-01
Most engineering system designs, such as product, process, and service design, involve a framework for arriving at a target value for a set of experiments. This paper considers a stochastic approximation algorithm for estimating the controllable input parameter within a desired accuracy, given a target value for the performance function. Two different problems, what-if and goal-seeking problems, are explained and defined in an auxiliary simulation model, which represents a local response surface model in terms of a polynomial. A method of constructing this polynomial by a single run simulation is explained. An algorithm is given to select the design parameter for the local response surface model. Finally, the mean time to failure (MTTF) of a reliability subsystem is computed and compared with its known analytical MTTF value for validation purposes.
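The goal-seeking idea can be sketched with a Robbins-Monro style recursion: drive the expected simulation output toward the target by stepping against the observed error with decreasing gains. The response model below is a toy stand-in, not the paper's response-surface construction.

```python
import random

def simulate(theta):
    # Stand-in stochastic simulation: hypothetical response 2*theta + noise.
    return 2.0 * theta + random.gauss(0.0, 0.5)

def goal_seek(target, theta0=0.0, steps=2000):
    """Robbins-Monro recursion: adjust theta so E[simulate(theta)] -> target."""
    theta = theta0
    for n in range(1, steps + 1):
        theta += (1.0 / n) * (target - simulate(theta))  # decreasing gains
    return theta

print(goal_seek(10.0))   # approaches 5.0 for the toy response above
```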
Impulse Current Waveform Compliance with IEC 60060-1
NASA Astrophysics Data System (ADS)
Sato, Shuji; Harada, Tatsuya; Yokoyama, Taizou; Sakaguchi, Sumiko; Ebana, Takao; Saito, Tatsunori
After numerous simulations, the authors were unable to design an impulse current calibrator whose output time parameters (front time T1 and time to half value T2) come close to the ideal values defined in IEC 60060-1. An investigation into this failed attempt was then carried out. Using the normalized damped oscillating waveform e^(−t)·sin(ωt), the relationship between the ratio T2/T1 and the undershoot value was studied over all possible values of ω. From this relationship it is derived that (1) an ideal waveform cannot be generated unless a certain margin is accepted for the two parameters, and (2) even within the allowable margin, a waveform can be generated only when the value of T1 is smaller, and that of T2 larger, than the standard values. The paper illustrates the time-parameter combinations that fulfil the IEC 60060-1 requirements for a calibrator design.
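A numeric sketch of that parameter study follows, assuming the impulse-current conventions T1 = 1.25·(t90 − t10) for front time and T2 as the time from origin to the half-value point on the tail (both stated from memory; verify against IEC 60060-1 before relying on them).

```python
import numpy as np

def time_params(omega, t_end=20.0, n=200001):
    """Front/tail times and undershoot for i(t) = exp(-t) * sin(omega * t)."""
    t = np.linspace(0.0, t_end, n)
    i = np.exp(-t) * np.sin(omega * t)
    ipk, kpk = i.max(), i.argmax()
    t10 = t[np.argmax(i >= 0.1 * ipk)]          # first 10% crossing on the rise
    t90 = t[np.argmax(i >= 0.9 * ipk)]          # first 90% crossing on the rise
    T1 = 1.25 * (t90 - t10)                     # assumed front-time definition
    T2 = t[kpk + np.argmax(i[kpk:] <= 0.5 * ipk)]   # half value on the tail
    undershoot = -i.min() / ipk                 # depth of the negative swing
    return T2 / T1, undershoot

for omega in (0.5, 1.0, 2.0, 5.0):              # scan the free parameter
    print(omega, time_params(omega))
```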
Markov Chain Monte Carlo Used in Parameter Inference of Magnetic Resonance Spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hock, Kiel; Earle, Keith
2016-02-06
In this paper, we use Boltzmann statistics and the maximum likelihood distribution derived from Bayes’ Theorem to infer parameter values for a Pake Doublet Spectrum, a lineshape of historical significance and contemporary relevance for determining distances between interacting magnetic dipoles. A Metropolis Hastings Markov Chain Monte Carlo algorithm is implemented and designed to find the optimum parameter set and to estimate parameter uncertainties. In conclusion, the posterior distribution allows us to define a metric on parameter space that induces a geometry with negative curvature that affects the parameter uncertainty estimates, particularly for spectra with low signal to noise.
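A minimal sketch of the sampling loop the abstract describes, with the Pake doublet replaced by a hypothetical Gaussian lineshape so the example is self-contained; means and standard deviations over the post-burn-in chain give the parameter estimates and uncertainties.

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(-5.0, 5.0, 200)
data = np.exp(-0.5 * ((x - 1.0) / 0.8) ** 2) + rng.normal(0.0, 0.05, x.size)

def log_like(theta):
    c, w = theta                          # center and width of the toy peak
    if w <= 0.0:
        return -np.inf
    model = np.exp(-0.5 * ((x - c) / w) ** 2)
    return -0.5 * np.sum((data - model) ** 2) / 0.05 ** 2

theta, lp, chain = np.array([0.0, 1.0]), -np.inf, []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.05, 2)        # symmetric random walk
    lp_prop = log_like(prop)
    if np.log(rng.random()) < lp_prop - lp:        # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

posterior = np.array(chain[5000:])                 # discard burn-in
print(posterior.mean(axis=0), posterior.std(axis=0))
```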
Control of complex dynamics and chaos in distributed parameter systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakravarti, S.; Marek, M.; Ray, W.H.
This paper discusses a methodology for controlling complex dynamics and chaos in distributed parameter systems. The reaction-diffusion system with Brusselator kinetics, where the torus-doubling or quasi-periodic (two characteristic incommensurate frequencies) route to chaos exists in a defined range of parameter values, is used as an example. Poincare maps are used for characterization of quasi-periodic and chaotic attractors. The dominant modes or topos, which are inherent properties of the system, are identified by means of the Singular Value Decomposition. Tested modal feedback control schemes based on identified dominant spatial modes confirm the possibility of stabilization of simple quasi-periodic trajectories in the complex quasi-periodic or chaotic spatiotemporal patterns.
Essa, Khalid S
2014-01-01
A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produce gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted parameters are in good agreement with the known actual values.
Essa, Khalid S.
2013-01-01
A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produce gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted parameters are in good agreement with the known actual values. PMID:25685472
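Because the inversion reduces to a scalar root-finding problem, any bracketing solver will do; here is a bisection sketch with a stand-in residual (the true f(q) is assembled from the normalized anomaly at the origin and at profile points, which is not reproduced here).

```python
def bisect(f, lo, hi, tol=1e-8):
    """Bracketing bisection for the non-linear equation f(q) = 0."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:      # root lies in the lower half
            hi = mid
        else:                        # root lies in the upper half
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

f = lambda q: q ** 1.5 - 1.3         # hypothetical residual, root ~ 1.191
print(bisect(f, 0.5, 3.0))
```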
Criteria for the use of regression analysis for remote sensing of sediment and pollutants
NASA Technical Reports Server (NTRS)
Whitlock, C. H.; Kuo, C. Y.; Lecroy, S. R.
1982-01-01
An examination of limitations, requirements, and precision of the linear multiple-regression technique for quantification of marine environmental parameters is conducted. Both environmental and optical physics conditions have been defined for which an exact solution to the signal response equations is of the same form as the multiple regression equation. Various statistical parameters are examined to define a criterion for selection of an unbiased fit when upwelled radiance values contain error and are correlated with each other. Field experimental data are examined to define data smoothing requirements in order to satisfy the criteria of Daniel and Wood (1971). Recommendations are made concerning improved selection of ground-truth locations to maximize variance and to minimize physical errors associated with the remote sensing experiment.
NASA Astrophysics Data System (ADS)
Norton, P. A., II; Haj, A. E., Jr.
2014-12-01
The United States Geological Survey is currently developing a National Hydrologic Model (NHM) to support and facilitate coordinated and consistent hydrologic modeling efforts at the scale of the continental United States. As part of this effort, the Geospatial Fabric (GF) for the NHM was created. The GF is a database that contains parameters derived from datasets that characterize the physical features of watersheds. The GF was used to aggregate catchments and flowlines defined in the National Hydrography Dataset Plus dataset for more than 100,000 hydrologic response units (HRUs), and to establish initial parameter values for input to the Precipitation-Runoff Modeling System (PRMS). Many parameter values are adjusted in PRMS using an automated calibration process. Using these adjusted parameter values, the PRMS model estimated variables such as evapotranspiration (ET), potential evapotranspiration (PET), snow-covered area (SCA), and snow water equivalent (SWE). In order to evaluate the effectiveness of parameter calibration, and model performance in general, several satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) and Snow Data Assimilation System (SNODAS) gridded datasets including ET, PET, SCA, and SWE were compared to PRMS-simulated values. The MODIS and SNODAS data were spatially averaged for each HRU, and compared to PRMS-simulated ET, PET, SCA, and SWE values for each HRU in the Upper Missouri River watershed. Default initial GF parameter values and PRMS calibration ranges were evaluated. Evaluation results, and the use of MODIS and SNODAS datasets to update GF parameter values and PRMS calibration ranges, are presented and discussed.
Kaiser, W; Faber, T S; Findeis, M
1996-01-01
The authors developed a computer program that detects myocardial infarction (MI) and left ventricular hypertrophy (LVH) in two steps: (1) by extracting parameter values from a 10-second, 12-lead electrocardiogram, and (2) by classifying the extracted parameter values with rule sets. Every disease has its dedicated set of rules. Hence, there are separate rule sets for anterior MI, inferior MI, and LVH. If at least one rule is satisfied, the disease is said to be detected. The computer program automatically develops these rule sets. A database (learning set) of healthy subjects and patients with MI, LVH, and mixed MI+LVH was used. After defining the rule type, initial limits, and expected quality of the rules (positive predictive value, minimum number of patients), the program creates a set of rules by varying the limits. The general rule type is defined as: disease = (lim1_l < p1 ≤ lim1_u) and (lim2_l < p2 ≤ lim2_u) and … and (limn_l < pn ≤ limn_u). When defining the rule types, only the parameters (p1 … pn) that are known as clinical electrocardiographic criteria (amplitudes [mV] of Q, R, and T waves and ST-segment; duration [ms] of Q wave; frontal angle [degrees]) were used. This allowed for submitting the learned rule sets to an independent investigator for medical verification. It also allowed the creation of explanatory texts with the rules. These advantages are not offered by the neurons of a neural network. The learned rules were checked against a test set and the following results were obtained: MI: sensitivity 76.2%, positive predictive value 98.6%; LVH: sensitivity 72.3%, positive predictive value 90.9%. The specificity ratings for MI are better than 98%; for LVH, better than 90%.
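The rule form above is easy to evaluate mechanically; a sketch with invented parameter names and limits (not the learned ECG rules) shows the two-step detection logic: a disease is flagged when at least one of its interval rules is fully satisfied.

```python
# Hypothetical rule set: each rule maps a parameter name to (lim_l, lim_u]
rules_mi = [
    {"q_dur_ms": (30.0, 200.0), "r_amp_mv": (0.0, 0.5)},
    {"st_amp_mv": (0.1, 2.0), "t_amp_mv": (-2.0, 0.0)},
]

def rule_fires(rule, params):
    # A rule is satisfied when every parameter lies in its half-open interval
    return all(lo < params[name] <= hi for name, (lo, hi) in rule.items())

def detect(rule_set, params):
    # Disease detected if at least one rule of its dedicated set fires
    return any(rule_fires(rule, params) for rule in rule_set)

ecg = {"q_dur_ms": 45.0, "r_amp_mv": 0.3, "st_amp_mv": 0.05, "t_amp_mv": 0.4}
print(detect(rules_mi, ecg))   # True: the first rule fires
```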
Generic NICA-Donnan model parameters for metal-ion binding by humic substances.
Milne, Christopher J; Kinniburgh, David G; van Riemsdijk, Willem H; Tipping, Edward
2003-03-01
A total of 171 datasets of literature and experimental data for metal-ion binding by fulvic and humic acids have been digitized and re-analyzed using the NICA-Donnan model. Generic parameter values have been derived that can be used for modeling in the absence of specific metal-ion binding measurements. These values complement the previously derived generic descriptions of proton binding. For ions where the ranges of pH, concentration, and ionic strength conditions are well covered by the available data, the generic parameters successfully describe the metal-ion binding behavior across a very wide range of conditions and for different humic and fulvic acids. Where published data for other metal ions are too sparse to constrain the model well, generic parameters have been estimated by interpolating trends observable in the parameter values of the well-defined data. Recommended generic NICA-Donnan model parameters are provided for 23 metal ions (Al, Am, Ba, Ca, Cd, Cm, Co, CrIII, Cu, Dy, Eu, FeII, FeIII, Hg, Mg, Mn, Ni, Pb, Sr, ThIV, UVIO2, VIIIO, and Zn) for both fulvic and humic acids. These parameters probably represent the best NICA-Donnan description of metal-ion binding that can be achieved using existing data.
NASA Astrophysics Data System (ADS)
Chan, C. H.; Brown, G.; Rikvold, P. A.
2017-05-01
A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
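For orientation, here is a compact sketch of a plain, unconstrained Wang-Landau walk estimating the density of states g(E) for a small 1D Ising ring; the method in the abstract would instead run many such one-dimensional walks, each constrained to fixed values of the macroscopic order parameters, and assemble the multivariable g from them.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 12
spins = rng.choice([-1, 1], N)
E = -int(np.sum(spins * np.roll(spins, 1)))        # J = 1, periodic chain
levels = list(range(-N, N + 1, 4))                 # allowed energies on a ring
idx = {e: i for i, e in enumerate(levels)}
logg = np.zeros(len(levels))                       # ln g(E), up to a constant
hist = np.zeros(len(levels))
f = 1.0                                            # ln of modification factor
while f > 1e-6:
    for _ in range(10000):
        k = rng.integers(N)
        dE = 2 * int(spins[k]) * int(spins[k - 1] + spins[(k + 1) % N])
        Enew = E + dE
        if np.log(rng.random()) < logg[idx[E]] - logg[idx[Enew]]:
            spins[k] *= -1                         # accept the spin flip
            E = Enew
        logg[idx[E]] += f                          # update ln g and histogram
        hist[idx[E]] += 1
    visited = hist[hist > 0]
    if visited.min() > 0.8 * visited.mean():       # crude flatness criterion
        f *= 0.5
        hist[:] = 0.0
print({e: round(float(g - logg.min()), 2) for e, g in zip(levels, logg)})
```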
Davidson, P; Bigerelle, M; Bounichane, B; Giazzon, M; Anselme, K
2010-07-01
Contact guidance is generally evaluated by measuring the orientation angle of cells. However, statistical analyses are rarely performed on these parameters. Here we propose a statistical analysis based on a new parameter σ, the orientation parameter, defined as the dispersion of the distribution of orientation angles. This parameter can be used to obtain a truncated Gaussian distribution that models the distribution of the data between −90° and +90°. We established a threshold value of the orientation parameter below which the data can be considered to be aligned within a 95% confidence interval. Applying our orientation parameter to cells on grooves and using a modelling approach, we established the relationship σ = α_meas + (52° − α_meas)/(1 + C_GDE·R), where the parameter C_GDE represents the sensitivity of cells to groove depth, and R the groove depth. The values of C_GDE obtained allowed us to compare the contact guidance of human osteoprogenitor (HOP) cells across experiments involving different groove depths, times in culture and inoculation densities. We demonstrate that HOP cells are able to identify and respond to the presence of grooves 30, 100, 200 and 500 nm deep and that the deeper the grooves, the higher the cell orientation. The evolution of the sensitivity (C_GDE) with culture time is roughly sigmoidal with an asymptote, which is a function of inoculation density. The σ parameter defined here is a universal parameter that can be applied to all orientation measurements and does not require a mathematical background or knowledge of directional statistics. Copyright 2010 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
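A small sketch of how the fitted relationship is used; note that 52° is, to within rounding, the dispersion of a uniform (fully random) angle distribution on a 180° interval (180/√12 ≈ 51.96°). The angle data and parameter values below are invented.

```python
import numpy as np

def sigma_model(R, c_gde, alpha_meas):
    """Fitted guidance law: sigma = alpha_meas + (52 - alpha_meas)/(1 + C_GDE*R)."""
    return alpha_meas + (52.0 - alpha_meas) / (1.0 + c_gde * R)

rng = np.random.default_rng(2)
angles = rng.normal(0.0, 18.0, 300)            # mock cell orientation angles
angles = (angles + 90.0) % 180.0 - 90.0        # fold into [-90, +90)
sigma_measured = angles.std()                  # dispersion of the distribution
print(sigma_measured, sigma_model(R=200.0, c_gde=0.02, alpha_meas=8.0))
```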
Identification of atypical flight patterns
NASA Technical Reports Server (NTRS)
Statler, Irving C. (Inventor); Ferryman, Thomas A. (Inventor); Amidan, Brett G. (Inventor); Whitney, Paul D. (Inventor); White, Amanda M. (Inventor); Willse, Alan R. (Inventor); Cooley, Scott K. (Inventor); Jay, Joseph Griffith (Inventor); Lawrence, Robert E. (Inventor); Mosbrucker, Chris (Inventor)
2005-01-01
Method and system for analyzing aircraft data, including multiple selected flight parameters for a selected phase of a selected flight, and for determining when the selected phase of the selected flight is atypical, when compared with corresponding data for the same phase for other similar flights. A flight signature is computed using continuous-valued and discrete-valued flight parameters for the selected flight parameters and is optionally compared with a statistical distribution of other observed flight signatures, yielding atypicality scores for the same phase for other similar flights. A cluster analysis is optionally applied to the flight signatures to define an optimal collection of clusters. A level of atypicality for a selected flight is estimated, based upon an index associated with the cluster analysis.
Moon, Joon Ho; Kim, Kyoung Min; Kim, Jung Hee; Moon, Jae Hoon; Choi, Sung Hee; Lim, Soo; Lim, Jae-Young; Kim, Ki Woong; Park, Kyong Soo; Jang, Hak Chul
2016-01-01
We evaluated the Foundation for the National Institutes of Health (FNIH) Sarcopenia Project's recommended criteria for sarcopenia's association with mortality among older Korean adults. We conducted a community-based prospective cohort study which included 560 (285 men and 275 women) older Korean adults aged ≥65 years. Muscle mass (appendicular skeletal muscle mass-to-body mass index ratio (ASM/BMI)), handgrip strength, and walking velocity were evaluated in association with all-cause mortality during 6-year follow-up. Both the lowest quintile for each parameter (ethnic-specific cutoff) and FNIH-recommended values were used as cutoffs. Forty men (14.0%) and 21 women (7.6%) died during 6-year follow-up. The deceased subjects were older and had lower ASM, handgrip strength, and walking velocity. Sarcopenia defined by both low lean mass and weakness had a 4.13 (95% CI, 1.69-10.11) times higher risk of death, and sarcopenia defined by a combination of low lean mass, weakness, and slowness had a 9.56 (3.16-28.90) times higher risk of death after adjusting for covariates in men. However, these significant associations were not observed in women. In terms of cutoffs for each parameter, using the lowest quintile showed better predictive value for mortality than using the FNIH-recommended values. Moreover, the new muscle mass index, ASM/BMI, provided better prognostic value than ASM/height² in all associations. The new sarcopenia definition by FNIH was better able to predict 6-year mortality among Korean men. Moreover, ethnic-specific cutoffs (the lowest quintile for each parameter) predicted the risk of mortality better than the FNIH-recommended values.
Kim, Jung Hee; Moon, Jae Hoon; Choi, Sung Hee; Lim, Soo; Lim, Jae-Young; Kim, Ki Woong; Park, Kyong Soo; Jang, Hak Chul
2016-01-01
Objective We evaluated the Foundation for the National Institutes of Health (FNIH) Sarcopenia Project's recommended criteria for sarcopenia's association with mortality among older Korean adults. Methods We conducted a community-based prospective cohort study which included 560 (285 men and 275 women) older Korean adults aged ≥65 years. Muscle mass (appendicular skeletal muscle mass-to-body mass index ratio (ASM/BMI)), handgrip strength, and walking velocity were evaluated in association with all-cause mortality during 6-year follow-up. Both the lowest quintile for each parameter (ethnic-specific cutoff) and FNIH-recommended values were used as cutoffs. Results Forty men (14.0%) and 21 women (7.6%) died during 6-year follow-up. The deceased subjects were older and had lower ASM, handgrip strength, and walking velocity. Sarcopenia defined by both low lean mass and weakness had a 4.13 (95% CI, 1.69–10.11) times higher risk of death, and sarcopenia defined by a combination of low lean mass, weakness, and slowness had a 9.56 (3.16–28.90) times higher risk of death after adjusting for covariates in men. However, these significant associations were not observed in women. In terms of cutoffs for each parameter, using the lowest quintile showed better predictive value for mortality than using the FNIH-recommended values. Moreover, the new muscle mass index, ASM/BMI, provided better prognostic value than ASM/height² in all associations. Conclusions The new sarcopenia definition by FNIH was better able to predict 6-year mortality among Korean men. Moreover, ethnic-specific cutoffs (the lowest quintile for each parameter) predicted the risk of mortality better than the FNIH-recommended values. PMID:27832145
Bi, Qiu; Xiao, Zhibo; Lv, Fajin; Liu, Yao; Zou, Chunxia; Shen, Yiqing
2018-02-05
The objective of this study was to find clinical parameters and qualitative and quantitative magnetic resonance imaging (MRI) features for differentiating uterine sarcoma from atypical leiomyoma (ALM) preoperatively and to calculate predictive values for uterine sarcoma. Data from 60 patients with uterine sarcoma and 88 patients with ALM confirmed by surgery and pathology were collected. Clinical parameters, qualitative MRI features, diffusion-weighted imaging with apparent diffusion coefficient values, and quantitative parameters of dynamic contrast-enhanced MRI of these two tumor types were compared. Predictive values for uterine sarcoma were calculated using multivariable logistic regression. Patient clinical manifestations, tumor locations, margins, T2-weighted imaging signals, mean apparent diffusion coefficient values, minimum apparent diffusion coefficient values, and time-signal intensity curves of solid tumor components were obvious significant parameters for distinguishing between uterine sarcoma and ALM (all P < .001). Abnormal vaginal bleeding, tumors located mainly in the uterine cavity, ill-defined tumor margins, and mean apparent diffusion coefficient values of <1.272 × 10⁻³ mm²/s were significant preoperative predictors of uterine sarcoma. When the overall scores of these four predictors were greater than or equal to 7 points, the sensitivity, the specificity, the accuracy, and the positive and negative predictive values were 88.9%, 99.9%, 95.7%, 97.0%, and 95.1%, respectively. The use of clinical parameters and multiparametric MRI as predictive factors was beneficial for diagnosing uterine sarcoma preoperatively. These findings could be helpful for guiding treatment decisions. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Path loss variation of on-body UWB channel in the frequency bands of IEEE 802.15.6 standard.
Goswami, Dayananda; Sarma, Kanak C; Mahanta, Anil
2016-06-01
The wireless body area network (WBAN) has been gaining tremendous attention among researchers and academicians for its envisioned applications in healthcare services. Ultra wideband (UWB) radio technology is considered an excellent air interface for communication among body area network devices. Characterisation and modelling of channel parameters are an essential prerequisite for the development of a reliable communication system. The path loss of the on-body UWB channel is experimentally determined for each frequency band defined in the IEEE 802.15.6 standard. The parameters of the path loss model are statistically determined by analysing measurement data. Both line-of-sight and non-line-of-sight channel conditions are considered in the measurement. Variations of parameter values with the size of the human body are analysed, along with the variation of parameter values with the surrounding environment. It is observed that the parameters of the path loss model vary with the frequency band as well as with body size and surrounding environment. The derived parameter values are specific to the particular frequency bands of the IEEE 802.15.6 standard and will be useful for the development of efficient UWB WBAN systems.
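The path-loss parameterization such studies typically report is the log-distance model PL(d) = PL(d0) + 10·n·log10(d/d0) + S, with S a zero-mean shadowing term; a least-squares fit of its parameters can be sketched as below (the measurement arrays are invented placeholders, not data from the study).

```python
import numpy as np

d0 = 0.1                                        # reference distance, m
d = np.array([0.15, 0.25, 0.40, 0.60, 0.80])    # Tx-Rx separations, m
pl = np.array([55.2, 60.1, 64.8, 68.3, 70.9])   # measured path loss, dB

X = 10.0 * np.log10(d / d0)
A = np.column_stack([np.ones_like(X), X])       # intercept and slope columns
(pl0, n), *_ = np.linalg.lstsq(A, pl, rcond=None)
sigma = np.std(pl - A @ np.array([pl0, n]))     # shadowing spread, dB
print(f"PL(d0) = {pl0:.1f} dB, exponent n = {n:.2f}, sigma = {sigma:.2f} dB")
```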
Selection of entropy-measure parameters for knowledge discovery in heart rate variability data
2014-01-01
Background Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. Methods This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. Results The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Conclusions Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary. PMID:25078574
Selection of entropy-measure parameters for knowledge discovery in heart rate variability data.
Mayer, Christopher C; Bachler, Martin; Hörtenhuber, Matthias; Stocker, Christof; Holzinger, Andreas; Wassertheurer, Siegfried
2014-01-01
Heart rate variability is the variation of the time interval between consecutive heartbeats. Entropy is a commonly used tool to describe the regularity of data sets. Entropy functions are defined using multiple parameters, the selection of which is controversial and depends on the intended purpose. This study describes the results of tests conducted to support parameter selection, towards the goal of enabling further biomarker discovery. This study deals with approximate, sample, fuzzy, and fuzzy measure entropies. All data were obtained from PhysioNet, a free-access, on-line archive of physiological signals, and represent various medical conditions. Five tests were defined and conducted to examine the influence of: varying the threshold value r (as multiples of the sample standard deviation σ, or the entropy-maximizing rChon), the data length N, the weighting factors n for fuzzy and fuzzy measure entropies, and the thresholds rF and rL for fuzzy measure entropy. The results were tested for normality using Lilliefors' composite goodness-of-fit test. Consequently, the p-value was calculated with either a two sample t-test or a Wilcoxon rank sum test. The first test shows a cross-over of entropy values with regard to a change of r. Thus, a clear statement that a higher entropy corresponds to a high irregularity is not possible, but is rather an indicator of differences in regularity. N should be at least 200 data points for r = 0.2 σ and should even exceed a length of 1000 for r = rChon. The results for the weighting parameters n for the fuzzy membership function show different behavior when coupled with different r values, therefore the weighting parameters have been chosen independently for the different threshold values. The tests concerning rF and rL showed that there is no optimal choice, but r = rF = rL is reasonable with r = rChon or r = 0.2σ. Some of the tests showed a dependency of the test significance on the data at hand. Nevertheless, as the medical conditions are unknown beforehand, compromises had to be made. Optimal parameter combinations are suggested for the methods considered. Yet, due to the high number of potential parameter combinations, further investigations of entropy for heart rate variability data will be necessary.
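For reference, one of the entropies under test can be implemented in a few lines; this SampEn sketch uses the common r = 0.2σ choice discussed above, with a mock RR-interval series in place of PhysioNet data.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r, N) with r = r_factor * std(x) and Chebyshev distance;
    counts template matches of lengths m and m+1, excluding self-matches."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def matches(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        count = 0
        for i in range(len(templ) - 1):
            dist = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(3)
rr = 0.8 + 0.05 * rng.standard_normal(1000)    # mock RR intervals, seconds
print(sample_entropy(rr))
```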
Direct computation of stochastic flow in reservoirs with uncertain parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dainton, M.P.; Nichols, N.K.; Goldwater, M.H.
1997-01-15
A direct method is presented for determining the uncertainty in reservoir pressure, flow, and net present value (NPV) using the time-dependent, one phase, two- or three-dimensional equations of flow through a porous medium. The uncertainty in the solution is modelled as a probability distribution function and is computed from given statistical data for input parameters such as permeability. The method generates an expansion for the mean of the pressure about a deterministic solution to the system equations using a perturbation to the mean of the input parameters. Hierarchical equations that define approximations to the mean solution at each point and to the field covariance of the pressure are developed and solved numerically. The procedure is then used to find the statistics of the flow and the risked value of the field, defined by the NPV, for a given development scenario. This method involves only one (albeit complicated) solution of the equations and contrasts with the more usual Monte-Carlo approach where many such solutions are required. The procedure is applied easily to other physical systems modelled by linear or nonlinear partial differential equations with uncertain data. 14 refs., 14 figs., 3 tabs.
NASA Technical Reports Server (NTRS)
Mather, R. S.; Lerch, F. J.; Rizos, C.; Masters, E. G.; Hirsch, B.
1978-01-01
The 1977 altimetry data bank is analyzed for the geometrical shape of the sea surface expressed as surface spherical harmonics after referral to the higher reference model defined by GEM 9. The resulting determination is expressed as quasi-stationary dynamic SST. Solutions are obtained from different sets of long arcs in the GEOS-3 altimeter data bank as well as from sub-sets related to the September 1975 and March 1976 equinoxes assembled with a view to minimizing seasonal effects. The results are compared with equivalent parameters obtained from the hydrostatic analysis of sporadic temperature, pressure and salinity measurements of the oceans and the known major steady state current systems with comparable wavelengths. The most clearly defined parameter (the zonal harmonic of degree 2) is obtained with an uncertainty of ±6 cm. The preferred numerical value is smaller than the oceanographic value due to the effect of the correction for the permanent earth tide. Similar precision is achieved for the zonal harmonic of degree 3. The precision obtained for the fourth degree zonal harmonic reflects more closely the accuracy expected from the level of noise in the orbital solutions.
Conceptual Model Development for Sea Turtle Nesting Habitat: Support for USACE Navigation Projects
2015-08-01
regional values. • Beach Width: The width of the beach (m) defines the region from the shoreline to the dune toe. Loggerhead turtles tend to prefer... primary drivers of the model parameters. • Beach Elevation: Beach elevation (m) is measured from the shoreline to the dune toe. Elevation influences... mapping, and morphological features in combination with imagery-derived environmental parameters (i.e., dune vegetation) have not been attempted
Thermodynamically consistent model calibration in chemical kinetics
2011-01-01
Background The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new models. Furthermore, TCMC can provide dimensionality reduction, better estimation performance, and lower computational complexity, and can help to alleviate the problem of data overfitting. PMID:21548948
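The core of the constraint set can be shown in miniature: around any closed cycle of reversible reactions, detailed balance requires the product of forward rate constants to equal the product of reverse ones (a Wegscheider condition), because free-energy changes around a loop sum to zero. The rate values below are hypothetical.

```python
import numpy as np

k_fwd = np.array([2.0, 5.0, 0.4])    # forward rate constants around a cycle
k_rev = np.array([1.0, 2.0, 2.0])    # reverse rate constants around the cycle

# Log-imbalance of the cycle; zero means thermodynamically consistent
imbalance = np.log(k_fwd).sum() - np.log(k_rev).sum()
print("cycle consistent:", np.isclose(imbalance, 0.0))

# A calibration in the spirit of TCMC could add imbalance**2 (one term per
# independent cycle) as a penalty in its least-squares objective, turning the
# fit into a constrained optimization; only the constraint itself is shown.
```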
Utility of coupling nonlinear optimization methods with numerical modeling software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, M.J.
1996-08-05
Results of using GLO (Global Local Optimizer), a general purpose nonlinear optimization software package for investigating multi-parameter problems in science and engineering, are discussed. The package consists of the modular optimization control system (GLO), a graphical user interface (GLO-GUI), a pre-processor (GLO-PUT), a post-processor (GLO-GET), and nonlinear optimization software modules, GLOBAL & LOCAL. GLO is designed for controlling and easy coupling to any scientific software application. GLO runs the optimization module and scientific software application in an iterative loop. At each iteration, the optimization module defines new values for the set of parameters being optimized. GLO-PUT inserts the new parameter values into the input file of the scientific application. GLO runs the application with the new parameter values. GLO-GET determines the value of the objective function by extracting the results of the analysis and comparing to the desired result. GLO continues to run the scientific application over and over until it finds the "best" set of parameters by minimizing (or maximizing) the objective function. An example problem showing the optimization of a material model is presented (Taylor cylinder impact test).
Adaptive identifier for uncertain complex nonlinear systems based on continuous neural networks.
Alfaro-Ponce, Mariel; Cruz, Amadeo Argüelles; Chairez, Isaac
2014-03-01
This paper presents the design of a complex-valued differential neural network identifier for uncertain nonlinear systems defined in the complex domain. This design includes the construction of an adaptive algorithm to adjust the parameters included in the identifier. The algorithm is obtained based on a special class of controlled Lyapunov functions. The quality of the identification process is characterized using the practical stability framework. Indeed, the region where the identification error converges is derived by the same Lyapunov method. This zone is defined by the power of uncertainties and perturbations affecting the complex-valued uncertain dynamics. Moreover, this convergence zone is reduced to its lowest possible value using ideas related to the so-called ellipsoid methodology. Two simple but informative numerical examples are developed to show how the identifier proposed in this paper can be used to approximate uncertain nonlinear systems valued in the complex domain.
NASA Astrophysics Data System (ADS)
Shimizu, Akira; Inoue, Jun-Ichi
1999-10-01
We study the nonequilibrium time evolution of the Bose-Einstein condensate of interacting bosons confined in a leaky box, when its number fluctuation is initially (t=0) suppressed. We take account of quantum fluctuations of all modes, including k=0, of the bosons. As the wave function of the ground state that has a definite number N of interacting bosons, we use a variational form |N,y⟩, which is obtained by operating a unitary operator e^{iG(y)} on the number state of free bosons. Using e^{iG(y)}, we identify a "natural coordinate" b of the interacting bosons, by which many physical properties can be simply described. The |N,y⟩ can be represented simply as a number state of b; we thus call it the "number state of interacting bosons" (NSIB). To simulate real systems, for which if one fixes N at t=0 N will fluctuate at later times because of a finite probability of exchanging bosons between the box and the environment, we evaluate the time evolution of the reduced density operator ρ̂(t) of the bosons in the box as a function of the leakage flux J. We concentrate on the most interesting and nontrivial time stage, i.e., the early time stage for which Jt<
NASA Technical Reports Server (NTRS)
Berman, A. L.
1977-01-01
Observations of Viking differenced S-band/X-band (S-X) range are shown to correlate strongly with Viking Doppler noise. A ratio of proportionality between downlink S-band plasma-induced range error and two-way Doppler noise is calculated. A new parameter (similar to the parameter epsilon which defines the ratio of local electron density fluctuations to mean electron density) is defined as a function of observed data sample interval (Tau) where the time-scale of the observations is 15 Tau. This parameter is interpreted to yield the ratio of net observed phase (or electron density) fluctuations to integrated electron density (in RMS meters/meter). Using this parameter and the thin phase-changing screen approximation, a value for the scale size L is calculated. To be consistent with Doppler noise observations, it is seen necessary for L to be proportional to closest approach distance a, and a strong function of the observed data sample interval, and hence the time-scale of the observations.
Sanz-Peláez, O; Angel-Moreno, A; Tapia-Martín, M; Conde-Martel, A; Carranza-Rodríguez, C; Carballo-Rastrilla, S; Soria-López, A; Pérez-Arellano, J L
2008-09-01
The progressive increase in the number of immigrants to Spain in recent years has made it necessary for health-care professionals to be aware of the specific characteristics of this population. An attempt is made in this study to define the normal range of common laboratory values in healthy sub-Saharan adults. Common laboratory values (blood cell counts, clotting tests and blood biochemistry values) were measured in 150 sub-Saharan immigrants previously defined as healthy according to a complete health evaluation that included a clinical history, physical examination, serologic tests and study of stool parasites. These results were compared to those from a control group consisting of 81 age- and sex-matched healthy blood donors taken from the Spanish native population. Statistically significant differences were obtained in the following values: mean corpuscular volume (MCV), red cell distribution width (RDW), total leukocytes, and serum levels of creatinine, uric acid, total protein content, creatine kinase (CK), aspartate aminotransferase (AST), gamma-glutamyl-transpeptidase (GGT), immunoglobulin G (IgG) and M (IgM). If evaluated according to the normal values in native people, a considerable percentage of healthy sub-Saharan immigrants would present
NASA Technical Reports Server (NTRS)
Suit, William T.; Schiess, James R.
1988-01-01
The Discovery vehicle was found to have longitudinal and lateral aerodynamic characteristics similar to those of the Columbia and Challenger vehicles. The values of the lateral and longitudinal parameters are compared with the preflight data book. The lateral parameters showed the same trends as the data book. With the exception of C_lβ for Mach numbers greater than 15, and C_nδr for Mach numbers greater than 2 and for Mach numbers less than 1.5, where the variation boundaries were not well defined, ninety percent of the extracted values of the lateral parameters fell within the predicted variations. The longitudinal parameters showed more scatter, but scattered about the preflight predictions. With the exception of the Mach 1.5 to 0.5 region of the flight envelope, the preflight predictions seem a reasonable representation of the Shuttle aerodynamics. The models determined accounted for ninety percent of the actual flight time histories.
Deconstructing thermodynamic parameters of a coupled system from site-specific observables.
Chowdhury, Sandipan; Chanda, Baron
2010-11-02
Cooperative interactions mediate information transfer between structural domains of a protein molecule and are major determinants of protein function and modulation. The prevalent theories to understand the thermodynamic origins of cooperativity have been developed to reproduce the complex behavior of a global thermodynamic observable such as ligand binding or enzyme activity. However, in most cases the measurement of a single global observable cannot uniquely define all the terms that fully describe the energetics of the system. Here we establish a theoretical groundwork for analyzing protein thermodynamics using site-specific information. Our treatment involves extracting a site-specific parameter (defined as the χ value) associated with a structural unit. We demonstrate that, under limiting conditions, the χ value is related to the direct interaction terms associated with the structural unit under observation and its intrinsic activation energy. We also introduce a site-specific interaction energy term (χ_diff) that is a function of the direct interaction energy of that site with every other site in the system. When combined with site-directed mutagenesis and other molecular level perturbations, analyses of χ values of site-specific observables may provide valuable insights into protein thermodynamics and structure.
On the role and value of β in incompressible MHD simulations
NASA Astrophysics Data System (ADS)
Chahine, Robert; Bos, Wouter J. T.
2018-04-01
The parameter β, defined as the ratio of the pressure to the square of the magnetic field, is widely used to characterize astrophysical and fusion plasmas. However, in the dynamics of a plasma flow, it is the pressure gradient which is important rather than the value of the pressure itself. It is shown that if one is interested in the influence of the pressure gradient on the dynamics of a plasma, it is not the quantity β which should be considered, but a similar quantity depending on the pressure gradient. The scaling of this newly defined quantity is investigated using incompressible magnetohydrodynamic simulations in a periodic cylinder in the Reversed Field Pinch flow regime.
NASA Technical Reports Server (NTRS)
Merchant, D. H.
1976-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
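A sketch of the extreme-value step: per-mission maxima of a simulated load history are fitted with a Gumbel distribution, and the design limit load is read off as a chosen percentile. The load model and the 99th-percentile choice are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical mission loads: the per-mission maximum of 5000 load samples
mission_maxima = [rng.normal(100.0, 15.0, 5000).max() for _ in range(200)]

loc, scale = stats.gumbel_r.fit(mission_maxima)         # extreme-value fit
design_limit_load = stats.gumbel_r.ppf(0.99, loc, scale)
print(f"design limit load ~ {design_limit_load:.1f}")
```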
NASA Astrophysics Data System (ADS)
Magnoni, F.; Scognamiglio, L.; Tinti, E.; Casarotti, E.
2014-12-01
Seismic moment tensor is one of the most important source parameters, defining the earthquake size and the style of the activated fault. Moment tensor catalogues are routinely used by geoscientists, yet few attempts have been made to assess the possible impact of moment magnitude uncertainties on their analyses. The 2012 May 20 Emilia mainshock is a representative event, since the literature assigns it moment magnitude (Mw) values spanning 5.63 to 6.12. An uncertainty of ~0.5 magnitude units leaves the real size of the event in question. The uncertainty associated with this estimate can be critical for the inference of other seismological parameters, suggesting caution in seismic hazard assessment, Coulomb stress transfer determination and other analyses where self-consistency is important. In this work, we focus on the variability of the moment tensor solution, highlighting the effect of four different velocity models, different types and ranges of filtering, and two different methodologies. Using a larger dataset, to better quantify the source parameter uncertainty, we also analyze the variability of the moment tensor solutions with the number, epicentral distance and azimuth of the stations used. We stress that estimates of seismic moment from moment tensor solutions, like estimates of the other kinematic source parameters, cannot be considered absolute values and should be reported with their uncertainties, within a reproducible framework of disclosed assumptions and explicit processing workflows.
Perez, Richard
2003-04-01
A load controller and method are provided for maximizing the effective capacity of a non-controllable, renewable power supply coupled to a variable electrical load that is also coupled to a conventional power grid. Effective capacity is enhanced by monitoring the power output of the renewable supply and the loading, and comparing the loading against the power output and a load adjustment threshold determined from an expected peak loading. A value for a load adjustment parameter is calculated by subtracting the renewable supply output and the load adjustment threshold from the current load. This value is then employed to control the variable load in an amount proportional to the value of the load adjustment parameter when the parameter is within a predefined range. By so controlling the load, the effective capacity of the non-controllable, renewable power supply is increased without any attempt at operational feedback control of the renewable supply. The expected peak loading of the variable load can be dynamically determined within a defined time interval with reference to variations in the variable load.
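Read as pseudocode, the claimed control law is short; the sketch below uses invented names, units and a clamping range, and simply curtails a slice of the variable load proportional to the computed adjustment parameter when it lies inside the predefined range.

```python
def load_adjustment(load_kw, renewable_kw, threshold_kw,
                    min_shed_kw=0.0, max_shed_kw=50.0):
    """Load to curtail (kW): current load minus renewable output minus the
    load adjustment threshold, applied only within a predefined range."""
    adjustment = load_kw - renewable_kw - threshold_kw
    if min_shed_kw < adjustment < max_shed_kw:
        return adjustment
    return 0.0

print(load_adjustment(load_kw=480.0, renewable_kw=60.0, threshold_kw=400.0))
```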
Ryu, Hyeuk; Luco, Nicolas; Baker, Jack W.; Karaca, Erdem
2008-01-01
A methodology was recently proposed for the development of hazard-compatible building fragility models using parameters of capacity curves and damage state thresholds from HAZUS (Karaca and Luco, 2008). In the methodology, HAZUS curvilinear capacity curves were used to define nonlinear dynamic SDOF models that were subjected to the nonlinear time history analysis instead of the capacity spectrum method. In this study, we construct a multilinear capacity curve with negative stiffness after an ultimate (capping) point for the nonlinear time history analysis, as an alternative to the curvilinear model provided in HAZUS. As an illustration, here we propose parameter values of the multilinear capacity curve for a moderate-code low-rise steel moment resisting frame building (labeled S1L in HAZUS). To determine the final parameter values, we perform nonlinear time history analyses of SDOF systems with various parameter values and investigate their effects on resulting fragility functions through sensitivity analysis. The findings improve capacity curves and thereby fragility and/or vulnerability models for generic types of structures.
Geothermal reservoir simulation of hot sedimentary aquifer system using FEFLOW®
NASA Astrophysics Data System (ADS)
Nur Hidayat, Hardi; Gala Permana, Maximillian
2017-12-01
The study presents a simulation of a hot sedimentary aquifer (HSA) for geothermal utilization. An HSA is a conduction-dominated hydrothermal play type that exploits a deep aquifer heated by near-normal heat flow; the Bavarian Molasse Basin in southern Germany is one example. Such systems typically use a well doublet: one injection and one production well. The simulation was run for 3650 days of simulation time, and technical feasibility and performance are analysed in terms of the energy extracted by this concept. Several parameters are compared to determine the model performance. Parameters such as reservoir characteristics, temperature information and well information are defined, and several assumptions are made to simplify the simulation process. The main results of the simulation are the heat period budget, or total extracted heat energy, and the heat rate budget, or heat production rate. A qualitative sensitivity analysis is conducted using five parameters, each assigned lower- and higher-value scenarios.
Imposing constraints on parameter values of a conceptual hydrological model using baseflow response
NASA Astrophysics Data System (ADS)
Dunn, S. M.
Calibration of conceptual hydrological models is frequently limited by a lack of data about the area that is being studied. The result is that a broad range of parameter values can be identified that will give an equally good calibration to the available observations, usually of stream flow. The use of total stream flow can bias analyses towards interpretation of rapid runoff, whereas water quality issues are more frequently associated with low flow conditions. This paper demonstrates how model distinctions between surface and sub-surface runoff can be used to define a likelihood measure based on the sub-surface (or baseflow) response. This helps to provide more information about the model behaviour, constrain the acceptable parameter sets and reduce uncertainty in streamflow prediction. A conceptual model, DIY, is applied to two contrasting catchments in Scotland, the Ythan and the Carron Valley. Parameter ranges and envelopes of prediction are identified using criteria based on total flow efficiency, baseflow efficiency and combined efficiencies. The individual parameter ranges derived using the combined efficiency measures still cover relatively wide bands, but are better constrained for the Carron than the Ythan. This reflects the fact that hydrological behaviour in the Carron is dominated by a much flashier surface response than in the Ythan. Hence, the total flow efficiency is more strongly controlled by surface runoff in the Carron and there is a greater contrast with the baseflow efficiency. Comparisons of the predictions using different efficiency measures for the Ythan also suggest that there is a danger of confusing parameter uncertainties with data and model error, if inadequate likelihood measures are defined.
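The baseflow-based likelihood measure can be sketched as follows: split simulated and observed flow with a standard digital filter (a stand-in for DIY's internal surface/sub-surface split), then score the baseflow components with the usual Nash-Sutcliffe efficiency alongside the total-flow score. The filter constant and the mock series are assumptions.

```python
import numpy as np

def baseflow(q, alpha=0.925):
    """One-pass Lyne-Hollick filter: q minus the non-negative quickflow."""
    quick = np.zeros_like(q)
    for t in range(1, len(q)):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick[t] = max(quick[t], 0.0)
    return q - quick

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of sim against obs."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(5)
obs = 5.0 + np.abs(rng.normal(0.0, 2.0, 365)).cumsum() % 10   # mock flows
sim = obs + rng.normal(0.0, 0.5, 365)                         # mock model run
print(nse(obs, sim), nse(baseflow(obs), baseflow(sim)))
```

A behavioural parameter set would then be required to pass thresholds on both efficiencies, which is what constrains the acceptable parameter ranges.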
Yang, Qinglin; Su, Yingying; Hussain, Mohammed; Chen, Weibi; Ye, Hong; Gao, Daiquan; Tian, Fei
2014-05-01
Burst suppression ratio (BSR) is a quantitative electroencephalography (qEEG) parameter. The purpose of our study was to compare the accuracy of BSR when compared to other EEG parameters in predicting poor outcomes in adults who sustained post-anoxic coma while not being subjected to therapeutic hypothermia. EEG was registered and recorded at least once within 7 days of post-anoxic coma onset. Electrodes were placed according to the international 10-20 system, using a 16-channel layout. Each EEG expert scored raw EEG using a grading scale adapted from Young and scored amplitude-integrated electroencephalography tracings, in addition to obtaining qEEG parameters defined as BSR with a defined threshold. Glasgow outcome scales of 1 and 2 at 3 months, determined by two blinded neurologists, were defined as poor outcome. Sixty patients with Glasgow coma scale score of 8 or less after anoxic accident were included. The sensitivity (97.1%), specificity (73.3%), positive predictive value (82.5%), and negative prediction value (95.0%) of BSR in predicting poor outcome were higher than other EEG variables. BSR1 and BSR2 were reliable in predicting death (area under the curve > 0.8, P < 0.05), with the respective cutoff points being 39.8% and 61.6%. BSR1 was reliable in predicting poor outcome (area under the curve = 0.820, P < 0.05) with a cutoff point of 23.9%. BSR1 was also an independent predictor of increased risk of death (odds ratio = 1.042, 95% confidence intervals: 1.012-1.073, P = 0.006). BSR may be a better predictor in prognosticating poor outcomes in patients with post-anoxic coma who do not undergo therapeutic hypothermia when compared to other qEEG parameters.
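As a reference for the headline parameter, here is one common way to compute a burst suppression ratio from a single channel; the 5 µV amplitude threshold and 0.5 s minimum suppression duration are conventional choices, not values taken from the study.

```python
import numpy as np

def burst_suppression_ratio(eeg_uv, fs, thresh_uv=5.0, min_sup_s=0.5):
    """Percent of the record spent in suppressions: runs of at least
    min_sup_s seconds during which |EEG| stays below thresh_uv."""
    quiet = np.abs(eeg_uv) < thresh_uv
    min_len = int(min_sup_s * fs)
    suppressed = np.zeros_like(quiet)
    start = None
    for i, q in enumerate(np.append(quiet, False)):   # sentinel ends last run
        if q and start is None:
            start = i
        elif not q and start is not None:
            if i - start >= min_len:
                suppressed[start:i] = True
            start = None
    return 100.0 * suppressed.mean()

rng = np.random.default_rng(6)
fs = 250
eeg = rng.normal(0.0, 20.0, fs * 60)                    # mock 1-minute record
eeg[fs * 10:fs * 25] = rng.normal(0.0, 1.0, fs * 15)    # injected suppression
print(burst_suppression_ratio(eeg, fs))                 # ~25 for this record
```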
Harmonic spinors on a family of Einstein manifolds
NASA Astrophysics Data System (ADS)
Franchetti, Guido
2018-06-01
The purpose of this paper is to study harmonic spinors defined on a 1-parameter family of Einstein manifolds which includes Taub–NUT, Eguchi–Hanson and ℂP² with the Fubini–Study metric as particular cases. We discuss the existence of, and explicitly solve for, spinors harmonic with respect to the Dirac operator twisted by a geometrically preferred connection. The metrics examined are defined, for generic values of the parameter, on a non-compact manifold and extend to a compact one as edge-cone metrics. As a consequence, the subtle boundary conditions of the Atiyah–Patodi–Singer index theorem need to be carefully considered in order to show agreement between the index of the twisted Dirac operator and the result obtained by counting the explicit solutions.
Kern, Madalyn D; Ortega Alcaide, Joan; Rentschler, Mark E
2014-11-01
The objective of this work is to validate an experimental method and nondimensional model for characterizing the normal adhesive response between a polyvinyl chloride based synthetic biological tissue substrate and a flat, cylindrical probe with a smooth polydimethylsiloxane (PDMS) surface. The adhesion response is a critical mobility design parameter of a Robotic Capsule Endoscope (RCE) using PDMS treads to provide mobility to travel through the gastrointestinal tract for diagnostic purposes. Three RCE design characteristics were chosen as input parameters for the normal adhesion testing: pre-load, dwell time and separation rate. These parameters relate to the RCE's cross sectional dimension, tread length, and tread speed, respectively. An inscribed central composite design (CCD) prescribed 34 different parameter configurations to be tested. The experimental adhesion response curves were nondimensionalized by the maximum stress and total displacement values for each test configuration, and a mean nondimensional curve was defined with a maximum relative error of 5.6%. A mathematical model describing the adhesion behavior as a function of the maximum stress and total displacement was developed and verified. A nonlinear regression analysis was performed on the maximum stress and total displacement parameters, and equations were defined as functions of the RCE design parameters. The nondimensional adhesion model is able to predict the adhesion curve response of any test configuration with a mean R² value of 0.995. Eight additional CCD studies were performed to obtain a qualitative understanding of the impact of tread contact area and synthetic material substrate stiffness on the adhesion response. These results suggest that the nondimensionalization technique for analyzing the adhesion data is sufficient for all values of probe radius and substrate stiffness within the bounds tested. This method can now be used for RCE tread design optimization given a set of environmental conditions for device operation.
A Prescribed Flight Performance Assessment for Undersea Vehicle Autopilot Robustness
2016-06-16
parameters are defined. These two non-dimensional parameters are effective buoyancy, B_eff, and effective center of mass offset, X_CM,eff, shown in... effective buoyancy is one minus the weight of the vehicle over the buoyancy of the vehicle. Hence, an effective buoyancy value of -0.1 is equivalent to the... vehicle weight being 10 percent larger in magnitude than the buoyancy of the vehicle, causing the vehicle to sink. Effective center of mass offset
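The defining relation for effective buoyancy stated in the snippet can be checked directly. A minimal Python sketch, using illustrative force values that are not from the report, implements B_eff = 1 - W/B:

    def effective_buoyancy(weight_n: float, buoyancy_n: float) -> float:
        """Effective buoyancy B_eff = 1 - W/B, as defined in the report snippet."""
        return 1.0 - weight_n / buoyancy_n

    # Illustrative values only: a vehicle whose weight exceeds its buoyancy by 10%
    print(effective_buoyancy(weight_n=1100.0, buoyancy_n=1000.0))  # -0.1 -> sinks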
Kinematic analysis of crank-cam mechanism of process equipment
NASA Astrophysics Data System (ADS)
Podgornyj, Yu I.; Skeeba, V. Yu; Martynova, T. G.; Pechorkina, N. S.; Skeeba, P. Yu
2018-03-01
This article discusses how to define the kinematic parameters of a crank-cam mechanism. Using the mechanism design, the authors have developed a calculation model and a calculation algorithm that allow the kinematic parameters of the mechanism to be defined, including crank displacements, angular velocities and accelerations, as well as the angular velocities and accelerations of the driven link (rocker arm). All calculations were performed using the Mathcad mathematical package. The results of the calculations are reported as numerical values.
Photoreceiver efficiency measurements
NASA Technical Reports Server (NTRS)
Lehr, C. G.
1975-01-01
The efficiency and other related parameters of Smithsonian Astrophysical Observatory's four laser receivers were measured at the observing stations by oscilloscope photography. If the efficiency is defined as the number of photoelectrons generated by the photomultiplier tube divided by the number of photons entering the aperture of the receiver, its measured value is about 1% for the laser wavelength of 694 nm. This value is consistent with the efficiency computed from the specified characteristics of the photoreceiver's optical components.
Escobar, Raul G; Munoz, Karin T; Dominguez, Angelica; Banados, Pamela; Bravo, Maria J
2017-01-01
In this study we aimed to determine the maximal isometric muscle strength of a healthy, normal-weight, pediatric population between 6 and 15 years of age using hand-held dynamometry to establish strength reference values. The secondary objective was to determine the relationship between strength and anthropometric parameters. Four hundred normal-weight Chilean children, split into 10 age groups separated by 1-year intervals, were evaluated. Each age group included between 35 and 55 children. The strength values increased with increasing age and weight, with correlations of 0.83 for age and 0.82 for weight. The results were similar to those reported in previous studies regarding the relationships among strength, age, and anthropometric parameters, but the reported strength differed. These results provide normal strength parameters for healthy, normal-weight Chilean children between 6 and 15 years of age and highlight the relevance of ethnicity in defining reference values for muscle strength in a pediatric population.
NASA Astrophysics Data System (ADS)
Dehghani, H.; Ataee-Pour, M.
2012-12-01
The block economic value (EV) is one of the most important parameters in mine evaluation. This parameter can affect significant factors such as the mining sequence, final pit limit and net present value. Nowadays, the aim of open pit mine planning is to define optimum pit limits and an optimum life-of-mine production schedule that maximizes the pit value under technical and operational constraints. Therefore, it is necessary to calculate the block economic value correctly at the first stage of the mine planning process. Unrealistic block economic value estimation may cause mining project managers to make wrong decisions and thus may impose irrecoverable losses on the project. The effective parameters, such as metal price, operating cost and grade, are always assumed to be certain in conventional methods of EV calculation, although these parameters are inherently uncertain. Consequently, the results of the conventional methods are usually far from reality. To solve this problem, a new technique is used, based on a multivariate binomial tree developed in this research. This method can calculate the EV and project PV under economic uncertainty. In this paper, the EV and project PV were initially determined using the Whittle formula based on certain economic parameters, and then using a multivariate binomial tree based on economic uncertainties such as metal price and cost; finally, the results were compared. It is concluded that applying the metal price and cost uncertainties makes the calculated block economic value and net present value more realistic than under the assumption of certainty.
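The paper's multivariate tree is not reproduced here; the following Python sketch shows a single-factor version of the idea, an expected block economic value computed over a binomial metal-price tree, with all numbers purely hypothetical:

    from math import comb

    def block_ev_binomial(price0, up, down, p_up, steps, metal_units, cost):
        """Expected block EV over a single-factor binomial metal-price tree;
        a simplified stand-in for the paper's multivariate (price and cost)
        tree, with revenue = metal_units * price and a fixed extraction cost."""
        ev = 0.0
        for k in range(steps + 1):  # k upward price moves out of `steps`
            prob = comb(steps, k) * p_up**k * (1 - p_up)**(steps - k)
            price = price0 * up**k * down**(steps - k)
            ev += prob * (metal_units * price - cost)
        return ev

    # Hypothetical inputs only: 100 recoverable metal units, fixed cost of 4000
    print(block_ev_binomial(price0=50.0, up=1.2, down=0.8, p_up=0.5,
                            steps=4, metal_units=100.0, cost=4000.0))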
Information relevant to KABAM and explanations of default parameters used to define the 7 trophic levels. KABAM is a simulation model used to predict pesticide concentrations in aquatic regions for use in exposure assessments.
A CRITERION PAPER ON PARAMETERS OF EDUCATION. FINAL REVISION.
ERIC Educational Resources Information Center
MEIERHENRY, W. C.
THIS POSITION PAPER DEFINES ASPECTS OF INNOVATION IN EDUCATION. THE APPROPRIATENESS OF PLANNED CHANGE AND THE LEGITIMACY OF FUNCTION OF PLANNED CHANGE ARE DISCUSSED. PRIMARY ELEMENTS OF INNOVATION INCLUDE THE SUBSTITUTION OF ONE MATERIAL OR PROCESS FOR ANOTHER, THE RESTRUCTURING OF TEACHER ASSIGNMENTS, VALUE CHANGES WITH RESPECT TO TEACHING…
Vandenhove, H; Gil-García, C; Rigol, A; Vidal, M
2009-09-01
Predicting the transfer of radionuclides in the environment for normal release, accidental, disposal or remediation scenarios in order to assess exposure requires the availability of a large number of generic parameter values. One of the key parameters in environmental assessment is the solid-liquid distribution coefficient, K(d), which is used to predict radionuclide-soil interaction and subsequent radionuclide transport in the soil column. This article presents a review of K(d) values for uranium, radium, lead, polonium and thorium based on an extensive literature survey, including recent publications. The K(d) estimates are presented per soil group defined by texture and organic matter content (Sand, Loam, Clay and Organic), although the texture class did not appear to significantly affect K(d). Where relevant, other K(d) classification systems are proposed and correlations with soil parameters are highlighted. The K(d) values obtained in this compilation are compared with earlier review data.
Cope, Davis; Blakeslee, Barbara; McCourt, Mark E
2013-05-01
The difference-of-Gaussians (DOG) filter is a widely used model for the receptive field of neurons in the retina and lateral geniculate nucleus (LGN) and is a potential model in general for responses modulated by an excitatory center with an inhibitory surrounding region. A DOG filter is defined by three standard parameters: the center and surround sigmas (which define the variance of the radially symmetric Gaussians) and the balance (which defines the linear combination of the two Gaussians). These parameters are not directly observable and are typically determined by nonlinear parameter estimation methods applied to the frequency response function. DOG filters show both low-pass (optimal response at zero frequency) and bandpass (optimal response at a nonzero frequency) behavior. This paper reformulates the DOG filter in terms of a directly observable parameter, the zero-crossing radius, and two new (but not directly observable) parameters. In the two-dimensional parameter space, the exact region corresponding to bandpass behavior is determined. A detailed description of the frequency response characteristics of the DOG filter is obtained. It is also found that the directly observable optimal frequency and optimal gain (the ratio of the response at optimal frequency to the response at zero frequency) provide an alternate coordinate system for the bandpass region. Altogether, the DOG filter and its three standard implicit parameters can be determined by three directly observable values. The two-dimensional bandpass region is a potential tool for the analysis of populations of DOG filters (for example, populations of neurons in the retina or LGN), because the clustering of points in this parameter space may indicate an underlying organizational principle. This paper concentrates on circular Gaussians, but the results generalize to multidimensional radially symmetric Gaussians and are given as an appendix.
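As a concrete illustration of the three standard parameters, the Python sketch below evaluates the usual DOG profile and locates the directly observable zero-crossing radius numerically; it uses the textbook DOG form, not the paper's reparameterization:

    import numpy as np

    def dog(r, sigma_c, sigma_s, balance):
        """Difference-of-Gaussians profile with the three standard parameters:
        center sigma, surround sigma, and balance (standard normalized form)."""
        center = np.exp(-r**2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
        surround = np.exp(-r**2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
        return center - balance * surround

    # Directly observable zero-crossing radius: first sign change of the profile
    r = np.linspace(0, 10, 100001)
    values = dog(r, sigma_c=1.0, sigma_s=2.0, balance=0.8)
    zc = r[np.argmax(np.diff(np.sign(values)) != 0)]
    print(f"zero-crossing radius ~ {zc:.3f}")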
Dynamical Analysis of an SEIT Epidemic Model with Application to Ebola Virus Transmission in Guinea.
Li, Zhiming; Teng, Zhidong; Feng, Xiaomei; Li, Yingke; Zhang, Huiguo
2015-01-01
In order to investigate the transmission mechanism of infectious individuals with Ebola virus, we establish an SEIT (susceptible, exposed in the latent period, infectious, and treated/recovered) epidemic model. The basic reproduction number is defined, and a mathematical analysis of the existence and stability of the disease-free equilibrium and the endemic equilibrium is given. As an application of the model, we use the recognized infectious and death cases in Guinea to estimate the parameters of the model by the least squares method. With suitable parameter values, we obtain the estimated value of the basic reproduction number and analyze the sensitivity and uncertainty properties using partial rank correlation coefficients.
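A minimal sketch of an SEIT-type compartment model is given below (Python with SciPy); the flow structure and all rate constants are illustrative assumptions, not the values estimated from the Guinea data:

    import numpy as np
    from scipy.integrate import odeint

    def seit(y, t, beta, sigma, gamma, delta):
        """Generic SEIT flow S->E->I->T with disease-induced removal delta;
        for this sketch the basic reproduction number is beta/(gamma+delta)."""
        S, E, I, T = y
        N = S + E + I + T
        dS = -beta * S * I / N
        dE = beta * S * I / N - sigma * E
        dI = sigma * E - gamma * I - delta * I
        dT = gamma * I
        return [dS, dE, dI, dT]

    t = np.linspace(0, 365, 366)
    sol = odeint(seit, [0.999, 0.0, 0.001, 0.0], t,
                 args=(0.3, 1 / 10, 1 / 14, 0.01))  # illustrative rates only
    print(sol[-1])  # final compartment fractions after one year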
Predicting fiber refractive index from a measured preform index profile
NASA Astrophysics Data System (ADS)
Kiiveri, P.; Koponen, J.; Harra, J.; Novotny, S.; Husu, H.; Ihalainen, H.; Kokki, T.; Aallos, V.; Kimmelma, O.; Paul, J.
2018-02-01
When producing fiber lasers and amplifiers, silica glass compositions consisting of three to six different materials are needed. Due to the varying needs of different applications, a substantial number of different glass compositions are used in active fiber structures. Often it is not possible to find material parameters for theoretical models to estimate the thermal and mechanical properties of those glass compositions. This makes it challenging to predict fiber core refractive index values accurately, even if the preform index profile is measured. Usually the desired fiber refractive index value is achieved experimentally, which is expensive. To overcome this problem, we statistically analyzed the changes between the measured preform and fiber index values. We searched for correlations that would help to predict the Δn-value change from preform to fiber in a situation where the values of the glass material parameters that define the change are not known. Our index change models were built using data collected from preforms and fibers made by the Direct Nanoparticle Deposition (DND) technology.
A new function for estimating local rainfall thresholds for landslide triggering
NASA Astrophysics Data System (ADS)
Cepeda, J.; Nadim, F.; Høeg, K.; Elverhøi, A.
2009-04-01
The widely used power law for establishing rainfall thresholds for the triggering of landslides was first proposed by N. Caine in 1980. The most up-to-date global thresholds, presented by F. Guzzetti and co-workers in 2008, were derived using Caine's power law and a rigorous and comprehensive collection of global data. Caine's function is defined as I = α×D^β, where I and D are the mean intensity and total duration of rainfall, and α and β are parameters estimated for a lower boundary curve to most or all of the positive observations (i.e., landslide-triggering rainfall events). This function does not account for the effect of antecedent precipitation as a conditioning factor for slope instability, an approach that may be adequate for global or regional thresholds that include landslides in surface geologies with a wide range of subsurface drainage conditions and pore-pressure responses to sustained rainfall. However, at a local scale and in geological settings dominated by a narrow range of drainage conditions and pore-pressure responses, the inclusion of antecedent precipitation in the definition of thresholds becomes necessary in order to ensure their optimum performance, especially when used as part of early warning systems (i.e., false alarms and missed events must be kept to a minimum). Some authors have incorporated the effect of antecedent rainfall in a discrete manner, by first comparing the accumulated precipitation during a specified number of days against a reference value and then using a Caine-type threshold only when that reference value is exceeded. Other authors have instead calculated threshold values as linear combinations of several triggering and antecedent parameters. The present study aims to propose a new threshold function based on a generalisation of Caine's power law. The proposed function has the form I = (α1×An^α2)×D^β, where I and D are defined as previously; the expression in parentheses is equivalent to Caine's α parameter, α1, α2 and β are parameters estimated for the threshold, and An is the n-day cumulative rainfall. The suggested procedure to estimate the threshold is as follows: (1) Given N storms, assign one of the following flags to each storm: nL (non-triggering storms), yL (triggering storms), uL (uncertain-triggering storms). Successful predictions correspond to nL and yL storms occurring below and above the threshold, respectively. Storms flagged as uL are assigned either an nL or a yL flag using a randomization procedure. (2) Establish a set of values of n_i (e.g. 1, 4, 7, 10, 15 days, etc.) to test for accumulated precipitation. (3) For each storm and each n_i value, obtain the antecedent accumulated precipitation in n_i days, An_i. (4) Generate a 3D grid of values of α1, α2 and β. (5) For a certain value of n_i, generate confusion matrices for the N storms at each grid point and estimate an evaluation metrics parameter EMP (e.g., accuracy, specificity, etc.). (6) Repeat the previous step for the whole set of n_i values. (7) From the 3D grid corresponding to each n_i value, search for the optimum grid point EMP_opt,i (global minimum or maximum of the parameter). (8) Search for the optimum value of n_i in the space n_i vs EMP_opt,i. (9) The threshold is defined by the value of n_i obtained in the previous step and the corresponding values of α1, α2 and β.
The procedure is illustrated using rainfall data and landslide observations from the San Salvador volcano, where a rainfall-triggered debris flow destroyed a neighbourhood in the capital city of El Salvador on 19 September 1982, killing no fewer than 300 people.
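A compressed illustration of steps (3)-(7) for a single value of n is sketched below in Python, with synthetic storm data, accuracy as the evaluation metrics parameter, and entirely made-up parameter grids:

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic storms: mean intensity I, duration D, antecedent rainfall An,
    # and a triggering flag (stand-ins for the nL/yL catalogue in the abstract).
    I = rng.uniform(1, 50, 200)
    D = rng.uniform(1, 72, 200)
    An = rng.uniform(1, 300, 200)
    triggered = I * D > 400            # arbitrary synthetic labelling rule

    best = (None, -1.0)
    for a1 in np.linspace(1, 100, 25):                # grid over alpha_1
        for a2 in np.linspace(-1, 0, 11):             # grid over alpha_2
            for beta in np.linspace(-1.5, -0.1, 15):  # grid over beta
                thr = (a1 * An**a2) * D**beta         # I = (a1*An^a2) * D^beta
                pred = I > thr                        # above threshold -> trigger
                acc = np.mean(pred == triggered)      # accuracy as the EMP
                if acc > best[1]:
                    best = ((a1, a2, beta), acc)
    print(best)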
Defining a region of optimization based on engine usage data
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-08-04
Methods and systems for engine control optimization are provided. One or more operating conditions of a vehicle engine are detected. A value for each of a plurality of engine control parameters is determined based on the detected one or more operating conditions of the vehicle engine. A range of the most commonly detected operating conditions of the vehicle engine is identified and a region of optimization is defined based on the range of the most commonly detected operating conditions of the vehicle engine. The engine control optimization routine is initiated when the one or more operating conditions of the vehicle engine are within the defined region of optimization.
Evaluation of Kurtosis into the product of two normally distributed variables
NASA Astrophysics Data System (ADS)
Oliveira, Amílcar; Oliveira, Teresa; Seijas-Macías, Antonio
2016-06-01
Kurtosis (κ) is a measure of the "peakedness" of the distribution of a real-valued random variable. We study the evolution of the kurtosis of the product of two normally distributed variables. The product of two normal variables is a very common problem in several areas of study, such as physics, economics and psychology. Normal variables have a constant value for kurtosis (κ = 3), independently of the values of the two parameters: mean and variance. In fact, the excess kurtosis is defined as κ − 3, so the excess kurtosis of the Normal Distribution is zero. The product of two normally distributed variables is a function of the parameters of the two variables and the correlation between them; the excess kurtosis of the product lies in [0, 6] for independent variables and in [0, 12] when correlation between them is allowed.
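The quoted ranges are easy to check by simulation. The Python sketch below draws correlated standard normal pairs and reports the excess kurtosis of their product (SciPy's kurtosis returns excess kurtosis by default); the correlation values are arbitrary examples:

    import numpy as np
    from scipy.stats import kurtosis

    # Monte Carlo check of the excess-kurtosis behaviour for a product of two
    # correlated standard normals (rho = 0 should give roughly 6).
    rng = np.random.default_rng(1)
    for rho in (0.0, 0.5, 0.9):
        cov = [[1.0, rho], [rho, 1.0]]
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000).T
        print(rho, kurtosis(x * y))  # excess kurtosis (normal -> 0)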
Putting the Weather Back Into Climate
NASA Astrophysics Data System (ADS)
Smith, Leonard A.; Stainforth, David A.
2014-05-01
The literature contains a variety of definitions of climate, and the emphasis in these definitions has changed over time. Defining climate as a mean value is, of course, both limiting and misleading; definitions of climate based on averages have been deprecated as far back as 1931 [1]. In the context of current efforts to produce climate predictions for use in climate adaptation, it is timely to consider how well various definitions of climate serve the research-for-applications community. From a nonlinear dynamical systems perspective it is common to associate climate with a system's natural measure (or "attractor", if such an object exists). Such a definition is not easily applied to physical systems where we have limited observations over a restricted period of time; the duration of 30 years is often mentioned today, and the origin of this period is discussed. Given a dynamic system in which parameters are evolving in time, the view of climate as a natural measure becomes problematic as, by definition, there may be no attractor per se. Attractors defined for particular parameter values cannot be expected to have any association with the probability of states under transient changes in the values of that parameter. Alternatively, distributions may be determined which reflect the transient situation, based on (rather broad) additional assumptions regarding the state of the system at some point in the past (say, an ice age planet vs an interglacial planet). Such distributions reflect many of the properties one would hope to see represented in a generalised definition of the system's climate. Here we trace how definitions of climate have changed over time and highlight a number of properties of definitions of climate which would facilitate common use across researchers, from observers to theoreticians, from climate modellers to mathematicians. We show that while periodic changes in parameter values (such as those found in an annual cycle or a diurnal cycle) are easily incorporated within the traditional nonlinear dynamical systems view, non-periodic or secular changes (such as those due to increasing atmospheric greenhouse gas concentrations) yield an open challenge. We argue the need both for clarifying and for clearly meeting the open challenges of defining climate in relation to the state of an evolving system, and suggest a path forward. [1] Miller, A.A., 1931: Climatology. First Ed. Methuen.
Tokunaga river networks: New empirical evidence and applications to transport problems
NASA Astrophysics Data System (ADS)
Tejedor, A.; Zaliapin, I. V.
2013-12-01
The Tokunaga self-similarity has proven to be an important constraint for observed river networks. Notably, various Horton laws are naturally satisfied by Tokunaga networks, which makes this model of considerable interest for theoretical analysis and modeling of environmental transport. Recall that Horton self-similarity is a weaker property of a tree graph that addresses its principal branching; it is a counterpart of the power-law size distribution for the system's elements. The stronger Tokunaga self-similarity addresses so-called side branching; it ensures that different levels of a hierarchy have the same probabilistic structure (in a sense that can be rigorously defined). We describe an improved statistical framework for testing self-similarity in a finite tree and estimating the related parameters. The developed inference is applied to the major river basins in the continental United States and the Iberian Peninsula. The results demonstrate the validity of the Tokunaga model for the majority of the examined networks, with a very narrow (universal) range of parameter values. Next, we explore possible relationships between Tokunaga parameter anomalies (deviations from the universal values) and climatic and geomorphologic characteristics of a region. Finally, we apply the Tokunaga model to explore the vulnerability of river networks, defined via the reaction of river discharge to a storm.
Enhanced Elliptic Grid Generation
NASA Technical Reports Server (NTRS)
Kaul, Upender K.
2007-01-01
An enhanced method of elliptic grid generation has been invented. Whereas prior methods require user input of certain grid parameters, this method provides for these parameters to be determined automatically. "Elliptic grid generation" signifies generation of generalized curvilinear coordinate grids through solution of elliptic partial differential equations (PDEs). Usually, such grids are fitted to bounding bodies and used in numerical solution of other PDEs like those of fluid flow, heat flow, and electromagnetics. Such a grid is smooth and has continuous first and second derivatives (and possibly also continuous higher-order derivatives), grid lines are appropriately stretched or clustered, and grid lines are orthogonal or nearly so over most of the grid domain. The source terms in the grid-generating PDEs (hereafter called "defining" PDEs) make it possible for the grid to satisfy requirements for clustering and orthogonality properties in the vicinity of specific surfaces in three dimensions or in the vicinity of specific lines in two dimensions. The grid parameters in question are decay parameters that appear in the source terms of the inhomogeneous defining PDEs. The decay parameters are characteristic lengths in exponential- decay factors that express how the influences of the boundaries decrease with distance from the boundaries. These terms govern the rates at which distance between adjacent grid lines change with distance from nearby boundaries. Heretofore, users have arbitrarily specified decay parameters. However, the characteristic lengths are coupled with the strengths of the source terms, such that arbitrary specification could lead to conflicts among parameter values. Moreover, the manual insertion of decay parameters is cumbersome for static grids and infeasible for dynamically changing grids. In the present method, manual insertion and user specification of decay parameters are neither required nor allowed. Instead, the decay parameters are determined automatically as part of the solution of the defining PDEs. Depending on the shape of the boundary segments and the physical nature of the problem to be solved on the grid, the solution of the defining PDEs may provide for rates of decay to vary along and among the boundary segments and may lend itself to interpretation in terms of one or more physical quantities associated with the problem.
Business model design for a wearable biofeedback system.
Hidefjäll, Patrik; Titkova, Dina
2015-01-01
Wearable sensor technologies used to track daily activities have become successful in the consumer market. In order for wearable sensor technology to offer added value in the more challenging areas of stress-rehabilitation care and occupational health, stress-related biofeedback parameters need to be monitored and more elaborate business models are needed. To identify probable success factors for a wearable biofeedback system (Affective Health) in the two mentioned market segments in a Swedish setting, we conducted literature studies and interviews with relevant representatives. Data were collected and used first to describe the two market segments and then to define likely feasible business model designs, according to the Business Model Canvas framework. Needs of stakeholders were identified as inputs to business model design. Value propositions, a key building block of a business model, were defined for each segment. The value proposition for occupational health was defined as "A tool that can both identify employees at risk of stress-related disorders and reinforce healthy sustainable behavior" and for healthcare as "Providing therapists with objective data about the patient's emotional state and motivating patients to better engage in the treatment process".
Towards the mechanical characterization of abdominal wall by inverse analysis.
Simón-Allué, R; Calvo, B; Oberai, A A; Barbone, P E
2017-02-01
The aim of this study is to characterize the passive mechanical behaviour of the abdominal wall in vivo in an animal model using only external cameras and numerical analysis. The main objective lies in defining a methodology that provides in vivo information on a specific patient without altering mechanical properties; it is demonstrated in a mechanical study of the abdomen for hernia purposes. The mechanical tests consisted of pneumoperitoneum tests performed on New Zealand rabbits, in which the inner pressure was varied from 0 mmHg to 12 mmHg. Changes in the external abdominal surface were recorded and several points were tracked. Based on their coordinates, we reconstructed a 3D finite element model of the abdominal wall, considering an incompressible hyperelastic material model defined by two parameters. The spatial distributions of these parameters (shear modulus and non-linear parameter) were calculated by inverse analysis, using two different types of regularization: Total Variation Diminishing (TVD) and Tikhonov (H1). After solving the inverse problem, the distributions of the material parameters were obtained along the abdominal surface. The accuracy of the results was evaluated for the last pressure level. Results revealed a higher value of the shear modulus in a wide stripe along the cranio-caudal direction, associated with the presence of the linea alba in conjunction with fasciae and the rectus abdominis. The non-linear parameter distribution was smoother, and the location of the higher values varied with the regularization type. Both regularizations proved to yield an accurate predicted displacement field, but H1 obtained a smoother material parameter distribution while TVD included some discontinuities. The methodology presented here was able to characterize in vivo the passive non-linear mechanical response of the abdominal wall.
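The finite-element inverse problem itself cannot be reproduced here, but the smoothing role of the Tikhonov-type regularization can be shown on a minimal linear example (Python); the operator, data and regularization weight are all illustrative:

    import numpy as np

    # Minimal Tikhonov-regularized least squares: recover parameters m from
    # noisy data d = G m + noise, penalizing roughness of m (H1-type prior).
    rng = np.random.default_rng(2)
    n = 50
    G = rng.normal(size=(200, n))
    m_true = np.sin(np.linspace(0, np.pi, n))      # smooth "material" profile
    d = G @ m_true + 0.5 * rng.normal(size=200)

    L = np.diff(np.eye(n), axis=0)                 # first-difference operator
    alpha = 5.0                                    # regularization weight
    m_hat = np.linalg.solve(G.T @ G + alpha * L.T @ L, G.T @ d)
    print(np.linalg.norm(m_hat - m_true) / np.linalg.norm(m_true))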
Generating nonlinear FM chirp radar signals by multiple integrations
Doerry, Armin W [Albuquerque, NM]
2011-02-01
A phase component of a nonlinear frequency modulated (NLFM) chirp radar pulse can be produced by performing digital integration operations over a time interval defined by the pulse width. Each digital integration operation includes applying to a respectively corresponding input parameter value a respectively corresponding number of instances of digital integration.
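One plausible reading of this scheme, sketched in Python under that assumption (not taken from the patent itself), integrates each input parameter a corresponding number of times and sums the results into a polynomial phase:

    import numpy as np

    def phase_by_integrations(params, n_samples, dt):
        """Accumulate a polynomial phase by repeated digital integration:
        parameter k is integrated k times over the pulse width and the
        results are summed (a sketch of the idea, not the patented method)."""
        phase = np.zeros(n_samples)
        for k, p in enumerate(params, start=1):
            term = np.full(n_samples, p)
            for _ in range(k):          # integrate parameter k times
                term = np.cumsum(term) * dt
            phase += term
        return phase

    # Hypothetical parameter values; two integrations of a constant give the
    # quadratic (LFM) term, further terms bend the chirp into an NLFM pulse.
    phase = phase_by_integrations([2 * np.pi * 1e6, 2 * np.pi * 5e9,
                                   2 * np.pi * 1e13], n_samples=1000, dt=1e-6)
    pulse = np.exp(1j * phase)          # complex chirp over the pulse width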
Actions, Objectives & Concerns. Human Parameters for Architectural Design.
ERIC Educational Resources Information Center
Lasswell, Thomas E.; And Others
An experiment conducted at California State College, Los Angeles, to test the value of social-psychological research in defining building needs is described. The problems of how to identify and synthesize the disparate objectives, concerns and actions of the groups who use or otherwise have an interest in large and complex buildings is discussed.…
Reducing Design Risk Using Robust Design Methods: A Dual Response Surface Approach
NASA Technical Reports Server (NTRS)
Unal, Resit; Yeniay, Ozgur; Lepsch, Roger A. (Technical Monitor)
2003-01-01
Space transportation system conceptual design is a multidisciplinary process containing a considerable element of risk. Risk here is defined as the variability in the estimated (output) performance characteristic of interest resulting from uncertainties in the values of several disciplinary design and/or operational parameters. Uncertainties from one discipline (and/or subsystem) may propagate to another through linking parameters, and the final system output may have a significant accumulation of risk. This variability can result in significant deviations from the expected performance. Therefore, an estimate of variability (called design risk in this study) together with the expected performance characteristic value (e.g. mean empty weight) is necessary for multidisciplinary optimization towards a robust design. Robust design in this study is defined as a solution that minimizes variability subject to a constraint on mean performance characteristics. Even though multidisciplinary design optimization has gained wide attention and application, the treatment of uncertainties to quantify and analyze design risk has received little attention. This research effort explores the dual response surface approach to quantify variability (risk) in critical performance characteristics (such as weight) during conceptual design.
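A minimal sketch of the dual response surface idea is given below (Python with SciPy): two fitted quadratic surfaces, one for the mean and one for the standard deviation of the performance characteristic, with variability minimized subject to a mean constraint; all coefficients here are made up:

    import numpy as np
    from scipy.optimize import minimize

    # Dual response surfaces for the mean and standard deviation of a
    # performance characteristic (illustrative coefficients only).
    mean_surf = lambda x: 100 + 5 * x[0] - 3 * x[1] + 0.8 * x[0]**2 + 0.5 * x[1]**2
    sd_surf = lambda x: 4 + 1.2 * x[0] + 0.9 * x[1] + 0.3 * x[0] * x[1] + 0.4 * x[0]**2

    # Robust design: minimize variability subject to mean <= 105.
    res = minimize(sd_surf, x0=[0.0, 0.0],
                   constraints={"type": "ineq",
                                "fun": lambda x: 105.0 - mean_surf(x)},
                   bounds=[(-2, 2), (-2, 2)])
    print(res.x, sd_surf(res.x), mean_surf(res.x))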
Preliminary Investigation of Ice Shape Sensitivity to Parameter Variations
NASA Technical Reports Server (NTRS)
Miller, Dean R.; Potapczuk, Mark G.; Langhals, Tammy J.
2005-01-01
A parameter sensitivity study was conducted at the NASA Glenn Research Center's Icing Research Tunnel (IRT) using a 36 in. chord (0.91 m) NACA-0012 airfoil. The objective of this preliminary work was to investigate the feasibility of using ice shape feature changes to define requirements for the simulation and measurement of SLD icing conditions. It was desired to identify the minimum change (threshold) in a parameter value, which yielded an observable change in the ice shape. Liquid Water Content (LWC), drop size distribution (MVD), and tunnel static temperature were varied about a nominal value, and the effects of these parameter changes on the resulting ice shapes were documented. The resulting differences in ice shapes were compared on the basis of qualitative and quantitative criteria (e.g., mass, ice horn thickness, ice horn angle, icing limits, and iced area). This paper will provide a description of the experimental method, present selected experimental results, and conclude with an evaluation of these results, followed by a discussion of recommendations for future research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salajegheh, Nima; Abedrabbo, Nader; Pourboghrat, Farhang
An efficient integration algorithm for continuum damage based elastoplastic constitutive equations is implemented in LS-DYNA. The isotropic damage parameter is defined as the ratio of the damaged surface area over the total cross section area of the representative volume element. This parameter is incorporated into the integration algorithm as an internal variable. The developed damage model is then implemented in the FEM code LS-DYNA as a user material subroutine (UMAT). Pure stretch experiments with a hemispherical punch are carried out for copper sheets and the results are compared against the predictions of the implemented damage model. Evaluation of damage parameters is carried out and the optimized values that correctly predicted the failure in the sheet are reported. Prediction of failure in the numerical analysis is performed through element deletion using the critical damage value. The set of failure parameters which accurately predicts the failure behavior in copper sheets compared to experimental data is reported as well.
Transformation to equivalent dimensions—a new methodology to study earthquake clustering
NASA Astrophysics Data System (ADS)
Lasocki, Stanislaw
2014-05-01
A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of the earthquake parameters differ, hence a metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution to this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept, the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent rather than the original dimensions space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale on the [0, 1] interval, and the distance between earthquakes represented by vectors in any ED space is Euclidean. The unknown, in general, cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. The potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
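A minimal version of the transformation is sketched below in Python, using plain empirical CDFs in place of the paper's kernel estimates; after the transform, every parameter lives on [0, 1] and inter-event distances are Euclidean:

    import numpy as np

    def to_equivalent_dimension(values):
        """Map a parameter to [0, 1] via its empirical CDF (the paper uses a
        non-parametric kernel estimate; a plain ECDF is used here for brevity)."""
        ranks = np.argsort(np.argsort(values))
        return (ranks + 1) / (len(values) + 1)

    rng = np.random.default_rng(3)
    time_s = rng.exponential(1e4, 500)      # synthetic catalogue: origin times
    mag = rng.exponential(0.5, 500) + 2.0   # and magnitudes, arbitrary units

    ed = np.column_stack([to_equivalent_dimension(time_s),
                          to_equivalent_dimension(mag)])
    # Euclidean distance between events 0 and 1 in the 2-D ED space:
    print(np.linalg.norm(ed[0] - ed[1]))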
NASA Astrophysics Data System (ADS)
Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede
2017-10-01
Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
NASA Astrophysics Data System (ADS)
Llovet, X.; Salvat, F.
2018-01-01
The accuracy of Monte Carlo simulations of EPMA measurements is primarily determined by that of the adopted interaction models and atomic relaxation data. The code PENEPMA implements the most reliable general models available, and it is known to provide a realistic description of electron transport and X-ray emission. Nonetheless, efficiency (i.e., the simulation speed) of the code is determined by a number of simulation parameters that define the details of the electron tracking algorithm, which may also have an effect on the accuracy of the results. In addition, to reduce the computer time needed to obtain X-ray spectra with a given statistical accuracy, PENEPMA allows the use of several variance-reduction techniques, defined by a set of specific parameters. In this communication we analyse and discuss the effect of using different values of the simulation and variance-reduction parameters on the speed and accuracy of EPMA simulations. We also discuss the effectiveness of using multi-core computers along with a simple practical strategy implemented in PENEPMA.
Safety assessment of a shallow foundation using the random finite element method
NASA Astrophysics Data System (ADS)
Zaskórski, Łukasz; Puła, Wojciech
2015-04-01
A complex structure of soil and its random character are reasons why soil modeling is a cumbersome task. The heterogeneity of soil has to be considered even within a homogeneous soil layer; therefore, the estimation of the shear strength parameters of soil for the purposes of a geotechnical analysis causes many problems. The applicable standards (Eurocode 7) do not present any explicit method for evaluating characteristic values of soil parameters; only general guidelines can be found on how these values should be estimated. Hence many approaches to the assessment of characteristic values of soil parameters are presented in the literature and can be applied in practice. In this paper, the reliability assessment of a shallow strip footing was conducted using a reliability index β. Several approaches to the estimation of characteristic values of soil properties were compared by evaluating the values of the reliability index β that can be achieved by applying each of them: the method of Orr and Breysse, Duncan's method, Schneider's method, Schneider's method accounting for the influence of fluctuation scales, and the method included in Eurocode 7. Design values of the bearing capacity based on these approaches were referred to the stochastic bearing capacity estimated by the random finite element method (RFEM). Design values of the bearing capacity were computed for various widths and depths of the foundation in conjunction with the design approaches (DA) defined in the Eurocode. RFEM was presented by Griffiths and Fenton (1993); it combines the deterministic finite element method, random field theory and Monte Carlo simulations. Random field theory allows the random character of soil parameters to be considered within a homogeneous soil layer; for this purpose a soil property is treated as a separate random variable in every element of the finite element mesh, with a proper correlation structure between points of the given area. RFEM was applied to establish which theoretical probability distribution fits the empirical probability distribution of the bearing capacity, based on 3000 realizations. The assessed probability distribution was applied to compute design values of the bearing capacity and the related reliability indices β. The analyses were carried out for a cohesive soil; hence the friction angle and the cohesion were defined as random parameters characterized by two-dimensional random fields. The friction angle was described by a bounded distribution, as it varies within a limited range, while a lognormal distribution was applied for the cohesion. The other properties - Young's modulus, Poisson's ratio and unit weight - were assumed to be deterministic values because they have negligible influence on the stochastic bearing capacity. Griffiths D. V., & Fenton G. A. (1993). Seepage beneath water retaining structures founded on spatially random soil. Géotechnique, 43(6), 577-587.
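The final reliability-index step can be illustrated compactly (Python with SciPy): given Monte Carlo realizations of bearing capacity, here a made-up lognormal stand-in for the RFEM output, and a design load, the failure probability and β follow directly:

    import numpy as np
    from scipy.stats import norm

    # Sketch of the reliability-index step: capacity samples stand in for the
    # RFEM output; the distribution and the design load are illustrative.
    rng = np.random.default_rng(4)
    capacity = rng.lognormal(mean=np.log(800.0), sigma=0.25, size=100_000)  # kPa
    design_load = 400.0                                                     # kPa

    p_f = np.mean(capacity < design_load)   # probability of failure
    beta = norm.ppf(1.0 - p_f)              # reliability index
    print(p_f, beta)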
Formation of the predicted training parameters in the form of a discrete information stream
NASA Astrophysics Data System (ADS)
Smolentseva, T. E.; Sumin, V. I.; Zolnikov, V. K.; Lavlinsky, V. V.
2018-03-01
This paper considers the training process as a discrete information stream. At each stage of the process, the portions of training information and the quality of their assimilation are analysed, and the individual characteristics and the trainee's reaction to every portion of information in the corresponding sections are defined. A control algorithm for training with a predicted number of control checks of the trainee is considered, which allows one to determine what control action should be created for the trainee. On the basis of this algorithm, a vector of probabilities of ignorance of elements of the training information is obtained. As a result of the research, an algorithm for forming the predicted training parameters is developed. The task of comparing the duration of training obtained experimentally with the predicted duration is solved, and on this basis a conclusion is drawn about the efficiency of forming the predicted training parameters. A program complex is developed, based on the values of individual parameters obtained from experiments on each trainee, which allows individual characteristics to be calculated, ratings to be formed and the change of training parameters to be monitored.
The Production of FRW Universe and Decay to Particles in Multiverse
NASA Astrophysics Data System (ADS)
Ghaffary, Tooraj
2017-09-01
In this study, it is first shown that as the Hubble parameter H increases, the production cross section for closed and flat Universes increases rapidly at smaller values of H and becomes constant for higher values of H. In the case of an open Universe, however, the production cross section encounters a singularity. Before this singularity, as H increases, the cross section increases for smaller H (H < 2.5), exhibits a turn-over at moderate values of H (2.5 < H < 3.5), and decreases for larger H; at a special value of H, the cross section encounters a singularity. Although the cross section cannot be defined at this singularity, before and after this point it is certainly equal to zero. After this singularity, the cross section increases rapidly as H increases. It is shown that if the production of the Universe happens before this singularity, the Universe cannot reach the higher values of the Hubble parameter beyond the singularity; moreover, if the production of the Universe occurs after the singularity, it cannot access values of the Hubble parameter smaller than the singularity. The thermal distributions for particles inside the FRW Universes are then obtained. It is found that a large number of particles are produced near the apparent horizon, owing to the variety in their energies and probabilities. Finally, comparing the particle production cross sections for flat, closed and open Universes, it is concluded that as the value of k increases, the cross section decreases.
Using constraints and their value for optimization of large ODE systems
Domijan, Mirela; Rand, David A.
2015-01-01
We provide analytical tools to facilitate a rigorous assessment of the quality and value of the fit of a complex model to data. We use this to provide approaches to model fitting, parameter estimation, the design of optimization functions and experimental optimization. This is in the context where multiple constraints are used to select or optimize a large model defined by differential equations. We illustrate the approach using models of circadian clocks and the NF-κB signalling system. PMID:25673300
Optimization of multi-environment trials for genomic selection based on crop models.
Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J
2017-08-01
We propose a statistical criterion to optimize multi-environment trials in order to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting the breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling through crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method to optimize the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined for this aim and was evaluated on simulated and real data, with the example of wheat phenology. The MET defined with OptiMET allowed the genetic parameters to be estimated with lower error, leading to higher QTL detection power and higher prediction accuracies. The MET defined with OptiMET was on average more efficient, in terms of the quality of the parameter estimates, than a random MET composed of twice as many environments. OptiMET is thus a valuable tool for determining optimal experimental conditions to best exploit METs and the phenotyping tools that are currently being developed.
Deter, Russell L.; Lee, Wesley; Yeo, Lami; Romero, Roberto
2012-01-01
Objectives To characterize 2nd and 3rd trimester fetal growth using Individualized Growth Assessment in a large cohort of fetuses with normal growth outcomes. Methods A prospective longitudinal study of 119 pregnancies was carried out from 18 weeks, MA, to delivery. Measurements of eleven fetal growth parameters were obtained from 3D scans at 3-4 week intervals. Regression analyses were used to determine Start Points [SP] and Rossavik model [P = c·t^(k + st)] coefficients c, k and s for each parameter in each fetus. Second trimester growth model specification functions were re-established. These functions were used to generate individual growth models and determine predicted s and s-residual [s = pred s + s-resid] values. Actual measurements were compared to predicted growth trajectories obtained from the growth models and Percent Deviations [% Dev = {(actual − predicted)/predicted} × 100] calculated. Age-specific reference standards for this statistic were defined using 2-level statistical modeling for the nine directly measured parameters and estimated weight. Results Rossavik models fit the data for all parameters very well [R²: 99%], with SPs and k values similar to those found in a much smaller cohort. The c values were strongly related to the 2nd trimester slope [R²: 97%], as was predicted s to estimated c [R²: 95%]; the latter was negative for skeletal parameters and positive for soft tissue parameters. The s-residuals were unrelated to the estimated c's [R²: 0%] and had mean values of zero. Rossavik models predicted 3rd trimester growth with systematic errors close to 0% and random errors [95% range] of 5.7-10.9% and 20.0-24.3% for one- and three-dimensional parameters, respectively. Moderate changes in age-specific variability were seen in the 3rd trimester. Conclusions IGA procedures for evaluating 2nd and 3rd trimester growth are now established based on a large cohort [4-6-fold larger than those used previously], thus permitting more reliable growth assessment with each fetus acting as its own control. New, more rigorously defined, age-specific standards for the evaluation of 3rd trimester growth deviations are now available for 10 anatomical parameters. Our results are also consistent with the predicted s and s-residual being representatives of growth controllers operating through the insulin-like growth factor [IGF] axis. PMID:23962305
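Assuming the model form reconstructed above, P = c·t^(k+st), fitting the three coefficients to a fetus's serial measurements is a standard nonlinear regression; the Python sketch below uses synthetic data and arbitrary coefficients, not values from the study:

    import numpy as np
    from scipy.optimize import curve_fit

    def rossavik(t, c, k, s):
        """Rossavik growth model P = c * t**(k + s*t), with t measured in
        weeks from the start point (model form as reconstructed above)."""
        return c * t**(k + s * t)

    t = np.linspace(4, 22, 10)                   # weeks after a hypothetical SP
    rng = np.random.default_rng(5)
    p_obs = rossavik(t, 0.9, 1.6, -0.01) * (1 + 0.02 * rng.normal(size=t.size))

    (c, k, s), _ = curve_fit(rossavik, t, p_obs, p0=[1.0, 1.5, 0.0])
    pct_dev = (p_obs - rossavik(t, c, k, s)) / rossavik(t, c, k, s) * 100
    print(c, k, s, np.round(pct_dev, 2))         # % Dev as defined above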
Computing Information Value from RDF Graph Properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
al-Saffar, Sinan; Heileman, Gregory
2010-11-08
Information value has been implicitly utilized and mostly non-subjectively computed in information retrieval (IR) systems. We explicitly define and compute the value of an information piece as a function of two parameters: the first is the potential semantic impact the target information can subjectively have on its recipient's world-knowledge, and the second is trust in the information source. We model these two parameters as properties of RDF graphs. Two graphs are constructed, a target graph representing the semantics of the target body of information and a context graph representing the context of the consumer of that information. We compute information value subjectively as a function of both potential change to the context graph (impact) and the overlap between the two graphs (trust). Graph change is computed as a graph edit distance measuring the dissimilarity between the context graph before and after the learning of the target graph. A particular application of this subjective information valuation is in the construction of a personalized ranking component in Web search engines. Based on our method, we construct a Web re-ranking system that personalizes the information experience for the information-consumer.
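A toy version of this valuation can be written with networkx; the graphs, the overlap-based trust proxy and the combining function below are all illustrative choices, not the authors' definitions:

    import networkx as nx

    # Impact = edit distance between the context graph before and after
    # learning the target; trust = a simple edge-overlap proxy.
    context = nx.Graph([("sun", "star"), ("star", "emits-light")])
    target = nx.Graph([("sun", "star"), ("star", "fusion"), ("fusion", "energy")])

    updated = nx.compose(context, target)              # context after learning
    impact = nx.graph_edit_distance(context, updated)  # dissimilarity measure
    overlap = len(set(context.edges) & set(target.edges))
    trust = overlap / max(len(target.edges), 1)

    print(impact * trust)   # one possible way to combine impact and trust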
Moving Toward Quantifying Reliability - The Next Step in a Rapidly Maturing PV Industry: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurtz, Sarah; Sample, Tony; Wohlgemuth, John
2015-12-07
Some may say that PV modules are moving toward being a simple commodity, but most major PV customers ask: 'How can I minimize chances of a module recall?' Or, 'How can I quantify the added value of a "premium" module?' Or, 'How can I assess the value of an old PV system that I'm thinking of purchasing?' These are all questions that PVQAT (the International PV Quality Assurance Task Force) and partner organizations are working to answer. Defining standard methods for ensuring minimal acceptable quality of PV modules, differentiating modules that provide added value in the toughest of environments, and creating a process (e.g. through IECRE [1]) that can follow a PV system from design through installation and operation are tough tasks, but having standard approaches for these will increase confidence, reduce costs, and be a critical foundation of a mature PV industry. This paper summarizes current needs for new tests, some challenges for defining those tests, and some of the key efforts toward development of international standards, emphasizing that meaningful quantification of reliability (as in defining a service life prediction) must be done in the context of a specific product with design parameters defined through a quality management system.
Pre-slaughter stress and pork quality
NASA Astrophysics Data System (ADS)
Stajković, S.; Teodorović, V.; Baltić, M.; Karabasil, N.
2017-09-01
Stress is an inevitable consequence of the handling of animals for slaughter. Stress conditions during transport, lairage and at slaughter induce undesirable effects on the end quality of meat, such as pale, soft, exudative (PSE) meat and dark, firm, dry (DFD) meat. Hence, it is very important to define appropriate parameters for the objective assessment of the level of stress. Attempts to define measures of stress have been difficult, and no physiological parameter has been successfully used to evaluate stress situations. One physiological change in swine associated with animal handling stress and with pork quality is an increase in blood lactate concentration. Plasma cortisol was thought to be an appropriate indicator of stress, but its concentration was not consistently changed by different stressors. Therefore, finding alternative parameters that react to stressors, such as acute phase proteins, would be of great value for the objective evaluation of the level of stress and of meat quality. As stress during pre-slaughter handling is unavoidable, the final goal is to improve transport and slaughter conditions for the animal and, as a consequence, meat quality and animal welfare.
Critical laboratory values in hemostasis: toward consensus.
Lippi, Giuseppe; Adcock, Dorothy; Simundic, Ana-Maria; Tripodi, Armando; Favaloro, Emmanuel J
2017-09-01
The term "critical values" can be defined to entail laboratory test results that lie significantly outside the normal (reference) range and necessitate immediate reporting to safeguard patient health, as well as those displaying a highly and clinically significant variation compared to previous data. The identification and effective communication of "highly pathological" values has engaged the minds of many clinicians, health care and laboratory professionals for decades, since these activities are vital to good laboratory practice. This is especially true in hemostasis, where the timely and efficient communication of critical values strongly impacts patient management. Given the heterogeneity of available data, this paper hence aims to analyze the state of the art and provide an expert opinion about the parameters, measurement units and alert limits pertaining to critical values in hemostasis, thus providing a basic document for future consultation that assists laboratory professionals and clinicians alike. KEY MESSAGES Critical values are laboratory test results significantly lying outside the normal (reference) range and necessitating immediate reporting to safeguard patient health. A broad heterogeneity exists about critical values in hemostasis worldwide. We provide here an expert opinion about the parameters, measurement units and alert limits pertaining to critical values in hemostasis.
Theoretical prediction of Grüneisen parameter for SiO2·TiO2 bulk metallic glasses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Chandra K.; Pandey, Brijesh K., E-mail: bkpmmmec11@gmail.com; Pandey, Anjani K.
2016-05-23
The Grüneisen parameter (γ) is very important for deciding the limitations on the prediction of the thermoelastic properties of bulk metallic glasses. It can be defined in terms of microscopic and macroscopic parameters of the material, where the former is based on the vibrational frequencies of atoms in the material while the latter is closely related to its thermodynamic properties. Different formulations and equations of state have been used by pioneering researchers in this field to predict the true value of the Grüneisen parameter for BMGs, but for SiO2·TiO2 very little information has been available until now. In the present work we have tested the validity of two different isothermal EOS, viz. the Poirier-Tarantola EOS and the usual Tait EOS, to predict the true value of the Grüneisen parameter for SiO2·TiO2 as a function of compression. Considering different thermodynamic limitations related to the material constraints and analyzing the obtained results, it is concluded that the Poirier-Tarantola EOS gives better numerical values of the Grüneisen parameter (γ) for the SiO2·TiO2 BMG.
Yang, Chao; Song, Cunjiang; Geng, Weitao; Li, Qiang; Wang, Yuanyuan; Kong, Meimei; Wang, Shufang
2012-01-01
The environmentally degradable parameter (Ed K) is important in describing the biodegradability of environmentally biodegradable polymers (BDPs). In this study, the concept of Ed K was introduced. A test procedure using the ISO 14852 method, with the evolved carbon dioxide detected as the analytical parameter, was developed, and the calculated Ed K was used as an indicator of the ultimate biodegradability of materials. Starch and polyethylene, used as reference materials, were assigned Ed K values of 100 and 0, respectively. Natural soil samples were inoculated into bioreactors, and the rates of biodegradation of the reference materials and 15 commercial BDPs were determined over a 2-week test period. Finally, a formula was derived to calculate the value of Ed K for each material. The Ed K values of the tested materials correlated positively with their biodegradation rates in the simulated soil environment and indicated the relative biodegradation rate of each material among all those tested. Therefore, Ed K was shown to be a reliable indicator for quantitatively evaluating the potential biodegradability of BDPs in the natural environment. PMID:22675455
Using Least Squares for Error Propagation
ERIC Educational Resources Information Center
Tellinghuisen, Joel
2015-01-01
The method of least-squares (LS) has a built-in procedure for estimating the standard errors (SEs) of the adjustable parameters in the fit model: They are the square roots of the diagonal elements of the covariance matrix. This means that one can use least-squares to obtain numerical values of propagated errors by defining the target quantities as…
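A minimal sketch of this reparameterization trick (illustrative Python, not the author's code): to propagate error into a derived quantity, rewrite the fit model so that the quantity of interest is itself an adjustable parameter, and read its standard error straight from the covariance matrix. The straight-line model and synthetic data are hypothetical.

import numpy as np
from scipy.optimize import curve_fit

# Synthetic data for a straight-line fit y = a + b*x.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20)
y = 1.5 + 0.8 * x + rng.normal(0, 0.3, x.size)

x0 = 4.0  # point at which the predicted value and its SE are wanted

# Reparameterized model: c = a + b*x0 is now itself a fit parameter,
# so its SE comes straight from the covariance matrix diagonal.
def model(x, c, b):
    return c + b * (x - x0)

popt, pcov = curve_fit(model, x, y)
c, b = popt
se_c = np.sqrt(pcov[0, 0])  # propagated SE of y(x0); no propagation formula needed
print(f"y({x0}) = {c:.3f} +/- {se_c:.3f}")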
NASA Astrophysics Data System (ADS)
Rousseva, Svetla; Kercheva, Milena; Shishkov, Toma; Dimitrov, Emil; Nenov, Martin; Lair, Georg J.; Moraetis, Daniel
2014-05-01
Soil water retention is of primary importance for the majority of soil functions. The characteristics derived from the Soil Water Retention Curve (SWRC) are directly related to soil structure and the soil water regime and can be used as indicators of soil physical quality. The aim of this study is to present some parameters and relationships based on the SWRC data from soil profiles characterising the European SoilTrEC Critical Zone Observatories Fuchsenbigl and Koiliaris. The studied soils are representative of highly productive arable soils within a soil-formation chronosequence at "Marchfeld" (Fuchsenbigl CZO), Austria, and of soils heavily impacted over centuries by intensive grazing and farming, under severe risk of desertification along a climatic and lithological gradient at Koiliaris, Crete, Greece. Soil water retention at pF ≤ 2.52 was determined on undisturbed soil cores (100 cm3 and 50 cm3) by a suction plate method. Water retention at pF = 4.2 was determined by a membrane press method, and at pF ≥ 5.6 by adsorption of water vapour at controlled relative humidity, both using ground soil samples. The soil physical quality parameter (S-parameter) was defined as the slope of the water retention curve at its inflection point (Dexter, 2006), determined from the fitted parameters of the van Genuchten (1980) water retention equation. The S-parameter values were categorised to assess soil physical quality as follows: S < 0.020 very poor, 0.020 ≤ S < 0.035 poor, 0.035 ≤ S < 0.050 good, S ≥ 0.050 very good (Dexter, 2004). The results showed that most of the studied topsoil horizons have good physical quality according to both the S-parameter and the Plant-Available Water content (PAW), with the exception of the cropland soils at CZO Fuchsenbigl (F4, F5), which have poor soil structure. The link between the S-parameter and the indicator of soil structure stability (water-stable soil aggregates of size 1-3 mm) is not well defined. The scatter is due to high values of S in the subsoil, which do not always coincide with favourable physical properties, as can be seen from the relationship with the PAW content. It was found that values of S ≥ 0.05 correspond to PAW > 20 % vol. in the topsoil horizons. The high values of S in subsoil horizons are due to low PAW and restrict the application of the S categories in these cases. Well defined links are found between the PAW content and the S-parameter when the data from the topsoil horizons are split into two groups according to the ratio of air-filled pores (at pF 2.52) to plant-available water: < 2 and ≥ 2. The authors gratefully acknowledge the European Commission Research Directorate-General for funding the SoilTrEC project (Contract No 244118) under its 7th Framework Programme.
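As an illustration of how the S-parameter can be obtained from fitted water-retention parameters, the sketch below assumes Dexter's (2004) closed form S = -n(θsat - θres)(1 + 1/m)^-(1+m) with m = 1 - 1/n for the van Genuchten equation; the input values are invented, not measurements from the observatories.

def s_parameter(theta_sat, theta_res, n):
    """Slope of the van Genuchten retention curve at its inflection point
    (closed form attributed to Dexter, 2004; assumes m = 1 - 1/n)."""
    m = 1.0 - 1.0 / n
    return -n * (theta_sat - theta_res) * (1.0 + 1.0 / m) ** -(1.0 + m)

def quality_category(s):
    s = abs(s)  # the categories are defined on the magnitude of S
    if s < 0.020:
        return "very poor"
    if s < 0.035:
        return "poor"
    if s < 0.050:
        return "good"
    return "very good"

# Illustrative (not measured) values for a loamy topsoil horizon:
s = s_parameter(theta_sat=0.45, theta_res=0.10, n=1.4)
print(round(abs(s), 3), quality_category(s))  # -> 0.071 very good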
The Use of the Nelder-Mead Method in Determining Projection Parameters for Globe Photographs
NASA Astrophysics Data System (ADS)
Gede, M.
2009-04-01
A photo of a terrestrial or celestial globe can be handled as a map. The only hard issue is its projection: the so-called Tilted Perspective Projection, which, if the optical axis of the photo intersects the globe's centre, simplifies to the Vertical Near-Side Perspective Projection. When georeferencing such a photo, the exact parameters of the projection are also needed. These parameters depend on the position of the viewpoint of the camera. Several hundred globe photos had to be georeferenced during the Virtual Globes Museum project, which made it necessary to automate the calculation of the projection parameters. The author developed a program for this task which uses the Nelder-Mead method to find the optimum parameters when a set of control points is given as input. The Nelder-Mead method is a numerical algorithm for minimizing a function in a many-dimensional space. The function in the present application is the average error of the control points calculated from the current values of the parameters. The parameters are the geographical coordinates of the projection centre, the image coordinates of the same point, the rotation of the projection, the height of the perspective point and the scale of the photo (calculated in pixels/km). The program reads Global Mapper's Ground Control Point (.GCP) file format as input and creates projection description files (.PRJ) for the same software. The initial values of the geographical coordinates of the projection centre are calculated as the average of the control points, while the other parameters are set to experimental values which represent the most common circumstances of taking a globe photograph. The algorithm runs until the change of the parameters sinks below a pre-defined limit. The minimum search can be refined by using the previous result parameter set as new initial values. This paper introduces the calculation mechanism and examples of its usage. Other possible uses of the method are also discussed.
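The sketch below illustrates the optimization setup only (it is not the author's program): Snyder's spherical vertical near-side perspective formulas are assumed for the forward projection, and Nelder-Mead minimizes the average control-point error over the seven parameters. The control points and initial guesses are hypothetical.

import numpy as np
from scipy.optimize import minimize

R = 6371.0  # km; sphere radius, so the scale can be expressed in pixels/km

def project(lon, lat, p):
    """Vertical near-side perspective projection (Snyder's spherical form,
    assumed here), followed by rotation, scaling and image offset."""
    lam0, phi1, cx, cy, rot, h, scale = p
    lam, phi = np.radians(lon), np.radians(lat)
    lam0r, phi1r = np.radians(lam0), np.radians(phi1)
    P = 1.0 + h / R
    cosc = np.sin(phi1r) * np.sin(phi) + np.cos(phi1r) * np.cos(phi) * np.cos(lam - lam0r)
    k = (P - 1.0) / np.maximum(P - cosc, 1e-9)  # guarded against degenerate trials
    x = R * k * np.cos(phi) * np.sin(lam - lam0r)
    y = R * k * (np.cos(phi1r) * np.sin(phi) - np.sin(phi1r) * np.cos(phi) * np.cos(lam - lam0r))
    xr = x * np.cos(rot) - y * np.sin(rot)
    yr = x * np.sin(rot) + y * np.cos(rot)
    return cx + scale * xr, cy - scale * yr

def mean_error(p, gcps):
    err = [np.hypot(*(np.subtract(project(lon, lat, p), (px, py))))
           for (lon, lat), (px, py) in gcps]
    return np.mean(err)

# Hypothetical ground control points ((lon, lat), (px, py)):
gcps = [((0, 0), (400, 400)), ((30, 10), (620, 330)), ((-20, 40), (280, 160))]
p0 = [5.0, 15.0, 400.0, 300.0, 0.0, 1000.0, 0.04]  # rough initial guesses
res = minimize(mean_error, p0, args=(gcps,), method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-6, "maxiter": 20000})
print(res.x)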
Electrochemical energy storage subsystems study, volume 1
NASA Technical Reports Server (NTRS)
Miller, F. Q.; Richardson, P. W.; Graff, C. L.; Jordan, M. V.; Patterson, V. L.
1981-01-01
The effects on life cycle costs (LCC) of major design and performance technology parameters for multi kW LEO and GEO energy storage subsystems using NiCd and NiH2 batteries and fuel cell/electrolysis cell devices were examined. Design, performance and LCC dynamic models are developed based on mission and system/subsystem requirements and existing or derived physical and cost data relationships. The models define baseline designs and costs. The major design and performance parameters are each varied to determine their influence on LCC around the baseline values.
An algorithm for surface smoothing with rational splines
NASA Technical Reports Server (NTRS)
Schiess, James R.
1987-01-01
Discussed is an algorithm for smoothing surfaces with spline functions containing tension parameters. The bivariate spline functions used are tensor products of univariate rational-spline functions. A distinct tension parameter corresponds to each rectangular strip defined by a pair of consecutive spline knots along either axis. Equations are derived for writing the bivariate rational spline in terms of functions and derivatives at the knots. Estimates of these values are obtained via weighted least squares subject to continuity constraints at the knots. The algorithm is illustrated on a set of terrain elevation data.
Electrochemical Energy Storage Subsystems Study, Volume 2
NASA Technical Reports Server (NTRS)
Miller, F. Q.; Richardson, P. W.; Graff, C. L.; Jordan, M. V.; Patterson, V. L.
1981-01-01
The effects on life cycle costs (LCC) of major design and performance technology parameters for multi kW LEO and GEO energy storage subsystems using NiCd and NiH2 batteries and fuel cell/electrolysis cell devices were examined. Design, performance and LCC dynamic models are developed based on mission and system/subsystem requirements and existing or derived physical and cost data relationships. The models are exercised to define baseline designs and costs. Then the major design and performance parameters are each varied to determine their influence on LCC around the baseline values.
Ngu, Roland Cheofor; Kadia, Benjamin Momo; Tianyi, Frank-Leonel; Choukem, Simeon Pierre
2018-01-01
Background Waist circumference (WC), waist-to-hip ratio (WHR) and waist-to-height ratio (WHtR) are all independent predictors of cardio-metabolic risk and therefore important in HIV/AIDS patients on antiretroviral therapy at risk of increased visceral adiposity. This study aimed to assess the extent of agreement between these parameters and the body mass index (BMI), both as anthropometric parameters and in classifying cardio-metabolic risk in HIV/AIDS patients. Methods A secondary analysis of data from a cross-sectional study involving 200 HIV/AIDS patients was done. Anthropometric parameters were measured using standard guidelines, and central obesity was defined according to recommended criteria. Increased cardio-metabolic risk was defined according to the standard cut-off values for all four parameters. Data were analyzed using STATA version 14.1. Results The prevalence of WC-defined central obesity, WHR-defined central obesity and WHtR > 0.50 were 33.5%, 44.5% and 36.5%, respectively. The prevalence of BMI-defined overweight and obesity was 40.5%. After adjusting for gender and HAART status, there was a significant linear association and correlation between WC and BMI (regression equation: WC (cm) = 37.184 + 1.756 BMI (kg/m²) + 0.825 Male + 1.002 HAART; p < 0.001, r = 0.65), and between WHtR and BMI (regression equation: WHtR = 0.223 + 0.011 BMI (kg/m²) − 0.0153 Male + 0.003 HAART; p < 0.001, r = 0.65), but not between WHR and BMI (p = 0.097, r = 0.13). There was no agreement between WC, WHtR and BMI, and minimal agreement between WHR and BMI, in identifying patients with increased cardio-metabolic risk. Conclusion Despite the observed linear association and correlation between these anthropometric parameters, the routine use of WC, WHR and WHtR as better predictors of cardio-metabolic risk should be encouraged in these patients, because of their minimal agreement with BMI in identifying HIV/AIDS patients with increased cardio-metabolic risk. HAART status does not appear to significantly affect the association between these anthropometric parameters. PMID:29566089
A testable model of earthquake probability based on changes in mean event size
NASA Astrophysics Data System (ADS)
Imoto, Masajiro
2003-02-01
We studied changes in mean event size using data on microearthquakes obtained from a local network in Kanto, central Japan, from the viewpoint that mean event size tends to increase as the critical point is approached. A parameter describing the changes was defined using a simple weighted-average procedure. To obtain the distribution of the parameter in the background, we surveyed values of the parameter from 1982 to 1999 in a 160 × 160 × 80 km volume. The 16 events of M5.5 or larger in this volume were selected as target events. The conditional distribution of the parameter was estimated from the 16 values, each of which refers to the value immediately prior to one target event. The background distribution is symmetric, with its center corresponding to no change in b value. In contrast, the conditional distribution is asymmetric, tending toward a decrease in b value. The difference between the two distributions was significant and provided a hazard function for estimating earthquake probabilities. Comparing the hazard function with a Poisson process, we obtained an Akaike Information Criterion (AIC) reduction of 24. This reduction agreed closely with the probability gains of a retrospective study, in the range of 2-4. A successful example of the proposed model is the earthquake of 3 June 2000, the only event during the period of prospective testing.
Can online benchmarking increase rates of thrombolysis? Data from the Austrian stroke unit registry.
Ferrari, Julia; Seyfang, Leonhard; Lang, Wilfried
2013-09-01
Despite its widespread availability and known safety and efficacy, intravenous thrombolysis is still underused. We aimed to identify whether nationwide quality projects, like the stroke registry in Austria, as well as online benchmarking and predefined target values, can increase rates of thrombolysis. We therefore assessed 6,394 out of 48,462 patients with ischemic stroke from the Austrian stroke registry (study period from March 2003 to December 2011) who had undergone thrombolysis treatment. We defined lower-level and target values as quality parameters and evaluated whether or not these parameters were achieved in the past years. We were able to show that rates of thrombolysis in Austria increased from 4.9% in 2003 to 18.3% in 2011. In a multivariate regression model, the main impact seen was the increase over the years (the OR ranges from 0.47 [95% CI 0.32-0.68] in 2003 to 2.51 [95% CI 2.20-2.87] in 2011). The predefined lower and target levels of thrombolysis were achieved at the majority of participating centers: in 2011 the lower value of 5% was achieved at all stroke units, and the target value of 15% was reached at 21 of 34 stroke units. We conclude that online benchmarking and the concept of defining target values as a tool for nationwide acute stroke care appear to have resulted in an increase in the rate of thrombolysis over the last few years, although the variability between the stroke units has not yet been reduced.
How much expert knowledge is it worth to put in conceptual hydrological models?
NASA Astrophysics Data System (ADS)
Antonetti, Manuel; Zappa, Massimiliano
2017-04-01
Both modellers and experimentalists agree on using expert knowledge to improve conceptual hydrological simulations in ungauged basins. However, they use expert knowledge differently, both for hydrologically mapping the landscape and for parameterising a given hydrological model. Modellers generally use very simplified (e.g. topography-based) mapping approaches and invest most of their knowledge in constraining the model by defining parameter and process relational rules. In contrast, experimentalists tend to invest all their detailed and qualitative knowledge about processes to obtain a spatial distribution of areas with different dominant runoff generation processes (DRPs) that is as realistic as possible, and to define plausibly narrow value ranges for each model parameter. Since, most of the time, the modelling goal is exclusively to simulate runoff at a specific site, even strongly simplified hydrological classifications can lead to satisfying results, owing to the equifinality of hydrological models, overfitting problems and the numerous uncertainty sources affecting runoff simulations. Therefore, to test to what extent expert knowledge can improve simulation results under uncertainty, we applied a typical modellers' framework, relying on parameter and process constraints defined from expert knowledge, to several catchments on the Swiss Plateau. To map the spatial distribution of the DRPs, mapping approaches with increasing involvement of expert knowledge were used. Simulation results highlighted the potential added value of using all the expert knowledge available on a catchment. Combinations of event types and landscapes where even a simplified mapping approach can lead to satisfying results were also identified. Finally, the uncertainty originating from the different mapping approaches was compared with that linked to meteorological input data and catchment initial conditions.
Acarbose bioequivalence: exploration of new pharmacodynamic parameters.
Zhang, Min; Yang, Jin; Tao, Lei; Li, Lingjun; Ma, Pengcheng; Fawcett, John Paul
2012-06-01
To investigate bioequivalence (BE) testing of an acarbose formulation in healthy Chinese volunteers using recommended and innovative pharmacodynamic (PD) parameters. Following the Food and Drug Administration (FDA) guidance, a randomized, cross-over study of acarbose test (T) and reference (R) (Glucobay®) formulations was performed with a 1-week wash-out period. Preliminary pilot studies showed that the appropriate dose of acarbose was 2 × 50 mg and the required number of subjects was 40. Serum glucose concentrations after sucrose administration (baseline) and after co-administration of sucrose/acarbose on the following day were both determined. Three newly defined PD measures of glucose fluctuation (glucose excursion (GE); GE', the glucose excursion without the effect of homeostatic glucose control; and fAUC, the degree of fluctuation of serum glucose based on AUC), the plateau glucose concentration (Css), and the time of maximum reduction in glucose concentration (Tmax) were tested in the evaluation. The adequacy of the two parameters recommended by the FDA, ΔCSG,max (maximum reduction in serum glucose concentration) and AUEC(0-4h) (reduction in the AUC(0-4h) of glucose between baseline and acarbose formulation), was also evaluated. The Tmax values were comparable, and the 90% confidence intervals of the geometric test/reference ratios (T/R) for ΔCSG,max, Css, GE, and fAUC were all within 80-125%. The parameter GE' was slightly outside the limits, and the parameter AUEC(0-4h) could not be computed due to the presence of negative values. In acarbose BE evaluation, while the recommended parameter ΔCSG,max is valuable, the combination of Css and one of the newly defined glucose fluctuation parameters (GE, GE', or fAUC) is preferable to AUEC(0-4h). The acarbose test formulation can be initially considered bioequivalent to Glucobay®.
UCODE, a computer code for universal inverse modeling
Poeter, E.P.; Hill, M.C.
1999-01-01
This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text-only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced; simulated equivalent values are calculated from values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences, and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values. UCODE is intended for use on any computer operating system: it consists of algorithms programmed in Perl, a freeware language designed for text manipulation, and in Fortran90, which efficiently performs numerical calculations.
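A compact sketch of the core numerical idea, under the assumption that a plain modified Gauss-Newton step with forward-difference sensitivities is representative of what UCODE does internally (this is not UCODE's actual code):

import numpy as np

def gauss_newton(model, params, obs, weights, tol=1e-8, max_iter=50):
    """Minimal modified Gauss-Newton sketch: minimizes the weighted
    least-squares objective S(b) = sum(w * (obs - model(b))**2), with
    sensitivities approximated by forward differences."""
    b = np.asarray(params, dtype=float)
    for _ in range(max_iter):
        r = obs - model(b)                      # residuals
        # Forward-difference sensitivity (Jacobian) matrix
        J = np.empty((r.size, b.size))
        for j in range(b.size):
            db = np.where(np.arange(b.size) == j, 1e-6 * max(abs(b[j]), 1.0), 0.0)
            J[:, j] = (model(b + db) - model(b)) / db[j]
        W = np.diag(weights)
        # Normal equations with a tiny damping term for robustness
        A = J.T @ W @ J + 1e-10 * np.eye(b.size)
        step = np.linalg.solve(A, J.T @ W @ r)
        b += step
        if np.max(np.abs(step)) < tol:
            break
    return b

# Toy "application model": exponential drawdown with two parameters.
t = np.linspace(0.1, 5, 30)
obs = 2.0 * np.exp(-0.7 * t)
print(gauss_newton(lambda b: b[0] * np.exp(-b[1] * t), [1.0, 1.0], obs, np.ones_like(t)))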
Determination of Solubility Parameters of Ibuprofen and Ibuprofen Lysinate.
Kitak, Teja; Dumičić, Aleksandra; Planinšek, Odon; Šibanc, Rok; Srčič, Stanko
2015-12-03
In recent years there has been growing interest in formulating solid dispersions, whose purposes mainly include solubility enhancement, sustained drug release and taste masking. The most notable problem with these dispersions is drug-carrier (in)solubility. Here we focus on solubility parameters as a tool for predicting the solubility of a drug in certain carriers. Solubility parameters were determined in two different ways: solely by calculation methods, and by experimental approaches. Six different calculation methods were applied to calculate the solubility parameters of the drug ibuprofen and several excipients. However, we were not able to do so in the case of ibuprofen lysinate, as calculation models for salts are still not defined. Therefore, the extended Hansen's approach and inverse gas chromatography (IGC) were used to evaluate the solubility parameters of ibuprofen lysinate. The obtained values of the total solubility parameter did not differ much between the two methods: with the extended Hansen's approach it was δt = 31.15 MPa^0.5 and with IGC it was δt = 35.17 MPa^0.5. However, the values of the partial solubility parameters, i.e., δd, δp and δh, did differ from each other, which might be due to the complex behaviour of a salt in the presence of various solvents.
Roberts, Cynthia J; Mahmoud, Ashraf M; Bons, Jeffrey P; Hossain, Arif; Elsheikh, Ahmed; Vinciguerra, Riccardo; Vinciguerra, Paolo; Ambrósio, Renato
2017-04-01
To investigate two new stiffness parameters and their relationships with the dynamic corneal response (DCR) parameters and compare normal and keratoconic eyes. Stiffness parameters are defined as Resultant Pressure at inward applanation (A1) divided by corneal displacement. Stiffness parameter A1 uses displacement between the undeformed cornea and A1 and stiffness parameter highest concavity (HC) uses displacement from A1 to maximum deflection during HC. The spatial and temporal profiles of the Corvis ST (Oculus Optikgeräte, Wetzlar, Germany) air puff were characterized using hot wire anemometry. An adjusted air pressure impinging on the cornea at A1 (adjAP1) and an algorithm to biomechanically correct intraocular pressure based on finite element modelling (bIOP) were used for Resultant Pressure calculation (adjAP1 - bIOP). Linear regression analyses between DCR parameters and stiffness parameters were performed on a retrospective dataset of 180 keratoconic eyes and 482 normal eyes. DCR parameters from a subset of 158 eyes of 158 patients in each group were matched for bIOP and compared using t tests. A P value of less than .05 was considered statistically significant. All DCR parameters evaluated showed significant differences between normal and keratoconic eyes, except peak distance. Keratoconic eyes had lower stiffness parameter values, thinner pachymetry, shorter applanation lengths, greater absolute values of applanation velocities, earlier A1 times and later second applanation times, greater HC deformation amplitudes and HC deflection amplitudes, and lower HC radius of concave curvature (greater concave curvature). Most DCR parameters showed a significant relationship with both stiffness parameters in both groups. Keratoconic eyes demonstrated less resistance to deformation than normal eyes with similar IOP. The stiffness parameters may be useful in future biomechanical studies as potential biomarkers. [J Refract Surg. 2017;33(4):266-273.].
Quadratic RK shooting solution for an environmental parameter prediction boundary value problem
NASA Astrophysics Data System (ADS)
Famelis, Ioannis Th.; Tsitouras, Ch.
2014-10-01
Using tools of information geometry, the minimum distance between two elements of a statistical manifold is defined by the corresponding geodesic, i.e. the minimum-length curve that connects them. Such a curve, where the probability distribution functions in the case of our meteorological data are two-parameter Weibull distributions, satisfies a 2nd-order Boundary Value (BV) system. We study the numerical treatment of the resulting special quadratic-form system using the shooting method. We compare the solutions of the problem when we employ a classical Singly Diagonally Implicit Runge-Kutta (SDIRK) 4(3) pair and a quadratic SDIRK 5(3) pair. Both pairs have the same computational cost, whereas the second attains higher order as it is specially constructed for quadratic problems.
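A minimal sketch of the shooting idea on a generic second-order boundary value problem (the right-hand side below is illustrative, not the geodesic system of the paper): integrate the ODE for a trial initial slope, and solve for the slope that matches the far boundary condition.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Generic second-order BVP  y'' = f(t, y, y'),  y(0) = ya,  y(1) = yb,
# solved by shooting: choose the unknown initial slope s so that the
# terminal value matches the right boundary condition.
def f(t, y, yp):
    return -2.0 * yp - y + t          # illustrative right-hand side only

ya, yb = 0.0, 1.0

def shoot(s):
    sol = solve_ivp(lambda t, u: [u[1], f(t, u[0], u[1])],
                    (0.0, 1.0), [ya, s], rtol=1e-9, atol=1e-9)
    return sol.y[0, -1] - yb          # mismatch at the far boundary

s_star = brentq(shoot, -10.0, 10.0)   # bracket assumed to contain the root
print("initial slope:", s_star)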
Regularities in Low-Temperature Phosphatization of Silicates
NASA Astrophysics Data System (ADS)
Savenko, A. V.
2018-01-01
The regularities in low-temperature phosphatization of silicates are defined from long-term experiments on the interaction between different silicate minerals and phosphate-bearing solutions over a wide range of medium acidity. It is shown that the parameters of the phosphatization reaction of hornblende, orthoclase, and labradorite have the same values as for clay minerals (kaolinite and montmorillonite). This effect may appear if phosphatization proceeds not on the original silicate minerals, which differ in structure and composition, but on a secondary silicate phase that forms upon the interaction between silicates and water and is stable in a certain pH range. The variation in the parameters of the phosphatization reaction at pH ≈ 1.8 is due to the stability of a silicate phase different from that at higher pH values.
Investigations of respiratory control systems simulation
NASA Technical Reports Server (NTRS)
Gallagher, R. R.
1973-01-01
Grodins' respiratory control model was investigated, and it was determined that the following modifications were necessary before the model would be adaptable for current research efforts: (1) the controller equation must be modified to allow integration of the respiratory system model with other physiological systems; (2) the system must be more closely correlated with the salient physiological functions; (3) the respiratory frequency and the heart rate should be expanded to reflect other physiological relationships and dependencies; and (4) the model should be adapted to particular individuals through a better-defined set of initial parameter values, in addition to relating these parameter values to the desired environmental conditions. Several of Milhorn's respiratory control models were also investigated in the hope of using some of their features as modifications to Grodins' model.
Thornley, John H. M.
2011-01-01
Background and Aims Plant growth and respiration still have unresolved issues, examined here using a model. The aims of this work are to compare the model's predictions with McCree's observation-based respiration equation, which led to the ‘growth respiration/maintenance respiration paradigm’ (GMRP) – this is required to give the model credibility; to clarify the nature of maintenance respiration (MR) using a model which does not represent MR explicitly; and to examine algebraic and numerical predictions for the respiration:photosynthesis ratio. Methods A two-state-variable growth model is constructed, with structure and substrate, applicable on plant to ecosystem scales. Four processes are represented: photosynthesis, growth with growth respiration (GR), senescence giving a flux towards litter, and recycling of some of this flux. There are four significant parameters: growth efficiency, rate constants for substrate utilization and structure senescence, and the fraction of structure returned to the substrate pool. Key Results The model can simulate McCree's data on respiration, providing an alternative interpretation to the GMRP. The model's parameters are related to parameters used in this paradigm. MR is defined and calculated in terms of the model's parameters in two ways: first during exponential growth at zero growth rate, and secondly at equilibrium. The approaches concur. The equilibrium respiration:photosynthesis ratio has the value of 0.4, depending only on growth efficiency and recycling fraction. Conclusions McCree's equation is an approximation that the model can describe; it is mistaken to interpret his second coefficient as a maintenance requirement. An MR rate is defined and extracted algebraically from the model. MR as a specific process is not required and may be replaced by an approach from which an MR rate emerges. The model suggests that the respiration:photosynthesis ratio is conservative because it depends on only two parameters, whose values are likely to be similar across ecosystems. PMID:21948663
Nurse Scheduling by Cooperative GA with Effective Mutation Operator
NASA Astrophysics Data System (ADS)
Ohki, Makoto
In this paper, we propose effective mutation operators for the Cooperative Genetic Algorithm (CGA) applied to a practical Nurse Scheduling Problem (NSP). Nurse scheduling is a very difficult task, because the NSP is a complex combinatorial optimization problem in which many requirements must be considered. In real hospitals, the schedule changes frequently, and these changes yield various problems, for example a fall in the nursing level. We describe a technique for reoptimizing the nurse schedule in response to a change. The conventional CGA has a strong local search ability by means of its crossover operator, but it often stagnates in unfavorable situations because its global search ability is weak. When the optimization stagnates over many generation cycles, the search point, the population in this case, has likely been caught in a wide local minimum area. To escape such a local minimum area, a small change in the population is required. Based on this consideration, we propose a mutation operator activated depending on the optimization speed. When the optimization stagnates, in other words when the optimization speed decreases, the mutation yields small changes in the population, and the population is then able to escape from a local minimum area. However, this mutation operator requires two well-defined parameters, which means that the user has to choose their values carefully. To solve this problem, we propose a periodic mutation operator defined by only one parameter. This simplified mutation operator is effective over a wide range of the parameter value.
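A hypothetical sketch of a stagnation-triggered mutation of the kind described (the shift codes, thresholds and data layout are invented, not taken from the paper): mutation is applied only when the best cost has stopped improving over a window of generations, i.e. when the optimization speed has dropped.

import random

SHIFTS = ("day", "evening", "night", "off")

def maybe_mutate(population, best_cost_history, period=50, eps=1e-6, rate=0.02):
    """Hypothetical stagnation-triggered mutation: act only when the best
    cost has improved by less than `eps` over the last `period` generations
    (i.e. the optimization speed is near zero)."""
    if len(best_cost_history) <= period:
        return
    if best_cost_history[-period - 1] - best_cost_history[-1] > eps:
        return  # still improving; leave the work to the crossover operator
    for schedule in population:          # one schedule = one nurse's shift list
        for day in range(len(schedule)):
            if random.random() < rate:   # small random change to escape the basin
                schedule[day] = random.choice(SHIFTS)

# Usage: append the best cost each generation, then call
# maybe_mutate(population, best_cost_history) once per generation.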
Wakiyama, S; Takano, Y; Shiba, H; Gocho, T; Sakamoto, T; Ishida, Y; Yanaga, K
2017-06-01
Graft regeneration and functional recovery after reperfusion of the transplanted graft are very important for successful living donor liver transplantation (LDLT). The aim of this study was to evaluate the significance of postoperative portal venous velocity (PVV) for short-term recovery of graft function in LDLT. From February 2007 through December 2015, we performed 17 primary LDLTs, which were included in the present study. The patients ranged in age from 12 to 65 years (mean: 50 years), and 11 were female. Postoperatively, Doppler ultrasonography was performed daily to measure PVV (cm/s), and liver function parameters were measured daily. The change in PVV (ΔPVV) was defined as follows: ΔPVV = PVV on postoperative day (POD) 1 − PVV on POD 7. The maximal value of serum aspartate aminotransferase (ASTmax) and the maximal value of serum alanine transaminase (ALTmax) at 24 hours after graft reperfusion were used as parameters of reperfusion injury. Correlation analyses were performed as follows: (1) correlation of ΔPVV and PVV on POD 1 (PVV-POD 1) with values such as ASTmax, ALTmax, other liver function parameters on POD 7 and the graft regeneration rate; (2) correlation of ASTmax and ALTmax with other liver function parameters on POD 7. ΔPVV significantly correlated with the values of serum total bilirubin (P < .01), prothrombin time (P < .01), and platelet count (P < .05), and PVV-POD 1 significantly correlated with the values of serum total bilirubin (P < .05) and prothrombin time (P < .05). ΔPVV and PVV-POD 1 may be useful parameters of short-term functional recovery of the transplanted liver in LDLT.
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. There is little guidance available for these two steps in environmental modelling, though. The objective of the present study is to support modellers in making appropriate choices regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing levels of complexity (Hymod, HBV and SWAT) and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical sample sizes reported in the literature can be well below those that actually ensure convergence of ranking and screening.
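A small sketch of a bootstrap-based convergence check in the spirit described above (generic, not the study's code): the width of a bootstrap confidence interval on the sensitivity estimate serves as the convergence criterion, and the estimator here is a simple stand-in for a real sensitivity index.

import numpy as np

def bootstrap_ci_width(samples, estimator, n_boot=500, alpha=0.05, seed=0):
    """Resample the model evaluations with replacement, recompute the
    sensitivity estimate, and report the width of the (1-alpha) interval.
    Convergence is declared when this width falls below a chosen threshold."""
    rng = np.random.default_rng(seed)
    n = len(samples)
    stats = [estimator(samples[rng.integers(0, n, n)]) for _ in range(n_boot)]
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return hi - lo

# Toy example: the "sensitivity" is taken as the variance of an output sample.
y = np.random.default_rng(1).normal(size=200)
print(bootstrap_ci_width(y, np.var))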
Comparative Study of Light Sources for Household
NASA Astrophysics Data System (ADS)
Pawlak, Andrzej; Zalesińska, Małgorzata
2017-03-01
The article describes test results that provided the basis for defining and evaluating the basic photometric, colorimetric and electrical parameters of selected, widely available light sources that are equivalent to a traditional 60-watt incandescent light bulb. Overall, one halogen light bulb, three compact fluorescent lamps and eleven LED light sources were tested. In general, it was concluded that in most cases (branded products in particular) the measured and calculated parameters differ from the values declared by manufacturers only to a small degree. LED sources prove to be the most beneficial substitute for traditional light bulbs, considering both their operational parameters and their price, which is comparable with the price of compact fluorescent lamps or, in some instances, even lower.
NASA Technical Reports Server (NTRS)
1981-01-01
The results of a preliminary study on the design of a radiation hardened fusible link programmable read-only memory (PROM) are presented. Various fuse technologies and the effects of radiation on MOS integrated circuits are surveyed. A set of design rules allowing the fabrication of a radiation hardened PROM using a Si-gate CMOS process is defined. A preliminary cell layout was completed and the programming concept defined. A block diagram is used to describe the circuit components required for a 4 K design. A design goal data sheet giving target values for the AC, DC, and radiation parameters of the circuit is presented.
Support vector machines-based modelling of seismic liquefaction potential
NASA Astrophysics Data System (ADS)
Pal, Mahesh
2006-08-01
This paper investigates the potential of a support vector machine (SVM)-based classification approach to assess liquefaction potential from actual standard penetration test (SPT) and cone penetration test (CPT) field data. SVMs are based on statistical learning theory and have been found to work well in comparison to neural networks in several other applications. Both CPT and SPT field data sets are used with SVMs to predict the occurrence and non-occurrence of liquefaction based on different input parameter combinations. With the SPT and CPT test data sets, the highest accuracies of 96% and 97%, respectively, were achieved. This suggests that SVMs can effectively be used to model the complex relationship between different soil parameters and liquefaction potential. Several other combinations of input variables were used to assess the influence of different input parameters on liquefaction potential. The proposed approach suggests that neither the normalized cone resistance value is required with CPT data nor the calculation of the standardized SPT value with SPT data. Further, SVMs require few user-defined parameters and provide better performance in comparison to the neural network approach.
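A minimal sketch of such a classifier with scikit-learn (the feature set and synthetic labels are hypothetical stand-ins for real SPT/CPT records, not the paper's data):

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical CPT-style feature matrix: e.g. cone resistance, friction
# ratio, effective overburden stress, cyclic stress ratio; label 1 means
# liquefaction observed, 0 means not observed.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))
y = (X[:, 0] - 0.8 * X[:, 3] + rng.normal(0, 0.5, 120) < 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())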
Liz, Eduardo
2018-02-01
The gamma-Ricker model is one of the more flexible and general discrete-time population models. It is defined on the basis of the Ricker model by introducing an additional parameter γ. For some values of this parameter (γ ≤ 1), the population is overcompensatory, and the additional parameter gives more flexibility to fit the stock-recruitment curve to field data. For other parameter values (γ > 1), the gamma-Ricker model represents populations whose per-capita growth rate combines both negative density dependence and positive density dependence. The former can lead to overcompensation and dynamic instability, and the latter can lead to a strong Allee effect. We study the impact of the cooperation factor on the dynamics and provide rigorous conditions under which increasing the Allee effect strength stabilizes or destabilizes population dynamics, promotes or prevents population extinction, and increases or decreases population size. Our theoretical results also include new global stability criteria and a description of the possible bifurcations.
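A short sketch assuming one common parameterization of the gamma-Ricker map, x_{t+1} = r * x_t^γ * exp(-x_t) (the paper's exact notation may differ); it illustrates the strong Allee effect that can arise for γ > 1.

import numpy as np

def gamma_ricker(x, r, gamma):
    """Assumed gamma-Ricker map: gamma <= 1 gives pure overcompensation;
    gamma > 1 adds positive density dependence at low densities."""
    return r * x**gamma * np.exp(-x)

def orbit(x0, r, gamma, n=200):
    xs = [x0]
    for _ in range(n):
        xs.append(gamma_ricker(xs[-1], r, gamma))
    return np.array(xs)

# With gamma > 1, a small initial population collapses (Allee effect)
# while a larger one persists:
print(orbit(0.05, r=6.0, gamma=2.0)[-1])   # -> ~0 (extinction)
print(orbit(1.0, r=6.0, gamma=2.0)[-5:])   # persists (possibly cycling)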
Low Shrinkage Cement Concrete Intended for Airfield Pavements
NASA Astrophysics Data System (ADS)
Małgorzata, Linek
2017-10-01
This work concerns improving the parameters of hardened concrete intended for airfield pavements. Factors having a direct or indirect influence on the magnitude of rheological deformation were of particular interest. The aim of the laboratory testing was to select a concrete mixture composition that would make the hardened concrete less susceptible to the influence of basic operating factors. The analyses covered two groups of factors, external and internal, which influence the parameters of hardened cement concrete by increasing rheological deformations. The research concerned an innovative cement concrete intended for airfield pavements. Because of the way such pavements are operated, the research considered the influence of weather conditions and forced thermal loads intensifying concrete stress. The parameters of the fresh concrete mixture were tested, and the basic parameters of the hardened concrete were determined (density, absorbability, compressive strength, tensile strength). The influence of these factors on the rheological deformation value was also analysed. Based on the test results, it was found that the innovative concrete, made with a modifier that changes the internal structure of the concrete composite, shows markedly lower rheological deformation. The observed changes in microstructure, together with the reduced deformation values, support conclusions about the advantageous characteristics of the newly designed cement concrete. Applying such concrete in airfield construction may extend its failure-free operation and increase its overall service life.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parks, N.J.
Data for the bone-by-bone redistribution of 90Sr in the beagle skeleton are reported for a period of 4000 d following a midgestation-to-540-d exposure by ingestion. The partitioned clearance model (PCM), originally developed to describe the bone-by-bone redistribution of 226Ra after eight semimonthly injections at ages 435-535 d, has been fitted to the 90Sr data. Parameter estimates are given for the PCM describing the distribution and clearance of 226Ra after deposition on surfaces following injection, along with analogous parameter estimates for 90Sr after uniform deposition in the skeleton as a function of Ca mass. Fractional compact bone masses per bone group (mi,COM) are also predicted by the model and compared to measured values; a high degree of correlation (r = 0.84) is found. Bone groups for which the agreement between the model and experimental values of mi,COM was poor had tissue-to-calcium weight ratios about 1.5 times those for bones that agreed well. Metabolically defined surface in the PCM is the initial activity fraction per Ca fraction in a given skeletal component for intravenously injected alkaline earth (Sae) radionuclides; comparisons are made to similarly defined surface (Sact) values from 239Pu injection studies.
Performance Assessment Uncertainty Analysis for Japan's HLW Program Feasibility Study (H12)
DOE Office of Scientific and Technical Information (OSTI.GOV)
BABA,T.; ISHIGURO,K.; ISHIHARA,Y.
1999-08-30
Most HLW programs in the world recognize that any estimate of long-term radiological performance must be couched in terms of the uncertainties derived from natural variation, changes through time and lack of knowledge about the essential processes. The Japan Nuclear Cycle Development Institute followed a relatively standard procedure to address two major categories of uncertainty. First, a FEatures, Events and Processes (FEPs) listing, screening and grouping activity was pursued in order to define the range of uncertainty in system processes as well as possible variations in engineering design. A reference case and many alternative cases representing various groups of FEPs were defined, and individual numerical simulations were performed for each to quantify the range of conceptual uncertainty. Second, parameter distributions were developed for the reference case to represent the uncertainty in the strength of these processes, the sequencing of activities and geometric variations. Both point estimates using high and low values for individual parameters and a probabilistic analysis were performed to estimate parameter uncertainty. A brief description of the conceptual model uncertainty analysis is presented. This paper focuses on presenting the details of the probabilistic parameter uncertainty assessment.
Parameter investigation with line-implicit lower-upper symmetric Gauss-Seidel on 3D stretched grids
NASA Astrophysics Data System (ADS)
Otero, Evelyn; Eliasson, Peter
2015-03-01
An implicit lower-upper symmetric Gauss-Seidel (LU-SGS) solver has been implemented as a multigrid smoother combined with a line-implicit method as an acceleration technique for Reynolds-averaged Navier-Stokes (RANS) simulation on stretched meshes. The computational fluid dynamics code concerned is Edge, an edge-based finite-volume Navier-Stokes flow solver for structured and unstructured grids. The paper focuses on the investigation of the parameters related to our novel line-implicit LU-SGS solver for convergence acceleration on 3D RANS meshes. The LU-SGS parameters are defined as the Courant-Friedrichs-Lewy number, the left-hand-side dissipation, and the convergence tolerance for the iterative solution of the linear problem arising from the linearisation of the implicit scheme. The influence of these parameters on the overall convergence is presented, and default values are defined for maximum convergence acceleration. The optimised settings are applied to 3D RANS computations for comparison with explicit and line-implicit Runge-Kutta smoothing. For most of the cases, a computing-time acceleration of the order of 2 is found, depending on the mesh type, the boundary layer resolution and the magnitude of residual reduction.
Malaria Risk Assessment for the Republic of Korea Based on Models of Mosquito Distribution
2008-06-01
[Figure 1: Illustration of the concept of the mal-area.] ... the percentage of the sampled area that these parameters cover. The value for VPH could be used as a simplified index of malaria risk to compare ... combinations of the VPH variables. These statistics will consist of the percentage of cells that contain a certain value for the user-defined area
Cost/benefit analysis of advanced materials technologies for future aircraft turbine engines
NASA Technical Reports Server (NTRS)
Stephens, G. E.
1980-01-01
The materials technologies studied included thermal barrier coatings for turbine airfoils, turbine disks, cases, turbine vanes, and engine and nacelle composite materials. The cost/benefit of each technology was determined in terms of Relative Value, defined as the change in return on investment times the probability of success divided by the development cost. A recommended final ranking of the technologies was based primarily on consideration of Relative Values, with secondary consideration given to changes in other economic parameters. The technologies showing the most promising cost/benefits were thermal barrier coatings and nacelle/engine system composites.
UNIFORMLY MOST POWERFUL BAYESIAN TESTS
Johnson, Valen E.
2014-01-01
Uniformly most powerful tests are statistical hypothesis tests that provide the greatest power against a fixed null hypothesis among all tests of a given size. In this article, the notion of uniformly most powerful tests is extended to the Bayesian setting by defining uniformly most powerful Bayesian tests to be tests that maximize the probability that the Bayes factor, in favor of the alternative hypothesis, exceeds a specified threshold. Like their classical counterpart, uniformly most powerful Bayesian tests are most easily defined in one-parameter exponential family models, although extensions outside of this class are possible. The connection between uniformly most powerful tests and uniformly most powerful Bayesian tests can be used to provide an approximate calibration between p-values and Bayes factors. Finally, issues regarding the strong dependence of resulting Bayes factors and p-values on sample size are discussed. PMID:24659829
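As an illustration (not reproduced from the paper), consider the one-parameter normal-mean case with known variance, which we take to follow the standard UMPBT construction; treat the algebra below as a sketch under that assumption.

\[
x_1,\dots,x_n \sim N(\mu,\sigma^2), \qquad H_0:\ \mu=\mu_0 \ \text{ vs } \ H_1:\ \mu=\mu_1 .
\]
The UMPBT(\(\gamma\)) alternative maximizes the probability that the Bayes factor exceeds the threshold \(\gamma\), giving
\[
\mu_1 = \mu_0 + \sigma\sqrt{\frac{2\ln\gamma}{n}}, \qquad
\mathrm{BF}_{10} > \gamma \iff z = \frac{\sqrt{n}\,(\bar{x}-\mu_0)}{\sigma} > \sqrt{2\ln\gamma},
\]
so equating \(\sqrt{2\ln\gamma}\) with a classical critical value \(z_\alpha\) yields the approximate calibration \(\gamma = \exp(z_\alpha^2/2)\) between p-values and Bayes factors.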
NASA Astrophysics Data System (ADS)
Bateev, A. B.; Filippov, V. P.
2017-01-01
The article shows the possibility, in principle, of using the computer program Univem MS for Mössbauer spectrum fitting as demonstration material when students study disciplines such as atomic and nuclear physics and numerical methods. The program deals with nuclear-physical parameters such as the isomer (or chemical) shift of a nuclear energy level, the interaction of the nuclear quadrupole moment with the electric field, and that of the magnetic moment with the surrounding magnetic field. The basic processing algorithm in such programs is the least squares method. The deviation of the experimental points of a spectrum from the theoretical dependence is determined for concrete examples; in numerical methods this value is characterized as the mean square deviation. The shape of the theoretical lines in the program is defined by Gaussian and Lorentzian distributions. The visualization of the studied material in atomic and nuclear physics can be improved by similar programs for Mössbauer spectroscopy, X-ray fluorescence analysis or X-ray diffraction analysis.
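A minimal sketch of what such least-squares fitting does, using a Lorentzian absorption doublet on synthetic counts (the line shape, parameters and noise level are invented for demonstration; this is not Univem MS):

import numpy as np
from scipy.optimize import curve_fit

def lorentzian_doublet(v, a, v1, v2, w, base):
    """Absorption doublet: two Lorentzian dips of equal amplitude a and
    full width w at velocities v1, v2 on a flat baseline."""
    l = lambda v0: a * (w / 2) ** 2 / ((v - v0) ** 2 + (w / 2) ** 2)
    return base - l(v1) - l(v2)

v = np.linspace(-4, 4, 256)                     # source velocity, mm/s
rng = np.random.default_rng(0)
counts = lorentzian_doublet(v, 900, -1.1, 1.3, 0.3, 10000) + rng.normal(0, 30, v.size)

popt, pcov = curve_fit(lorentzian_doublet, v, counts, p0=[500, -1, 1, 0.4, 9800])
isomer_shift = (popt[1] + popt[2]) / 2          # centre of the doublet
quadrupole_splitting = abs(popt[2] - popt[1])   # line separation
# Mean square deviation (reduced chi-square with the known noise variance):
chi2_red = np.sum((counts - lorentzian_doublet(v, *popt)) ** 2) / 30**2 / (v.size - 5)
print(isomer_shift, quadrupole_splitting, chi2_red)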
Tiedeman, C.R.; Hill, M.C.; D'Agnese, F. A.; Faunt, C.C.
2003-01-01
Calibrated models of groundwater systems can provide substantial information for guiding data collection. This work considers using such models to guide hydrogeologic data collection for improving model predictions by identifying model parameters that are most important to the predictions. Identification of these important parameters can help guide collection of field data about parameter values and associated flow system features and can lead to improved predictions. Methods for identifying parameters important to predictions include prediction scaled sensitivities (PSS), which account for uncertainty on individual parameters as well as prediction sensitivity to parameters, and a new "value of improved information" (VOII) method presented here, which includes the effects of parameter correlation in addition to individual parameter uncertainty and prediction sensitivity. In this work, the PSS and VOII methods are demonstrated and evaluated using a model of the Death Valley regional groundwater flow system. The predictions of interest are advective transport paths originating at sites of past underground nuclear testing. Results show that for two paths evaluated the most important parameters include a subset of five or six of the 23 defined model parameters. Some of the parameters identified as most important are associated with flow system attributes that do not lie in the immediate vicinity of the paths. Results also indicate that the PSS and VOII methods can identify different important parameters. Because the methods emphasize somewhat different criteria for parameter importance, it is suggested that parameters identified by both methods be carefully considered in subsequent data collection efforts aimed at improving model predictions.
Practical identifiability analysis of a minimal cardiovascular system model.
Pironet, Antoine; Docherty, Paul D; Dauby, Pierre C; Chase, J Geoffrey; Desaive, Thomas
2017-01-17
Parameters of mathematical models of the cardiovascular system can be used to monitor cardiovascular state, such as total stressed blood volume status, vessel elastance and resistance. To do so, the model parameters have to be estimated from data collected at the patient's bedside. This work considers a seven-parameter model of the cardiovascular system and investigates whether these parameters can be uniquely determined using indices derived from measurements of arterial and venous pressures, and stroke volume. An error vector defined the residuals between the simulated and reference values of the seven clinically available haemodynamic indices. The sensitivity of this error vector to each model parameter was analysed, as well as the collinearity between parameters. To assess practical identifiability of the model parameters, profile-likelihood curves were constructed for each parameter. Four of the seven model parameters were found to be practically identifiable from the selected data. The remaining three parameters were practically non-identifiable. Among these non-identifiable parameters, one could be decreased as much as possible. The other two non-identifiable parameters were inversely correlated, which prevented their precise estimation. This work presented the practical identifiability analysis of a seven-parameter cardiovascular system model, from limited clinical data. The analysis showed that three of the seven parameters were practically non-identifiable, thus limiting the use of the model as a monitoring tool. Slight changes in the time-varying function modeling cardiac contraction and use of larger values for the reference range of venous pressure made the model fully practically identifiable.
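A generic sketch of the profile-likelihood construction used for practical identifiability analysis (not the authors' seven-parameter model): fix one parameter on a grid, re-optimize the remaining parameters, and inspect the flatness of the resulting profile.

import numpy as np
from scipy.optimize import minimize

def profile_likelihood(neg_log_lik, theta_hat, index, grid):
    """For each fixed value of parameter `index` on `grid`, re-optimize all
    remaining parameters; a flat profile indicates practical non-identifiability."""
    profile = []
    for val in grid:
        def cost(free):
            full = np.insert(free, index, val)
            return neg_log_lik(full)
        start = np.delete(theta_hat, index)
        profile.append(minimize(cost, start, method="Nelder-Mead").fun)
    return np.array(profile)

# Toy model y = a*exp(-b*t): near t = 0 only the product a*b is constrained,
# so the profile over b is nearly flat (practically non-identifiable).
t = np.linspace(0, 0.05, 20)
y = 2.0 * np.exp(-3.0 * t)
nll = lambda p: np.sum((y - p[0] * np.exp(-p[1] * t)) ** 2)
grid = np.linspace(1.0, 5.0, 9)
print(profile_likelihood(nll, np.array([2.0, 3.0]), 1, grid))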
Averaging of random walks and shift-invariant measures on a Hilbert space
NASA Astrophysics Data System (ADS)
Sakbaev, V. Zh.
2017-06-01
We study random walks in a Hilbert space H and representations using them of solutions of the Cauchy problem for differential equations whose initial conditions are numerical functions on H. We construct a finitely additive analogue of the Lebesgue measure: a nonnegative finitely additive measure λ that is defined on a minimal subset ring of an infinite-dimensional Hilbert space H containing all infinite-dimensional rectangles with absolutely converging products of the side lengths and is invariant under shifts and rotations in H. We define the Hilbert space H of equivalence classes of complex-valued functions on H that are square integrable with respect to a shift-invariant measure λ. Using averaging of the shift operator in H over random vectors in H with a distribution given by a one-parameter semigroup (with respect to convolution) of Gaussian measures on H, we define a one-parameter semigroup of contracting self-adjoint transformations on H, whose generator is called the diffusion operator. We obtain a representation of solutions of the Cauchy problem for the Schrödinger equation whose Hamiltonian is the diffusion operator.
The use and misuse of V(c,max) in Earth System Models.
Rogers, Alistair
2014-02-01
Earth System Models (ESMs) aim to project global change. Central to this aim is the need to accurately model global carbon fluxes. Photosynthetic carbon dioxide assimilation by the terrestrial biosphere is the largest of these fluxes, and in many ESMs it is represented by the Farquhar, von Caemmerer and Berry (FvCB) model of photosynthesis. The maximum rate of carboxylation by the enzyme Rubisco, commonly termed Vc,max, is a key parameter in the FvCB model. This study investigated the derivation of the values of Vc,max used to represent different plant functional types (PFTs) in ESMs. Four methods for estimating Vc,max were identified: (1) an empirical or (2) a mechanistic relationship was used to relate Vc,max to leaf N content, (3) Vc,max was estimated using an approach based on the optimization of photosynthesis and respiration, or (4) a user-defined Vc,max was calibrated to obtain a target model output. Despite representing the same PFTs, the land model components of ESMs were parameterized with a wide range of values for Vc,max (-46 to +77% of the PFT mean). In many cases, parameterization was based on limited data sets and poorly defined coefficients that were used to adjust model parameters and set PFT-specific values for Vc,max. Examination of the models that linked leaf N mechanistically to Vc,max identified potential changes to fixed parameters that collectively would decrease Vc,max by 31% in C3 plants and 11% in C4 plants. Plant trait data bases are now available that offer an excellent opportunity for models to update the PFT-specific parameters used to estimate Vc,max. However, data for parameterizing some PFTs, particularly those in the Tropics and the Arctic, are either highly variable or largely absent.
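For the mechanistic leaf-N route, one widely used form (assumed here from CLM-type land models, not quoted from the paper) multiplies area-based leaf N by the fraction of leaf N in Rubisco, Rubisco's N content, and its specific activity:

def vcmax25(n_area, flnr, fnr=7.16, ar25=60.0):
    """Assumed CLM-style mechanistic estimate of Vc,max at 25 C:
    n_area - area-based leaf N (g N m-2)
    flnr   - fraction of leaf N in Rubisco (PFT-specific)
    fnr    - g Rubisco per g N in Rubisco (default from CLM-type models)
    ar25   - specific activity of Rubisco (umol CO2 g-1 Rubisco s-1)."""
    return n_area * flnr * fnr * ar25  # umol CO2 m-2 s-1

# Illustrative values for a temperate broadleaf PFT (not from the paper):
print(vcmax25(n_area=2.0, flnr=0.1))  # -> ~86 umol m-2 s-1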
Normative Standards for HRpQCT Parameters in Chinese Men and Women.
Zhu, Tracy Y; Yip, Benjamin Hk; Hung, Vivian Wy; Choy, Carol Wy; Cheng, Ka-Lo; Kwok, Timothy Cy; Cheng, Jack Cy; Qin, Ling
2018-06-12
Assessing bone architecture using high-resolution peripheral quantitative computed tomography (HRpQCT) has the potential to improve fracture risk assessment. The Normal Reference Study aimed to establish sex-specific reference centile curves for HRpQCT parameters. This was an age-stratified cross-sectional study; 1,072 ambulatory Chinese men (n = 544) and women (n = 528) aged 20-79 years, who were free from conditions and medications that could affect bone metabolism and had no history of fragility fracture, were recruited from local communities of Hong Kong. Reference centile curves for each HRpQCT parameter were constructed using Generalized Additive Models for Location, Scale and Shape, with age as the only explanatory variable. Patterns of the reference centile curves reflected age-related changes in bone density, microarchitecture, and estimated bone strength. In both sexes, loss of cortical bone was only evident in mid-adulthood, proceeding more rapidly in women, probably concurrent with the onset of menopause. In contrast, loss of trabecular bone was subtle or gradual or occurred at an earlier age. Expected values of HRpQCT parameters for a defined sex and age, and a defined percentile or z-score, were obtained from these curves. T-scores were calculated using the population with the peak values as the reference and reflect age- or menopause-related bone loss in an older individual, or the room left to reach the peak potential in a younger individual. These reference centile curves provide a standard describing a norm or desirable target that enables clinical value judgements. Percentiles, z-scores and T-scores would be helpful in detecting abnormalities in bone density and microarchitecture arising from various conditions and in establishing entry criteria for clinical trials. They also hold the potential to refine the diagnosis of osteoporosis and the assessment of fracture risk.
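A small sketch of how z-scores and T-scores follow from such reference curves (the curves below are invented placeholders, not the study's fitted GAMLSS models):

def z_score(value, age, mean_curve, sd_curve):
    """z relative to age-matched peers; mean_curve/sd_curve are callables
    fitted to the reference data (e.g. from centile models)."""
    return (value - mean_curve(age)) / sd_curve(age)

def t_score(value, peak_mean, peak_sd):
    """T relative to the age group with the peak value of the parameter."""
    return (value - peak_mean) / peak_sd

# Hypothetical curves for illustration only:
mean_curve = lambda age: 320.0 - 1.2 * max(age - 30, 0)  # e.g. a density parameter
sd_curve = lambda age: 45.0
print(z_score(250.0, 65, mean_curve, sd_curve), t_score(250.0, 320.0, 45.0))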
You, Benoit; Deng, Wei; Hénin, Emilie; Oza, Amit; Osborne, Raymond
2016-01-01
In low-risk gestational trophoblastic neoplasia, chemotherapy effect is monitored and adjusted with serum human chorionic gonadotrophin (hCG) levels. Mathematical modeling of hCG kinetics may allow prediction of methotrexate (MTX) resistance via the production parameter "hCGres". This approach was evaluated using the GOG-174 (NRG Oncology/Gynecologic Oncology Group-174) trial database, in which weekly MTX (arm 1) was compared with dactinomycin (arm 2). The database (210 patients, including 78 with resistance) was split into 2 sets. A 126-patient training set was initially used to estimate model parameters. Patient hCG kinetics from days 7 to 45 were fit to [hCG(time)] = hCG7 * exp(-k * time) + hCGres, where hCGres is residual hCG tumor production, hCG7 is the initial hCG level, and k is the elimination rate constant. Receiver operating characteristic (ROC) analyses defined a putative hCGres predictor of resistance. An 84-patient test set was used to assess prediction validity. hCGres was predictive of outcome in both arms, with no impact of treatment arm on the unexplained variability of kinetic parameter estimates. The best hCGres cutoffs to discriminate resistant versus sensitive patients were 7.7 and 74.0 IU/L in arms 1 and 2, respectively. By combining them, 2 predictive groups were defined (ROC area under the curve, 0.82; sensitivity, 93.8%; specificity, 70.5%). The predictive value of hCGres-based groups regarding resistance was reproducible in the test set (ROC area under the curve, 0.81; sensitivity, 88.9%; specificity, 73.1%). Both hCGres and treatment arm were associated with resistance by logistic regression analysis. The early predictive value of the modeled kinetic parameter hCGres regarding resistance seems promising in the GOG-174 study. This is the second positive evaluation of this approach. Prospective validation is warranted.
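The kinetic model quoted in the abstract is a mono-exponential decay plus a residual production plateau, so per-patient fits can be sketched with an ordinary nonlinear least-squares routine. The example below uses scipy's curve_fit on synthetic data; the study itself used population kinetic modeling, which this sketch does not reproduce.

```python
# Sketch: fitting hCG(t) = hCG7 * exp(-k * t) + hCGres to per-patient data.
# Data below are synthetic; the study used population kinetic modeling.
import numpy as np
from scipy.optimize import curve_fit

def hcg_model(t, hcg7, k, hcgres):
    return hcg7 * np.exp(-k * t) + hcgres

t = np.array([7, 14, 21, 28, 35, 42])            # days, matching the 7-45 day window
true = (5000.0, 0.25, 30.0)                      # hCG7, k, hCGres (invented)
y = hcg_model(t, *true) * np.random.default_rng(0).lognormal(0, 0.05, t.size)

params, _ = curve_fit(hcg_model, t, y, p0=(y[0], 0.1, 1.0),
                      bounds=([0, 0, 0], [np.inf, 2.0, np.inf]))
hcg7, k, hcgres = params
print(f"hCGres = {hcgres:.1f} IU/L")             # compare with an arm-specific cutoff
```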
Lax representations for matrix short pulse equations
NASA Astrophysics Data System (ADS)
Popowicz, Z.
2017-10-01
The Lax representation for different matrix generalizations of Short Pulse Equations (SPEs) is considered. The four-dimensional Lax representations of the four-component Matsuno, Feng, and Dimakis-Müller-Hoissen-Matsuno equations are obtained. The four-component Feng system is defined by generalizing the two-dimensional Lax representation to the four-component case. This system reduces to the original Feng equation, to the two-component Matsuno equation, or to the Yao-Zang equation. The three-component version of the Feng equation is presented. The four-component version of the Matsuno equation with its Lax representation is given. This equation reduces to the new two-component Feng system. The two-component Dimakis-Müller-Hoissen-Matsuno equations are generalized to a four-parameter family of four-component SPEs. The bi-Hamiltonian structure of this generalization, for special values of the parameters, is defined. This four-component SPE in special cases reduces to the new two-component SPE.
TCP performance in ATM networks: ABR parameter tuning and ABR/UBR comparisons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chien Fang; Lin, A.
1996-02-27
This paper explores two issues in TCP performance over ATM networks: ABR parameter tuning and a performance comparison of binary-mode ABR with enhanced UBR services. Of the fifteen parameters defined for ABR, two dominate binary-mode ABR performance: the Rate Increase Factor (RIF) and the Rate Decrease Factor (RDF). Using simulations, we study the effects of these two parameters on TCP-over-ABR performance. We compare TCP performance with different ABR parameter settings in terms of throughputs and fairness. The effects of different buffer sizes and LAN/WAN distances are also examined. We then compare TCP performance with the best ABR parameter setting against the corresponding UBR service enhanced with Early Packet Discard and also with a fair buffer allocation scheme. The results show that TCP performance over binary-mode ABR is very sensitive to parameter value settings, and that a poor choice of parameters can result in ABR performance worse than that of the much less expensive UBR-EPD scheme.
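In binary-mode ABR, the allowed cell rate (ACR) is decreased multiplicatively and increased additively in response to the congestion indication bit, which is why RIF and RDF dominate performance. The sketch below follows the general form of the ATM Forum source behaviour; it is a simplified illustration, not the full TM 4.0 specification, and the rates used are arbitrary.

```python
# Simplified sketch of binary-mode ABR source rate adaptation.
# Follows the general form of ATM Forum behaviour; many details omitted.

def update_acr(acr, congested, rif, rdf, pcr, mcr):
    """One rate-update step of an ABR source.

    acr       : current allowed cell rate
    congested : congestion indication fed back by the network
    rif, rdf  : rate increase / decrease factors (the two dominant parameters)
    pcr, mcr  : peak and minimum cell rates bounding ACR
    """
    if congested:
        acr -= acr * rdf          # multiplicative decrease
    else:
        acr += rif * pcr          # additive increase
    return min(max(acr, mcr), pcr)

acr = 10_000.0
for ci in [False, False, True, False, True, True, False]:
    acr = update_acr(acr, ci, rif=1/64, rdf=1/16, pcr=353_207.0, mcr=1_000.0)
    print(f"{acr:10.1f}")
```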
Uncertainty in predictions of oil spill trajectories in a coastal zone
NASA Astrophysics Data System (ADS)
Sebastião, P.; Guedes Soares, C.
2006-12-01
A method is introduced to determine the uncertainties in predictions of oil spill trajectories using a classic oil spill model. The method considers the output of the oil spill model as a function of random variables, the input parameters, and calculates the standard deviation of the output results, which provides a measure of the uncertainty of the model resulting from the uncertainties of the input parameters. In addition to the single trajectory calculated by the oil spill model using the mean values of the parameters, a band of trajectories can be defined when multiple simulations are run taking into account the uncertainties of the input parameters. This band defines envelopes of the trajectories that are likely to be followed by the spill given the uncertainties of the input. The method was applied to an oil spill that occurred in 1989 near Sines on the southwestern coast of Portugal. The model represented well the distinction between a wind-driven part of the spill that remained offshore and a tide-driven part that went ashore. For both parts, the method defined two trajectory envelopes, one calculated exclusively with the wind fields and the other using wind and tidal currents. In both cases a reasonable approximation to the observed results was obtained. The envelope of likely trajectories obtained with the uncertainty modelling proved to give a better interpretation of the trajectories simulated by the oil spill model.
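The propagation scheme described, treating model inputs as random variables and reading off the spread of the outputs, can be emulated generically with Monte Carlo sampling. In the sketch below, `drift_model` is a hypothetical toy advection rule standing in for the actual oil spill model, and the input distributions are likewise invented for illustration.

```python
# Generic Monte Carlo sketch of trajectory uncertainty from uncertain inputs.
# `drift_model` is a hypothetical stand-in for the actual oil spill model.
import numpy as np

def drift_model(wind_speed, wind_dir_rad, current_speed, hours=24.0):
    """Toy advection: displacement (km) after `hours` from wind + current."""
    wind_drift = 0.03 * wind_speed * hours * 3.6  # 3% wind-drift rule, m/s*h -> km
    dx = wind_drift * np.cos(wind_dir_rad) + current_speed * hours * 3.6
    dy = wind_drift * np.sin(wind_dir_rad)
    return np.array([dx, dy])

rng = np.random.default_rng(1)
n = 2000
endpoints = np.array([
    drift_model(wind_speed=rng.normal(10.0, 2.0),        # m/s, uncertain
                wind_dir_rad=rng.normal(0.5, 0.2),       # rad, uncertain
                current_speed=rng.normal(0.3, 0.1))      # m/s, uncertain
    for _ in range(n)
])

mean_xy = endpoints.mean(axis=0)
sd_xy = endpoints.std(axis=0)    # standard deviation = the uncertainty measure
print(f"mean endpoint {mean_xy} km, std {sd_xy} km")
```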
2013-01-01
Background: Among disposable bioreactor systems, cylindrical orbitally shaken bioreactors show important advantages. They provide a well-defined hydrodynamic flow combined with excellent mixing and oxygen transfer for mammalian and plant cell cultivations. Since there is no known universal correlation between the volumetric mass transfer coefficient for oxygen kLa and relevant operating parameters in such bioreactor systems, the aim of the current study is to experimentally determine a universal kLa correlation. Results: A Respiration Activity Monitoring System (RAMOS) was used to measure kLa values in cylindrical disposable shaken bioreactors and Buckingham’s π-Theorem was applied to define a dimensionless equation for kLa. In this way, a scale- and volume-independent kLa correlation was developed and validated in bioreactors with volumes from 2 L to 200 L. The final correlation was used to calculate cultivation parameters at different scales to allow a sufficient oxygen supply of tobacco BY-2 cell suspension cultures. Conclusion: The resulting equation can be universally applied to calculate the mass transfer coefficient for any of seven relevant cultivation parameters such as the reactor diameter, the shaking frequency, the filling volume, the viscosity, the oxygen diffusion coefficient, the gravitational acceleration or the shaking diameter within an accuracy range of +/− 30%. To our knowledge, this is the first kLa correlation that has been defined and validated for the cited bioreactor system on a bench-to-pilot scale. PMID:24289110
Klöckner, Wolf; Gacem, Riad; Anderlei, Tibor; Raven, Nicole; Schillberg, Stefan; Lattermann, Clemens; Büchs, Jochen
2013-12-02
Among disposable bioreactor systems, cylindrical orbitally shaken bioreactors show important advantages. They provide a well-defined hydrodynamic flow combined with excellent mixing and oxygen transfer for mammalian and plant cell cultivations. Since there is no known universal correlation between the volumetric mass transfer coefficient for oxygen kLa and relevant operating parameters in such bioreactor systems, the aim of this current study is to experimentally determine a universal kLa correlation. A Respiration Activity Monitoring System (RAMOS) was used to measure kLa values in cylindrical disposable shaken bioreactors and Buckingham's π-Theorem was applied to define a dimensionless equation for kLa. In this way, a scale- and volume-independent kLa correlation was developed and validated in bioreactors with volumes from 2 L to 200 L. The final correlation was used to calculate cultivation parameters at different scales to allow a sufficient oxygen supply of tobacco BY-2 cell suspension cultures. The resulting equation can be universally applied to calculate the mass transfer coefficient for any of seven relevant cultivation parameters such as the reactor diameter, the shaking frequency, the filling volume, the viscosity, the oxygen diffusion coefficient, the gravitational acceleration or the shaking diameter within an accuracy range of +/- 30%. To our knowledge, this is the first kLa correlation that has been defined and validated for the cited bioreactor system on a bench-to-pilot scale.
Calibration process of highly parameterized semi-distributed hydrological model
NASA Astrophysics Data System (ADS)
Vidmar, Andrej; Brilly, Mitja
2017-04-01
Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, calibration is a complex process that has not been researched enough. Calibration is a procedure of determining the parameters of a model that are not known well enough. Input and output variables and mathematical model expressions are known, while only some parameters are unknown; these are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that give the modeller no possibility to manage the process, and the results are often not the best. We developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command-line interface, and couple it with PEST. PEST is a parameter-estimation tool that is widely used in groundwater modelling and can also be applied to surface waters. A calibration process managed directly by the expert affects the outcome of the inversion procedure in proportion to the expert's knowledge, and achieves better results than if the procedure had been left to the selected optimization algorithm. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena such as karstic, alluvial, and forest areas; this step requires the geological, meteorological, hydraulic, and hydrological knowledge of the modeller. The second step is to set initial parameter values to their preferred values based on expert knowledge; in this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events, and each sub-catchment in the model has its own observation group. The third step is to set appropriate bounds on the parameters within their range of realistic values. The fourth step is to use singular value decomposition (SVD), which ensures that PEST maintains numerical stability regardless of how ill-posed the inverse problem is. The fifth step is to run PWTADJ1, which creates a new PEST control file in which weights are adjusted such that the contribution made to the total objective function by each observation group is the same; this prevents the information content of any group from being invisible to the inversion process. The sixth step is to add Tikhonov regularization to the PEST control file by running the ADDREG1 utility (Doherty, 2013). In adding regularization to the PEST control file, ADDREG1 automatically provides a prior-information equation for each parameter in which the preferred value of that parameter is equated to its initial value. The last step is to run PEST. We run BeoPEST, a parallel version of PEST that can be run on multiple computers at the same time over TCP communications, which speeds up the calibration process. The case study, with results of calibration and validation of the model, will be presented.
A Novel Degradation Identification Method for Wind Turbine Pitch System
NASA Astrophysics Data System (ADS)
Guo, Hui-Dong
2018-04-01
It is difficult for traditional threshold-value methods to identify degradation of operating equipment accurately. A novel degradation evaluation method suitable for implementing a wind turbine condition-maintenance strategy is proposed in this paper. Based on an analysis of the typical variable-speed pitch-to-feather control principle and the monitoring parameters for the pitch system, a multi-input multi-output (MIMO) regression model was applied to the pitch system, with wind speed and power generation as input parameters, and wheel rotation speed, pitch angle, and motor driving current for the three blades as output parameters. The difference between the on-line measurements and the values calculated from the MIMO regression model, fit using the least squares support vector machines (LSSVM) method, was defined as the Observed Vector of the system. A Gaussian mixture model (GMM) was applied to fit the distribution of the multi-dimensional Observed Vectors. Applying the established model, the Degradation Index was calculated using SCADA data from a wind turbine that had damaged its pitch-bearing retainer and rolling elements, which illustrated the feasibility of the proposed method.
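The structure of the degradation index can be sketched generically: train a regression on healthy-condition SCADA data, form the residual (the "Observed Vector"), fit a GMM to healthy residuals, and score new residuals by their negative log-likelihood. The sketch below substitutes scikit-learn's SVR for the paper's LSSVM and uses synthetic data, so it illustrates the pipeline rather than reproducing the method.

```python
# Sketch of a GMM-based degradation index over regression residuals.
# sklearn's SVR stands in for the paper's LSSVM; data are synthetic.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_healthy = rng.normal(size=(500, 2))                 # e.g. wind speed, power
Y_healthy = X_healthy @ rng.normal(size=(2, 3)) + rng.normal(0, 0.1, (500, 3))

model = MultiOutputRegressor(SVR()).fit(X_healthy, Y_healthy)
residuals = Y_healthy - model.predict(X_healthy)      # "Observed Vectors"

gmm = GaussianMixture(n_components=2, random_state=0).fit(residuals)

def degradation_index(x, y):
    """Higher = less likely under the healthy-residual distribution."""
    r = y - model.predict(x)
    return -gmm.score_samples(r)                      # negative log-likelihood

X_new = rng.normal(size=(5, 2))
Y_new = X_new @ rng.normal(size=(2, 3)) + 0.8         # shifted: degraded behaviour
print(degradation_index(X_new, Y_new))
```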
Cooley, Richard L.
1993-01-01
A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.
Semen parameters in fertile US men: the Study for Future Families.
Redmon, J B; Thomas, W; Ma, W; Drobnis, E Z; Sparks, A; Wang, C; Brazil, C; Overstreet, J W; Liu, F; Swan, S H
2013-11-01
Establishing reference norms for semen parameters in fertile men is important for accurate assessment, counselling and treatment of men with male factor infertility. Identifying temporal or geographic variability in semen quality also requires accurate measurement of semen parameters in well-characterized, defined populations of men. The Study for Future Families (SFF) recruited men who were partners of pregnant women attending prenatal clinics in Los Angeles CA, Minneapolis MN, Columbia MO, New York City NY and Iowa City IA. Semen samples were collected on site from 763 men (73% White, 15% Hispanic/Latino, 7% Black and 5% Asian or other ethnic group) using strict quality control and well-defined protocols. Semen volume (by weight), sperm concentration (hemacytometer) and sperm motility were measured at each centre. Sperm morphology (both WHO, 1999 strict and WHO, 1987) was determined at a central laboratory. Mean abstinence was 3.2 days. Mean (median; 5th-95th percentile) values were: semen volume, 3.9 (3.7; 1.5-6.8) mL; sperm concentration, 60 (67; 12-192) × 10(6) /mL; total sperm count 209 (240; 32-763) × 10(6) ; % motile, 51 (52; 28-67) %; and total motile sperm count, 104 (128; 14-395) × 10(6) respectively. Values for sperm morphology were 11 (10; 3-20) % and 57 (59; 38-72) % normal forms for WHO (1999) (strict) and WHO (1987) criteria respectively. Black men had significantly lower semen volume, sperm concentration and total motile sperm counts than White and Hispanic/Latino men. Semen parameters were marginally higher in men who achieved pregnancy more quickly but differences were small and not statistically significant. The SFF provides robust estimates of semen parameters in fertile men living in five different geographic locations in the US. Fertile men display wide variation in all of the semen parameters traditionally used to assess fertility potential. © 2013 American Society of Andrology and European Academy of Andrology.
Solar wind velocity and daily variation of cosmic rays
NASA Technical Reports Server (NTRS)
Ahluwalia, H. S.; Riker, J. F.
1985-01-01
Recently, parameters applicable to the solar wind and the interplanetary magnetic field (IMF) have become much better defined. The superior quality of the databases now available, particularly for the post-1971 period, makes it possible to trust the long-term trends in the data. These data are correlated with the secular changes observed in the diurnal variation parameters obtained from neutron monitor data at Deep River and underground muon telescope data at Embudo (30 MWE) and Socorro (82 MWE). The annual mean amplitudes appear to have large values during epochs of high-speed solar wind streams. Results are discussed.
Automatic genetic optimization approach to two-dimensional blade profile design for steam turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trigg, M.A.; Tubby, G.R.; Sheard, A.G.
1999-01-01
In this paper a systematic approach to the optimization of two-dimensional blade profiles is presented. A genetic optimizer has been developed that modifies the blade profile and calculates its profile loss. This process is automatic, producing profile designs significantly faster and with significantly lower loss than has previously been possible. The optimizer developed uses a genetic algorithm to optimize a two-dimensional profile, defined using 17 parameters, for minimum loss with a given flow condition. The optimizer works with a population of two-dimensional profiles with varied parameters. A CFD mesh is generated for each profile, and the result is analyzed using a two-dimensional blade-to-blade solver, written for steady viscous compressible flow, to determine profile loss. The loss is used as the measure of a profile's fitness. The optimizer uses this information to select the members of the next population, applying crossovers, mutations, and elitism in the process. Using this method, the optimizer tends toward the best values for the parameters defining the profile with minimum loss.
Evolutionary algorithm for vehicle driving cycle generation.
Perhinschi, Mario G; Marlowe, Christopher; Tamayo, Sergio; Tu, Jun; Wayne, W Scott
2011-09-01
Modeling transit bus emissions and fuel economy requires a large amount of experimental data over wide ranges of operational conditions. Chassis dynamometer tests are typically performed using representative driving cycles defined based on vehicle instantaneous speed as sequences of "microtrips", which are intervals between consecutive vehicle stops. Overall significant parameters of the driving cycle, such as average speed, stops per mile, kinetic intensity, and others, are used as independent variables in the modeling process. Performing tests at all the necessary combinations of parameters is expensive and time consuming. In this paper, a methodology is proposed for building driving cycles at prescribed independent variable values using experimental data through the concatenation of "microtrips" isolated from a limited number of standard chassis dynamometer test cycles. The selection of the adequate "microtrips" is achieved through a customized evolutionary algorithm. The genetic representation uses microtrip definitions as genes. Specific mutation, crossover, and karyotype alteration operators have been defined. The Roulette-Wheel selection technique with elitist strategy drives the optimization process, which consists of minimizing the errors to desired overall cycle parameters. This utility is part of the Integrated Bus Information System developed at West Virginia University.
NASA Astrophysics Data System (ADS)
Thampi, S. V.; Ravindran, S.; Devasia, C. V.; Pant, T. K.; Sreelatha, P.; Sridharan, R.
The Coherent Radio Beacon Experiment (CRABEX) is aimed at investigating equatorial ionospheric processes such as the Equatorial Ionization Anomaly (EIA) and Equatorial Spread F (ESF) and their interrelationships. As a part of the CRABEX program, a network of six stations covering the region from Trivandrum (8.5°N) to Nainital (29.3°N) is set up along the 77-78°E meridian. These ground receivers basically measure the slant Total Electron Content (TEC) along the line of sight from the Low Earth Orbiting satellites (NIMS). These simultaneous TEC measurements are inverted to obtain the tomographic image of the latitudinal distribution of electron densities in the meridional plane. In this paper, the tomographic images of the equatorial ionosphere along the 77-78°E meridian are presented. The crest intensities in the southern and northern hemispheres also show significant differences with seasons, showing the variability in the EIA asymmetry. The evening images give an indication of the prevailing electrodynamical conditions on different days, preceding the occurrence/non-occurrence of ESF. Apart from this, the single-station TEC measurements from the Trivandrum station itself are used to estimate the EIA strength and asymmetry. Since this station is situated at the trough of the EIA, right over the dip equator, the latitudinal gradients on both northern (N) and southern (S) sides can be used to compute the EIA strength and asymmetry. These two parameters, obtained well ahead of the onset time of ESF, are shown to have a definite role in the subsequent ESF activity. Hence, both these factors are combined to define a new `forecast parameter' for the generation of ESF. It has been shown that this parameter can uniquely define the state of the `background ionosphere' conducive to the generation of ESF irregularities as early as 1600 IST. A critical value for the `forecast parameter' has been identified such that when the estimated value of the `forecast parameter' exceeds it, ESF is seen to occur. It is also observed that this critical value varies with season. All these aspects are studied in detail and the results are presented.
NASA Astrophysics Data System (ADS)
Rochman, YA; Agustin, A.
2017-06-01
This study applies the Six Sigma approach of Define, Measure, Analyze, Improve/Implement and Control (DMAIC) to minimize the number of defective products in the bridge & rib department. The five most dominant defect types are broken rib, broken sound board, strained rib, rib sliding, and sound board minori. The objective is to improve quality through the DMAIC phases. In the Define phase, the critical-to-quality (CTQ) parameters for minimizing product defects were identified through a Pareto chart and FMEA, and waste was identified based on the current value stream mapping. In the Measure phase, control limits were specified to maintain product variation, and the DPMO (Defects Per Million Opportunities) value and the sigma level were calculated. In the Analyze phase, the most dominant defect type was determined and the causes of defective products were identified. In the Improve phase, the existing design was modified through various alternative solutions developed in brainstorming sessions; solutions were identified based on the FMEA results, and improvements were made to the seven priority causes of defects with the highest RPN values. In the Control phase, the focus is on sustaining the improvements made, including creating and defining standard operating procedures, improving quality, and eliminating waste from defective products.
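The Measure-phase quantities named here have standard textbook definitions, sketched below: DPMO scales observed defects to a million opportunities, and the conventional long-term sigma level is the normal quantile of the corresponding yield plus the customary 1.5-sigma shift. The counts used are made up for illustration, not the study's data.

```python
# Textbook DPMO and sigma-level calculation (Measure phase).
# Counts are illustrative, not the study's data.
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    """Conventional long-term sigma level with the customary 1.5-sigma shift."""
    yield_fraction = 1.0 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + shift

d = dpmo(defects=120, units=1_000, opportunities_per_unit=5)
print(f"DPMO = {d:.0f}, sigma level = {sigma_level(d):.2f}")
```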
Lake Number, a quantitative indicator of mixing used to estimate changes in dissolved oxygen
Robertson, Dale M.; Imberger, Jorg
1994-01-01
Lake Number, LN, values are shown to be quantitative indicators of deep mixing in lakes and reservoirs that can be used to estimate changes in deep-water dissolved oxygen (DO) concentrations. LN is a dimensionless parameter defined as the ratio of the moments, about the center of volume of the water body, of the stabilizing force of gravity associated with density stratification to the destabilizing forces supplied by wind, cooling, inflow, outflow, and other artificial mixing devices. To demonstrate the universality of this parameter, LN values are used to describe the extent of deep mixing and are compared with changes in DO concentrations in three reservoirs in Australia and four lakes in the U.S.A., which vary in productivity and mixing regimes. A simple model is developed which relates changes in LN values, i.e., the extent of mixing, to changes in near-bottom DO concentrations. After calibrating the model for a specific system, it is possible to use real-time LN values, calculated using water temperature profiles and surface wind velocities, to estimate changes in DO concentrations (assuming unchanged trophic conditions).
Balancing income and cost in red deer management.
Skonhoft, Anders; Veiberg, Vebjørn; Gauteplass, Asle; Olaussen, Jon Olaf; Meisingset, Erling L; Mysterud, Atle
2013-01-30
This paper presents a bioeconomic analysis of a red deer population within a Norwegian institutional context. This population is managed by a well-defined manager, typically consisting of many landowners operating in a cooperative manner, with the goal of maximizing the present-value hunting-related income while taking browsing and grazing damages into account. The red deer population is structured in five categories of animals (calves, female and male yearlings, adult females and adult males). It is shown that differences in the per-animal meat values and survival rates ('biologically discounted' values) are instrumental in determining the optimal harvest composition. Fertility plays no direct role. It is argued that this is a general result in stage-structured models with harvest values. In the numerical illustration it is shown that the optimal harvest pattern stays quite stable under various parameter changes, and it is revealed which parameters and harvest restrictions are most important. We also show that the current harvest pattern involves too much yearling harvest compared with the economically efficient level. Copyright © 2012 Elsevier Ltd. All rights reserved.
Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen
2006-01-01
This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and prediction intervals, which quantify the uncertainty of model simulated values when the model is not linear.
CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
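The regression step at the heart of UCODE_2005, minimizing a weighted least-squares objective with a Gauss-Newton method and perturbation sensitivities, can be shown in miniature. The sketch below is a plain, undamped Gauss-Newton iteration on a hypothetical toy model; UCODE_2005 itself adds the damping, scaling, and convergence control described in the report.

```python
# Miniature Gauss-Newton weighted least squares with perturbation sensitivities.
# Toy model only; UCODE_2005 adds damping, scaling, and convergence logic.
import numpy as np

def simulate(params, x):
    """Hypothetical process model: y = p0 * exp(-p1 * x)."""
    return params[0] * np.exp(-params[1] * x)

def jacobian_fd(params, x, eps=1e-6):
    """Forward-difference sensitivities, akin to UCODE's perturbation option."""
    base = simulate(params, x)
    J = np.empty((x.size, params.size))
    for j in range(params.size):
        p = params.copy()
        p[j] += eps * max(1.0, abs(p[j]))
        J[:, j] = (simulate(p, x) - base) / (p[j] - params[j])
    return J

x = np.linspace(0, 4, 20)
obs = simulate(np.array([2.0, 0.7]), x) + np.random.default_rng(2).normal(0, 0.02, x.size)
W = np.eye(x.size)                       # observation weight matrix

params = np.array([1.0, 0.3])            # starting values
for _ in range(10):
    r = obs - simulate(params, x)
    J = jacobian_fd(params, x)
    # Normal equations of the weighted least-squares objective r' W r
    step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    params += step
print(params)                            # recovers ~ [2.0, 0.7]
```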
Spacecraft utility and the development of confidence intervals for criticality of anomalies
NASA Technical Reports Server (NTRS)
Williams, R. E.
1980-01-01
The concept of spacecraft utility, a measure of its performance in orbit, is discussed and its formulation is described. Performance is defined in terms of the malfunctions that occur and the criticality to the mission of these malfunctions. Different approaches to establishing average or expected values of criticality are discussed and confidence intervals are developed for parameters used in the computation of utility.
Impulsive Choice and Workplace Safety: A New Area of Inquiry for Research in Occupational Settings
ERIC Educational Resources Information Center
Reynolds, Brady; Schiffbauer, Ryan M.
2004-01-01
A conceptual argument is presented for the relevance of behavior-analytic research on impulsive choice to issues of occupational safety and health. Impulsive choice is defined in terms of discounting, which is the tendency for the value of a commodity to decrease as a function of various parameters (e.g., having to wait or expend energy to receive…
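Discounting in this literature is commonly modeled with a hyperbolic form, V = A / (1 + kD), where D is the delay and k indexes impulsivity. The sketch below evaluates that standard form; it is a generic illustration with arbitrary k values, not a model fitted in the cited work.

```python
# Standard hyperbolic delay-discounting curve, V = A / (1 + k * D).
# Generic illustration; k values are arbitrary.

def discounted_value(amount, delay, k):
    """Subjective value of `amount` received after `delay` (e.g. days)."""
    return amount / (1.0 + k * delay)

for k in (0.01, 0.1):        # larger k = steeper discounting = more impulsive
    values = [discounted_value(100.0, d, k) for d in (0, 7, 30, 180)]
    print(k, [f"{v:.1f}" for v in values])
```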
NASA Technical Reports Server (NTRS)
Palosz, B.; Grzanka, E.; Gierlotka, S.; Stelmakh, S.; Pielaszek, R.; Bismayer, U.; Weber, H.-P.; Palosz, W.; Curreri, Peter A. (Technical Monitor)
2002-01-01
The applicability of standard methods of elaboration of powder diffraction data for determination of the structure of nano-size crystallites is analysed. Based on our theoretical calculations of powder diffraction data we show that the assumption of an infinite crystal lattice for nanocrystals smaller than 20 nm in size is not justified. Application of conventional tools developed for elaboration of powder diffraction data, like the Rietveld method, may lead to erroneous interpretation of the experimental results. An alternate evaluation of diffraction data of nanoparticles, based on the so-called 'apparent lattice parameter' (alp), is introduced. We assume a model of a nanocrystal having a grain core with a well-defined crystal structure, surrounded by a surface shell with an atomic structure similar to that of the core but being under a strain (compressive or tensile). The two structural components, the core and the shell, form essentially a composite crystal with interfering, inseparable diffraction properties. Because the structure of such a nanocrystal is not uniform, it defies the basic definitions of an unambiguous crystallographic phase. Consequently, a set of lattice parameters used for characterization of simple crystal phases is insufficient for a proper description of the complex structure of nanocrystals. We developed a method of evaluation of powder diffraction data of nanocrystals, which refers to a core-shell model and is based on the 'apparent lattice parameter' methodology. For a given diffraction pattern, the alp values are calculated for every individual Bragg reflection. For nanocrystals the alp values depend on the diffraction vector Q. By modeling different atomic structures of nanocrystals and calculating theoretically the corresponding diffraction patterns using the Debye functions, we showed that alp-Q plots show characteristic shapes which can be used for evaluation of the atomic structure of the core-shell system. We show that, using a simple model of a nanocrystal with spherical shape and centro-symmetric strain at the surface shell, we obtain theoretical alp-Q values which match very well the alp-Q plots determined experimentally for SiC, GaN, and diamond nanopowders. The theoretical models are defined by the lattice parameter of the grain core, the thickness of the surface shell, and the magnitude and distribution of the strain field in the surface shell. According to our calculations, the part of the diffraction pattern measured at relatively low diffraction vectors Q (below 10 Å⁻¹) provides information on the surface strain, while determination of the lattice parameters in the grain core requires measurements at large Q-values (above 15-20 Å⁻¹).
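The Debye scattering equation used in such calculations expresses the powder pattern of a finite atomic cluster directly from interatomic distances, I(Q) = Σᵢ Σⱼ fᵢ fⱼ sin(Q rᵢⱼ)/(Q rᵢⱼ). The sketch below evaluates it for a tiny cluster with constant scattering factors; real nanocrystal calculations use Q-dependent atomic form factors and far larger clusters, so this is a structural illustration only.

```python
# Debye scattering equation for a finite cluster (constant form factors).
# Real nanocrystal calculations use Q-dependent form factors and larger clusters.
import numpy as np

def debye_pattern(positions, q_values, f=1.0):
    """I(Q) = sum_ij f_i f_j sin(Q r_ij) / (Q r_ij) for a finite atom cluster."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)                  # pairwise distances
    intensity = np.empty_like(q_values)
    for k, q in enumerate(q_values):
        qr = q * r
        s = np.sinc(qr / np.pi)                        # sin(qr)/(qr); r=0 terms -> 1
        intensity[k] = f * f * s.sum()
    return intensity

# Tiny cubic cluster: 3x3x3 atoms on a 4.0 A lattice (illustrative only)
g = np.arange(3) * 4.0
pos = np.array([[x, y, z] for x in g for y in g for z in g])
q = np.linspace(0.5, 8.0, 200)                         # A^-1
print(debye_pattern(pos, q)[:5])
```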
Renormalization group approach to symmetry protected topological phases
NASA Astrophysics Data System (ADS)
van Nieuwenburg, Evert P. L.; Schnyder, Andreas P.; Chen, Wei
2018-04-01
A defining feature of a symmetry protected topological phase (SPT) in one dimension is the degeneracy of the Schmidt values for any given bipartition. For the system to go through a topological phase transition separating two SPTs, the Schmidt values must either split or cross at the critical point in order to change their degeneracies. A renormalization group (RG) approach based on this splitting or crossing is proposed, through which we obtain an RG flow that identifies the topological phase transitions in the parameter space. Our approach can be implemented numerically in an efficient manner, for example, using the matrix product state formalism, since only the first few largest Schmidt values need to be calculated with sufficient accuracy. Using several concrete models, we demonstrate that the critical points and fixed points of the RG flow coincide with the maxima and minima of the entanglement entropy, respectively, and the method can serve as a numerically efficient tool to analyze interacting SPTs in the parameter space.
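Schmidt values for a bipartition are the singular values of the state vector reshaped into a matrix across the cut, so the degeneracy tracking this RG scheme relies on reduces to a few lines of linear algebra. The sketch below computes the Schmidt spectrum of a random state; the model Hamiltonians and MPS machinery of the paper are not reproduced.

```python
# Schmidt values of a bipartition via SVD of the reshaped state vector.
# Random state for illustration; no model Hamiltonian or MPS machinery here.
import numpy as np

def schmidt_values(psi, dim_left, dim_right):
    """Singular values of psi viewed as a (dim_left x dim_right) matrix."""
    m = psi.reshape(dim_left, dim_right)
    return np.linalg.svd(m, compute_uv=False)

rng = np.random.default_rng(0)
psi = rng.normal(size=2**8) + 1j * rng.normal(size=2**8)
psi /= np.linalg.norm(psi)

s = schmidt_values(psi, 2**4, 2**4)   # cut an 8-site spin-1/2 chain in half
print(s[:4])                          # the few largest values the RG tracks
print("entanglement entropy:", -np.sum(s**2 * np.log(s**2)))
```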
User's Manual for Aerofcn: a FORTRAN Program to Compute Aerodynamic Parameters
NASA Technical Reports Server (NTRS)
Conley, Joseph L.
1992-01-01
The computer program AeroFcn is discussed. AeroFcn is a utility program that computes the following aerodynamic parameters: geopotential altitude, Mach number, true velocity, dynamic pressure, calibrated airspeed, equivalent airspeed, impact pressure, total pressure, total temperature, Reynolds number, speed of sound, static density, static pressure, static temperature, coefficient of dynamic viscosity, kinematic viscosity, geometric altitude, and specific energy for a standard- or a modified standard-day atmosphere using compressible flow and normal shock relations. Any two parameters that define a unique flight condition are selected, and their values are entered interactively. The remaining parameters are computed, and the solutions are stored in an output file. Multiple cases can be run, and the multiple case solutions can be stored in another output file for plotting. Parameter units, the output format, and primary constants in the atmospheric and aerodynamic equations can also be changed.
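A few of the quantities a utility like AeroFcn evaluates follow from standard compressible-flow identities once a flight condition is fixed, as sketched below for Mach number plus static conditions. Sea-level standard-day values are hard-coded as an assumption here; AeroFcn itself models the full standard or modified atmosphere.

```python
# A few aerodynamic relations a utility like AeroFcn evaluates.
# Sea-level standard-day values hard-coded; AeroFcn models the full atmosphere.
import math

GAMMA = 1.4            # ratio of specific heats for air
R_AIR = 287.053        # J/(kg K)

p_static = 101_325.0   # Pa, sea level standard day
t_static = 288.15      # K

mach = 0.8
a = math.sqrt(GAMMA * R_AIR * t_static)                  # speed of sound, m/s
v_true = mach * a                                        # true velocity
q = 0.5 * GAMMA * p_static * mach**2                     # dynamic pressure
t_total = t_static * (1 + 0.5 * (GAMMA - 1) * mach**2)   # total temperature

print(f"a = {a:.1f} m/s, V = {v_true:.1f} m/s, q = {q:.0f} Pa, Tt = {t_total:.1f} K")
```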
NASA Astrophysics Data System (ADS)
Cristescu, Constantin P.; Stan, Cristina; Scarlat, Eugen I.; Minea, Teofil; Cristescu, Cristina M.
2012-04-01
We present a novel method for the parameter oriented analysis of mutual correlation between independent time series or between equivalent structures such as ordered data sets. The proposed method is based on the sliding window technique, defines a new type of correlation measure and can be applied to time series from all domains of science and technology, experimental or simulated. A specific parameter that can characterize the time series is computed for each window and a cross correlation analysis is carried out on the set of values obtained for the time series under investigation. We apply this method to the study of some currency daily exchange rates from the point of view of the Hurst exponent and the intermittency parameter. Interesting correlation relationships are revealed and a tentative crisis prediction is presented.
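The mechanics of the method, computing a parameter per sliding window for each series and then cross-correlating the two parameter sequences, can be sketched generically. Below, the per-window statistic is a simple placeholder (standard deviation) rather than the Hurst exponent or intermittency parameter used in the paper, and the series are synthetic.

```python
# Sketch of sliding-window, parameter-oriented cross-correlation of two series.
# The per-window statistic here is a placeholder (std), not the Hurst exponent.
import numpy as np

def windowed_parameter(series, window, step, stat=np.std):
    return np.array([stat(series[i:i + window])
                     for i in range(0, len(series) - window + 1, step)])

rng = np.random.default_rng(3)
common = np.cumsum(rng.normal(size=3000))        # shared driver
x = common + rng.normal(0, 5, 3000)              # two "exchange-rate-like" series
y = 0.7 * common + rng.normal(0, 5, 3000)

px = windowed_parameter(x, window=250, step=50)
py = windowed_parameter(y, window=250, step=50)
print("parameter-level correlation:", np.corrcoef(px, py)[0, 1])
```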
Modified Denavit-Hartenberg parameters for better location of joint axis systems in robot arms
NASA Technical Reports Server (NTRS)
Barker, L. K.
1986-01-01
The Denavit-Hartenberg parameters define the relative location of successive joint axis systems in a robot arm. A recent justifiable criticism is that one of these parameters becomes extremely large when two successive joints have near-parallel rotational axes. Geometrically, this parameter then locates a joint axis system at an excessive distance from the robot arm and, computationally, leads to an ill-conditioned transformation matrix. In this paper, a simple modification (which results from constraining a transverse vector between successive joint rotational axes to be normal to one of the rotational axes, instead of both) overcomes this criticism and favorably locates the joint axis system. An example is given for near-parallel rotational axes of the elbow and shoulder joints in a robot arm. The regular and modified parameters are extracted by an algebraic method with simulated measurement data. Unlike the modified parameters, extracted values of the regular parameters are very sensitive to measurement accuracy.
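For reference, the homogeneous transform built from one set of classic Denavit-Hartenberg parameters is shown below. The paper's modification concerns how the parameters are extracted when successive axes are near-parallel, not this basic matrix form, which the sketch leaves unchanged.

```python
# Homogeneous transform from one set of classic Denavit-Hartenberg parameters.
# The paper's modification concerns parameter extraction, not this matrix form.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Transform from joint frame i-1 to frame i (classic DH convention)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Near-parallel successive axes (alpha ~ 0) are where the classic parameters
# can blow up during identification -- the problem the modification addresses.
print(dh_transform(theta=np.pi / 4, d=0.1, a=0.3, alpha=0.01).round(3))
```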
Rankl, James G.
1990-01-01
A physically based point-infiltration model was developed for computing infiltration of rainfall into soils and the resulting runoff from small basins in Wyoming. The user describes a 'design storm' in terms of average rainfall intensity and storm duration. Information required to compute runoff for the design storm by using the model includes (1) soil type and description, and (2) two infiltration parameters and a surface-retention storage parameter. Parameter values are tabulated in the report. Rainfall and runoff data for three ephemeral-stream basins that contain only one type of soil were used to develop the model. Two assumptions were necessary: antecedent soil moisture is some long-term average, and storm rainfall is uniform in both time and space. The infiltration and surface-retention storage parameters were determined for the soil of each basin. Observed rainstorm and runoff data were used to develop a separation curve, or incipient-runoff curve, which distinguishes between runoff and nonrunoff rainfall data. The position of this curve defines the infiltration and surface-retention storage parameters. A procedure for applying the model to basins that contain more than one type of soil was developed using data from 7 of the 10 study basins. For these multiple-soil basins, the incipient-runoff curve defines the infiltration and retention-storage parameters for the soil having the highest runoff potential. Parameters were defined by ranking the soils according to their relative permeabilities and optimizing the position of the incipient-runoff curve by using measured runoff as a control for the fit. Analyses of runoff from multiple-soil basins indicate that the effective contributing area of runoff is less than the drainage area of the basin. In this study, the effective drainage area ranged from 41.6 to 71.1 percent of the total drainage area. Information on effective drainage area is useful in evaluating drainage area as an independent variable in statistical analyses of hydrologic data, such as annual peak frequency distributions and sediment yield. A comparison was made of the sum of the simulated runoff and the sum of the measured runoff for all available records of runoff-producing storms in the 10 study basins. The sums of the simulated runoff ranged from 12.0 percent less than to 23.4 percent more than the sums of the measured runoff. A measure of the standard error of estimate was computed for each data set. These values ranged from 20 to 70 percent of the mean value of the measured runoff. Rainfall-simulator infiltrometer tests were made in two small basins. The amount of water uptake measured by the test in Dugout Creek tributary basin averaged about three times greater than the amount of water uptake computed from rainfall and runoff data. Therefore, infiltrometer data were not used to determine infiltration rates for this study.
Resolution of the threshold fracture energy paradox for solid particle erosion
NASA Astrophysics Data System (ADS)
Peck, Daniel; Volkov, Grigory; Mishuris, Gennady; Petrov, Yuri
2016-12-01
Previous models of a single erosion impact, for a rigid axisymmetric indenter defined by its shape function, have shown that a critical shape parameter exists which determines the behaviour of the threshold fracture energy. However, repeated investigations into this parameter have found no physical explanation for its value. Again utilising the notion of an incubation time prior to fracture, this paper attempts to provide a physical explanation of this phenomenon by introducing a supersonic stage into the model. The final scheme allows the effect of waves along the indenter's contact area to be taken into account. The effect of this physical characteristic of the impact on the threshold fracture energy and the critical shape parameter is investigated and discussed.
Recycled grains in lunar soils as an additional, necessary, regolith evolution parameter
NASA Technical Reports Server (NTRS)
Basu, A.
1990-01-01
Recycled lunar soil grains are defined as those soil grains that have been a part of either regolith breccias or agglutinates; thus, mineral grains, rock fragments, older agglutinates, and volcanic glass spherules, if dislodged from an agglutinate or a regolith breccia, would all qualify as recycled grains. This paper shows that it is possible to estimate the proportion of recycled material in lunar soils. Optical data from 12 soils in the Apollo 16 core 64001/2 were collected to estimate the proportion (W) of recycled crystalline grains in each of these soils. The W values show a correspondence with other independently derived parameters and the history of the core soils, indicating that W can be used as a valid soil-evolution parameter.
Topology versus Anderson localization: Nonperturbative solutions in one dimension
NASA Astrophysics Data System (ADS)
Altland, Alexander; Bagrets, Dmitry; Kamenev, Alex
2015-02-01
We present an analytic theory of quantum criticality in quasi-one-dimensional topological Anderson insulators. We describe these systems in terms of two parameters (g ,χ ) representing localization and topological properties, respectively. Certain critical values of χ (half-integer for Z classes, or zero for Z2 classes) define phase boundaries between distinct topological sectors. Upon increasing system size, the two parameters exhibit flow similar to the celebrated two-parameter flow of the integer quantum Hall insulator. However, unlike the quantum Hall system, an exact analytical description of the entire phase diagram can be given in terms of the transfer-matrix solution of corresponding supersymmetric nonlinear sigma models. In Z2 classes we uncover a hidden supersymmetry, present at the quantum critical point.
Improving Bedload Transport Predictions by Incorporating Hysteresis
NASA Astrophysics Data System (ADS)
Crowe Curran, J.; Gaeuman, D.
2015-12-01
The influence of unsteady flow on sediment transport rates has long been recognized. However, the majority of sediment transport models were developed under steady flow conditions that did not account for changing bed morphologies and sediment transport during flood events. More recent research has used laboratory and field data to quantify the influence of hysteresis on bedload transport and adjust transport models. In this research, these new methods are combined to further improve the accuracy of bedload transport rate quantification and prediction. The first approach defined reference shear stresses for hydrograph rising and falling limbs, and used these values to predict total and fractional transport rates during a hydrograph. From this research, a parameter for improving transport predictions during unsteady flows was developed. The second approach applied a maximum likelihood procedure to fit a bedload rating curve to measurements from a number of different coarse-bed rivers. Parameters defining the rating curve were optimized for values that maximized the conditional probability of producing the measured bedload transport rate. Bedload sample magnitude was fit to a gamma distribution, and the probability of collecting N particles in a sampler during a given time step was described with a Poisson probability density function. Both approaches improved estimates of total transport during large flow events when compared to existing methods and transport models. Recognizing and accounting for changes in transport parameters over time frames on the order of a flood or flood sequence influences the choice of method for parameter calculation in sediment transport calculations. Methods that more tightly link the changing flow rate and bed mobility have the potential to improve predicted bedload transport rates.
NASA Astrophysics Data System (ADS)
Hayat, Asma; Bashir, Shazia; Rafique, Muhammad Shahid; Ahmad, Riaz; Akram, Mahreen; Mahmood, Khaliq; Zaheer, Ali
2017-12-01
Spatial confinement effects on the plasma parameters and surface morphology of laser-ablated Zr (zirconium) are studied by introducing a metallic blocker. An Nd:YAG laser at various fluences ranging from 8 J cm⁻² to 32 J cm⁻² was employed as the irradiation source. All measurements were performed in the presence of Ar under different pressures. Confinement effects offered by the metallic blocker are investigated by placing the blocker at distances of 6 mm, 8 mm and 10 mm from the target surface. It is revealed from LIBS analysis that both plasma parameters, i.e. the excitation temperature and the electron number density, increase with increasing laser fluence due to enhanced energy deposition. It is also observed that the spatial confinement offered by the metallic blocker is responsible for the enhancement of both the electron temperature and the electron number density of the Zr plasma. This holds for all laser fluences and pressures of Ar. The maximum values of electron temperature and electron number density without the blocker are 12,600 K and 14 × 10¹⁷ cm⁻³ respectively, whereas these values are enhanced to 15,000 K and 21 × 10¹⁷ cm⁻³ in the presence of the blocker. The physical mechanisms responsible for the enhancement of the Zr plasma parameters are plasma compression, confinement and pronounced collisional excitations due to the reflection of shock waves. Scanning Electron Microscope (SEM) analysis was performed to explore the surface morphology of the laser-ablated Zr. It reveals the formation of cones, cavities and ripples. These features become more distinct and well defined in the presence of the blocker due to plasma confinement. The optimum combination of blocker distance, fluence and Ar pressure can identify suitable conditions for defining the role of plasma parameters in surface structuring.
Quantifying Groundwater Model Uncertainty
NASA Astrophysics Data System (ADS)
Hill, M. C.; Poeter, E.; Foglia, L.
2007-12-01
Groundwater models are characterized by the (a) processes simulated, (b) boundary conditions, (c) initial conditions, (d) method of solving the equation, (e) parameterization, and (f) parameter values. Models are related to the system of concern using data, some of which form the basis of observations used most directly, through objective functions, to estimate parameter values. Here we consider situations in which parameter values are determined by minimizing an objective function. Other methods of model development are not considered because their ad hoc nature generally prohibits clear quantification of uncertainty. Quantifying prediction uncertainty ideally includes contributions from (a) to (f). The parameter values of (f) tend to be continuous with respect to both the simulated equivalents of the observations and the predictions, while many aspects of (a) through (e) are discrete. This fundamental difference means that there are options for evaluating the uncertainty related to parameter values that generally do not exist for other aspects of a model. While the methods available for (a) to (e) can be used for the parameter values (f), the inferential methods uniquely available for (f) generally are less computationally intensive and often can be used to considerable advantage. However, inferential approaches require calculation of sensitivities. Whether the numerical accuracy and stability of the model solution required for accurate sensitivities is more broadly important to other model uses is an issue that needs to be addressed. Alternative global methods can require 100 or even 1,000 times the number of runs needed by inferential methods, though methods of reducing the number of needed runs are being developed and tested. Here we present three approaches for quantifying model uncertainty and investigate their strengths and weaknesses. (1) Represent more aspects as parameters so that the computationally efficient methods can be broadly applied. This approach is attainable through universal model analysis software such as UCODE-2005, PEST, and joint use of these programs, which allow many aspects of a model to be defined as parameters. (2) Use highly parameterized models to quantify aspects of (e). While promising, this approach implicitly includes parameterizations that may be considered unreasonable if investigated explicitly, so that resulting measures of uncertainty may be too large. (3) Use a combination of inferential and global methods that can be facilitated using the new software MMA (Multi-Model Analysis), which is constructed using the JUPITER API. Here we consider issues related to the model discrimination criteria calculated by MMA.
Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen
2017-12-01
A method is proposed and verified for selecting the optimum segmentation of a TEM reconstruction among the results of several segmentation algorithms. The selection criterion is the accuracy of the segmentation. To do this selection, a parameter for the comparison of the accuracies of the different segmentations has been defined. It consists of the mutual information value between the acquired TEM images of the sample and the Radon projections of the segmented volumes. In this work, it has been proved that this new mutual information parameter and the Jaccard coefficient between the segmented volume and the ideal one are correlated. In addition, the results of the new parameter are compared to the results obtained from another validated method to select the optimum segmentation. Copyright © 2017 Elsevier Ltd. All rights reserved.
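Mutual information between two images can be estimated from their joint histogram, which is the ingredient the proposed selection parameter builds on. The sketch below shows a generic histogram-based MI estimate between an "acquired" and a "reprojected" image on synthetic data; the tomographic Radon projection step of the paper is not reproduced.

```python
# Histogram-based mutual information between two images (generic sketch).
# The Radon reprojection step of the paper is not reproduced here.
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(4)
acquired = rng.normal(size=(128, 128))
reproj_good = acquired + rng.normal(0, 0.3, acquired.shape)  # faithful segmentation
reproj_bad = rng.normal(size=(128, 128))                     # unrelated content

print(mutual_information(acquired, reproj_good))   # higher
print(mutual_information(acquired, reproj_bad))    # near zero
```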
Why anthropic reasoning cannot predict Lambda.
Starkman, Glenn D; Trotta, Roberto
2006-11-17
We revisit anthropic arguments purporting to explain the measured value of the cosmological constant. We argue that different ways of assigning probabilities to candidate universes lead to totally different anthropic predictions. As an explicit example, we show that weighting different universes by the total number of possible observations leads to an extremely small probability for observing a value of Lambda equal to or greater than what we now measure. We conclude that anthropic reasoning within the framework of probability as frequency is ill-defined and that in the absence of a fundamental motivation for selecting one weighting scheme over another the anthropic principle cannot be used to explain the value of Lambda, nor, likely, any other physical parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
M. Gross
2004-09-01
The purpose of this scientific analysis is to define the sampled values of stochastic (random) input parameters for (1) rockfall calculations in the lithophysal and nonlithophysal zones under vibratory ground motions, and (2) structural response calculations for the drip shield and waste package under vibratory ground motions. This analysis supplies: (1) Sampled values of ground motion time history and synthetic fracture pattern for analysis of rockfall in emplacement drifts in nonlithophysal rock (Section 6.3 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (2) Sampled values of ground motion time history and rock mechanical properties category for analysis of rockfall in emplacement drifts in lithophysal rock (Section 6.4 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (3) Sampled values of ground motion time history and metal-to-metal and metal-to-rock friction coefficient for analysis of waste package and drip shield damage due to vibratory motion in ''Structural Calculations of Waste Package Exposed to Vibratory Ground Motion'' (BSC 2004 [DIRS 167083]) and in ''Structural Calculations of Drip Shield Exposed to Vibratory Ground Motion'' (BSC 2003 [DIRS 163425]). The sampled values are indices representing the number of ground motion time histories, number of fracture patterns and rock mass properties categories. These indices are translated into actual values within the respective analysis and model reports or calculations. This report identifies the uncertain parameters and documents the sampled values for these parameters. The sampled values are determined by GoldSim V6.04.007 [DIRS 151202] calculations using appropriate distribution types and parameter ranges. No software development or model development was required for these calculations. The calculation of the sampled values allows parameter uncertainty to be incorporated into the rockfall and structural response calculations that support development of the seismic scenario for the Total System Performance Assessment for the License Application (TSPA-LA). The results from this scientific analysis also address project requirements related to parameter uncertainty, as specified in the acceptance criteria in ''Yucca Mountain Review Plan, Final Report'' (NRC 2003 [DIRS 163274]). This document was prepared under the direction of ''Technical Work Plan for: Regulatory Integration Modeling of Drift Degradation, Waste Package and Drip Shield Vibratory Motion and Seismic Consequences'' (BSC 2004 [DIRS 170528]), which directed the work identified in work package ARTM05. This document was prepared under procedure AP-SIII.9Q, ''Scientific Analyses''. There are no specific known limitations to this analysis.
Berthele, H; Sella, O; Lavarde, M; Mielcarek, C; Pense-Lheritier, A-M; Pirnay, S
2014-02-01
Ethanol, pH and water activity are three well-known parameters that can influence the preservation of cosmetic products. With the new constraints regarding antimicrobial effectiveness and the restricted use of preservatives, a D-optimal design was set up to evaluate the influence of these three parameters on microbiological preservation. To monitor the effectiveness of the different combinations of these parameters, a challenge test in compliance with the International Standard ISO 11930:2012 was implemented. The formulations established in our study could support wide variations in ethanol concentration, pH value and glycerin concentration without noticeable effects on the stability of the products. Under the conditions of the study, setting the value of a single parameter, at the tested concentrations, could not guarantee microbiological preservation. However, a high concentration of ethanol associated with an extreme pH could inhibit bacterial growth from the first day (D0). Moreover, it appears that despite an aw above 0.6 (even 0.8) and without any preservatives incorporated in the formulas, it was possible to guarantee the microbiological stability of a cosmetic product by maintaining the right combination of the selected parameters. Following the analysis of the different values obtained during the experimentation, there seems to be a correlation between the aw and the aforementioned parameters. An application of this relationship could be to estimate the aw of cosmetic products from the formulation, thus avoiding the evaluation of this parameter with a measuring device. © 2013 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Tateishi, Seiichiro; Watase, Mariko; Fujino, Yoshihisa; Mori, Koji
2016-01-01
In Japan, employee fitness for work is determined by annual medical examinations. It may be possible to reduce the variability in the results of work fitness determinations if there is consensus among experts regarding when a single parameter merits consideration of limitation of work. Consensus building was attempted among 104 occupational physicians using a 3-round Delphi method. For the medical examination parameters for which at least 50% of participants agreed in the 3rd round of the survey that the parameter would independently merit consideration for limitation of work, criterion values that trigger such consideration were sought. Parameters, along with their most frequently proposed criterion values, were defined in the study group meeting as parameters for which consensus was reached. Consensus was obtained for 8 parameters: systolic blood pressure 180 mmHg (86.6%), diastolic blood pressure 110 mmHg (85.9%), postprandial plasma glucose 300 mg/dl (76.9%), fasting plasma glucose 200 mg/dl (69.1%), Cre 2.0 mg/dl (67.2%), HbA1c (JDS) 10% (62.3%), ALT 200 U/l (61.6%), and Hb 8 g/dl (58.5%). To support physicians who advise employers about work-related measures based on the results of general medical examinations of employees, expert consensus information was obtained that can serve as background material for making judgements. It is expected that the use of this information will facilitate taking appropriate measures after medical examination of employees.
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N>1.
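A minimal Python sketch of the shift idea as we read it from this abstract: compute the field at two values of the uncertain parameter, find the spatial shift that best aligns the two fields, assume the shift varies linearly with the parameter, and push parameter samples through to a histogram of amplitude. All data and function names here are synthetic stand-ins, not the authors' code.

```python
import numpy as np

def best_shift(a0, a1, max_shift=200):
    """Integer grid shift s minimizing ||a0 shifted by -s  -  a1||."""
    shifts = np.arange(-max_shift, max_shift + 1)
    errs = [np.sum((np.roll(a0, -s) - a1) ** 2) for s in shifts]
    return shifts[int(np.argmin(errs))]

def pdf_of_amplitude(a0, shift_per_unit, p0, p_samples, x_index, bins=50):
    """Monte-Carlo PDF of field amplitude at one grid point, assuming the
    field pattern translates by shift_per_unit grid cells per unit of p."""
    shifts = np.rint(shift_per_unit * (p_samples - p0)).astype(int)
    idx = np.clip(x_index + shifts, 0, len(a0) - 1)
    return np.histogram(a0[idx], bins=bins, density=True)

# Synthetic "fields" standing in for two acoustic model runs:
x = np.linspace(0.0, 8.0, 2000)                     # range grid, km
a0 = np.abs(np.sin(3.0 * x)) / (1.0 + x)            # field at nominal depth 100.0 m
a1 = np.abs(np.sin(3.0 * (x - 0.04))) / (1.0 + x)   # field at perturbed depth 100.5 m

s = best_shift(a0, a1)                              # optimum spatial shift (cells)
slope = s / (100.5 - 100.0)                         # cells per metre of depth
p = np.random.normal(100.0, 0.5, 20000)             # 0.5% depth uncertainty samples
pdf, edges = pdf_of_amplitude(a0, slope, 100.0, p, x_index=1000)
```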
Salvago, Pietro; Rizzo, Serena; Bianco, Antonino; Martines, Francesco
2017-03-01
To investigate the relationship between routine haematological parameters and audiogram shapes in patients affected by sudden sensorineural hearing loss (SSNHL). A retrospective study. All patients were divided into four groups according to the audiometric curve, and the mean values of the haematological parameters (haemoglobin, white blood cell count, neutrophil and lymphocyte relative counts, platelet count, haematocrit, prothrombin time, activated partial thromboplastin time, fibrinogen and neutrophil-to-lymphocyte ratio) of each group were statistically compared. The prognostic role of the blood profile and coagulation tests was also examined. A cohort of 183 SSNHL patients without comorbidities. With 48.78% complete hearing recovery, individuals affected by upsloping hearing loss presented a better prognosis than the flat (18.36%), downsloping (19.23%) and anacusis (2.45%) groups (p = 0.0001). The multivariate analysis of complete blood count values revealed a lower mean percentage of lymphocytes (p = 0.041) and higher platelet levels (p = 0.015) in cases of downsloping hearing loss; with the exception of fibrinogen (p = 0.041), none of the main haematological parameters studied was associated with a poorer prognosis. Our work suggests a lack of association between haematological parameters and a defined audiometric picture in SSNHL patients; furthermore, only fibrinogen seems to influence the prognosis of this disease.
Volumetric flow rate in simulations of microfluidic devices
NASA Astrophysics Data System (ADS)
Kovalčíková, Kristína; Slavík, Martin; Bachratá, Katarína; Bachratý, Hynek; Bohiniková, Alžbeta
2018-06-01
In this work, we examine the volumetric flow rate of microfluidic devices. The volumetric flow rate is a parameter that is necessary to correctly set up a simulation of a real device and to check the conformity of simulations with laboratory experiments [1]. Instead of defining the volumetric flow rate at the beginning as a simulation parameter, an external force parameter is set. The proposed hypothesis is that for a fixed set of other parameters (topology, viscosity of the liquid, …) the volumetric flow rate is linearly dependent on the external force in the typical ranges of fluid velocity used in our simulations. To confirm this linearity hypothesis and to find the numerical limits of this approach, we test several values of the external force parameter. The tests are designed for three different topologies of the simulation box and for various haematocrits. The topologies of the microfluidic devices are inspired by existing laboratory experiments [3-6]. The linear relationship between the external force and the volumetric flow rate is verified at orders of magnitude similar to the values obtained from laboratory experiments. Supported by the Slovak Research and Development Agency under contract No. APVV-15-0751 and by the Ministry of Education, Science, Research and Sport of the Slovak Republic under contract No. VEGA 1/0643/17.
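Testing the linearity hypothesis amounts to a least-squares line fit of flow rate against the external force parameter; a minimal Python sketch with hypothetical measurements (all numbers invented for illustration):

```python
import numpy as np

# Hypothetical pairs: external force parameter vs. measured volumetric flow rate.
force = np.array([0.5, 1.0, 1.5, 2.0, 2.5])    # simulation force units
flow = np.array([1.1, 2.0, 3.2, 4.1, 5.0])     # volumetric flow rate units

slope, intercept = np.polyfit(force, flow, 1)  # least-squares line
residuals = flow - (slope * force + intercept)
r2 = 1.0 - residuals.var() / flow.var()        # goodness of the linear fit

# A high r2 over the tested force range supports the linearity hypothesis,
# and the fitted line gives the force needed to reach a target flow rate:
target_flow = 3.5
required_force = (target_flow - intercept) / slope
```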
NASA Astrophysics Data System (ADS)
Kovalev, Alexey A.; Kotlyar, Victor V.
2015-03-01
We study a non-paraxial family of nondiffracting laser beams whose complex amplitude is proportional to an n-th order Lommel function of two variables. These beams are referred to as Lommel modes. Explicit analytical relations for the angular spectrum of plane waves and the orbital angular momentum of the Lommel beams have been derived. The even (n=2p) and odd (n=2p+1) Lommel modes are mutually orthogonal, as are the Lommel modes characterized by different projections of the wave vector on the optical axis. At a particular value of the parameter, the Lommel modes reduce to conventional Bessel beams. The asymmetry of the Lommel modes depends on a complex parameter c, whose modulus in the polar notation defines the intensity pattern in the beam's cross-section and whose argument defines the angle of rotation of the intensity pattern about the optical axis. If the parameter c is real or purely imaginary, the transverse intensity component of the Lommel modes is specularly symmetric about the Cartesian coordinate axes. Besides, as the modulus of the parameter c increases from 0 to 1, the orbital angular momentum of the Lommel modes increases from a finite value proportional to the topological charge n to infinity. The orbital angular momentum of the Lommel modes undergoes continuous variations, in contrast to its discrete changes in the Bessel modes.
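For readers wanting to evaluate the underlying special function, the Lommel function of two variables has a standard Bessel series (as given, e.g., in Born and Wolf); a Python sketch with a truncated series, convergent for |w/z| < 1, follows. The function name and truncation length are our own choices.

```python
import numpy as np
from scipy.special import jv

def lommel_U(n, w, z, terms=60):
    """Lommel function of two variables:
    U_n(w, z) = sum_{m=0}^inf (-1)^m (w/z)^(n+2m) J_{n+2m}(z).
    The series converges for |w/z| < 1."""
    m = np.arange(terms)
    return np.sum((-1.0) ** m * (w / z) ** (n + 2 * m) * jv(n + 2 * m, z))

# Example: second-order Lommel function for a modest argument ratio.
u2 = lommel_U(2, 0.5, 10.0)
```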
Quantifying uncertainty in NDSHA estimates due to earthquake catalogue
NASA Astrophysics Data System (ADS)
Magrin, Andrea; Peresan, Antonella; Vaccari, Franco; Panza, Giuliano
2014-05-01
The procedure for neo-deterministic seismic zoning, NDSHA, is based on the calculation of synthetic seismograms by the modal summation technique. This approach makes use of information about the space distribution of large magnitude earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g., morphostructural features and ongoing deformation processes identified by earth observations). Hence the method does not make use of attenuation models (GMPE), which may be unable to account for the complexity of the product between the seismic source tensor and the medium Green function and are often poorly constrained by the available observations. NDSHA defines the hazard from the envelope of the values of ground motion parameters determined by considering a wide set of scenario earthquakes; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated with each site. In NDSHA, uncertainties are not statistically treated as in PSHA, where aleatory uncertainty is traditionally handled with probability density functions (e.g., for magnitude and distance random variables) and epistemic uncertainty is considered by applying logic trees that allow the use of alternative models and alternative parameter values for each model. Instead, the treatment of uncertainties is performed by sensitivity analyses for key modelling parameters. Fixing the uncertainty related to a particular input parameter is an important component of the procedure. The input parameters must account for the uncertainty in the prediction of fault radiation and in the use of Green functions for a given medium. A key parameter is the magnitude of the sources used in the simulation, which is based on catalogue information, seismogenic zones and seismogenic nodes. Because the largest part of the existing catalogues is based on macroseismic intensity, a rough estimate of the ground motion error is the factor of 2 intrinsic in the MCS scale. We tested this hypothesis by analysing the uncertainty in ground motion maps due to random catalogue errors in magnitude and localization.
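The envelope operation that defines the NDSHA outcome reduces to a per-site maximum over a stack of scenario maps; a minimal Python sketch with synthetic data (array names and sizes are illustrative):

```python
import numpy as np

# Hypothetical stack of ground-motion maps: one 2-D grid of a peak ground
# motion parameter per scenario earthquake (n_scenarios x ny x nx).
pga_maps = np.random.lognormal(mean=-2.0, sigma=0.5, size=(200, 64, 64))

# NDSHA-style outcome: the envelope, i.e. at each site keep the maximum
# of the chosen seismic parameter over all scenario earthquakes.
hazard_map = pga_maps.max(axis=0)
```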
Helical tomotherapy setup variations in canine nasal tumor patients immobilized with a bite block.
Kubicek, Lyndsay N; Seo, Songwon; Chappell, Richard J; Jeraj, Robert; Forrest, Lisa J
2012-01-01
The purpose of our study was to compare setup variation in four degrees of freedom (vertical, longitudinal, lateral, and roll) between canine nasal tumor patients immobilized with a mattress and bite block versus a mattress alone. Our secondary aim was to define a clinical target volume (CTV) to planning target volume (PTV) expansion margin based on the mean systematic error values associated with nasal tumor patients immobilized by a mattress and bite block. We evaluated six parameters for setup corrections: systematic error, random error, patient-to-patient variation in systematic errors, the magnitude of patient-specific random errors (root mean square [RMS]), distance error, and the variation of setup corrections from zero shift. The variations in all parameters were statistically smaller in the group immobilized by a mattress and bite block. The mean setup corrections in the mattress and bite block group ranged from 0.91 mm to 1.59 mm for the translational errors and 0.5° for roll. Although most veterinary radiation facilities do not have access to image-guided radiotherapy (IGRT), we identified a need for more rigid fixation, established the value of adding IGRT to veterinary radiation therapy, and defined the CTV-PTV setup error margin for canine nasal tumor patients immobilized in a mattress and bite block. © 2012 Veterinary Radiology & Ultrasound.
NASA Astrophysics Data System (ADS)
Vora, H.; Morgan, J.
2017-12-01
Brittle failure in rock under confined biaxial conditions is accompanied by a release of seismic energy, known as acoustic emissions (AE). The objective of our study is to understand the influence of the elastic properties of rock and its stress state on deformation patterns and the associated seismicity in granular rocks. Discrete element modeling is used to simulate biaxial tests on granular rocks of defined grain size distribution. Acoustic energy and seismic moments are calculated from microfracture events as the rock is taken to failure under different confining pressures. Dimensionless parameters such as the seismic b-value and the fractal parameter for deformation, the D-value, are used to quantify the seismic character and the distribution of damage in the rock. Initial results suggest that confining pressure has the largest control on the distribution of induced microfracturing, while fracture energy and seismic magnitudes are highly sensitive to the elastic properties of the rock. At low confining pressures, localized deformation (low D-values) and high seismic b-values are observed. Deformation at high confining pressures is distributed in nature (high D-values) and exhibits low seismic b-values as shearing becomes the dominant mode of microfracturing. Seismic b-values and fractal D-values obtained from microfracturing exhibit a linear inverse relationship, similar to trends observed in earthquakes. The modes of microfracturing in our simulations of biaxial compression tests show mechanistic similarities to the propagation of fractures and faults in nature.
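The seismic b-value mentioned above is commonly estimated with Aki's maximum-likelihood formula from event magnitudes above a completeness cut; a Python sketch follows, with synthetic microfracture moments standing in for DEM output (all numbers are illustrative):

```python
import numpy as np

def moment_magnitude(m0):
    """Moment magnitude from seismic moment in N*m (Hanks-Kanamori scale)."""
    return (2.0 / 3.0) * (np.log10(m0) - 9.1)

def b_value(mags, m_c):
    """Maximum-likelihood b-value (Aki, 1965) for magnitudes >= m_c."""
    m = mags[mags >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# Hypothetical microfracture moments from a simulation run (N*m):
m0 = np.random.lognormal(mean=2.0, sigma=1.5, size=5000)
mags = moment_magnitude(m0)
b = b_value(mags, m_c=np.percentile(mags, 50))
```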
Luis, Sushil Allen; Blauwet, Lori A; Samardhi, Himabindu; West, Cathy; Mehta, Ramila A; Luis, Chris R; Scalia, Gregory M; Miller, Fletcher A; Burstow, Darryl J
2017-10-15
This study aimed to investigate the utility of transthoracic echocardiographic (TTE) Doppler-derived parameters in detection of mitral prosthetic dysfunction and to define optimal cut-off values for identification of such dysfunction by valve type. In total, 971 TTE studies (647 mechanical prostheses; 324 bioprostheses) were compared with transesophageal echocardiography for evaluation of mitral prosthesis function. Among all prostheses, mitral valve prosthesis (MVP) ratio (ratio of time velocity integral of MVP to that of left ventricular outflow tract; odds ratio [OR] 10.34, 95% confidence interval [95% CI] 6.43 to 16.61, p<0.001), E velocity (OR 3.23, 95% CI 1.61 to 6.47, p<0.001), and mean gradient (OR 1.13, 95% CI 1.02 to 1.25, p=0.02) provided good discrimination of clinically normal and clinically abnormal prostheses. Optimal cut-off values by receiver operating characteristic analysis for differentiating clinically normal and abnormal prostheses varied by prosthesis type. Combining MVP ratio and E velocity improved specificity (92%) and positive predictive value (65%) compared with either parameter alone, with minimal decline in negative predictive value (92%). Pressure halftime (OR 0.99, 95% CI 0.98 to 1.00, p=0.04) did not differentiate between clinically normal and clinically abnormal prostheses but was useful in discriminating obstructed from normal and regurgitant prostheses. In conclusion, cut-off values for TTE-derived Doppler parameters of MVP function were specific to prosthesis type and carried high sensitivity and specificity for identifying prosthetic valve dysfunction. MVP ratio was the best predictor of prosthetic dysfunction and, combined with E velocity, provided a useful parameter for determining likelihood of dysfunction and need for further assessment. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.
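As an illustration of deriving optimal cut-off values from a receiver operating characteristic analysis, the following Python sketch uses Youden's J statistic on hypothetical data; the vectors shown are invented for the example, not study data, and the actual cut-off criterion used by the authors may differ.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: 1 = clinically abnormal prosthesis, 0 = normal,
# scored by a Doppler parameter such as the MVP ratio.
y = np.array([0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1])
mvp_ratio = np.array([1.8, 2.0, 2.1, 2.3, 2.4, 2.2,
                      2.9, 3.1, 1.9, 2.8, 3.4, 2.6])

fpr, tpr, thresholds = roc_curve(y, mvp_ratio)
j = tpr - fpr                      # Youden's J statistic at each threshold
best = int(np.argmax(j))
cutoff = thresholds[best]          # candidate cut-off for this prosthesis type
auc = roc_auc_score(y, mvp_ratio)  # overall discrimination
```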
Building fast well-balanced two-stage numerical schemes for a model of two-phase flows
NASA Astrophysics Data System (ADS)
Thanh, Mai Duc
2014-06-01
We present a set of well-balanced two-stage schemes for an isentropic model of two-phase flows arising from the modeling of deflagration-to-detonation transition in granular materials. The first stage absorbs the source term in nonconservative form into the equilibria. In the second stage, these equilibria are composed into a numerical flux formed by a convex combination of the numerical flux of a stable Lax-Friedrichs-type scheme and that of a higher-order Richtmyer-type scheme. Numerical schemes constructed in this way are expected to have an interesting property: they are fast and stable. Tests show that the method works up to the parameter value CFL, so any value of the parameter between zero and CFL is expected to work as well. All the schemes in this family are shown to capture stationary waves and preserve the positivity of the volume fractions. The special parameter values 0, 1/2, 1/(1+CFL), and CFL in this family define the Lax-Friedrichs-type, FAST1, FAST2, and FAST3 schemes, respectively. These schemes are shown to give desirable accuracy. The errors and the CPU times of these schemes and of a Roe-type scheme are calculated and compared. The constructed schemes are shown to be well-balanced and faster than the Roe-type scheme.
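To make the convex-combination flux construction concrete, here is a minimal Python sketch applied to the scalar Burgers equation as a stand-in for the two-phase model; the blending parameter values mirror those quoted in the abstract, but the rest (grid, data, function names) is illustrative only.

```python
import numpy as np

def f(u):                          # flux of the inviscid Burgers equation
    return 0.5 * u * u

def lax_friedrichs(ul, ur, dx, dt):
    return 0.5 * (f(ul) + f(ur)) - 0.5 * (dx / dt) * (ur - ul)

def richtmyer(ul, ur, dx, dt):
    u_half = 0.5 * (ul + ur) - 0.5 * (dt / dx) * (f(ur) - f(ul))
    return f(u_half)

def blended_flux(ul, ur, dx, dt, theta):
    """Convex combination of a stable LF-type flux and a higher-order
    Richtmyer-type flux; theta = 0 recovers Lax-Friedrichs."""
    return ((1.0 - theta) * lax_friedrichs(ul, ur, dx, dt)
            + theta * richtmyer(ul, ur, dx, dt))

# One conservative update step on a periodic grid:
nx, cfl = 200, 0.9
x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
u = np.sin(x)
dx = x[1] - x[0]
dt = cfl * dx / np.max(np.abs(u))
flux = blended_flux(u, np.roll(u, -1), dx, dt,
                    theta=cfl / (1.0 + cfl))   # the "FAST2"-like choice
u_new = u - (dt / dx) * (flux - np.roll(flux, 1))
```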
Veeraselvam, M.; Sridhar, R.; Perumal, P.; Jayathangaraj, M. G.
2014-01-01
The present study was conducted to define the physiological responses of captive sloth bears immobilized with ketamine hydrochloride and xylazine hydrochloride and to determine and compare the values of hematological and serum biochemical parameters between sexes. A total of 15 sloth bears were immobilized using a combination of ketamine hydrochloride and xylazine hydrochloride at dose rates of 5.0 mg per kg body weight and 2.0 mg per kg body weight, respectively. The combination of these drugs was found satisfactory for the chemical immobilization of captive sloth bears. There were no significant differences between sexes in induction time, recovery time, or physiological parameters such as heart rate, respiratory rate, and rectal temperature. Health-related parameters comprising hematological values like packed cell volume (PCV), hemoglobin (Hb), red blood cell count (RBC), erythrocyte indices, and so forth, and biochemical values like total protein, blood urea nitrogen (BUN), creatinine, alanine aminotransferase (ALT), aspartate aminotransferase (AST), and so forth, were estimated in 11 (5 males and 6 females) apparently healthy bears. Comparison between sexes revealed significant differences in PCV (P < 0.05) and mean corpuscular hemoglobin concentration (MCHC) (P < 0.05). The study might help to evaluate health profiles of sloth bears for an appropriate line of treatment. PMID:24876990
NASA Astrophysics Data System (ADS)
Wang, Ying-Mei; Wang, Wen-Xiu; Chen, He-Sheng; Zhang, Kai; Jiang, Yu-Mei; Wang, Xu-Ming; He, Da-Ren
2002-03-01
A system concatenated from two area-preserving maps may be addressed as "quasi-dissipative," since such a system can display dissipative behaviors [1]. This is due to noninvertibility induced by discontinuity in the system function. In such a system, the image set of the discontinuous border forms a chaotic quasi-attractor. At a critical control parameter value the quasi-attractor suddenly vanishes. The chaotic iterations escape, via a leaking hole, to an emergent period-8 elliptic island. The hole is the intersection of the chaotic quasi-attractor and the period-8 island. The chaotic quasi-attractor thus changes to chaotic quasi-transients. The scaling behavior that drives the quasi-crisis has been investigated numerically. It reads:
Static penetration resistance of soils
NASA Technical Reports Server (NTRS)
Durgunoglu, H. T.; Mitchell, J. K.
1973-01-01
Model test results were used to define the failure mechanism associated with the static penetration resistance of cohesionless and low-cohesion soils. Knowledge of this mechanism has permitted the development of a new analytical method for calculating the ultimate penetration resistance which explicitly accounts for penetrometer base apex angle and roughness, soil friction angle, and the ratio of penetration depth to base width. Curves relating the bearing capacity factors to the soil friction angle are presented for failure in general shear. Strength parameters and penetrometer interaction properties of a fine sand were determined and used as the basis for prediction of the penetration resistance encountered by wedge, cone, and flat-ended penetrometers of different surface roughness using the proposed analytical method. Because of the close agreement between predicted values and values measured in laboratory tests, it appears possible to deduce in-situ soil strength parameters and their variation with depth from the results of static penetration tests.
Illusion optics: Optically transforming the nature and the location of electromagnetic emissions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yi, Jianjia; Tichit, Paul-Henri; Burokur, Shah Nawaz, E-mail: shah-nawaz.burokur@u-psud.fr
Complex electromagnetic structures can be designed by using the powerful concept of transformation electromagnetics. In this study, we define a spatial coordinate transformation that shows the possibility of designing a device capable of producing an illusion on an antenna radiation pattern. Indeed, by compressing the space containing a radiating element, we show that it is possible to change the radiation pattern and to make the radiation location appear outside that space. Both continuous and discretized models with calculated electromagnetic parameter values are presented. A reduction of the electromagnetic material parameters is also proposed for a possible physical fabrication of the device, with achievable values of permittivity and permeability that can be obtained from existing well-known metamaterials. Following that, the design of the proposed antenna using a layered metamaterial is presented. Full wave numerical simulations using the Finite Element Method are performed to demonstrate the performance of such a device.
A modified Galam’s model for word-of-mouth information exchange
NASA Astrophysics Data System (ADS)
Ellero, Andrea; Fasano, Giovanni; Sorato, Annamaria
2009-09-01
In this paper we analyze the stochastic model proposed by Galam in [S. Galam, Modelling rumors: The no plane Pentagon French hoax case, Physica A 320 (2003) 571-580] for information spreading in a ‘word-of-mouth’ process among agents, based on a majority rule. Using the communication rules among agents defined in the above reference, we first perform simulations of the ‘word-of-mouth’ process and compare the results with the theoretical values predicted by Galam’s model. Some dissimilarities arise, in particular when a small number of agents is considered. We find motivations for these dissimilarities and suggest some enhancements by introducing a new parameter-dependent model. We propose a modified Galam scheme that is asymptotically coincident with the original model in the above reference. Furthermore, for relatively small values of the parameter, we provide numerical experience showing that the modified model often outperforms the original one.
Noncontact detection of dry eye using a custom designed infrared thermal image system
NASA Astrophysics Data System (ADS)
Su, Tai Yuan; Hwa, Chen Kerh; Liu, Po Hsuan; Wu, Ming Hong; Chang, David O.; Su, Po Fang; Chang, Shu Wen; Chiang, Huihua Kenny
2011-04-01
Dry eye syndrome is a common irritating eye disease. Current clinical diagnostic methods are invasive and uncomfortable for patients. This study developed a custom designed noncontact infrared (IR) thermal image system to measure the spatial and temporal variation of the ocular surface temperature over a 6-second eye-open period. This research defined two parameters, the temperature difference value and the compactness value, to represent the temperature change and the irregularity of the temperature distribution on the tear film. Using these two parameters, this study achieved discrimination between the dry eye and the normal eye groups: the sensitivity is 0.84, the specificity is 0.83, and the receiver operating characteristic area is 0.87. The results suggest that the custom designed IR thermal image system may be used as an effective tool for noncontact detection of dry eye.
Noncontact detection of dry eye using a custom designed IR thermal image system
NASA Astrophysics Data System (ADS)
Su, Tai Yuan; Chen, Kerh Hwa; Liu, Po Hsuan; Wu, Ming Hong; Chang, David O.; Chiang, Huihua
2011-03-01
Dry eye syndrome is a common irritating eye disease. Current clinical diagnostic methods are invasive and uncomfortable for patients. A custom designed noncontact infrared (IR) thermal image system was developed to measure the spatial and temporal variation of the ocular surface temperature over a 6-second eye-opening period. We defined two parameters, the temperature difference value and the compactness value, to represent the degree of the temperature change and the irregularity of the temperature distribution on the tear film. Using these two parameters, this study achieved a linear discrimination between the dry eye and normal eye groups: the sensitivity is 0.9, the specificity is 0.86 and the receiver operating characteristic (ROC) area is 0.91. The result suggests that the custom designed IR thermal image system may be used as an effective tool for noncontact detection of dry eye.
Lateral position detection and control for friction stir systems
Fleming, Paul; Lammlein, David H.; Cook, George E.; Wilkes, Don Mitchell; Strauss, Alvin M.; Delapp, David R.; Hartman, Daniel A.
2012-06-05
An apparatus and computer program are disclosed for processing at least one workpiece using a rotary tool with a rotating member for contacting and processing the workpiece. The methods include oscillating the rotary tool laterally with respect to a selected propagation path for the rotating member with respect to the workpiece to define an oscillation path for the rotating member. The methods further include obtaining force signals or parameters related to the force experienced by the rotary tool at least while the rotating member is disposed at the extremes of the oscillation. The force signals or parameters associated with the extremes can then be analyzed to determine a lateral position of the selected path with respect to a target path, and a lateral offset value can be determined based on the lateral position. The lateral distance between the selected path and the target path can be decreased based on the lateral offset value.
Lateral position detection and control for friction stir systems
Fleming, Paul [Boulder, CO; Lammlein, David H [Houston, TX; Cook, George E [Brentwood, TN; Wilkes, Don Mitchell [Nashville, TN; Strauss, Alvin M [Nashville, TN; Delapp, David R [Ashland City, TN; Hartman, Daniel A [Fairhope, AL
2011-11-08
Friction stir methods are disclosed for processing at least one workpiece using a rotary tool with a rotating member for contacting and processing the workpiece. The methods include oscillating the rotary tool laterally with respect to a selected propagation path for the rotating member with respect to the workpiece to define an oscillation path for the rotating member. The methods further include obtaining force signals or parameters related to the force experienced by the rotary tool at least while the rotating member is disposed at the extremes of the oscillation. The force signals or parameters associated with the extremes can then be analyzed to determine a lateral position of the selected path with respect to a target path, and a lateral offset value can be determined based on the lateral position. The lateral distance between the selected path and the target path can be decreased based on the lateral offset value.
Vladymyrov, O A; Tofan, N I
2003-01-01
We investigated the use of L-arginine/NO system indices in evaluating the effectiveness of sanatorium-resort treatment of pregnant women with cardiovascular disorders. The dynamics of the twenty-four-hour rhythm of L-arginine and of total nitrite and nitrate concentrations in saliva were analyzed in 58 pregnant women, of whom 20 suffered from metabolic cardiomyopathy, 23 from neurocirculatory dystonia, and 15 were healthy. The examination revealed considerable changes in the values of biological rhythm parameters: rhythm duration, average twenty-four-hour range of activity, the difference between the daily maximum and the daily average, and the periods of maximum and minimum activity. Analysis of the trends of the twenty-four-hour rhythms over the course of sanatorium-resort treatment will make it possible to define characteristic features of the recovery of these parameter values.
NASA Technical Reports Server (NTRS)
Kim, Won S.; Tendick, Frank; Stark, Lawrence
1989-01-01
A teleoperation simulator was constructed with a vector display system, joysticks, and a simulated cylindrical manipulator, in order to quantitatively evaluate various display conditions. The first of two experiments investigated the effects of perspective parameter variations on human operators' pick-and-place performance using a monoscopic perspective display. The second experiment involved visual enhancements of the monoscopic perspective display, by adding a grid and reference lines, in comparison with visual enhancements of a stereoscopic display. Results indicate that stereoscopy generally permits superior pick-and-place performance, but that monoscopy nevertheless allows equivalent performance when the display is defined with appropriate perspective parameter values and adequate visual enhancements.
Optimization of seismic isolation systems via harmony search
NASA Astrophysics Data System (ADS)
Melih Nigdeli, Sinan; Bekdaş, Gebrail; Alhan, Cenk
2014-11-01
In this article, the optimization of isolation system parameters via the harmony search (HS) optimization method is proposed for seismically isolated buildings subjected to both near-fault and far-fault earthquakes. To obtain optimum values of isolation system parameters, an optimization program was developed in Matlab/Simulink employing the HS algorithm. The objective was to obtain a set of isolation system parameters within a defined range that minimizes the acceleration response of a seismically isolated structure subjected to various earthquakes without exceeding a peak isolation system displacement limit. Several cases were investigated for different isolation system damping ratios and peak displacement limitations of seismic isolation devices. Time history analyses were repeated for the neighbouring parameters of optimum values and the results proved that the parameters determined via HS were true optima. The performance of the optimum isolation system was tested under a second set of earthquakes that was different from the first set used in the optimization process. The proposed optimization approach is applicable to linear isolation systems. Isolation systems composed of isolation elements that are inherently nonlinear are the subject of a future study. Investigation of the optimum isolation system parameters has been considered in parametric studies. However, obtaining the best performance of a seismic isolation system requires a true optimization by taking the possibility of both near-fault and far-fault earthquakes into account. HS optimization is proposed here as a viable solution to this problem.
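The abstract relies on the standard harmony search loop (harmony memory, memory consideration, pitch adjustment, replacement of the worst harmony). A minimal Python sketch of that loop follows; the objective function, bounds, and HS constants here are illustrative stand-ins, not the authors' Matlab/Simulink implementation.

```python
import numpy as np

def harmony_search(obj, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=5000, seed=0):
    """Minimize obj(x) over box bounds with a basic harmony search."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    d = len(lo)
    hm = rng.uniform(lo, hi, size=(hms, d))          # harmony memory
    cost = np.array([obj(x) for x in hm])
    for _ in range(iters):
        use_memory = rng.random(d) < hmcr            # memory consideration
        picked = hm[rng.integers(0, hms, size=d), np.arange(d)]
        new = np.where(use_memory, picked, rng.uniform(lo, hi))
        adjust = use_memory & (rng.random(d) < par)  # pitch adjustment
        new = np.clip(new + adjust * bw * (hi - lo) * rng.uniform(-1.0, 1.0, d),
                      lo, hi)
        worst = int(np.argmax(cost))
        c = obj(new)
        if c < cost[worst]:                          # replace worst harmony
            hm[worst], cost[worst] = new, c
    best = int(np.argmin(cost))
    return hm[best], cost[best]

# Stand-in objective: peak superstructure acceleration as a function of
# isolation period (s) and damping ratio; a real study would evaluate
# time-history analyses here and penalize excess isolator displacement.
obj = lambda x: (x[0] - 3.0) ** 2 + 10.0 * (x[1] - 0.15) ** 2
best_params, best_cost = harmony_search(obj, bounds=[(2.0, 5.0), (0.05, 0.30)])
```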
Improving the Fit of a Land-Surface Model to Data Using its Adjoint
NASA Astrophysics Data System (ADS)
Raoult, Nina; Jupp, Tim; Cox, Peter; Luke, Catherine
2016-04-01
Land-surface models (LSMs) are crucial components of the Earth System Models (ESMs) which are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. In this study, JULES is automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. We present an introduction to the adJULES system and demonstrate its ability to improve the model-data fit using eddy covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the 5 Plant Functional Types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES over 90% of the FLUXNET sites used in the study. These reductions in error are shown and compared to the reductions obtained from site-specific optimisations. Finally, we show that calculation of the 2nd derivative of JULES allows us to produce posterior probability density functions of the parameters and to show how knowledge of parameter values is constrained by observations.
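adJULES obtains an exact gradient from automatic differentiation; as a rough illustration of gradient-based calibration against observations, here is a Python sketch with a toy model and a finite-difference gradient standing in for the adjoint. All names, the model form, and the numbers are hypothetical.

```python
import numpy as np

def misfit(params, model, obs):
    """Sum-of-squares mismatch between model output and observations."""
    return np.sum((model(params) - obs) ** 2)

def fd_gradient(fun, p, eps=1e-6):
    """Central finite-difference gradient; adJULES replaces this with the
    exact adjoint gradient produced by automatic differentiation."""
    g = np.zeros_like(p)
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = eps
        g[i] = (fun(p + dp) - fun(p - dp)) / (2.0 * eps)
    return g

# Toy "land-surface model": a GPP-like curve from two PFT parameters.
t = np.linspace(0.0, 1.0, 50)
model = lambda p: p[0] * np.exp(-p[1] * t)
obs = 2.0 * np.exp(-0.5 * t)           # synthetic "observations"

p = np.array([1.0, 1.0])               # initial parameter guess
for _ in range(500):                   # plain gradient descent
    p -= 0.01 * fd_gradient(lambda q: misfit(q, model, obs), p)
# p approaches the generating values (2.0, 0.5)
```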
Indexing of exoplanets in search for potential habitability: application to Mars-like worlds
NASA Astrophysics Data System (ADS)
Kashyap Jagadeesh, Madhu; Gudennavar, Shivappa B.; Doshi, Urmi; Safonova, Margarita
2017-08-01
Study of exoplanets is one of the main goals of present research in planetary sciences and astrobiology. Analysis of the huge volume of planetary data from space missions such as CoRoT and Kepler is directed ultimately at finding a planet similar to Earth, the Earth's twin, and at answering the question of potential exo-habitability. The Earth Similarity Index (ESI) is a first step in this quest, ranging from 1 (Earth) to 0 (totally dissimilar to Earth). It was defined for four physical parameters of a planet: radius, density, escape velocity and surface temperature. The ESI is further sub-divided into an interior ESI (geometrical mean of radius and density) and a surface ESI (geometrical mean of escape velocity and surface temperature). The challenge here is to determine which exoplanet parameters are important in finding this similarity, and how exactly the individual parameters entering the interior ESI and surface ESI contribute to the global ESI. Since the surface temperature entering the surface ESI is a non-observable quantity, it is difficult to determine its value. Using the known data for Solar System objects, we established a calibration relation between surface and equilibrium temperatures to devise an effective way to estimate the surface temperature of exoplanets. The ESI is a first step in determining potential exo-habitability that may not be very similar to terrestrial life. A new approach, called the Mars Similarity Index (MSI), is introduced to identify planets that may be habitable to extreme forms of life. The MSI is defined in the range between 1 (present Mars) and 0 (dissimilar to present Mars) and uses the same physical parameters as the ESI. We are interested in Mars-like planets in the search for planets that may host extreme life forms, such as the ones living in extreme environments on Earth; for example, methane on Mars may be a product of the metabolism of methane-specific extremophiles.
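A sketch of the ESI computation described above, in Python. The per-parameter similarity and weighted geometric mean follow the definition quoted in the ESI literature; the weight exponents and the Mars input values below are commonly quoted figures that should be treated as assumptions, and published ESI values for Mars vary with the adopted surface temperature.

```python
import numpy as np

# Earth reference values: radius, bulk density, escape velocity (Earth units)
# and mean surface temperature (K).
EARTH = np.array([1.0, 1.0, 1.0, 288.0])
# Weight exponents as commonly quoted for the ESI (assumed here): radius,
# density, escape velocity, surface temperature.
WEIGHTS = np.array([0.57, 1.07, 0.70, 5.58])

def esi(planet, ref=EARTH, w=WEIGHTS):
    """Earth Similarity Index: weighted geometric mean of the per-parameter
    similarities (1 - |x - x0| / (x + x0))."""
    sim = 1.0 - np.abs(planet - ref) / (planet + ref)
    return float(np.prod(sim ** (w / len(planet))))

# Mars, roughly: 0.53 R_E, 0.71 rho_E, 0.45 v_esc_E, T_s ~ 210 K.
print(esi(np.array([0.53, 0.71, 0.45, 210.0])))  # ~0.65 with these inputs
```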
Shifted one-parameter supersymmetric family of quartic asymmetric double-well potentials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosu, Haret C., E-mail: hcr@ipicyt.edu.mx; Mancas, Stefan C., E-mail: mancass@erau.edu; Chen, Pisin, E-mail: pisinchen@phys.ntu.edu.tw
2014-10-15
Extending our previous work (Rosu, 2014), we define supersymmetric partner potentials through a particular Riccati solution of the form F(x) = (x−c)^2 − 1, where c is a real shift parameter, and work out the quartic double-well family of one-parameter isospectral potentials obtained by using the corresponding general Riccati solution. For these parametric double-well potentials, we study how the localization properties of the two wells depend on the parameter of the potentials for various values of the shifting parameter. We also consider the supersymmetric parametric family of the first double-well potential in the Razavy chain of double-well potentials, corresponding to F(x) = (1/2) sinh 2x − 2(1+√2) sinh 2x / ((1+√2) cosh 2x + 1), both unshifted and shifted, to test and compare the localization properties. - Highlights: • Quartic one-parameter DWs with an additional shift parameter are introduced. • The anomalous localization feature of their zero modes is confirmed at different shifts. • Razavy one-parameter DWs are also introduced and shown not to have this feature.
NASA Astrophysics Data System (ADS)
Lim, Hongki; Dewaraja, Yuni K.; Fessler, Jeffrey A.
2018-02-01
Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is natural physically, but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low probability positron production and high random coincidence fraction. This paper investigates a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. Unlike the previously reported NEG-ML approach that modifies the Poisson log-likelihood to allow negative values, the new formulation retains the classical Poisson statistical model. To relax the non-negativity constraint embedded in the standard methods for PET reconstruction, we used an alternating direction method of multipliers (ADMM). Because choice of ADMM parameters can greatly influence convergence rate, we applied an automatic parameter selection method to improve the convergence speed. We investigated the methods using lung to liver slices of XCAT phantom. We simulated low true coincidence count-rates with high random fractions corresponding to the typical values from patient imaging in Y-90 microsphere radioembolization. We compared our new methods with standard reconstruction algorithms and NEG-ML and a regularized version thereof. Both our new method and NEG-ML allow more accurate quantification in all volumes of interest while yielding lower noise than the standard method. The performance of NEG-ML can degrade when its user-defined parameter is tuned poorly, while the proposed algorithm is robust to any count level without requiring parameter tuning.
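The paper applies an automatic ADMM parameter selection method; one widely used heuristic of this kind (not necessarily the one the authors used) is residual balancing from Boyd et al.'s ADMM monograph, sketched here in Python:

```python
def update_penalty(rho, r_norm, s_norm, mu=10.0, tau=2.0):
    """Residual-balancing heuristic for the ADMM penalty parameter
    (Boyd et al., 2011): keep the primal (r) and dual (s) residual
    norms within a factor mu of each other."""
    if r_norm > mu * s_norm:
        return rho * tau   # primal residual too large: increase penalty
    if s_norm > mu * r_norm:
        return rho / tau   # dual residual too large: decrease penalty
    return rho
```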
A Morphological Analysis of Gamma-Ray Burst Early-optical Afterglows
NASA Astrophysics Data System (ADS)
Gao, He; Wang, Xiang-Gao; Mészáros, Peter; Zhang, Bing
2015-09-01
Within the framework of the external shock model of gamma-ray burst (GRB) afterglows, we perform a morphological analysis of the early-optical light curves to directly constrain model parameters. We define four morphological types, i.e., the reverse shock-dominated cases with/without the emergence of the forward shock peak (Type I/Type II), and the forward shock-dominated cases without/with ν_m crossing the band (Type III/IV). We systematically investigate all of the Swift GRBs that have optical detection earlier than 500 s and find 3/63 Type I bursts (4.8%), 12/63 Type II bursts (19.0%), 30/63 Type III bursts (47.6%), 8/63 Type IV bursts (12.7%), and 10/63 Type III/IV bursts (15.9%). We perform Monte Carlo simulations to constrain model parameters in order to reproduce the observations. We find that the favored value of the magnetic equipartition parameter in the forward shock (ε_B^f) ranges from 10^-6 to 10^-2, and the reverse-to-forward ratio of ε_B (R_B) is about 100. The preferred electron equipartition parameter ε_e^{r,f} value is 0.01, which is smaller than the commonly assumed value, e.g., 0.1. This could mitigate the so-called "efficiency problem" for the internal shock model, if ε_e during the prompt emission phase (in the internal shocks) is large (say, ~0.1). The preferred R_B value is in agreement with the results of previous works that indicate a moderately magnetized baryonic jet for GRBs.
Genetic parameters for milk production traits and breeding goals for Gir dairy cattle in Brazil.
Prata, M A; Faro, L E; Moreira, H L; Verneque, R S; Vercesi Filho, A E; Peixoto, M G C D; Cardoso, V L
2015-10-19
To implement an animal breeding program, it is important to define the production circumstances of the animals of interest to determine which traits of economic interest will be selected for the breeding goal. The present study defined breeding goals and proposed selection indices for milk production and quality traits of Gir dairy cattle. First, a bioeconomic model was developed to calculate economic values. The genetic and phenotypic parameters were estimated based on records from 22,468 first-lactation Gir dairy cows and their crosses for which calving occurred between 1970 and 2011. Statistical analyses were carried out for the animal model, with multitrait analyses using the restricted maximum likelihood method. Two situations were created in the present study to define the breeding goals: 1) including only milk yield in the breeding goal (HGL1) and 2) including fat and protein in addition to the milk yield (HGL2). The heritability estimates for milk, protein, and fat production were 0.33 ± 0.02, 0.26 ± 0.02, and 0.24 ± 0.02, respectively. All phenotypic and genetic correlations were highly positive. The economic values for milk, fat, and protein were US$0.18, US$0.27, and US$7.04, respectively. The expected economic responses for HGL2 and for HGL1 were US$126.30 and US$79.82, respectively. These results indicate that milk component traits should be included in a selection index to rank animals evaluated in the National Gir Dairy Breeding Program developed in Brazil.
Conceptual design of high speed supersonic aircraft: A brief review on SR-71 (Blackbird) aircraft
NASA Astrophysics Data System (ADS)
Xue, Hui; Khawaja, H.; Moatamedi, M.
2014-12-01
The paper presents the conceptual design of a high-speed supersonic aircraft. The study focuses on the SR-71 (Blackbird) aircraft. The input to the conceptual design is a mission profile, i.e., a flight profile of the aircraft defined by the customer. This paper gives the SR-71 mission profile specified by the US Air Force. The mission profile helps in defining the attributes of the aircraft, such as the wing profile, vertical tail configuration, and propulsion system. The wing profile and vertical tail configuration have a direct impact on the lift, drag, stability, performance and maneuverability of the aircraft, while the propulsion system directly influences its performance. By combining the wing profile and the propulsion system, two important parameters, known as wing loading and thrust-to-weight ratio, can be calculated. In this work, the conceptual design procedure given by D. P. Raymer (AIAA Educational Series) is applied to calculate the wing loading and thrust-to-weight ratio. The calculated values are compared against the actual values of the SR-71 aircraft. Results indicate that the values are in agreement with the trend of developments in aviation.
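As an illustration of the two parameters named above, the following Python snippet computes them from approximate, publicly quoted SR-71 figures; the numbers are assumptions for illustration and vary between sources.

```python
# Approximate public figures (assumptions): takeoff weight ~152,000 lb,
# wing area ~1,800 ft^2, two J58 engines at ~34,000 lbf each in afterburner.
weight_lb = 152_000.0
wing_area_ft2 = 1_800.0
thrust_lbf = 2 * 34_000.0

wing_loading = weight_lb / wing_area_ft2    # lb/ft^2, ~84 with these inputs
thrust_to_weight = thrust_lbf / weight_lb   # dimensionless, ~0.45

print(f"W/S = {wing_loading:.1f} lb/ft^2, T/W = {thrust_to_weight:.2f}")
```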
Li, Yu; Chen, Dong-Ning; Cui, Jing; Xin, Zhong; Yang, Guang-Ran; Niu, Ming-Jia; Yang, Jin-Kui
2016-11-06
Subclinical hypothyroidism, commonly caused by Hashimoto thyroiditis (HT), is a risk factor for cardiovascular diseases. This disorder is defined merely as having elevated serum thyroid stimulating hormone (TSH) levels. However, the upper limit of the reference range for TSH has been debated recently. This study aimed to determine the cutoff value for the upper normal limit of TSH in a cohort, using the prevalence of Hashimoto thyroiditis as a "gold" calibration standard. The research population consisted of 2856 medical staff members who took part in annual health examinations. Serum free triiodothyronine (FT3), free thyroxine (FT4), TSH, thyroid peroxidase antibody (TPAb), thyroglobulin antibody (TGAb) and other biochemistry parameters were tested. Meanwhile, a thyroid ultrasound examination was performed. The diagnosis of HT was based on the presence of thyroid antibodies (TPAb and TGAb) and abnormalities on thyroid ultrasound examination. We used two different methods to estimate the cutoff point of TSH based on the prevalence of HT. Joinpoint regression showed that the prevalence of HT increased significantly at the ninth decile of TSH values, corresponding to 2.9 mU/L. The ROC curve showed a TSH cutoff value of 2.6 mU/L with maximized sensitivity and specificity in identifying HT. Using the newly defined cutoff value of TSH can detect patients with hyperlipidemia more efficiently, which may indicate that our approach to defining the upper limit of TSH makes more sense from the clinical point of view. The significant increase in the prevalence of HT among individuals with a TSH of 2.6-2.9 mU/L made it possible to determine the cutoff value for the normal upper limit of TSH.
Composing chaotic music from the letter m
NASA Astrophysics Data System (ADS)
Sotiropoulos, Anastasios D.
Chaotic music is composed from a proposed iterative map depicting the letter m, relating the pitch, duration and loudness of successive steps. Each of the two curves of the letter m is based on the classical logistic map. Thus, the generating map is x_{n+1} = r x_n (1/2 − x_n) for x_n between 0 and 1/2, defining the first curve, and x_{n+1} = r (x_n − 1/2)(1 − x_n) for x_n between 1/2 and 1, representing the second curve. The parameter r, which determines the height(s) of the letter m, varies from 2 to 16, the latter value ensuring fully developed chaotic solutions for the whole letter m; r = 8 yields fully chaotic solutions only for its first curve. The m-model yields fixed points, bifurcation points and chaotic regions for each separate curve, as well as values of the parameter r greater than 8 which produce inter-fixed points, inter-bifurcation points and inter-chaotic regions from the interplay of the two curves. Based on this, music is composed by mapping the m-recurrence model solutions onto actual notes. The resulting musical score strongly depends on the sequence of notes chosen by the composer to define the musical range corresponding to the range of the chaotic mathematical solutions x from 0 to 1. Here, two musical ranges are used: one is the middle chromatic scale and the other is the seven-octave range. At the composer's will and, for aesthetics, within the same composition, notes can be the outcome of different values of r and/or shifted to any octave. Compositions with endings of non-repeating note patterns result from values of r in the m-model that do not produce bifurcations. Scores of chaotic music composed from the m-model and the classical logistic model are presented.
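The m-map is simple to iterate; a minimal Python sketch follows, mapping the solutions onto a middle chromatic scale as one of the two ranges mentioned above. The starting value, note naming, and sequence length are arbitrary choices for illustration.

```python
def m_map(x, r):
    """One iteration of the letter-m map built from two logistic-type arcs."""
    return r * x * (0.5 - x) if x < 0.5 else r * (x - 0.5) * (1.0 - x)

# Map x in [0, 1] onto the 12 notes of a middle chromatic scale.
NOTES = ["C4", "C#4", "D4", "D#4", "E4", "F4",
         "F#4", "G4", "G#4", "A4", "A#4", "B4"]

r, x = 16.0, 0.123   # r = 16 gives fully developed chaos on both arcs
score = []
for _ in range(32):
    x = m_map(x, r)                       # stays in [0, 1] for r <= 16
    score.append(NOTES[min(int(x * 12), 11)])
print(" ".join(score))
```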
Hay, L.E.; McCabe, G.J.; Clark, M.P.; Risley, J.C.
2009-01-01
The accuracy of streamflow forecasts depends on the uncertainty associated with future weather and the accuracy of the hydrologic model that is used to produce the forecasts. We present a method for streamflow forecasting where hydrologic model parameters are selected based on the climate state. Parameter sets for a hydrologic model are conditioned on an atmospheric pressure index defined using mean November through February (NDJF) 700-hectoPascal geopotential heights over northwestern North America [Pressure Index from Geopotential heights (PIG)]. The hydrologic model is applied in the Sprague River basin (SRB), a snowmelt-dominated basin located in the Upper Klamath basin in Oregon. In the SRB, the majority of streamflow occurs during March through May (MAM). Water years (WYs) 1980-2004 were divided into three groups based on their respective PIG values (high, medium, and low PIG). Low (high) PIG years tend to have higher (lower) than average MAM streamflow. Four parameter sets were calibrated for the SRB, each using a different set of WYs. The initial set used WYs 1995-2004 and the remaining three used WYs defined as high-, medium-, and low-PIG years. Two sets of March, April, and May streamflow volume forecasts were made using Ensemble Streamflow Prediction (ESP). The first set of ESP simulations used the initial parameter set. Because the PIG is defined using NDJF pressure heights, forecasts starting in March can be made using the PIG parameter set that corresponds with the year being forecasted. The second set of ESP simulations used the parameter set associated with the given PIG year. Comparison of the ESP sets indicates that more accuracy and less variability in volume forecasts may be possible when the ESP is conditioned using the PIG. This is especially true during the high-PIG years (low-flow years). © 2009 American Water Resources Association.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buttler, D J
The Java Metadata Facility is introduced by Java Specification Request (JSR) 175 [1] and incorporated into the Java language specification [2] in version 1.5 of the language. The specification allows annotations on Java program elements: classes, interfaces, methods, and fields. Annotations give programmers a uniform way to add metadata to program elements that can be used by code checkers, code generators, or other compile-time or runtime components. Annotations are defined by annotation types. These are defined in the same way as interfaces, but with the symbol @ preceding the interface keyword. There are additional restrictions on defining annotation types: (1) They cannot be generic; (2) They cannot extend other annotation types or interfaces; (3) Methods cannot have any parameters; (4) Methods cannot have type parameters; (5) Methods cannot throw exceptions; and (6) The return type of methods of an annotation type must be a primitive, a String, a Class, an annotation type, or an array, where the type of the array is restricted to one of the four allowed types. See [2] for additional restrictions and syntax. The methods of an annotation type define the elements that may be used to parameterize the annotation in code. Annotation types may have default values for any of their elements. For example, an annotation that specifies a defect report could initialize an element defining the defect outcome to "submitted". Annotations may also have zero elements. This could be used to indicate serializability for a class (as opposed to the current Serializable interface).
Morphology parameters for intracranial aneurysm rupture risk assessment.
Dhar, Sujan; Tremmel, Markus; Mocco, J; Kim, Minsuok; Yamamoto, Junichi; Siddiqui, Adnan H; Hopkins, L Nelson; Meng, Hui
2008-08-01
The aim of this study is to identify image-based morphological parameters that correlate with human intracranial aneurysm (IA) rupture. For 45 patients with terminal or sidewall saccular IAs (25 unruptured, 20 ruptured), three-dimensional geometries were evaluated for a range of morphological parameters. In addition to five previously studied parameters (aspect ratio, aneurysm size, ellipticity index, nonsphericity index, and undulation index), we defined three novel parameters incorporating the parent vessel geometry (vessel angle, aneurysm [inclination] angle, and [aneurysm-to-vessel] size ratio) and explored their correlation with aneurysm rupture. Parameters were analyzed with a two-tailed independent Student's t test for significance; significant parameters (P < 0.05) were further examined by multivariate logistic regression analysis. Additionally, receiver operating characteristic analyses were performed on each parameter. Statistically significant differences were found between mean values in ruptured and unruptured groups for size ratio, undulation index, nonsphericity index, ellipticity index, aneurysm angle, and aspect ratio. Logistic regression analysis further revealed that size ratio (odds ratio, 1.41; 95% confidence interval, 1.03-1.92) and undulation index (odds ratio, 1.51; 95% confidence interval, 1.08-2.11) had the strongest independent correlation with ruptured IA. From the receiver operating characteristic analysis, size ratio and aneurysm angle had the highest area under the curve values of 0.83 and 0.85, respectively. Size ratio and aneurysm angle are promising new morphological metrics for IA rupture risk assessment. Because these parameters account for vessel geometry, they may bridge the gap between morphological studies and more qualitative location-based studies.
NASA Astrophysics Data System (ADS)
Nosov, G. V.; Kuleshova, E. O.; Lefebvre, S.; Plyusnin, A. A.; Tokmashev, D. M.
2017-02-01
A technique is proposed for determining the parameters of the magnetic skin effect in a ferromagnetic plate for a specified pulse of magnetic field intensity on the plate surface. It is based on a frequency-domain method and can be applied to a pulse transformer, a dynamoelectric pulse generator, and a commutating inductor that contains a laminated core. With this technique, plate parameters such as the specific heat-loss energy, the average power of this energy, the plate temperature rise, the magnetic flux attenuation factor and the plate Q-factor can be calculated. These parameters depend on the steel type and on the amplitude, the rms value, the duration and the form of the magnetic field intensity pulse on the plate surface. The plate thickness is defined by the values of the flux attenuation factor and the plate Q-factor, which should be maximal. The reliability of the proposed technique rests on the standard use of frequency-domain methods for studying pulse transients under zero boundary conditions of the electric circuit, and on the conformity of the obtained results with the sinusoidal steady-state mode.
Extension of the energy-to-moment parameter Θ to intermediate and deep earthquakes
NASA Astrophysics Data System (ADS)
Saloor, Nooshin; Okal, Emile A.
2018-01-01
We extend to intermediate and deep earthquakes the slowness parameter Θ originally introduced by Newman and Okal (1998). Because of the increasing time lag with depth between the phases P, pP and sP, and of variations in the anelastic attenuation parameters t*, we define four depth bins featuring slightly different algorithms for the computation of Θ. We apply this methodology to a global dataset of 598 intermediate and deep earthquakes with moments greater than 10^25 dyn·cm. We find a slight increase with depth in the average values of Θ (from -4.81 between 80 and 135 km to -4.48 between 450 and 700 km), which however all have intersecting one-σ bands. With widths ranging from 0.26 to 0.31 logarithmic units, these are narrower than their counterpart for a reference dataset of 146 shallow earthquakes (σ = 0.55). Similarly, we find no correlation between values of Θ and focal geometry. These results point to stress conditions within the seismogenic zones inside the Wadati-Benioff slabs that are more homogeneous than those prevailing at the shallow contacts between tectonic plates.
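For reference, the slowness parameter has a one-line definition; the sketch below assumes the Newman and Okal (1998) form Θ = log10(E^E/M0), with radiated energy and seismic moment in consistent units, and the sample numbers are purely illustrative.

```python
import numpy as np

def theta(radiated_energy, seismic_moment):
    """Energy-to-moment slowness parameter (Newman & Okal, 1998):
    Theta = log10(E^E / M0), both quantities in the same units (e.g. N*m)."""
    return np.log10(radiated_energy / seismic_moment)

# Illustrative numbers only: a regular event sits near Theta ~ -4.9,
# while "slow" tsunami earthquakes fall closer to -6.
print(theta(3.2e13, 2.5e18))   # ~ -4.9
```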
Identifying Bearing Rotordynamic Coefficients Using an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Miller, Brad A.; Howard, Samuel A.
2008-01-01
An Extended Kalman Filter is developed to estimate the linearized direct and indirect stiffness and damping force coefficients for bearings in rotor dynamic applications from noisy measurements of the shaft displacement in response to imbalance and impact excitation. The bearing properties are modeled as stochastic random variables using a Gauss-Markov model. Noise terms are introduced into the system model to account for all of the estimation error, including modeling errors and uncertainties and the propagation of measurement errors into the parameter estimates. The system model contains two user-defined parameters that can be tuned to improve the filter's performance; these parameters correspond to the covariance of the system and measurement noise variables. The filter is also strongly influenced by the initial values of the states and the error covariance matrix. The filter is demonstrated using numerically simulated data for a rotor bearing system with two identical bearings, which reduces the number of unknown linear dynamic coefficients to eight. The filter estimates for the direct damping coefficients and all four stiffness coefficients correlated well with actual values, whereas the estimates for the cross-coupled damping coefficients were the least accurate.
Uncertainty Quantification in Multi-Scale Coronary Simulations Using Multi-resolution Expansion
NASA Astrophysics Data System (ADS)
Tran, Justin; Schiavazzi, Daniele; Ramachandra, Abhay; Kahn, Andrew; Marsden, Alison
2016-11-01
Computational simulations of coronary flow can provide non-invasive information on hemodynamics that can aid in surgical planning and research on disease propagation. In this study, patient-specific geometries of the aorta and coronary arteries are constructed from CT imaging data and finite element flow simulations are carried out using the open source software SimVascular. Lumped parameter networks (LPN), consisting of circuit representations of vascular hemodynamics and coronary physiology, are used as coupled boundary conditions for the solver. The outputs of these simulations depend on a set of clinically derived input parameters that define the geometry and boundary conditions; however, their values are subject to uncertainty. We quantify the effects of uncertainty from two sources: uncertainty in the material properties of the vessel wall and uncertainty in the lumped parameter models, whose values are estimated by assimilating patient-specific clinical and literature data. We use a generalized multi-resolution chaos approach to propagate the uncertainty. The advantages of this approach lie in its ability to support inputs sampled from arbitrary distributions and its built-in adaptivity that efficiently approximates stochastic responses characterized by steep gradients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakke, K., E-mail: kbakke@fisica.ufpb.br; Belich, H., E-mail: belichjr@gmail.com
2016-10-15
Based on the Standard Model Extension, we investigate relativistic quantum effects on a scalar particle in backgrounds of the Lorentz symmetry violation defined by a tensor field. We show that harmonic-type and linear-type confining potentials can stem from Lorentz symmetry breaking effects, and thus, relativistic bound state solutions can be achieved. We first analyse a possible scenario of the violation of the Lorentz symmetry that gives rise to a harmonic-type potential. In the following, we analyse another possible scenario of the breaking of the Lorentz symmetry that induces both harmonic-type and linear-type confining potentials. In this second case, we also show that not all values of the parameter associated with the intensity of the electric field are permitted in the search for polynomial solutions to the radial equation, where the possible values of this parameter are determined by the quantum numbers of the system and the parameters associated with the violation of the Lorentz symmetry.
The crack-inclusion interaction problem
NASA Technical Reports Server (NTRS)
Liu, X.-H.; Erdogan, F.
1986-01-01
The general plane elastostatic problem of interaction between a crack and an inclusion is considered. The Green's functions for a pair of dislocations and a pair of concentrated body forces are used to generate the crack and the inclusion. Integral equations are obtained for a line crack and an elastic line inclusion having an arbitrary relative orientation and size. The nature of stress singularity around the end points of rigid and elastic inclusions is described and three special cases of this intersection problem are studied. The problem is solved for an arbitrary uniform stress state away from the crack-inclusion region. The nonintersecting crack-inclusion problem is considered for various relative size, orientation, and stiffness parameters, and the stress intensity factors at the ends of the inclusion and the crack are calculated. For the crack-inclusion intersection case, special stress intensity factors are defined and are calculated for various values of the parameters defining the relative size and orientation of the crack and the inclusion and the stiffness of the inclusion.
NASA Astrophysics Data System (ADS)
Punov, Plamen; Milkov, Nikolay; Danel, Quentin; Perilhon, Christelle; Podevin, Pierre; Evtimov, Teodossi
2017-02-01
An optimization study of the Rankine cycle as a function of diesel engine operating mode is presented. The Rankine cycle is studied here as a waste heat recovery system that uses the engine exhaust gases as its heat source. The exhaust gas parameters (temperature, mass flow and composition) were defined by means of numerical simulation in the advanced simulation software AVL Boost. The engine simulation model had previously been validated, and the Vibe function parameters were defined as a function of engine load. The Rankine cycle output power and efficiency were estimated numerically by means of a simulation code written in Python(x,y). This code includes a discretized heat exchanger model and simplified models of the pump and the expander based on their isentropic efficiencies. The Rankine cycle simulation revealed the optimum values of working fluid mass flow and evaporation pressure for the given heat source. The optimal Rankine cycle performance was thus obtained over the engine operating map.
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and by analysis of glucose production levels as the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
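As a hedged sketch of this kind of model, the snippet below fits a two-parameter Weibull-type saccharification curve, y(t) = y_max·(1 - exp(-(t/λ)^n)), to a synthetic time course; the exact functional form used in the paper, and the data here, are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: fit a Weibull-type saccharification curve to synthetic data.
# lam ("characteristic time") summarizes overall system performance; n is a
# shape parameter. All numbers are invented for illustration.

def weibull_conversion(t, y_max, lam, n):
    return y_max * (1.0 - np.exp(-(t / lam) ** n))

t = np.array([2, 4, 8, 12, 24, 48, 72.0])    # hours
y = np.array([5, 9, 16, 21, 32, 41, 45.0])   # glucose yield, synthetic

popt, pcov = curve_fit(weibull_conversion, t, y, p0=[50.0, 20.0, 1.0])
y_max, lam, n = popt
print(f"y_max = {y_max:.1f}, characteristic time lam = {lam:.1f} h, n = {n:.2f}")
```

In this reading, a smaller fitted λ would indicate a faster-performing saccharification system, which is how the paper proposes using the parameter for performance assessment.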
Simulation-based sensitivity analysis for non-ignorably missing data.
Yin, Peng; Shi, Jian Q
2017-01-01
Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) depend on assumptions or parameters (input) about the missing data, i.e. the missing data mechanism; we call models subject to this kind of uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define some simple and interpretable statistical quantities to assess the sensitivity models and enable evidence-based analysis. In this paper we propose a novel approach that investigates the plausibility of each missing data mechanism assumption by comparing datasets simulated from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, selecting plausible values and rejecting unlikely ones, instead of considering all proposed values of the sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
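A hedged sketch of the core comparison: datasets are simulated under an MNAR mechanism indexed by a sensitivity parameter delta, and each candidate delta is scored by the average K-nearest-neighbour distance from the observed data to the simulated data, so that deltas with small discrepancy are kept as plausible. The normal outcome model and the logistic missingness mechanism below are our assumptions, not the paper's examples.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

def simulate_observed(delta, n=2000):
    """Draw outcomes, apply an MNAR dropout that depends on y, keep the observed."""
    y = rng.normal(0.0, 1.0, n)
    p_miss = 1.0 / (1.0 + np.exp(-(-1.0 + delta * y)))   # MNAR: depends on y
    return y[rng.uniform(size=n) > p_miss]

observed = simulate_observed(delta=1.0)                  # "truth": delta = 1

def knn_discrepancy(obs, sim, k=5):
    """Average distance from each observed point to its k nearest simulated points."""
    tree = cKDTree(sim[:, None])
    d, _ = tree.query(obs[:, None], k=k)
    return d.mean()

for delta in [0.0, 0.5, 1.0, 1.5, 2.0]:                  # candidate sensitivity values
    scores = [knn_discrepancy(observed, simulate_observed(delta)) for _ in range(20)]
    print(f"delta = {delta:3.1f}   mean kNN distance = {np.mean(scores):.4f}")
# Values of delta with small discrepancy would be kept as plausible; the rest rejected.
```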
Angular photogrammetric comparison of the soft-tissue facial profile of Kenyans and Chinese.
Wamalwa, Peter; Amisi, Stella Kabarika; Wang, Yunji; Chen, Song
2011-05-01
The purpose of this study was to determine the average angular dimensions that define the normal soft-tissue facial profiles of black Kenyans and Chinese and to compare them with each other and with values proposed for whites. Standardized facial profile photographs, taken in natural head position, of 177 black Kenyans and 156 Chinese with normal occlusion and well-balanced faces were analyzed for 12 angular parameters. Two-sample t-tests were used to determine sex and racial differences. Kenyan and Chinese averages were compared with proposed white values using 1-sample t-tests. Eight parameters in Kenyans and 7 in Chinese showed sex differences. All angles, except for facial convexity, nasal dorsum, and inferior facial height, differed between Kenyans and Chinese. Kenyan and Chinese averages for all parameters differed from the proposed white averages, except for facial convexity. Nasolabial and mentolabial angles showed large individual variability and racial differences. The study demonstrated many differences in the average angular measurements of the facial profiles of black Kenyans, Chinese, and white standards. Orthodontists, maxillofacial and plastic surgeons, and other clinicians working in the craniofacial region should bear these in mind when setting aesthetic treatment goals for patients of different races. Mean values from this study can be used for comparison with similar records of subjects of the same ethnicity.
The application of the pilot points in groundwater numerical inversion model
NASA Astrophysics Data System (ADS)
Hu, Bin; Teng, Yanguo; Cheng, Lirong
2015-04-01
Numerical inversion modeling has been widely applied in groundwater simulation. Compared to traditional forward modeling, inverse modeling offers more room for investigation. Conventional approaches are zonation and cell-by-cell inversion; the pilot point method lies between them. The zonation approach divides the model into several zones with only a few parameters to be inverted, but the resulting distribution is usually too simple and biases the simulation results. Cell-by-cell inversion can, in theory, recover the most realistic parameter distribution, but it is computationally very demanding and requires a large amount of survey data for geostatistical simulation of the area. In contrast, the pilot point method distributes a set of points throughout the model domains for parameter estimation, and property values are assigned to model cells by kriging, preserving the heterogeneity of the parameters within geological units. It reduces the geostatistical data requirements of the simulated area and bridges the gap between the two conventional methods. Pilot points can save calculation time, improve the goodness of fit, and reduce the instability of the numerical model caused by a large number of parameters, among other advantages. In this paper, we apply pilot points to a field whose structural heterogeneity and hydraulic parameters were unknown, and we compare the inversion results of the zonation and pilot point methods in order to explore the characteristics of pilot points in groundwater inverse modeling. First, an initial spatially correlated field is generated from a geostatistical model based on the description of the case site, using the software Groundwater Vistas 6; kriging is used to obtain the values of the field (hydraulic conductivity) over the model domain from their values at measurement and pilot point locations. Second, the pilot points are assigned to the interpolated field, which has been divided into 4 zones, and a range of disturbance values is added to the inversion targets to calculate the hydraulic conductivity. Third, through inversion calculation (PEST), the interpolated field minimizes an objective function measuring the misfit between calculated and measured data; finding the optimum parameter values is an optimization problem. From the inversion modeling, the following major conclusions can be drawn: (1) In a field with heterogeneous structure, the pilot point method gives more realistic results: a better fit of the parameters and a more stable numerical simulation (stable residual distribution). Compared to zonation, it better reflects the heterogeneity of the study field. (2) The pilot point method ensures that each parameter is sensitive and not entirely dependent on the other parameters, which guarantees the relative independence and credibility of the parameter estimates. However, it costs more computation time than zonation. Key words: groundwater; pilot point; inverse model; heterogeneity; hydraulic conductivity
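To make the interpolation step concrete, here is a hedged sketch of spreading pilot-point values to a whole grid by simple kriging with an assumed Gaussian covariance. In the study itself this role is played by Groundwater Vistas/PEST, and the pilot-point values would be adjusted by the inversion rather than drawn at random as here.

```python
import numpy as np

# Hedged sketch: log-conductivity is defined only at a handful of pilot
# points and spread to every model cell by simple kriging, so heterogeneity
# is preserved. The Gaussian covariance and all numbers are assumptions.

rng = np.random.default_rng(2)

def gauss_cov(h, sill=1.0, corr_len=40.0):
    """Gaussian covariance as a function of separation distance h."""
    return sill * np.exp(-(h / corr_len) ** 2)

# pilot points: (x, y) locations and log10(K) values (stand-ins for PEST output)
pp_xy = rng.uniform(0, 100, size=(12, 2))
pp_logk = rng.normal(-4.0, 0.5, size=12)

# covariance among pilot points; regularized inverse for the kriging weights
C = gauss_cov(np.linalg.norm(pp_xy[:, None] - pp_xy[None, :], axis=2))
C_inv = np.linalg.inv(C + 1e-10 * np.eye(len(pp_xy)))

# covariance between every grid cell and every pilot point
xg, yg = np.meshgrid(np.arange(0, 100, 2.0), np.arange(0, 100, 2.0))
cells = np.column_stack([xg.ravel(), yg.ravel()])
c0 = gauss_cov(np.linalg.norm(cells[:, None] - pp_xy[None, :], axis=2))

mean = pp_logk.mean()
logk_field = mean + c0 @ C_inv @ (pp_logk - mean)    # simple kriging estimate
print("interpolated log10(K) grid shape:", logk_field.reshape(xg.shape).shape)
```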
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berthet, M.
1963-01-01
The energy levels and their displacement ΔE with respect to that of a meson placed in a Coulomb potential are determined and compared with the experimental values. This comparison permits the selection of values for the parameters introduced by the hypothesis of the optical model. The absorption in the nucleus is studied using the Hamiltonian of the nucleon-π-meson interaction and not the optical model. The results are compared with experimental values. As an introduction, the exact form of the interaction of mesons with nuclei is defined by adopting the optical model. (J.S.R.)
Dembo, M; De Penfold, J B; Ruiz, R; Casalta, H
1985-03-01
Four pigeons were trained to peck a key under different values of a temporally defined independent variable (T) and different probabilities of reinforcement (p). Parameter T is a fixed repeating time cycle and p the probability of reinforcement for the first response of each cycle T. Two dependent variables were used: mean response rate and mean postreinforcement pause. For all values of p a critical value for the independent variable T was found (T = 1 sec) at which marked changes took place in response rate and postreinforcement pauses. Behavior typical of random ratio schedules was obtained at T < 1 sec and behavior typical of random interval schedules at T > 1 sec. Copyright © 1985. Published by Elsevier B.V.
Opieliński, Krzysztof J; Gudra, Tadeusz
2002-05-01
The effective radiation of ultrasonic energy into air by piezoelectric transducers requires multilayer matching systems with accurately selected acoustic impedances and thicknesses of the particular layers. This problem is of particular importance for ultrasonic transducers working at frequencies above 1 MHz. Because the possibilities of choosing a material with the required acoustic impedance are limited (the calculated values cannot always be realised and applied in practice), it is necessary to correct for the differences between the theoretical values and the acoustic impedances practically available. Such a correction can be made by manipulating other parameters of the matching layers, e.g. by changing their thickness. The efficiency of energy transmission from the piezoceramic transducer through layers of different thicknesses, enabling compensation of non-ideal real impedance values by thickness changes, was analysed numerically. The analysis leads to the conclusion that, from the technological point of view, a layer with a defined thickness is easier and faster to produce than a new material with the required acoustic parameters.
Statistical moments of the Strehl ratio
NASA Astrophysics Data System (ADS)
Yaitskova, Natalia; Esselborn, Michael; Gladysz, Szymon
2012-07-01
Knowledge of the statistical characteristics of the Strehl ratio is essential for the performance assessment of existing and future adaptive optics systems. For a full assessment, not only the mean value of the Strehl ratio but also its higher statistical moments are important. The variance is related to the stability of an image, and the skewness reflects the chance of having, in a set of short exposure images, more or fewer images with quality exceeding the mean. Skewness is a central parameter in the domain of lucky imaging. We present a rigorous theory for the calculation of the mean value, the variance and the skewness of the Strehl ratio. In our approach we represent the residual wavefront as being formed by independent cells. The level of the adaptive optics correction defines the number of cells and the variance of the cells, which are the two main parameters of our theory. The deliverables are the values of the three moments as functions of the correction level. We make no further assumptions except for the statistical independence of the cells.
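The cell picture above lends itself to a quick numerical check. The hedged sketch below draws independent Gaussian phase residuals for N cells, forms the Strehl ratio of each realization as the squared modulus of the mean complex amplitude, and reports mean, variance and skewness; the Gaussian cell statistics and all numbers are our assumptions, not the paper's analytical results.

```python
import numpy as np
from scipy.stats import skew

# Hedged Monte Carlo check of the cell-based model: N independent cells with
# phase variance sigma2 stand for the two parameters of the theory (the
# correction level). The Gaussian choice for cell phases is our assumption.

rng = np.random.default_rng(3)

def strehl_moments(n_cells, sigma2, n_trials=20000):
    phases = rng.normal(0.0, np.sqrt(sigma2), size=(n_trials, n_cells))
    amp = np.exp(1j * phases).mean(axis=1)   # mean complex amplitude per trial
    s = np.abs(amp) ** 2                     # Strehl ratio of each realization
    return s.mean(), s.var(), skew(s)

for sigma2 in [0.2, 0.5, 1.0]:               # poorer correction -> larger sigma2
    m, v, g = strehl_moments(n_cells=100, sigma2=sigma2)
    print(f"sigma2 = {sigma2:.1f}: mean = {m:.3f}, var = {v:.2e}, skew = {g:+.2f}")
# Positive skewness means a set of short exposures contains more frames than
# average with quality above the mean, the regime exploited by lucky imaging.
```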
Managing Attribute-Value Clinical Trials Data Using the ACT/DB Client-Server Database System
Nadkarni, Prakash M.; Brandt, Cynthia; Frawley, Sandra; Sayward, Frederick G.; Einbinder, Robin; Zelterman, Daniel; Schacter, Lee; Miller, Perry L.
1998-01-01
ACT/DB is a client-server database application for storing clinical trials and outcomes data, which is currently undergoing initial pilot use. It stores most of its data in entity-attribute-value form. Such data are segregated according to data type to allow indexing by value when possible, and binary large object data are managed in the same way as other data. ACT/DB lets an investigator design a study rapidly by defining the parameters (or attributes) that are to be gathered, as well as their logical grouping for purposes of display and data entry. ACT/DB generates customizable data entry. The data can be viewed through several standard reports as well as exported as text to external analysis programs. ACT/DB is designed to encourage reuse of parameters across multiple studies and has facilities for dictionary search and maintenance. It uses a Microsoft Access client running on Windows 95 machines, which communicates with an Oracle server running on a UNIX platform. ACT/DB is being used to manage the data for seven studies in its initial deployment. PMID:9524347
García-Rodríguez, Rodrigo; Villanueva-Cab, Julio; Anta, Juan A.; Oskam, Gerko
2016-01-01
The influence of the thickness of the nanostructured, mesoporous TiO2 film on several parameters determining the performance of a dye-sensitized solar cell is investigated both experimentally and theoretically. We pay special attention to the effect of the exchange current density in the dark, and we compare the values obtained by steady state measurements with values extracted from small perturbation techniques. We also evaluate the influence of the exchange current density, the solar cell ideality factor, and the effective absorption coefficient of the cell on the optimal film thickness. The results show that the exchange current density in the dark is proportional to the TiO2 film thickness; however, the effective absorption coefficient is the parameter that ultimately defines the ideal thickness. We illustrate the importance of the exchange current density in the dark for the determination of the current-voltage characteristics, and we show how an important improvement of the cell performance can be achieved by decreasing the values of the total series resistance and of the exchange current density in the dark. PMID:28787833
Classification of hepatocellular carcinoma stages from free-text clinical and radiology reports
Yim, Wen-wai; Kwan, Sharon W; Johnson, Guy; Yetisgen, Meliha
2017-01-01
Cancer stage information is important for clinical research. However, it is not always explicitly noted in electronic medical records. In this paper, we present our work on automatic classification of hepatocellular carcinoma (HCC) stages from free-text clinical and radiology notes. To accomplish this, we defined 11 stage parameters used in the three HCC staging systems: American Joint Committee on Cancer (AJCC), Barcelona Clinic Liver Cancer (BCLC), and Cancer of the Liver Italian Program (CLIP). After aggregating stage parameters to the patient level, the final stage classifications were achieved using an expert-created decision logic. Each stage parameter relevant for staging was extracted using several classification methods, e.g. sentence classification and automatic information structuring, to identify and normalize text as cancer stage parameter values. Stage parameter extraction for the test set performed at 0.81 F1. Cancer stage prediction for the AJCC, BCLC, and CLIP stage classifications reached 0.55, 0.50, and 0.43 F1, respectively.
Riley, Pete; Ben-Nun, Michal; Armenta, Richard; Linker, Jon A; Eick, Angela A; Sanchez, Jose L; George, Dylan; Bacon, David P; Riley, Steven
2013-01-01
Rapidly characterizing the amplitude and variability in transmissibility of novel human influenza strains as they emerge is a key public health priority. However, comparison of early estimates of the basic reproduction number during the 2009 pandemic was challenging because of inconsistent data sources and methods. Here, we define and analyze influenza-like-illness (ILI) case data from 2009-2010 for the 50 largest spatially distinct US military installations (military population defined by zip code, MPZ). We used publicly available data from non-military sources to show that patterns of ILI incidence in many of these MPZs closely followed the pattern of their enclosing civilian population. After characterizing the broad patterns of incidence (e.g. single-peak, double-peak), we defined a parsimonious SIR-like model with two possible values for intrinsic transmissibility across three epochs. We fitted the parameters of this model to data from all 50 MPZs, finding them to be reasonably well clustered with a median (mean) value of 1.39 (1.57) and standard deviation of 0.41. An increasing temporal trend in transmissibility ([Formula: see text], p-value: 0.013) during the period of our study was robust to the removal of high transmissibility outliers and to the removal of the smaller 20 MPZs. Our results demonstrate the utility of rapidly available, and consistent, data from multiple populations. PMID:23696723
Stochastic modeling of economic injury levels with respect to yearly trends in price commodity.
Damos, Petros
2014-05-01
The economic injury level (EIL) concept integrates economics and biology and uses chemical applications in crop protection only when economic loss from pests is anticipated. The EIL is defined by five primary variables: the cost of the management tactic per production unit (C), the price of the commodity (V), the injury units per pest (I), the damage per unit injury (D), and the proportionate reduction of injury averted by the application of a tactic (K). These variables are related according to the formula EIL = C/VIDK. The observable dynamic alteration of the EIL through its different parameters is a major characteristic of the concept. In this study, the yearly effect of the economic variables is assessed, in particular the influence of the commodity value on the shape of the EIL function. In addition, to predict the effects of the economic variables on the EIL, yearly commodity values were incorporated into the EIL formula and the generated outcomes were further modelled with stochastic linear autoregressive models of different orders. According to the AR(1) model, forecasts for the five-year period 2010-2015 ranged from 2.33 to 2.41 specimens per sampling unit. These values represent a threshold within reasonable limits to justify future control actions. Management actions related to productivity and commodity price significantly affect the costs of crop production and thus define the adoption of IPM and sustainable crop production systems at local and international levels. This is an open access paper. We use the Creative Commons Attribution 3.0 license that permits unrestricted use, provided that the paper is properly attributed.
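As a hedged sketch of the two steps described above, the snippet below evaluates EIL = C/(V·I·D·K) for a series of yearly commodity prices V, then fits a first-order autoregressive model, AR(1), to the resulting EIL series to forecast the threshold a few years ahead. All numbers are invented placeholders, not the paper's data.

```python
import numpy as np

C = 0.9    # cost of the management tactic per production unit (assumed)
I = 10.0   # injury units per pest (assumed)
D = 0.05   # damage per unit injury (assumed)
K = 0.8    # proportionate reduction of injury averted by the tactic (assumed)
V = np.array([0.9, 1.1, 1.0, 1.3, 1.2, 1.4, 1.1, 1.3, 1.5, 1.4])  # yearly prices

eil = C / (V * I * D * K)          # EIL in pests per sampling unit, year by year

# AR(1): (x_t - mu) = phi * (x_{t-1} - mu) + eps_t, phi fitted by least squares
x = eil - eil.mean()
phi = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])

forecast, last = [], eil[-1]
for _ in range(5):                 # five-year-ahead forecast
    last = eil.mean() + phi * (last - eil.mean())
    forecast.append(round(float(last), 2))
print(f"phi = {phi:.2f}, five-year EIL forecast: {forecast}")
```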
NASA Astrophysics Data System (ADS)
Baumann, Sebastian; Robl, Jörg; Wendt, Lorenz; Willingshofer, Ernst; Hilberg, Sylke
2016-04-01
Automated lineament analysis on remotely sensed data requires two general processing steps: the identification of neighboring pixels showing high contrast, and the conversion of these domains into lines. The target output is the lineaments' position, extent and orientation. We developed a lineament extraction tool, programmed in R, that uses digital elevation models as input data to generate morphological lineaments, defined as follows: a morphological lineament represents a zone of high relief roughness whose length significantly exceeds its width. Any deviation from a flat plane, defined by a roughness threshold, is considered relief roughness. In our novel approach, a multi-directional and multi-scale roughness filter uses moving windows of different neighborhood sizes to identify threshold-limited rough domains on digital elevation models. Surface roughness is calculated as the vertical elevation difference between the center cell and the differently orientated straight lines connecting two edge cells of a neighborhood, divided by the horizontal distance between the edge cells. Thus multiple roughness values, depending on the neighborhood sizes and the orientations of the edge-connecting lines, are generated for each cell, and their maximum and minimum values are extracted. Negative signs of the roughness parameter represent concave relief structures such as valleys; positive signs represent convex relief structures such as ridges. A threshold defines domains of high relief roughness. These domains are thinned to a representative point pattern by a 3x3 neighborhood filter highlighting maximum and minimum roughness peaks, which represent the center points of lineament segments. The orientation and extent of the lineament segments are calculated within the roughness domains, generating a straight line segment in the direction of least roughness differences. We tested our algorithm on digital elevation models of multiple sources and scales and compared the results visually with shaded relief maps of these digital elevation models. The lineament segments trace the relief structure to a great extent, and the calculated roughness parameter represents the physical geometry of the digital elevation model. Modifying the threshold for the surface roughness value highlights different distinct relief structures. The neighborhood size at which lineament segments are detected also corresponds with the width of the surface structure and may be a useful additional parameter for further analysis. The discrimination of concave and convex relief structures matches the valleys and ridges of the surface very well.
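The paper's tool is implemented in R; purely as a hedged illustration, the sketch below reproduces the roughness measure in Python/NumPy under our own assumptions (direction set, scales and threshold are ours): for each cell, elevation is compared with the chord connecting opposite edge cells of square neighborhoods of several sizes, normalised by the horizontal distance between those edge cells.

```python
import numpy as np

# Hedged multi-directional, multi-scale roughness sketch. Negative extremes
# flag concave forms (valleys), positive extremes convex forms (ridges).

def roughness(dem, half_sizes=(2, 4, 8), cell=1.0):
    rmin = np.full(dem.shape, np.inf)
    rmax = np.full(dem.shape, -np.inf)
    offsets = [(0, 1), (1, 0), (1, 1), (1, -1)]   # E-W, N-S, two diagonals
    for h in half_sizes:
        for di, dj in offsets:
            # np.roll wraps at the borders; a real tool would mask the margins
            a = np.roll(dem, (h * di, h * dj), axis=(0, 1))    # one edge cell
            b = np.roll(dem, (-h * di, -h * dj), axis=(0, 1))  # opposite cell
            dist = 2.0 * h * cell * np.hypot(di, dj)
            r = (dem - 0.5 * (a + b)) / dist       # deviation from the chord
            rmin = np.minimum(rmin, r)
            rmax = np.maximum(rmax, r)
    return rmin, rmax

# toy DEM: a ridge running diagonally across the grid, plus noise
i, j = np.mgrid[0:128, 0:128]
dem = 50.0 - 0.4 * np.abs(i - j) + np.random.default_rng(4).normal(0, 0.2, (128, 128))

rmin, rmax = roughness(dem)
print("convex (ridge-like) cells:", int((rmax > 0.15).sum()))
print("concave (valley-like) cells:", int((rmin < -0.15).sum()))
```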
Precise Penning trap measurements of double β-decay Q-values
NASA Astrophysics Data System (ADS)
Redshaw, M.; Brodeur, M.; Bollen, G.; Bustabad, S.; Eibach, M.; Gulyuz, K.; Izzo, C.; Lincoln, D. L.; Novario, S. J.; Ringle, R.; Sandler, R.; Schwarz, S.; Valverde, A. A.
2015-10-01
The double β-decay (ββ-decay) Q-value, defined as the mass difference between parent and daughter atoms, is an important parameter for both two-neutrino ββ-decay (2νββ) and neutrinoless ββ-decay (0νββ) experiments. The Q-value enters into the calculation of the phase space factors, which relate the measured ββ-decay half-life to the nuclear matrix element and, in the case of 0νββ, the effective Majorana mass of the neutrino. In addition, the Q-value defines the total kinetic energy of the two electrons emitted in 0νββ, corresponding to the location of the single peak that is the sought-after signature of 0νββ. Hence, it is essential to have a precise and accurate Q-value determination. Over the last decade, the Penning trap mass spectrometry community has made a significant effort to provide precise ββ-decay Q-value determinations. Here we report on recent measurements with the Low Energy Beam and Ion Trap (LEBIT) facility at the National Superconducting Cyclotron Laboratory (NSCL) of the 48Ca, 82Se, and 96Zr Q-values. These measurements complete the determination of ββ-decay Q-values for the 11 "best" candidates (those with Q > 2 MeV). We also report on a measurement of the 78Kr double electron capture (2EC) Q-value and discuss ongoing Penning trap measurements relating to ββ-decay and 2EC. Support from NSF Contract No. PHY-1102511, and DOE Grant No. 03ER-41268.
Petersson, K J F; Friberg, L E; Karlsson, M O
2010-10-01
Computer models of biological systems grow more complex as computing power increases. Often these models are defined as differential equations for which no analytical solutions exist. Numerical integration is used to approximate the solution; this can be computationally intensive, time consuming, and a large proportion of the total computer runtime. The performance of different integration methods depends on the mathematical properties of the differential equation system at hand. In this paper we investigate the possibility of runtime gains by calculating parts of, or the whole, differential equation system at given time intervals, outside of the differential equation solver. This approach was tested on nine models defined as differential equations, with the goal of reducing runtime while maintaining model fit, based on the objective function value. The software used was NONMEM. In four models the computational runtime was successfully reduced (by 59-96%). The differences in parameter estimates, compared to using only the differential equation solver, were less than 12% for all fixed effects parameters. For the variance parameters, estimates were within 10% for the majority of the parameters. Population and individual predictions were similar, and the differences in OFV were between 1 and -14 units. When computational runtime seriously affects the usefulness of a model, we suggest evaluating this approach for repetitive elements of model building and evaluation, such as covariate inclusions or bootstraps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calva-Tellez, E.; Zepeda, A.
We discuss how weak neutral currents of popular gauge models manifest themselves in the process e⁺e⁻ → π⁺π⁻π⁰ for an unpolarized initial state. We define three asymmetry parameters, A_c1, A_c2, and A_p, which provide information about the presence of the neutral current. The former two account for charge asymmetries in the π⁺π⁻ final state, while A_p is nonzero when parity-violating effects occur. Using a phenomenological model for the hadronic vertices, we obtain that the maximum value of these parameters is approximately 3 to 4%, and that this value is reached at a beam energy of approximately 20 GeV. (AIP)
Chandrasekaran, Srinivas Niranj; Das, Jhuma; Dokholyan, Nikolay V.; Carter, Charles W.
2016-01-01
PATH rapidly computes a path and a transition state between crystal structures by minimizing the Onsager-Machlup action. It requires input parameters whose range of values can generate different transition-state structures that cannot be uniquely compared with those generated by other methods. We outline modifications to estimate these input parameters to circumvent these difficulties, and we validate the PATH transition states by showing consistency between transition states derived by different algorithms for unrelated protein systems. Although functional protein conformational change trajectories are to a degree stochastic, they nonetheless pass through a well-defined transition state whose detailed structural properties can be rapidly identified using PATH. PMID:26958584
Defining the Field of Existence of Shrouded Blades in High-Speed Gas Turbines
NASA Astrophysics Data System (ADS)
Belousov, Anatoliy I.; Nazdrachev, Sergeiy V.
2018-01-01
This work provides a method for determining the region of existence of shrouded blades in gas turbines for aircraft engines, based on an analytical evaluation of the tensile stresses in characteristic sections of the blade. This region is determined by the set of values of the parameter that defines the law of distribution of the cross-sectional area along the height of the airfoil. By varying seven independent parameters (gas-dynamic, structural and strength), the best option can be chosen at the early design stage. As an example, the influence of the dimension of a turbine on the domain of existence of shrouded blades is shown.
Evaluation of keratoconus progression.
Shajari, Mehdi; Steinwender, Gernot; Herrmann, Kim; Kubiak, Kate Barbara; Pavlovic, Ivana; Plawetzki, Elena; Schmack, Ingo; Kohnen, Thomas
2018-06-01
To define variables for the evaluation of keratoconus progression and to determine cut-off values. In this retrospective cohort study (2010-2016), 265 eyes of 165 patients diagnosed with keratoconus underwent two Scheimpflug measurements (Pentacam) that took place 1 year apart ±3 months. Variables used for keratoconus detection were evaluated for progression and a correlation analysis was performed. By logistic regression analysis, a keratoconus progression index (KPI) was defined. Receiver-operating characteristic curve (ROC) analysis was performed and the Youden Index calculated to determine cut-off values. Variables used for keratoconus detection showed a weak correlation with each other (e.g., correlation r=0.245 between RPImin and Kmax, p<0.001). Therefore, we used parameters that took several variables into consideration (e.g., D-index, index of surface variance, index for height asymmetry, KPI). KPI was defined by logistic regression and consisted of a Pachymin coefficient of -0.78 (p=0.001), a coefficient of 0.27 for the maximum elevation of the back surface, and a coefficient of -12.44 for the corneal curvature at the zone 3 mm away from the thinnest point on the posterior corneal surface (both p<0.001). The two variables with the highest Youden Index in the ROC analysis were the D-index and KPI: the D-index had a cut-off of 0.4175 (70.6% sensitivity) and a Youden Index of 0.606; the cut-off for KPI was -0.78196 (84.7% sensitivity) with a Youden Index of 0.747, both at 90% specificity. Keratoconus progression should be defined by evaluating parameters that consider several corneal changes; we suggest the D-index and KPI to detect progression. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Bentley, T William
2006-08-25
A recently proposed multi-parameter correlation, log k(25 degrees C) = sf(Ef + Nf), where Ef is electrofugality and Nf is nucleofugality, for the substituent and solvent effects on the rate constants for solvolyses of benzhydryl and substituted benzhydryl substrates, is re-evaluated. A new formula (Ef = log k(RCl/EtOH/25 degrees C) - 1.87), where RCl/EtOH refers to ethanolysis of the chlorides, reproduces published values of Ef satisfactorily, avoids multi-parameter optimisations and provides additional values of Ef. From the formula for Ef, it is shown that the term (sf·Ef) is compatible with the Hammett-Brown (ρ⁺σ⁺) equation for substituent effects. However, the previously published values of Nf do not accurately account for solvent and leaving group effects (e.g. nucleofuge Cl or X), even for benzhydryl solvolyses; alternatively, if the more exact two-parameter term (sf·Nf) is used, calculated effects are less accurate. A new formula (Nf = 6.14 + log k(BX/any solvent/25 degrees C)), where BX refers to solvolysis of the parent benzhydryl as electrofuge, defines improved Nf values for benzhydryl substrates. The new formulae for Ef and Nf are consistent with the assumption that sf = 1.00, and so improved correlations for benzhydryl substrates can be obtained from the additive formula: log k(RX/any solvent/25 degrees C) = Ef + Nf. Possible extensions of this approach are also discussed.
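A back-of-the-envelope illustration of the re-evaluated formulae follows; both rate constants below are invented placeholders (not literature data), and sf = 1.00 is assumed, as in the additive form above.

```python
import math

# Hedged illustration (k in s^-1 at 25 degrees C; numbers are hypothetical):
#   Ef = log10 k(RCl/EtOH) - 1.87        (electrofugality from ethanolysis)
#   Nf = 6.14 + log10 k(BX/solvent)      (nucleofugality from the parent benzhydryl)
#   log10 k(RX/solvent) = Ef + Nf        (additive form, assuming sf = 1.00)

k_RCl_EtOH = 3.2e-4     # hypothetical ethanolysis rate of the chloride
k_BX_solvent = 5.0e-5   # hypothetical rate of the parent benzhydryl substrate

Ef = math.log10(k_RCl_EtOH) - 1.87
Nf = 6.14 + math.log10(k_BX_solvent)
log_k_pred = Ef + Nf    # predicted log10 k for RX in that solvent

print(f"Ef = {Ef:.2f}, Nf = {Nf:.2f}, predicted log k = {log_k_pred:.2f}")
```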
Gaia FGK benchmark stars: Metallicity
NASA Astrophysics Data System (ADS)
Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.
2014-04-01
Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolution and high signal-to-noise ratio. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133
Hoenig, John M; Then, Amy Y.-H.; Babcock, Elizabeth A.; Hall, Norman G.; Hewitt, David A.; Hesp, Sybrand A.
2016-01-01
There are a number of key parameters in population dynamics that are difficult to estimate, such as natural mortality rate, intrinsic rate of population growth, and stock-recruitment relationships. Often, these parameters of a stock are, or can be, estimated indirectly on the basis of comparative life history studies. That is, the relationship between a difficult to estimate parameter and life history correlates is examined over a wide variety of species in order to develop predictive equations. The form of these equations may be derived from life history theory or simply be suggested by exploratory data analysis. Similarly, population characteristics such as potential yield can be estimated by making use of a relationship between the population parameter and bio-chemico–physical characteristics of the ecosystem. Surprisingly, little work has been done to evaluate how well these indirect estimators work and, in fact, there is little guidance on how to conduct comparative life history studies and how to evaluate them. We consider five issues arising in such studies: (i) the parameters of interest may be ill-defined idealizations of the real world, (ii) true values of the parameters are not known for any species, (iii) selecting data based on the quality of the estimates can introduce a host of problems, (iv) the estimates that are available for comparison constitute a non-random sample of species from an ill-defined population of species of interest, and (v) the hierarchical nature of the data (e.g. stocks within species within genera within families, etc., with multiple observations at each level) warrants consideration. We discuss how these issues can be handled and how they shape the kinds of questions that can be asked of a database of life history studies.
NASA Astrophysics Data System (ADS)
Balthazar, Vincent; Vanacker, Veerle; Lambin, Eric F.
2012-08-01
A topographic correction of optical remote sensing data is necessary to improve the quality of quantitative forest cover change analyses in mountainous terrain. The implementation of semi-empirical correction methods requires the calibration of model parameters that are empirically defined. This study develops a method to improve the performance of topographic corrections for forest cover change detection in mountainous terrain through an iterative tuning method of model parameters based on a systematic evaluation of the performance of the correction. The latter was based on: (i) the general matching of reflectances between sunlit and shaded slopes, and (ii) the occurrence of abnormal reflectance values, qualified as statistical outliers, in poorly illuminated areas. The method was tested on Landsat ETM+ data for rough (Ecuadorian Andes) and very rough mountainous terrain (Bhutan Himalayas). Compared to a reference level (no topographic correction), the ATCOR3 semi-empirical correction method resulted in a considerable reduction of dissimilarities between reflectance values of forested sites in different topographic orientations. Our results indicate that optimal parameter combinations depend on the site, sun elevation and azimuth, and spectral conditions. We demonstrate that the results of relatively simple topographic correction methods can be greatly improved through a feedback loop between parameter tuning and evaluation of the performance of the correction model.
Linear regression metamodeling as a tool to summarize and present simulation model results.
Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M
2013-10-01
Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
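As a hedged sketch of the metamodeling workflow described above (regress PSA outcomes on standardized inputs so the intercept recovers the base-case outcome and the coefficients rank parameter influence), the snippet below uses a toy three-parameter decision model; the model, its distributions and the willingness-to-pay value are our stand-ins for the paper's cancer cure model.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10000

p_cure = rng.beta(20, 30, n)        # toy PSA input: cure probability
cost = rng.gamma(100.0, 50.0, n)    # toy PSA input: annual cost
utility = rng.beta(80, 20, n)       # toy PSA input: utility weight

# toy model outcome: net monetary benefit at an assumed willingness-to-pay
nmb = 50000.0 * (10.0 * p_cure * utility) - 10.0 * cost

X = np.column_stack([p_cure, cost, utility])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)       # standardized inputs
A = np.column_stack([np.ones(n), Xs])
coef, *_ = np.linalg.lstsq(A, nmb, rcond=None)  # linear regression metamodel

print(f"metamodel intercept (base-case NMB estimate): {coef[0]:.0f}")
for name, b in zip(["p_cure", "cost", "utility"], coef[1:]):
    print(f"  {name:8s} effect of a +1 SD change: {b:+.0f}")
```

Because the regression uses all 10,000 PSA draws at once, the coefficients summarize parameter influence more reliably than varying one factor at a time, which is the point the abstract makes.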
Q estimation of seismic data using the generalized S-transform
NASA Astrophysics Data System (ADS)
Hao, Yaju; Wen, Xiaotao; Zhang, Bo; He, Zhenhua; Zhang, Rui; Zhang, Jinming
2016-12-01
Quality factor, Q, is a parameter that characterizes the energy dissipation during seismic wave propagation. Reservoir pores are among the main factors that affect the value of Q. In particular, when the pore space is filled with oil or gas, the rock usually exhibits a relatively low Q value. Such a low Q value has been used as a direct hydrocarbon indicator by many researchers. The conventional Q estimation method based on the spectral ratio suffers from the problem of waveform tuning; hence, many researchers have introduced time-frequency analysis techniques to tackle this problem. Unfortunately, the window functions adopted in time-frequency analysis algorithms such as the continuous wavelet transform (CWT) and S-transform (ST) contaminate the amplitude spectra, because the seismic signal is multiplied by the window functions during time-frequency decomposition. The basic assumption of the spectral ratio method is that there is a linear relationship between the natural logarithmic spectral ratio and frequency. However, this assumption does not hold if we take the influence of the window functions into consideration. In this paper, we first employ a recently developed two-parameter generalized S-transform (GST) to obtain the time-frequency spectra of seismic traces. We then deduce the non-linear relationship between the natural logarithmic spectral ratio and frequency. Finally, we obtain a linear relationship between the natural logarithmic spectral ratio and a newly defined parameter γ by ignoring the negligible second order term. The gradient of this linear relationship is 1/Q. Here, the parameter γ is a function of frequency and the source wavelet. Numerical examples for VSP and post-stack reflection data confirm that our algorithm is capable of yielding accurate results. The Q values estimated from field data acquired in western China compare reasonably with the locations of oil-producing wells.
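For orientation, the snippet below implements the conventional spectral-ratio baseline that the paper improves upon: attenuation over a traveltime separation dt gives ln[A2(f)/A1(f)] = -pi·f·dt/Q + const, so Q follows from the slope of the log spectral ratio over a usable band. The synthetic Ricker wavelets and all numbers are assumptions; the paper's window-aware GST correction and the γ parameter are not reproduced here.

```python
import numpy as np

def ricker(f0, t):
    """Ricker wavelet with peak frequency f0."""
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt_samp, n = 1e-3, 2048
t = (np.arange(n) - n // 2) * dt_samp
f = np.fft.rfftfreq(n, dt_samp)

Q_true, delay = 80.0, 0.4                      # true Q and traveltime separation
S1 = np.abs(np.fft.rfft(ricker(30.0, t)))      # spectrum at the first arrival
S2 = S1 * np.exp(-np.pi * f * delay / Q_true)  # attenuated second arrival

band = (f > 10) & (f < 60)                     # usable signal band (assumed)
slope = np.polyfit(f[band], np.log(S2[band] / S1[band]), 1)[0]
Q_est = -np.pi * delay / slope                 # invert the slope for Q
print(f"estimated Q = {Q_est:.1f} (true {Q_true})")
```

In real data, the window of the time-frequency transform distorts S1 and S2, which is exactly the non-linearity the paper's γ-based formulation is designed to absorb.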
Role of pharmacogenetics on deferasirox AUC and efficacy.
Cusato, Jessica; Allegra, Sarah; De Francia, Silvia; Massano, Davide; Piga, Antonio; D'Avolio, Antonio
2016-04-01
We evaluated deferasirox pharmacokinetics according to SNPs in genes involved in its metabolism and elimination. Moreover, we defined a plasma area-under-the-curve (AUC) cut-off value predicting response to therapy. Allelic discrimination was performed by real-time PCR. Drug plasma concentrations were measured by a high performance liquid chromatography system coupled with ultraviolet detection. Pharmacokinetic parameters were significantly influenced by the UGT1A1 rs887829C>T, UGT1A3 rs1983023C>T and rs3806596A>G SNPs. AUC cut-off values were defined here: 360 μg/ml/h for efficacy and 250 μg/ml/h for nonresponse. The UGT1A3 rs3806596GG and ABCG2 rs13120400CC genotypes were factors able to predict efficacy, whereas UGT1A3 rs3806596GG was a predictor of nonresponse. These data show how screening a patient's genetic profile may help clinicians to optimize iron chelation therapy with deferasirox.
Metastability in the Spin-1 Blume-Emery-Griffiths Model within Constant Coupling Approximation
NASA Astrophysics Data System (ADS)
Ekiz, C.
2017-02-01
In this paper, the equilibrium properties of the spin-1 Blume-Emery-Griffiths model are studied using the constant-coupling approximation. The dipolar and quadrupolar order parameters, the stable, metastable and unstable states, and the free energy of the model are investigated. The states are defined in terms of local minima of the free energy of the system. Numerical calculations are presented for several values of the exchange interactions on the simple cubic lattice with q = 6.
Toric Networks, Geometric R-Matrices and Generalized Discrete Toda Lattices
NASA Astrophysics Data System (ADS)
Inoue, Rei; Lam, Thomas; Pylyavskyy, Pavlo
2016-11-01
We use the combinatorics of toric networks and the double affine geometric R-matrix to define a three-parameter family of generalizations of the discrete Toda lattice. We construct the integrals of motion and a spectral map for this system. The family of commuting time evolutions arising from the action of the R-matrix is explicitly linearized on the Jacobian of the spectral curve. The solution to the initial value problem is constructed using Riemann theta functions.
NASA Astrophysics Data System (ADS)
Grigoryev, Evgeny G.
2011-01-01
The simultaneous electro-discharge sintering of a high-strength tungsten carbide-cobalt composite structure and its joining to a high-speed steel substrate is investigated, and suitable operating parameters are defined. The tungsten carbide-cobalt to high-speed steel joint was produced by high-voltage electrical discharge combined with the application of mechanical pressure to the powder compact. It was found that the density and hardness of the composite material reach their maximum values at certain magnitudes of the applied pressure and of the high-voltage electrical discharge parameters. We show that there is an upper limit for the discharge voltage beyond which the powder of the composite material disintegrates like an exploding wire. From our results it is possible to determine optimal parameters for the simultaneous electro-discharge sintering of WC-Co and its bonding to a high-speed steel substrate.
Sol-gel derived ceramic electrolyte films on porous substrates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kueper, T.W.
1992-05-01
A process for the deposition of sol-gel derived thin films on porous substrates has been developed; such films should be useful for solid oxide fuel cells and related applications. Yttria-stabilized zirconia films have been formed from metal alkoxide starting solutions. Dense films have been deposited on metal substrates and on ceramic substrates, both dense and porous, through dip-coating and spin-coating techniques, followed by a heat treatment in air. X-ray diffraction has been used to determine the crystalline phases formed and the extent of reactions with various substrates which may be encountered in gas/gas devices. Surface coatings have been successfully applied to porous substrates through the control of substrate pore size and deposition parameters. Wetting of the substrate pores by the coating solution is discussed, and conditions are defined for which films can be deposited over the pores without filling the interiors of the pores. Shrinkage cracking was encountered in films thicker than a critical value, which depended on the sol-gel process parameters and on the substrate characteristics. Local discontinuities were also observed in films thinner than a critical value which depended on the substrate pore size. A theoretical discussion of cracking mechanisms is presented for both types of cracking, and the conditions necessary for successful thin-film formation are defined. The applicability of these films to gas/gas devices is discussed.
Prioux, J; Mercier, J; Ramonatxo, M; Granier, P; Mercier, B; Prefaut, C
1995-01-01
The aim of the study was to define the changes in breathing pattern parameters and ventilation (VE) as a function of age during maximal exercise in children. A multi-longitudinal survey was conducted in forty-four untrained schoolboys, divided into three groups with initial ages of 11.2 years (group I), 12.9 years (group II) and 14.9 years (group III). These children were then followed for three years, measured at the same period each year, so that the age range covered was 11.2 to 16.9 years. The study showed that, during growth, ventilation (VE max), tidal volume (VT max) and mean inspiratory flow (VT/TI max) increased significantly with age, that inspiratory frequency (f max) decreased, that the inspiratory, expiratory and total times of the respiratory cycle (TI max, TE max, TTOT max) increased slightly, and that the inspiratory fraction (TI/TTOT max) was identical at 11 and 17 years. Furthermore, we observed that the peak height velocity and the peak tidal volume velocity occurred at the same age, i.e. 14 years, and that those of weight and VT/TI occurred at the same age of 15 years. In conclusion, this study allowed us to define reference values for the breathing pattern at maximal exercise in sedentary boys and to specify the relation between growth and breathing pattern parameters in these children.
Incorporating rainfall uncertainty in a SWAT model: the river Zenne basin (Belgium) case study
NASA Astrophysics Data System (ADS)
Tolessa Leta, Olkeba; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy
2013-04-01
The European Union Water Framework Directive (EU-WFD) called on its member countries to achieve a good ecological status for all inland and coastal water bodies by 2015. According to recent studies, the river Zenne (Belgium) is far from this objective. Therefore, an interuniversity and multidisciplinary project, "Towards a Good Ecological Status in the river Zenne (GESZ)", was launched to evaluate the effects of wastewater management plans on the river. In this project, different models have been developed and integrated using the Open Modelling Interface (OpenMI). The hydrologic, semi-distributed Soil and Water Assessment Tool (SWAT) is hereby used as one of the model components in the integrated modelling chain in order to model the upland catchment processes. The assessment of the uncertainty of SWAT is an essential aspect of the decision-making process, in order to design robust management strategies that take the predicted uncertainties into account. Model uncertainty stems from the uncertainties in the model parameters, the input data (e.g., rainfall), the calibration data (e.g., stream flows) and the model structure itself. The objective of this paper is to assess the first three sources of uncertainty in a SWAT model of the river Zenne basin. For the assessment of rainfall measurement uncertainty, we first identified independent rainfall periods, based on the daily precipitation and stream flow observations, using the Water Engineering Time Series PROcessing tool (WETSPRO). Secondly, we assigned a rainfall multiplier parameter to each of the independent rainfall periods, which serves as a multiplicative input error corruption. Finally, we treated these multipliers as latent parameters in the model optimization and uncertainty analysis (UA). For the parameter uncertainty assessment, given the high number of parameters of the SWAT model, we first screened out its most sensitive parameters using the Latin Hypercube One-factor-At-a-Time (LH-OAT) technique, and subsequently considered only the most sensitive parameters for parameter optimization and UA. To explicitly account for the stream flow uncertainty, we assumed that the stream flow measurement error increases linearly with the stream flow value. To assess the uncertainty and infer posterior distributions of the parameters, we used a Markov Chain Monte Carlo (MCMC) sampler, the Differential Evolution Adaptive Metropolis (DREAM), which uses sampling from an archive of past states to generate candidate points in each individual chain. It is shown that the marginal posterior distributions of the rainfall multipliers vary widely between individual events, as a consequence of rainfall measurement errors and the spatial variability of the rain; only a few of the rainfall events are well defined. The marginal posterior distributions of the SWAT model parameter values are well defined and identified by DREAM within their prior ranges. The posterior distributions of the output uncertainty parameter values also show that the stream flow data are highly uncertain. The approach of using rainfall multipliers to treat rainfall uncertainty for a complex model has an impact on the marginal posterior distributions of the model parameters and on the model results.
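As a hedged, SWAT-free sketch of two ingredients described above, the snippet below builds a log-likelihood in which one rainfall multiplier per independent storm period corrupts the input (latent parameters) and the streamflow error standard deviation grows linearly with the simulated flow. A toy linear rainfall-runoff model stands in for SWAT, and all names and numbers are our assumptions; in a DREAM-style analysis, this is the function the MCMC sampler would repeatedly evaluate.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
periods = np.repeat(np.arange(5), 20)          # 5 storm periods over 100 days
rain_obs = rng.gamma(2.0, 3.0, size=100)       # "observed" daily rainfall

def toy_model(rain, runoff_coef):
    """Placeholder for SWAT: a purely linear rainfall-runoff response."""
    return runoff_coef * rain

def log_likelihood(theta, q_obs):
    runoff_coef, a, b = theta[0], theta[1], theta[2]
    multipliers = theta[3:]                    # one latent multiplier per period
    rain_true = rain_obs * multipliers[periods]
    q_sim = toy_model(rain_true, runoff_coef)
    sigma = a + b * q_sim                      # error grows linearly with flow
    return norm.logpdf(q_obs, loc=q_sim, scale=sigma).sum()

# synthetic "observations" and one likelihood evaluation at a trial point
q_obs = toy_model(rain_obs * 1.1, 0.6) + rng.normal(0, 0.5, 100)
theta = np.concatenate([[0.6, 0.3, 0.05], np.ones(5)])
print("log-likelihood at trial point:", round(log_likelihood(theta, q_obs), 1))
```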
NASA Technical Reports Server (NTRS)
Siewert, R. D.
1972-01-01
Evacuation areas for accidental spills of toxic propellants along rail and highway shipping routes are defined to help local authorities reduce risks to people from excessive vapor concentrations. These criteria along with other emergency information are shown in propellant spill cards. The evacuation areas are based on current best estimates of propellant evaporation rates from various areas of spill puddles. These rates are used together with a continuous point-source, bi-normal model of plume dispersion. The rate at which the toxic plume disperses is based on a neutral atmospheric condition. This condition, which results in slow plume dispersion, represents the widest range of weather parameters which could occur during the day and nighttime periods. Evacuation areas are defined by the ground level boundaries of the plume within which the concentrations exceed the toxic Threshold Limit Value (TLV) or in some cases the Emergency Exposure Limit (EEL).
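The geometry of such an evacuation area can be sketched numerically: a continuous ground-level point source with full ground reflection (the bi-normal plume above) is evaluated along the centerline, and the evacuation distance is the furthest point where the concentration still exceeds the TLV. The source strength, wind speed, TLV and the Briggs-type neutral-stability (class D) dispersion coefficients below are illustrative values, not the report's criteria.

```python
import numpy as np

def ground_conc(q, u, x, y, sig_y, sig_z):
    """Ground-level concentration (g/m^3) from a continuous ground-level
    point source with full ground reflection (bi-normal Gaussian plume)."""
    return q / (np.pi * u * sig_y * sig_z) * np.exp(-y**2 / (2.0 * sig_y**2))

# open-country dispersion coefficients for neutral stability (class D),
# quoted from standard Briggs-type tables and used here illustratively
def sigma_y(x):
    return 0.08 * x / np.sqrt(1.0 + 0.0001 * x)

def sigma_z(x):
    return 0.06 * x / np.sqrt(1.0 + 0.0015 * x)

# downwind extent of the evacuation area: furthest x where C exceeds the TLV
q, u, tlv = 50.0, 3.0, 1e-4        # g/s evaporation rate, m/s wind, g/m^3 TLV
x = np.linspace(10.0, 2e4, 4000)
c = ground_conc(q, u, x, 0.0, sigma_y(x), sigma_z(x))
print("evacuation distance ~ %.0f m" % (x[c > tlv].max() if (c > tlv).any() else 0.0))
```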
NASA Astrophysics Data System (ADS)
Barcelona, H.; Mena, M.; Sanchez-Bettucci, L.
2009-05-01
The Valle Chico Complex, in southeast Uruguay, is related to the Paraná-Etendeka Province. The study involved basaltic lavas, quartz-syenites, and rhyolitic and trachytic dikes. Samples were taken from 18 sites and the AMS of 250 specimens was analyzed. The AMS is modeled by a second-order tensor K whose graphical representation is a symmetric ellipsoid. The relations between the axes determine parameters which describe different properties such as shape, lineation, foliation, degree of anisotropy and bulk magnetic susceptibility. Under this perspective, a lava, dike, or igneous body can be considered a mosaic of magnetic susceptibility domains (MSD). An MSD is an area with a specific degree of homogeneity in the distribution of parameter values and kinematic conditions. An average tensor would weigh only one MSD, but if the site is a mosaic, subsets of specimens with similar parameters can be created. Hypothesis tests can be used to establish parameter similarities. Subsets with statistically significant differences in at least one of their mean parameters should be considered separate MSDs and therefore be treated independently. Once the MSDs are defined, the tensor analysis continues. The basalt-andesitic lavas present MSDs with an NNW magnetic foliation, dipping 10°. The K1 axes are sub-horizontal, oriented E-W, and represent the magmatic flow direction. The quartz-syenites show a variable magnetic fabric, with the major axes of prolate ellipsoids disposed parallel to the flow direction (10° to the SSE). Deformed syenites show an N300/11 magnetic foliation, consistent with the trend of fractures. The K1 is subvertical. The MSDs defined in rhyolitic dikes have magnetic foliations consistent with the structural trend. The trachytic dikes show an important indetermination in the magnetic response; however, a 62/N90 magnetic lineation was defined. The MSDs obtained are consistent with the geological structures and contribute to the knowledge of the tectonic, magmatic and kinematic events.
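As a concrete illustration of the tensor parameters mentioned above, the following minimal sketch diagonalizes a symmetric susceptibility tensor and computes the usual AMS scalars. The parameter definitions (lineation L = K1/K2, foliation F = K2/K3, anisotropy degree P = K1/K3, Jelinek's shape parameter T) are standard textbook quantities, and the example tensor is invented, not a value from this study.

```python
import numpy as np

def ams_parameters(K):
    """Principal susceptibilities and standard AMS scalars from a symmetric
    3x3 susceptibility tensor (minimal sketch)."""
    vals, vecs = np.linalg.eigh(K)         # eigenvalues in ascending order
    k3, k2, k1 = vals
    eta1, eta2, eta3 = np.log([k1, k2, k3])
    return {
        "Km": (k1 + k2 + k3) / 3.0,        # bulk susceptibility
        "L": k1 / k2,                      # magnetic lineation
        "F": k2 / k3,                      # magnetic foliation
        "P": k1 / k3,                      # degree of anisotropy
        "T": (2 * eta2 - eta1 - eta3) / (eta1 - eta3),  # Jelinek shape
        "K1_axis": vecs[:, 2],             # max-susceptibility axis (flow proxy)
    }

K = 1e-3 * np.array([[1.02, 0.01, 0.00],
                     [0.01, 1.00, 0.00],
                     [0.00, 0.00, 0.97]])
print(ams_parameters(K))
```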
Gaussian copula as a likelihood function for environmental models
NASA Astrophysics Data System (ADS)
Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.
2017-12-01
Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function, because of their favourable analytical properties. The Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g., for flow data, which are typically more uncertain at high flows than in periods with low flows. A problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. (1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. (2) Based on the results from a didactical example of predicting rainfall runoff, we demonstrate that the copula captures the predictive uncertainty of the model. (3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions - and they could help us to better capture the statistical properties of errors and make more reliable predictions.
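The idea can be sketched as follows: the error marginal is learned empirically from past residuals, the dependence between consecutive errors is captured by a lag-1 (AR(1)-style) Gaussian copula, and the two pieces combine into a log-likelihood. The KDE marginal and the lag-1 dependence structure below are simplifying assumptions, not the exact construction of the paper.

```python
import numpy as np
from scipy import stats

def ecdf(sorted_ref, x):
    """Empirical CDF of the reference sample, clipped away from 0 and 1."""
    n = len(sorted_ref)
    u = np.searchsorted(sorted_ref, x, side="right") / (n + 1.0)
    return np.clip(u, 1.0 / (n + 1.0), n / (n + 1.0))

def fit_copula(past_errors):
    """Learn the error marginal from past residuals and the lag-1 Gaussian
    copula correlation (captures autocorrelation of errors)."""
    srt = np.sort(past_errors)
    z = stats.norm.ppf(ecdf(srt, past_errors))
    rho = np.corrcoef(z[:-1], z[1:])[0, 1]
    return srt, rho

def copula_loglik(errors, sorted_ref, rho):
    """Log-likelihood of a new error series: empirical (KDE) marginal plus
    an AR(1) Gaussian copula term for dependence between time steps."""
    kde = stats.gaussian_kde(sorted_ref)
    z = stats.norm.ppf(ecdf(sorted_ref, errors))
    ll = np.sum(np.log(kde(errors)))                      # marginal densities
    cond = z[1:] - rho * z[:-1]                           # copula correction
    ll += np.sum(stats.norm.logpdf(cond, scale=np.sqrt(1.0 - rho**2))
                 - stats.norm.logpdf(z[1:]))
    return ll
```

Because the marginal is empirical, heteroscedastic and skewed errors are accommodated without choosing transformation hyper-parameters.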
TU-F-12A-05: Sensitivity of Textural Features to 3D Vs. 4D FDG-PET/CT Imaging in NSCLC Patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, F; Nyflot, M; Bowen, S
2014-06-15
Purpose: Neighborhood gray-level difference matrix (NGLDM) based texture parameters extracted from conventional (3D) 18F-FDG PET scans in patients with NSCLC have been previously shown to associate with response to chemoradiation and poorer patient outcome. However, the change in these parameters when utilizing respiratory-correlated (4D) FDG-PET scans has not yet been characterized for NSCLC. The objective of this study was to assess the extent to which NGLDM-based texture parameters on 4D PET images vary with reference to values derived from 3D scans in NSCLC. Methods: Eight patients with newly diagnosed NSCLC treated with concomitant chemoradiotherapy were included in this study. 4D PET scans were reconstructed with OSEM-IR in 5 respiratory phase-binned images, and the corresponding CT data of each phase were employed for attenuation correction. NGLDM-based texture features, consisting of coarseness, contrast, busyness, complexity and strength, were evaluated for gross tumor volumes defined on 3D/4D PET scans by radiation oncologists. Variations of the obtained texture parameters over the respiratory cycle were examined with respect to values extracted from 3D scans. Results: Differences between texture parameters derived from 4D scans at different respiratory phases and those extracted from 3D scans ranged from −30% to 13% for coarseness, −12% to 40% for contrast, −5% to 50% for busyness, −7% to 38% for complexity, and −43% to 20% for strength. Furthermore, no evident correlations were observed between respiratory phase and 4D scan texture parameters. Conclusion: Results of the current study showed that NGLDM-based texture parameters varied considerably based on the choice of 3D PET versus 4D PET reconstruction of NSCLC patient images, indicating that standardized image acquisition and analysis protocols need to be established for clinical studies, especially multicenter clinical trials, intending to validate prognostic values of texture features for NSCLC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hosking, Jonathan R. M.; Natarajan, Ramesh
The computer creates a utility demand forecast model for weather parameters by receiving a plurality of utility parameter values, wherein each received utility parameter value corresponds to a weather parameter value; determining that a range of weather parameter values lacks a sufficient amount of corresponding received utility parameter values; determining one or more utility parameter values that correspond to that range of weather parameter values; and creating a model which correlates the received and the determined utility parameter values with the corresponding weather parameter values.
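A minimal sketch of the idea, with hypothetical names: weather observations are binned, bins with too few corresponding utility (load) values are detected, surrogate utility values are determined for them, and a correlation model is fit over the combined data. The binning, the interpolation and the quadratic form are illustrative choices, not the patented method.

```python
import numpy as np

def build_demand_model(weather, load, bin_edges, min_count=5):
    """Detect data-poor weather ranges, determine surrogate utility values
    for them (here by interpolation), then fit a load-vs-weather model."""
    order = np.argsort(weather)
    bins = np.digitize(weather, bin_edges)
    w_fit, l_fit = list(weather), list(load)
    for b in range(1, len(bin_edges)):
        if np.sum(bins == b) < min_count:          # data-poor weather range
            centre = 0.5 * (bin_edges[b - 1] + bin_edges[b])
            w_fit.append(centre)
            l_fit.append(np.interp(centre, weather[order], load[order]))
    return np.polyfit(w_fit, l_fit, deg=2)         # quadratic correlation model

rng = np.random.default_rng(0)
temp = rng.uniform(-5, 30, 200)                    # weather parameter values
demand = 50 + 0.08 * (temp - 18) ** 2 + rng.normal(0, 2, 200)
print(build_demand_model(temp, demand, np.linspace(-10, 40, 11)))
```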
NASA Astrophysics Data System (ADS)
Raimondi, L.; Azetsu-Scott, K.; Wallace, D.
2016-02-01
This work assesses the internal consistency of ocean carbon dioxide measurements through the comparison of discrete measurements and calculated values of four analytical parameters of the inorganic carbon system: Total Alkalinity (TA), Dissolved Inorganic Carbon (DIC), pH and Partial Pressure of CO2 (pCO2). The study is based on 486 seawater samples analyzed for TA, DIC and pH and 86 samples for pCO2, collected during the 2014 cruise along the AR7W line in the Labrador Sea. The internal consistency has been assessed using all combinations of input parameters and eight sets of thermodynamic constants (K1, K2) in calculating each parameter through the CO2SYS software. Residuals of each parameter have been calculated as the differences between measured and calculated values (reported as ΔTA, ΔDIC, ΔpH and ΔpCO2). Although differences between the selected sets of constants were observed, the largest were obtained using different pairs of input parameters. As expected, the pH-pCO2 input pair produced the poorest results, suggesting that measurements of either TA or DIC are needed to define the carbonate system accurately and precisely. To identify the signature of organic alkalinity, we isolated the residuals in the bloom area; therefore, only ΔTA values from surface waters (0-30 m) along the Greenland side of the basin were selected. The residuals showed that no measured value was higher than the calculations, and therefore we could not observe the presence of organic bases in the shallower water column. The internal consistency in characteristic water masses of the Labrador Sea (Denmark Strait Overflow Water, North East Atlantic Deep Water, Newly-ventilated Labrador Sea Water, Greenland and Labrador Shelf waters) will also be discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lenardic, A.; Crowley, J. W., E-mail: ajns@rice.edu, E-mail: jwgcrowley@gmail.com
2012-08-20
A model of coupled mantle convection and planetary tectonics is used to demonstrate that history dependence can outweigh the effects of a planet's energy content and material parameters in determining its tectonic state. The mantle convection-surface tectonics system allows multiple tectonic modes to exist for equivalent planetary parameter values. The tectonic mode of the system is then determined by its specific geologic and climatic history. This implies that models of tectonics and mantle convection will not be able to uniquely determine the tectonic mode of a terrestrial planet without the addition of historical data. Historical data exists, to variable degrees, for all four terrestrial planets within our solar system. For the Earth, the planet with the largest amount of observational data, debate still remains regarding the geologic and climatic history of Earth's deep past, but constraints are available. For planets in other solar systems, no such constraints exist at present. The existence of multiple tectonic modes, for equivalent parameter values, points to a reason why different groups have reached different conclusions regarding the tectonic state of extrasolar terrestrial planets larger than Earth ("super-Earths"). The region of multiple stable solutions is predicted to widen in parameter space for more energetic mantle convection (as would be expected for larger planets). This means that different groups can find different solutions, all potentially viable and stable, using identical models and identical system parameter values. At a more practical level, the results argue that the question of whether extrasolar terrestrial planets will have plate tectonics is unanswerable and will remain so until the temporal evolution of extrasolar planets can be constrained.
Descriptive Quantitative Analysis of Rearfoot Alignment Radiographic Parameters.
Meyr, Andrew J; Wagoner, Matthew R
2015-01-01
Although the radiographic parameters of the transverse talocalcaneal angle (tTCA), calcaneocuboid angle (CCA), talar head uncovering (THU), calcaneal inclination angle (CIA), talar declination angle (TDA), lateral talar-first metatarsal angle (lTFA), and lateral talocalcaneal angle (lTCA) form the basis of the preoperative evaluation and procedure selection for pes planovalgus deformity, the so-called normal values of these measurements are not well-established. The objectives of the present study were to retrospectively evaluate the descriptive statistics of these radiographic parameters (tTCA, CCA, THU, CIA, TDA, lTFA, and lTCA) in a large population, and, second, to determine an objective basis for defining "normal" versus "abnormal" measurements. As a secondary outcome, the relationship of these variables to the body mass index was assessed. Anteroposterior and lateral foot radiographs from 250 consecutive patients without a history of previous foot and ankle surgery and/or trauma were evaluated. The results revealed a mean measurement of 24.12°, 13.20°, 74.32%, 16.41°, 26.64°, 8.37°, and 43.41° for the tTCA, CCA, THU, CIA, TDA, lTFA, and lTCA, respectively. These were generally in line with the reported historical normal values. Descriptive statistical analysis demonstrated that the tTCA, THU, and TDA met the standards to be considered normally distributed but that the CCA, CIA, lTFA, and lTCA demonstrated data characteristics of both parametric and nonparametric distributions. Furthermore, only the CIA (R = -0.2428) and lTCA (R = -0.2449) demonstrated substantial correlation with the body mass index. No differentiation in deformity progression was observed when the radiographic parameters were plotted against each other that would provide a quantitative basis for defining "normal" versus "abnormal" measurements. Copyright © 2015 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
A proposed aquatic plant community biotic index for Wisconsin lakes
Nichols, S.; Weber, S.; Shaw, B.
2000-01-01
The Aquatic Macrophyte Community Index (AMCI) is a multipurpose tool developed to assess the biological quality of aquatic plant communities in lakes. It can be used to specifically analyze aquatic plant communities or as part of a multimetric system to assess overall lake quality for regulatory, planning, management, educational, or research purposes. The components of the index are maximum depth of plant growth; percentage of the littoral zone vegetated; Simpson's diversity index; the relative frequencies of submersed, sensitive, and exotic species; and taxa number. Each parameter was scaled based on data distributions from a statewide database, and scaled values were totaled for the AMCI value. AMCI values were grouped and tested by ecoregion and lake type (natural lakes and impoundments) to define quality on a regional basis. This analysis suggested that aquatic plant communities are divided into four groups: (1) Northern Lakes and Forests lakes and impoundments, (2) North-Central Hardwood Forests lakes and impoundments, (3) Southeastern Wisconsin Till Plains lakes, and (4) Southeastern Wisconsin Till Plains impoundments, Driftless Area Lakes, and Mississippi River Backwater lakes. AMCI values decline from group 1 to group 4 and reflect general water quality and human use trends in Wisconsin. The upper quartile of AMCI values in any region are the highest quality or benchmark plant communities. The interquartile range consists of normally impacted communities for the region and the lower quartile contains severely impacted or degraded plant communities. When AMCI values were applied to case studies, the values reflected known impacts to the lakes. However, quality criteria cannot be used uncritically, especially in lakes that initially have low nutrient levels.
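The scale-and-sum construction described above can be sketched in a few lines. The decile-based 1-10 scaling and the treatment of exotic-species frequency as a "reverse" (degrading) metric are illustrative assumptions, not the published scaling tables.

```python
import numpy as np

def amci(lake_metrics, statewide):
    """Sketch of the AMCI construction: each component metric is scaled 1-10
    against the deciles of its statewide distribution and the scaled values
    are summed; exotic-species frequency is inverted as a degrading metric."""
    score = 0
    for name, value in lake_metrics.items():
        deciles = np.percentile(statewide[name], np.arange(10, 100, 10))
        s = 1 + int(np.searchsorted(deciles, value))   # 1 (low) .. 10 (high)
        if name == "exotic_freq":
            s = 11 - s                                  # more exotics = worse
        score += s
    return score
```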
Kinetic approach to the study of froth flotation applied to a lepidolite ore
NASA Astrophysics Data System (ADS)
Vieceli, Nathália; Durão, Fernando O.; Guimarães, Carlos; Nogueira, Carlos A.; Pereira, Manuel F. C.; Margarido, Fernanda
2016-07-01
The number of published studies related to the optimization of lithium extraction from low-grade ores has increased as the demand for lithium has grown. However, no study related to the kinetics of the concentration stage of lithium-containing minerals by froth flotation has yet been reported. To establish a factorial design of batch flotation experiments, we conducted a set of kinetic tests to determine the most selective alternative collector, define a range of pulp pH values, and estimate a near-optimum flotation time. Both collectors (Aeromine 3000C and Armeen 12D) provided the required flotation selectivity, although this selectivity was lost for pulp pH values outside the range of 2 to 4. Cumulative mineral recovery curves were used to fit a classical kinetic model that was modified with a non-negative parameter representing a delay time. The computation of the near-optimum flotation time as the maximizer of a separation efficiency (SE) function must be performed with caution. We instead propose to define the near-optimum flotation time as the time interval required to achieve 95%-99% of the maximum value of the SE function.
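The delayed first-order model mentioned above can be fitted to cumulative recovery data as in the following minimal sketch. The functional form R(t) = R_inf(1 - exp(-k(t - θ))) for t ≥ θ is one natural reading of "a classical kinetic model modified with a non-negative delay-time parameter", and the data points are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, r_inf, k, delay):
    """First-order flotation kinetics with a non-negative delay time:
    R(t) = R_inf * (1 - exp(-k * (t - delay))) for t >= delay, else 0."""
    return r_inf * (1.0 - np.exp(-k * np.clip(t - delay, 0.0, None)))

# invented cumulative-recovery data for illustration (time in min, R in %)
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 16.0])
r = np.array([5.0, 18.0, 42.0, 68.0, 86.0, 91.0, 92.5])

popt, _ = curve_fit(recovery, t, r, p0=(90.0, 0.5, 0.3),
                    bounds=([0.0, 0.0, 0.0], [100.0, 5.0, 5.0]))
print("R_inf = %.1f %%, k = %.2f 1/min, delay = %.2f min" % tuple(popt))
```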
Descriptive quantitative analysis of hallux abductovalgus transverse plane radiographic parameters.
Meyr, Andrew J; Myers, Adam; Pontious, Jane
2014-01-01
Although the transverse plane radiographic parameters of the first intermetatarsal angle (IMA), hallux abductus angle (HAA), and the metatarsal-sesamoid position (MSP) form the basis of preoperative procedure selection and postoperative surgical evaluation of the hallux abductovalgus deformity, the so-called normal values of these measurements have not been well established. The objectives of the present study were to (1) evaluate the descriptive statistics of the first IMA, HAA, and MSP from a large patient population and (2) to determine an objective basis for defining "normal" versus "abnormal" measurements. Anteroposterior foot radiographs from 373 consecutive patients without a history of previous foot and ankle surgery and/or trauma were evaluated for the measurements of the first IMA, HAA, and MSP. The results revealed a mean measurement of 9.93°, 17.59°, and position 3.63 for the first IMA, HAA, and MSP, respectively. An advanced descriptive analysis demonstrated data characteristics of both parametric and nonparametric distributions. Furthermore, clear differentiations in deformity progression were appreciated when the variables were graphically depicted against each other. This could represent a quantitative basis for defining "normal" versus "abnormal" values. From the results of the present study, we have concluded that these radiographic parameters can be more conservatively reported and analyzed using nonparametric descriptive and comparative statistics within medical studies and that the combination of a first IMA, HAA, and MSP at or greater than approximately 10°, 18°, and position 4, respectively, appears to be an objective "tipping point" in terms of deformity progression and might represent an upper limit of acceptable in terms of surgical deformity correction. Copyright © 2014 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Castelletti, Davide; Demir, Begüm; Bruzzone, Lorenzo
2014-10-01
This paper presents a novel semisupervised learning (SSL) technique defined in the context of ɛ-insensitive support vector regression (SVR) to estimate biophysical parameters from remotely sensed images. The proposed SSL method aims to mitigate the problems of small-sized biased training sets without collecting any additional samples with reference measures. This is achieved on the basis of two consecutive steps. The first step is devoted to injecting additional prior information into the learning phase of the SVR in order to adapt the importance of each training sample according to the distribution of the unlabeled samples. To this end, a weight is initially associated with each training sample based on a novel strategy that defines higher weights for the samples located in the high density regions of the feature space while giving reduced weights to those that fall into the low density regions of the feature space. Then, in order to exploit different weights for training samples in the learning phase of the SVR, we introduce a weighted SVR (WSVR) algorithm. The second step is devoted to jointly exploiting labeled and informative unlabeled samples for further improving the definition of the WSVR learning function. To this end, the most informative unlabeled samples that are expected to have accurate target values are initially selected according to a novel strategy that relies on the distribution of the unlabeled samples in the feature space and on the WSVR function estimated at the first step. Then, we introduce a restructured WSVR algorithm that jointly uses labeled and unlabeled samples in the learning phase of the WSVR algorithm and tunes their importance by different values of regularization parameters. Experimental results obtained for the estimation of single-tree stem volume show the effectiveness of the proposed SSL method.
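The two-step scheme can be illustrated with scikit-learn, whose SVR already accepts per-sample weights. Everything below (the KDE-based weighting, the naive selection of unlabeled points, the single regularization value) is a simplified stand-in for the paper's more elaborate strategies.

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X_lab = rng.uniform(0, 10, (30, 1))                  # small labeled set
y_lab = np.sin(X_lab).ravel() + rng.normal(0, 0.1, 30)
X_unl = rng.uniform(0, 10, (500, 1))                 # plentiful unlabeled set

# Step 1: weight labeled samples by the unlabeled-sample density around them
kde = KernelDensity(bandwidth=1.0).fit(X_unl)
w = np.exp(kde.score_samples(X_lab))                 # high density -> high weight
w /= w.mean()
wsvr = SVR(C=10.0, epsilon=0.05).fit(X_lab, y_lab, sample_weight=w)

# Step 2: pseudo-label a subset of unlabeled samples with the WSVR estimate
# and retrain on the joint set (naive selection; the paper's is smarter)
sel = X_unl[:50]
X_joint = np.vstack([X_lab, sel])
y_joint = np.concatenate([y_lab, wsvr.predict(sel)])
final_model = SVR(C=10.0, epsilon=0.05).fit(X_joint, y_joint)
```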
Melchardt, Thomas; Troppan, Katharina; Weiss, Lukas; Hufnagl, Clemens; Neureiter, Daniel; Tränkenschuh, Wolfgang; Schlick, Konstantin; Huemer, Florian; Deutsch, Alexander; Neumeister, Peter; Greil, Richard; Pichler, Martin; Egle, Alexander
2015-12-01
Several serum parameters have been evaluated for adding prognostic value to clinical scoring systems in diffuse large B-cell lymphoma (DLBCL), but none of the reports used multivariate testing of more than one parameter at a time. The goal of this study was to validate widely available serum parameters for their independent prognostic impact in the era of the National Comprehensive Cancer Network-International Prognostic Index (NCCN-IPI) score to determine which were the most useful. This retrospective bicenter analysis includes 515 unselected patients with DLBCL who were treated with rituximab and anthracycline-based chemoimmunotherapy between 2004 and January 2014. Anemia, high C-reactive protein, and high bilirubin levels had an independent prognostic value for survival in multivariate analyses in addition to the NCCN-IPI, whereas the neutrophil-to-lymphocyte ratio, high gamma-glutamyl transferase levels, and the platelets-to-lymphocyte ratio did not. In our cohort, we describe the most promising markers to improve the NCCN-IPI. Anemia and high C-reactive protein levels retain their power in multivariate testing even in the era of the NCCN-IPI. The negative prognostic role of high bilirubin levels may reflect its function as a marker of liver function. Further studies are warranted to incorporate these markers into prognostic models and to define their role alongside novel molecular markers. Copyright © 2015 by the National Comprehensive Cancer Network.
Development and validation of a habitat suitability model for ...
We developed a spatially-explicit, flexible 3-parameter habitat suitability model that can be used to identify and predict areas at higher risk for non-native dwarf eelgrass (Zostera japonica) invasion. The model uses simple environmental parameters (depth, nearshore slope, and salinity) to quantitatively describe habitat suitable for Z. japonica invasion based on ecology and physiology from the primary literature. Habitat suitability is defined with values ranging from zero to one, where one denotes areas most conducive to Z. japonica and zero denotes areas not likely to support Z. japonica growth. The model was applied to Yaquina Bay, Oregon, USA, an area that has well documented Z. japonica expansion over the last two decades. The highest suitability values for Z. japonica occurred in the mid to upper portions of the intertidal zone, with larger expanses occurring in the lower estuary. While the upper estuary did contain suitable habitat, most areas were not as large as in the lower estuary, due to inappropriate depth, a steeply sloping intertidal zone, and lower salinity. The lowest suitability values occurred below the lower intertidal zone, within the Yaquina River channel. The model was validated by comparison to a multi-year time series of Z. japonica maps, revealing a strong predictive capacity. Sensitivity analysis performed to evaluate the contribution of each parameter to the model prediction revealed that depth was the most important factor.
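A minimal sketch of such a 3-parameter suitability model: each environmental variable gets a trapezoidal sub-index in [0, 1] and the overall suitability is their product. The tolerance ranges coded below are hypothetical placeholders, not the calibrated values for Z. japonica.

```python
import numpy as np

def suitability(depth, slope, salinity):
    """3-parameter habitat suitability in [0, 1]: product of trapezoidal
    sub-indices for depth, nearshore slope and salinity."""
    def trap(x, lo0, lo1, hi1, hi0):
        # 0 outside [lo0, hi0], 1 on the plateau [lo1, hi1], linear ramps between
        return np.clip(np.minimum((x - lo0) / (lo1 - lo0),
                                  (hi0 - x) / (hi0 - hi1)), 0.0, 1.0)
    s_depth = trap(depth, -0.5, 0.2, 1.5, 2.5)        # m relative to low water
    s_slope = trap(slope, -1.0, 0.0, 1.0, 5.0)        # %; gentle slopes favoured
    s_salin = trap(salinity, 10.0, 20.0, 32.0, 36.0)  # psu
    return s_depth * s_slope * s_salin

print(suitability(depth=1.0, slope=0.5, salinity=28.0))   # -> 1.0 (suitable)
```

The multiplicative form encodes the ecological assumption that any single unsuitable factor (e.g. wrong depth) vetoes the site, which is consistent with the depth-driven pattern reported above.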
Quasar spectral variability from the XMM-Newton serendipitous source catalogue
NASA Astrophysics Data System (ADS)
Serafinelli, R.; Vagnetti, F.; Middei, R.
2017-04-01
Context. X-ray spectral variability analyses of active galactic nuclei (AGN) with moderate luminosities and redshifts typically show a "softer when brighter" behaviour. Such a trend has rarely been investigated for high-luminosity AGNs (Lbol ≳ 10^44 erg/s), nor for a wider redshift range (e.g. 0 ≲ z ≲ 5). Aims: We present an analysis of spectral variability based on a large sample of 2700 quasars, measured at several different epochs, extracted from the fifth release of the XMM-Newton Serendipitous Source Catalogue. Methods: We quantified the spectral variability through the parameter β, defined as the ratio between the change in the photon index Γ and the corresponding logarithmic flux variation, β = -ΔΓ/Δlog F_X. Results: Our analysis confirms a softer when brighter behaviour for our sample, extending the previously found general trend to high luminosity and redshift. We estimate an ensemble value of the spectral variability parameter β = -0.69 ± 0.03. We do not find any dependence of β on redshift, X-ray luminosity, black hole mass or Eddington ratio. A subsample of radio-loud sources shows a smaller spectral variability parameter. There is also some change with the X-ray flux, with smaller β (in absolute value) for brighter sources. We also find significant correlations for a small number of individual sources, indicating more negative values for some sources.
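For a single source with multi-epoch measurements, β can be estimated as the (sign-flipped) slope of Γ against log F_X. The pairwise least-squares form below is a minimal sketch with invented data, not the ensemble estimator used in the paper.

```python
import numpy as np

def beta_single_source(gamma, log_fx):
    """Spectral variability parameter for one source: least-squares slope of
    Gamma vs. log F_X over all epoch pairs, with beta = -dGamma/dlogF."""
    i, j = np.triu_indices(len(gamma), k=1)
    d_gamma = gamma[i] - gamma[j]
    d_logf = log_fx[i] - log_fx[j]
    slope = np.sum(d_logf * d_gamma) / np.sum(d_logf**2)
    return -slope       # beta < 0: softer (larger Gamma) when brighter

gamma = np.array([1.8, 1.9, 2.0, 1.85])          # photon indices per epoch
log_fx = np.array([-12.1, -11.9, -11.7, -12.0])  # log fluxes per epoch
print(beta_single_source(gamma, log_fx))
```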
Xue, Zhe; Song, Guan-Yang; Liu, Xin; Zhang, Hui; Wu, Guan; Qian, Yi; Feng, Hua
2018-03-20
The purpose of the study was to quantify the patellar J sign using traditional computed tomography (CT) scans. Fifty-three patients (fifty-three knees) who suffered from recurrent patellar instability were included and analyzed. The patellar J sign was evaluated pre-operatively during active knee flexion and extension. It was defined as positive when there was obvious lateral patellar translation, and negative when there was not. The CT scans were performed in all patients with full knee extension, and the parameters including the bisect offset index (BOI), patellar-trochlear-groove (PTG) distance, and patellar lateral tilt angle (PLTA) were measured on the axial slices. All three parameters were compared between the J sign-positive group (study group) and the J sign-negative group (control group). In addition, the optimal thresholds of the three CT scan parameters for predicting a positive patellar J sign were determined with receiver operating characteristic (ROC) curves, and the diagnostic values were assessed by the area under the curve (AUC). Among the fifty-three patients (fifty-three knees), thirty-seven (70%) showed obvious lateral patellar translation, which was defined as a positive J sign (study group), and the remaining sixteen (30%), who showed no lateral translation, were defined as having a negative J sign (control group). The mean values of the three CT parameters in the study group were all significantly larger than in the control group, including BOI (121 ± 28% vs 88 ± 12%, P = 0.038), PTG distance (5.2 ± 6.6 mm vs -4.4 ± 5.2 mm, P < 0.05), and PLTA (34.9 ± 10.5° vs 25.7 ± 3.4°, P = 0.001). Furthermore, the ROC analysis showed that the AUC of the BOI was the largest (AUC = 0.906) among the three parameters, and the optimal threshold of the BOI to predict a positive patellar J sign was 97.5% (sensitivity = 83.3%, specificity = 87.5%). In this study, the prevalence of a positive patellar J sign was 70%. The BOI measured from axial CT scans of the knee joint can be used as an appropriate predictor to differentiate a positive J sign from a negative J sign, highlighting that excessive lateral patellar translation on axial CT indicates a positive patellar J sign. Level of evidence: IV.
NASA Technical Reports Server (NTRS)
Panontin, Tina L.; Sheppard, Sheri D.
1994-01-01
The use of small laboratory specimens to predict the integrity of large, complex structures relies on the validity of single parameter fracture mechanics. Unfortunately, the constraint loss associated with large scale yielding, whether in a laboratory specimen because of its small size or in a structure because it contains shallow flaws loaded in tension, can cause the breakdown of classical fracture mechanics and the loss of transferability of critical, global fracture parameters. Although the issue of constraint loss can be eliminated by testing actual structural configurations, such an approach can be prohibitively costly. Hence, a methodology that can correct global fracture parameters for constraint effects is desirable. This research uses micromechanical analyses to define the relationship between global, ductile fracture initiation parameters and constraint in two specimen geometries (SECT and SECB with varying a/w ratios) and one structural geometry (circumferentially cracked pipe). Two local fracture criteria corresponding to ductile fracture micromechanisms are evaluated: a constraint-modified, critical strain criterion for void coalescence proposed by Hancock and Cowling and a critical void ratio criterion for void growth based on the Rice and Tracey model. Crack initiation is assumed to occur when the critical value in each case is reached over some critical length. The primary material of interest is A516-70, a high-hardening pressure vessel steel sensitive to constraint; however, a low-hardening structural steel that is less sensitive to constraint is also being studied. Critical values of local fracture parameters are obtained by numerical analysis and experimental testing of circumferentially notched tensile specimens of varying constraint (e.g., notch radius). These parameters are then used in conjunction with large strain, large deformation, two- and three-dimensional finite element analyses of the geometries listed above to predict crack initiation loads and to calculate the associated (critical) global fracture parameters. The loads are verified experimentally, and microscopy is used to measure pre-crack length, crack tip opening displacement (CTOD), and the amount of stable crack growth. Results for A516-70 steel indicate that the constraint-modified, critical strain criterion with a critical length approximately equal to the grain size (0.0025 inch) provides accurate predictions of crack initiation. The critical void growth criterion is shown to considerably underpredict crack initiation loads with the same critical length. The relationship between the critical value of the J-integral for ductile crack initiation and crack depth for SECT and SECB specimens has been determined using the constraint-modified, critical strain criterion, demonstrating that this micromechanical model can be used to correct in-plane constraint effects due to crack depth and bending vs. tension loading. Finally, the relationship developed for the SECT specimens is used to predict the behavior of circumferentially cracked pipe specimens.
Consistent van der Waals Radii for the Whole Main Group
Mantina, Manjeera; Chamberlin, Adam C.; Valero, Rosendo; Cramer, Christopher J.; Truhlar, Donald G.
2013-01-01
Atomic radii are not precisely defined but are nevertheless widely used parameters in modeling and understanding molecular structure and interactions. The van der Waals radii determined by Bondi from molecular crystals and noble gas crystals are the most widely used values, but Bondi recommended radius values for only 28 of the 44 main-group elements in the periodic table. In the present article we present atomic radii for the other 16; these new radii were determined in a way designed to be compatible with Bondi’s scale. The method chosen is a set of two-parameter correlations of Bondi’s radii with repulsive-wall distances calculated by relativistic coupled-cluster electronic structure calculations. The newly determined radii (in Å) are Be, 1.53; B, 1.92; Al, 1.84; Ca, 2.31; Ge, 2.11; Rb, 3.03; Sr, 2.50; Sb, 2.06; Cs, 3.43; Ba, 2.68; Bi, 2.07; Po, 1.97; At, 2.02; Rn, 2.20; Fr, 3.48; and Ra, 2.83. PMID:19382751
Consistent van der Waals radii for the whole main group.
Mantina, Manjeera; Chamberlin, Adam C; Valero, Rosendo; Cramer, Christopher J; Truhlar, Donald G
2009-05-14
Atomic radii are not precisely defined but are nevertheless widely used parameters in modeling and understanding molecular structure and interactions. The van der Waals radii determined by Bondi from molecular crystals and data for gases are the most widely used values, but Bondi recommended radius values for only 28 of the 44 main-group elements in the periodic table. In the present Article, we present atomic radii for the other 16; these new radii were determined in a way designed to be compatible with Bondi's scale. The method chosen is a set of two-parameter correlations of Bondi's radii with repulsive-wall distances calculated by relativistic coupled-cluster electronic structure calculations. The newly determined radii (in Å) are Be, 1.53; B, 1.92; Al, 1.84; Ca, 2.31; Ge, 2.11; Rb, 3.03; Sr, 2.49; Sb, 2.06; Cs, 3.43; Ba, 2.68; Bi, 2.07; Po, 1.97; At, 2.02; Rn, 2.20; Fr, 3.48; and Ra, 2.83.
NASA Astrophysics Data System (ADS)
Fallica, Roberto; Stowers, Jason K.; Grenville, Andrew; Frommhold, Andreas; Robinson, Alex P. G.; Ekinci, Yasin
2016-07-01
The dynamic absorption coefficients of several chemically amplified resists (CAR) and non-CAR extreme ultraviolet (EUV) photoresists are measured experimentally using a specifically developed setup in transmission mode at the x-ray interference lithography beamline of the Swiss Light Source. The absorption coefficient α and the Dill parameters A, B, and C were measured with unprecedented accuracy. In general, the α of each resist matches very closely the theoretical value calculated from elemental densities and absorption coefficients, although exceptions are observed. In addition, through the direct measurements of the absorption coefficients and dose-to-clear values, we introduce a new figure of merit called chemical sensitivity to account for all the postabsorption chemical reactions ongoing in the resist; this also predicts a quantitative clearing volume and clearing radius due to photon absorption in the resist. These parameters may help provide deeper insight into the underlying mechanisms of EUV exposure; the concepts of clearing volume and clearing radius are then defined and quantitatively calculated.
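The Dill A, B, C parameters mentioned above enter the standard exposure kinetics, which the following minimal sketch integrates numerically: the bleachable (A) and non-bleachable (B) absorption attenuate the intensity in depth, while C drives the local sensitizer conversion. The discretization and the unit conventions in the comments are illustrative assumptions, not the measurement procedure of the paper.

```python
import numpy as np

def dill_exposure(A, B, C, I0, thickness, dose, nz=200, nt=400):
    """Numerically integrate the standard Dill exposure equations
        dI/dz = -I * (A*M + B),   dM/dt = -C * I * M
    where M is the remaining normalized sensitizer fraction. Unit choices
    (A, B in 1/um, thickness in um, C in cm^2/mJ, I0 in mW/cm^2, dose in
    mJ/cm^2) are illustrative."""
    z = np.linspace(0.0, thickness, nz)
    dz = z[1] - z[0]
    dt = dose / I0 / nt                               # total exposure time / steps
    M = np.ones(nz)
    for _ in range(nt):
        I = I0 * np.exp(-np.cumsum(A * M + B) * dz)   # depth attenuation
        M *= np.exp(-C * I * dt)                      # local bleaching
    return M        # sensitizer remaining vs. depth after the full dose

print(dill_exposure(A=2.0, B=0.5, C=0.05, I0=10.0, thickness=0.05, dose=20.0)[-1])
```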
A biomechanical triphasic approach to the transport of nondilute solutions in articular cartilage.
Abazari, Alireza; Elliott, Janet A W; Law, Garson K; McGann, Locksley E; Jomha, Nadr M
2009-12-16
Biomechanical models for biological tissues such as articular cartilage generally contain an ideal, dilute solution assumption. In this article, a biomechanical triphasic model of cartilage is described that includes nondilute treatment of concentrated solutions such as those applied in vitrification of biological tissues. The chemical potential equations of the triphasic model are modified and the transport equations are adjusted for the volume fraction and frictional coefficients of the solutes that are not negligible in such solutions. Four transport parameters, i.e., water permeability, solute permeability, diffusion coefficient of solute in solvent within the cartilage, and the cartilage stiffness modulus, are defined as four degrees of freedom for the model. Water and solute transport in cartilage were simulated using the model and predictions of average concentration increase and cartilage weight were fit to experimental data to obtain the values of the four transport parameters. As far as we know, this is the first study to formulate the solvent and solute transport equations of nondilute solutions in the cartilage matrix. It is shown that the values obtained for the transport parameters are within the ranges reported in the available literature, which confirms the proposed model approach.
Identifying Bearing Rotordynamic Coefficients using an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Miller, Brad A.; Howard, Samuel A.
2008-01-01
An Extended Kalman Filter is developed to estimate the linearized direct and indirect stiffness and damping force coefficients for bearings in rotor-dynamic applications from noisy measurements of the shaft displacement in response to imbalance and impact excitation. The bearing properties are modeled as stochastic random variables using a Gauss-Markov model. Noise terms are introduced into the system model to account for all of the estimation error, including modeling errors and uncertainties and the propagation of measurement errors into the parameter estimates. The system model contains two user-defined parameters that can be tuned to improve the filter's performance; these parameters correspond to the covariance of the system and measurement noise variables. The filter is also strongly influenced by the initial values of the states and the error covariance matrix. The filter is demonstrated using numerically simulated data for a rotor-bearing system with two identical bearings, which reduces the number of unknown linear dynamic coefficients to eight. The filter estimates for the direct damping coefficients and all four stiffness coefficients correlated well with actual values, whereas the estimates for the cross-coupled damping coefficients were the least accurate.
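A generic predict/update cycle of such a filter, with the state augmented by the unknown bearing coefficients, looks as follows. Here f, h and their Jacobians F, H stand in for the rotor-bearing dynamics and measurement model, and Q, R are the two user-tuned noise covariances mentioned above; this is a textbook EKF sketch, not the paper's specific formulation.

```python
import numpy as np

def ekf_step(x, P, z, f, h, F, H, Q, R):
    """One predict/update cycle of an Extended Kalman Filter whose state is
    augmented with the unknown bearing coefficients (random-constant +
    noise propagation). f, h: state/measurement functions; F, H: their
    Jacobian functions; Q, R: tunable noise covariances."""
    x_pred = f(x)                                   # propagate state
    Fk = F(x)
    P_pred = Fk @ P @ Fk.T + Q                      # propagate covariance
    Hk = H(x_pred)
    y = z - h(x_pred)                               # innovation (shaft disp.)
    S = Hk @ P_pred @ Hk.T + R
    K = P_pred @ Hk.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new
```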
Complex mode indication function and its applications to spatial domain parameter estimation
NASA Astrophysics Data System (ADS)
Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.
1988-10-01
This paper introduces the concept of the Complex Mode Indication Function (CMIF) and its application in spatial domain parameter estimation. The concept of CMIF is developed by performing singular value decomposition (SVD) of the Frequency Response Function (FRF) matrix at each spectral line. The CMIF is defined as the eigenvalues, which are the square of the singular values, solved from the normal matrix formed from the FRF matrix, [H(jω)]^H [H(jω)], at each spectral line. The CMIF appears to be a simple and efficient method for identifying the modes of the complex system. The CMIF identifies modes by showing the physical magnitude of each mode and the damped natural frequency for each root. Since multiple reference data is applied in CMIF, repeated roots can be detected. The CMIF also gives global modal parameters, such as damped natural frequencies, mode shapes and modal participation vectors. Since CMIF works in the spatial domain, uneven frequency spacing data such as data from spatial sine testing can be used. A second-stage procedure for accurate damped natural frequency and damping estimation as well as mode shape scaling is also discussed in this paper.
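Computationally, the CMIF is a one-liner over a stack of FRF matrices; the sketch below uses random data only to show the shapes involved.

```python
import numpy as np

def cmif(frf):
    """Complex Mode Indication Function: squared singular values of the FRF
    matrix at every spectral line, i.e. the eigenvalues of [H]^H [H].
    frf has shape (n_freq, n_outputs, n_references)."""
    s = np.linalg.svd(frf, compute_uv=False)   # broadcasts over frequency
    return s**2                                # one CMIF curve per singular value

# peaks of the CMIF curves indicate damped natural frequencies; a simultaneous
# peak in a second curve at the same line reveals a repeated/close root
rng = np.random.default_rng(0)
H = rng.normal(size=(512, 6, 2)) + 1j * rng.normal(size=(512, 6, 2))
print(cmif(H).shape)        # (512, 2): two curves, thanks to two references
```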
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delgado-Acosta, E. G.; Napsuciale, Mauro; Rodriguez, Simon
We develop a second order formalism for massive spin 1/2 fermions based on the projection over Poincare invariant subspaces in the (1/2,0)⊕(0,1/2) representation of the homogeneous Lorentz group. Using the U(1)_em gauge principle we obtain a second order description for the electromagnetic interactions of a spin 1/2 fermion with two free parameters, the gyromagnetic factor g and a parameter ξ related to odd-parity Lorentz structures. We calculate Compton scattering in this formalism. In the particular case g=2, ξ=0, and for states with well-defined parity, we recover Dirac results. In general, we find the correct classical limit and a finite value r_c^2 for the forward differential cross section, independent of the photon energy and of the value of the parameters g and ξ. The differential cross section vanishes at high energies for all g, ξ except in the forward direction. The total cross section at high energies vanishes only for g=2, ξ=0. We argue that this formalism is more convenient than Dirac theory in the description of low energy electromagnetic properties of baryons and illustrate the point with the proton case.
A Biomechanical Triphasic Approach to the Transport of Nondilute Solutions in Articular Cartilage
Abazari, Alireza; Elliott, Janet A.W.; Law, Garson K.; McGann, Locksley E.; Jomha, Nadr M.
2009-01-01
Biomechanical models for biological tissues such as articular cartilage generally contain an ideal, dilute solution assumption. In this article, a biomechanical triphasic model of cartilage is described that includes nondilute treatment of concentrated solutions such as those applied in vitrification of biological tissues. The chemical potential equations of the triphasic model are modified and the transport equations are adjusted for the volume fraction and frictional coefficients of the solutes that are not negligible in such solutions. Four transport parameters, i.e., water permeability, solute permeability, diffusion coefficient of solute in solvent within the cartilage, and the cartilage stiffness modulus, are defined as four degrees of freedom for the model. Water and solute transport in cartilage were simulated using the model and predictions of average concentration increase and cartilage weight were fit to experimental data to obtain the values of the four transport parameters. As far as we know, this is the first study to formulate the solvent and solute transport equations of nondilute solutions in the cartilage matrix. It is shown that the values obtained for the transport parameters are within the ranges reported in the available literature, which confirms the proposed model approach. PMID:20006942
Model based recovery of histological parameters starting from reflectance spectra of the colon
NASA Astrophysics Data System (ADS)
Hidovic-Rowe, Dzena; Claridge, Ela
2005-06-01
Colon cancer alters the tissue macro-architecture. Changes include increase in blood content and distortion of the collagen matrix, which affect the reflectance spectra of the colon and its colouration. We have developed a physics-based model for predicting colon tissue spectra. The colon structure is represented by three layers: mucosa, submucosa and smooth muscle. Each layer is represented by parameters defining its optical properties: molar concentration and absorption coefficients of haemoglobins, describing absorption of light; size and density of collagen fibres; refractive index of the medium and collagen fibres, describing light scattering; and layer thicknesses. Spectra were calculated using the Monte Carlo method. The output of the model was compared to experimental data comprising 50 spectra acquired in vivo from normal tissue. The extracted histological parameters showed good agreement with known values. An experiment was carried out to study the differences between normal and abnormal tissue. These were characterised by increased blood content and decreased collagen density, which is consistent with known differences between normal and abnormal tissue. This suggests that histological quantities of the colon could be computed from its reflectance spectra. The method is likely to have diagnostic value in the early detection of colon cancer.
Cawello, Willi; Schäfer, Carina
2014-08-01
Frequent plasma sampling to monitor the pharmacokinetic (PK) profile of antiepileptic drugs (AEDs) is invasive, costly and time consuming. For drugs with a well-defined PK profile, such as the AED lacosamide, equations can accurately approximate PK parameters from one steady-state plasma sample. Equations were derived to approximate the steady-state peak and trough lacosamide plasma concentrations (Cpeak,ss and Ctrough,ss, respectively) and the area under the concentration-time curve during the dosing interval (AUCτ,ss) from one plasma sample. Lacosamide (ka: ~2 h^-1; ke: ~0.05 h^-1, corresponding to a half-life of 13 h) was calculated to reach Cpeak,ss after ~1 h (tmax,ss). The equations were validated by comparing the approximations to reference PK parameters obtained from single plasma samples drawn 3-12 h following lacosamide administration, using data from a double-blind, placebo-controlled, parallel-group PK study. Values of relative bias (accuracy) between -15% and +15%, and root mean square error (RMSE) values ≤15% (precision), were considered acceptable for validation. Thirty-five healthy subjects (12 young males, 11 elderly males, 12 elderly females) received lacosamide 100 mg/day for 4.5 days. Equation-derived PK values were compared to the reference mean Cpeak,ss, Ctrough,ss and AUCτ,ss values. The equation-derived PK data had a precision of 6.2% and accuracies of -8.0%, 2.9%, and -0.11%, respectively. Equation-derived versus reference PK values for individual samples obtained 3-12 h after lacosamide administration showed a correlation (R2) range of 0.88-0.97 for AUCτ,ss; the correlation range for Cpeak,ss and Ctrough,ss was 0.65-0.87. Error analyses for individual sample comparisons were independent of time. The derived equations approximated lacosamide Cpeak,ss, Ctrough,ss and AUCτ,ss using one steady-state plasma sample within the validation range. The approximated PK parameters were within the accepted validation criteria when compared to reference PK values. Copyright © 2014 Elsevier B.V. All rights reserved.
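One way such single-sample equations can work is sketched below with a standard one-compartment steady-state profile: the unit-amplitude shape is fixed by ka, ke and the dosing interval, a single observed concentration scales it, and Cpeak,ss, Ctrough,ss and AUCτ,ss follow. This is a hedged reconstruction of the general approach, not the paper's published equations.

```python
import numpy as np

KA, KE, TAU = 2.0, 0.05, 24.0  # absorption, elimination (1/h); dosing interval (h)

def shape(t):
    """Unit-amplitude steady-state profile of a one-compartment model with
    first-order absorption (superposition of all past doses)."""
    return (np.exp(-KE * t) / (1.0 - np.exp(-KE * TAU))
            - np.exp(-KA * t) / (1.0 - np.exp(-KA * TAU)))

def pk_from_single_sample(c_obs, t_obs):
    """Approximate Cpeak,ss, Ctrough,ss and AUCtau,ss from one steady-state
    sample drawn t_obs hours after dosing (illustrative sketch)."""
    amp = c_obs / shape(t_obs)          # scale the profile through the sample
    t = np.linspace(0.0, TAU, 2001)
    prof = amp * shape(t)
    auc = float(np.sum((prof[1:] + prof[:-1]) / 2.0) * (t[1] - t[0]))
    return prof.max(), prof[-1], auc    # Cpeak,ss, Ctrough,ss, AUCtau,ss

print(pk_from_single_sample(c_obs=8.0, t_obs=6.0))  # e.g. 8 ug/mL drawn at 6 h
```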
A fuzzy set approach for reliability calculation of valve controlling electric actuators
NASA Astrophysics Data System (ADS)
Karmachev, D. P.; Yefremov, A. A.; Luneva, E. E.
2017-02-01
Oil and gas equipment, and electric actuators in particular, frequently operate in various modes and under dynamic environmental conditions. These factors affect equipment reliability measures in a vague, uncertain way. To eliminate this ambiguity, reliability model parameters can be defined as fuzzy numbers. We suggest a technique that allows constructing fundamental fuzzy-valued performance reliability measures based on an analysis of electric actuator failure data, organized by the amount of work completed before the failure instead of the failure time. This paper also provides a computation example of fuzzy-valued reliability and hazard rate functions, assuming a Kumaraswamy complementary Weibull geometric distribution as the lifetime (reliability) model for the electric actuators.
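The mechanics of fuzzy-valued reliability can be sketched with alpha-cuts. For brevity the sketch uses a plain Weibull reliability function with a triangular fuzzy scale parameter, rather than the Kumaraswamy complementary Weibull geometric model of the paper, and takes accumulated work w as the argument, as proposed above.

```python
import numpy as np

def fuzzy_reliability(w, eta_tri, beta, alphas=(0.0, 0.5, 1.0)):
    """Fuzzy-valued Weibull reliability R(w) = exp(-(w/eta)^beta), with the
    scale parameter eta a triangular fuzzy number (lo, mode, hi) and w the
    accumulated work before failure. Returns an interval per alpha-cut."""
    lo, mode, hi = eta_tri
    cuts = {}
    for a in alphas:
        eta_lo = lo + a * (mode - lo)          # alpha-cut of the fuzzy eta
        eta_hi = hi - a * (hi - mode)
        # R is monotonically increasing in eta, so the cut maps to an interval
        cuts[a] = (float(np.exp(-(w / eta_lo) ** beta)),
                   float(np.exp(-(w / eta_hi) ** beta)))
    return cuts

print(fuzzy_reliability(w=800.0, eta_tri=(900.0, 1000.0, 1150.0), beta=1.8))
```

At alpha = 1 the interval collapses to the crisp (modal) reliability; lower alpha levels widen the interval, expressing the vagueness of the operating conditions.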
Parrish, Rudolph S.; Smith, Charles N.
1990-01-01
A quantitative method is described for testing whether model predictions fall within a specified factor of true values. The technique is based on classical theory for confidence regions on unknown population parameters and can be related to hypothesis testing in both univariate and multivariate situations. A capability index is defined that can be used as a measure of predictive capability of a model, and its properties are discussed. The testing approach and the capability index should facilitate model validation efforts and permit comparisons among competing models. An example is given for a pesticide leaching model that predicts chemical concentrations in the soil profile.
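The flavour of such a test can be shown in the univariate case: whether predictions fall within a factor f of the true values is assessed through a confidence interval on the mean log prediction/observation ratio. The "capability" number returned below is an illustrative stand-in, not the paper's capability index.

```python
import numpy as np
from scipy import stats

def within_factor_test(pred, obs, factor=2.0, alpha=0.05):
    """Test whether predictions fall within a given factor of true values via
    a confidence interval on the mean log accuracy ratio (univariate sketch)."""
    d = np.log(np.asarray(pred) / np.asarray(obs))   # log accuracy ratios
    n = len(d)
    ci = stats.t.interval(1 - alpha, n - 1, loc=d.mean(),
                          scale=d.std(ddof=1) / np.sqrt(n))
    ok = ci[0] > -np.log(factor) and ci[1] < np.log(factor)
    capability = np.log(factor) / max(np.abs(d).max(), 1e-12)  # >1 is favourable
    return ok, ci, capability

pred = np.array([1.2, 0.8, 2.1, 1.5, 0.9])
obs = np.array([1.0, 1.0, 2.0, 1.3, 1.1])
print(within_factor_test(pred, obs, factor=2.0))
```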
Masoli, Stefano; Rizza, Martina F; Sgritta, Martina; Van Geit, Werner; Schürmann, Felix; D'Angelo, Egidio
2017-01-01
In realistic neuronal modeling, once the ionic channel complement has been defined, the maximum ionic conductance (G_i-max) values need to be tuned in order to match the firing pattern revealed by electrophysiological recordings. Recently, selection/mutation genetic algorithms have been proposed to efficiently and automatically tune these parameters. Nonetheless, since similar firing patterns can be achieved through different combinations of G_i-max values, it is not clear how well these algorithms approximate the corresponding properties of real cells. Here we have evaluated the issue by exploiting a unique opportunity offered by the cerebellar granule cell (GrC), which is electrotonically compact and has therefore allowed the direct experimental measurement of ionic currents. Previous models were constructed using empirical tuning of G_i-max values to match the original data set. Here, by using repetitive discharge patterns as a template, the optimization procedure yielded models that closely approximated the experimental G_i-max values. These models, in addition to repetitive firing, captured additional features, including inward rectification, near-threshold oscillations, and resonance, which were not used as template features. Thus, parameter optimization using genetic algorithms provided an efficient modeling strategy for reconstructing the biophysical properties of neurons and for the subsequent reconstruction of large-scale neuronal network models.
A second-order shock-expansion method applicable to bodies of revolution near zero lift
NASA Technical Reports Server (NTRS)
1957-01-01
A second-order shock-expansion method applicable to bodies of revolution is developed by the use of the predictions of the generalized shock-expansion method in combination with characteristics theory. Equations defining the zero-lift pressure distributions and the normal-force and pitching-moment derivatives are derived. Comparisons with experimental results show that the method is applicable at values of the similarity parameter, the ratio of free-stream Mach number to nose fineness ratio, from about 0.4 to 2.
Seaweeds from the Portuguese coast: A potential food resource?
NASA Astrophysics Data System (ADS)
Soares, C.; Machado, S.; Vieira, E. F.; Morais, S.; Teles, M. T.; Correia, M.; Carvalho, A.; Domingues, V. F.; Ramalhosa, M. J.; Delerue-Matos, C.; Antunes, F.
2017-09-01
The Portuguese coast offers a large number of potentially edible seaweeds that are underexploited. The identification of different macroalgae species and their availability in the northern and central coast of the continental territory was assessed. The nutritional value of these seaweeds is discussed based on a literature review (where available) focused on data for species collected in Portugal, with the aim of defining the most important nutritional parameters that should be characterized in the samples. Possible health concerns related to the presence of contaminants are also considered.
Tool Efficiency Analysis model research in SEMI industry
NASA Astrophysics Data System (ADS)
Lei, Ma; Nana, Zhang; Zhongqiu, Zhang
2018-06-01
One of the key goals in the semiconductor (SEMI) industry is to improve equipment throughput and to maximize equipment production efficiency. Based on the SEMI standards for semiconductor equipment control, this paper defines the transition rules between different tool states and presents a Tool Efficiency Analysis (TEA) system model that analyzes tool performance automatically based on a finite state machine. The system was applied to fab tools and its effectiveness was verified; it yielded the parameter values used to measure equipment performance, together with suggestions for improvement.
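A minimal sketch of such a state machine: tool states and transition rules in the spirit of SEMI E10 state definitions (the specific states, events and rules below are illustrative, not the paper's), with a replay function that accumulates time per state to yield a utilization-style efficiency figure.

```python
from enum import Enum, auto

class ToolState(Enum):
    PRODUCTIVE = auto()
    STANDBY = auto()
    SCHEDULED_DOWN = auto()
    UNSCHEDULED_DOWN = auto()

# transition rules between tool states (illustrative SEMI E10-style subset)
RULES = {
    (ToolState.STANDBY, "start_lot"): ToolState.PRODUCTIVE,
    (ToolState.PRODUCTIVE, "lot_done"): ToolState.STANDBY,
    (ToolState.PRODUCTIVE, "alarm"): ToolState.UNSCHEDULED_DOWN,
    (ToolState.UNSCHEDULED_DOWN, "repaired"): ToolState.STANDBY,
    (ToolState.STANDBY, "pm_start"): ToolState.SCHEDULED_DOWN,
    (ToolState.SCHEDULED_DOWN, "pm_done"): ToolState.STANDBY,
}

def productive_ratio(events):
    """Replay (timestamp, event) pairs through the state machine, accumulate
    time per state, and return the productive share of total time."""
    state, t_prev = ToolState.STANDBY, events[0][0]
    time_in = {s: 0.0 for s in ToolState}
    for t, ev in events:
        time_in[state] += t - t_prev
        state, t_prev = RULES.get((state, ev), state), t
    total = sum(time_in.values()) or 1.0
    return time_in[ToolState.PRODUCTIVE] / total

print(productive_ratio([(0, "start_lot"), (95, "alarm"),
                        (110, "repaired"), (115, "start_lot"), (200, "lot_done")]))
```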
Cheng, Xiaofei; Ni, Bin; Liu, Qi; Chen, Jinshui; Guan, Huapeng
2013-01-01
The goal of this study was to determine which paraspinal approach provided a better transverse screw angle (TSA) for each vertebral level in lower lumbar surgery. Axial computed tomography (CT) images of 100 patients, from L3 to S1, were used to measure the angulation parameters, including transverse pedicle angle (TPA) and transverse cleavage plane angle (TCPA) of entry from the two approaches. The difference value between TCPA and TPA, defined as difference angle (DA), was calculated. Statistical differences of DA obtained by the two approaches and the angulation parameters between sexes, and the correlation between each angulation parameter and age or body mass index (BMI) were analyzed. TPA ranged from about 16° at L3 to 30° at S1. TCPA through the Wiltse's and Weaver's approach ranged from about -10° and 25° at L3 to 12° and 32° at S1, respectively. The absolute values of DA through the Weaver's approach were significantly lower than those through the Wiltse's approach at each level. The angulation parameters showed no significant difference with sex and no significant correlation with age or BMI. In the lower lumbar vertebrae (L3-L5) and S1, pedicle screw placement through the Weaver's approach may more easily yield the preferred TSA consistent with TPA than that through the Wiltse's approach. The reference values obtained in this paper may be applied regardless of sex, age or BMI and the descriptive statistical results may be used as references for applying the two paraspinal approaches.
Crustal Fracturing Field and Presence of Fluid as Revealed by Seismic Anisotropy
NASA Astrophysics Data System (ADS)
Pastori, M.; Piccinini, D.; de Gori, P.; Margheriti, L.; Barchi, M. R.; di Bucci, D.
2010-12-01
In the last three years, we developed, tested and improved an automatic analysis code (Anisomat+) to calculate the shear wave splitting parameters: the fast polarization direction (φ) and the delay time (∂t). The code is a set of MatLab scripts able to retrieve crustal anisotropy parameters from three-component seismic recordings of local earthquakes using the horizontal-component cross-correlation method. The analysis procedure consists of choosing an appropriate frequency range, one that best highlights the signal containing the shear waves, and a time-window length on the seismogram centered on the S arrival (the temporal window contains at least one cycle of the S wave). The code was compared with two other automatic analysis codes (SPY and SHEBA) and tested on three Italian areas (Val d'Agri, Tiber Valley and the L'Aquila region) along the Apennine mountains. For each region we used the anisotropic parameters resulting from the automatic computation as a tool to determine the fracture field geometries connected with the active stress field. We compare the temporal variations of the anisotropic parameters to the evolution of the vp/vs ratio for the same seismicity. The anisotropic fast directions are used to define the active stress field (EDA model), and we find a general consistency between fast directions and the main stress indicators (focal mechanisms and borehole break-outs). The magnitude of the delay time is used to define the fracture field intensity, with higher values found in the volume where micro-seismicity occurs. Furthermore, we studied temporal variations of the anisotropic parameters and of the vp/vs ratio in order to assess whether fluids play an important role in the earthquake generation process. The close association of variations in the anisotropic and vp/vs parameters with seismicity rate changes supports the hypothesis that the background seismicity is influenced by fluctuations of pore fluid pressure in the rocks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allgood, G.O.; Dress, W.B.; Kercel, S.W.
1999-05-10
A major problem with cavitation in pumps and other hydraulic devices is that there is no effective method for detecting or predicting its inception. The traditional approach is to declare the pump in cavitation when the total head pressure drops by some arbitrary value (typically 3%) in response to a reduction in pump inlet pressure. However, the pump is already cavitating at this point. A method is needed in which cavitation events are captured as they occur and characterized by their process dynamics. The object of this research was to identify specific features of cavitation that could be used as a model-based descriptor in a context-dependent condition-based maintenance (CD-CBM) anticipatory prognostic and health assessment model. This descriptor was based on the physics of the phenomenon, capturing the salient features of the process dynamics. An important element of this concept is the development and formulation of the extended process feature vector, or model vector. This model-based descriptor encodes the specific information that describes the phenomenon and its dynamics and is formulated as a data structure consisting of several elements. The first is a descriptive model abstracting the phenomenon. The second is the parameter list associated with the functional model. The third is a figure of merit, a single number in [0,1] representing a confidence factor that the functional model and parameter list actually describe the observed data. Using this as a basis and applying it to the cavitation problem, any given location in a flow loop will have this data structure, differing in value but not content. The extended process feature vector is formulated as follows: [⟨model⟩, {parameter list}, confidence factor]. (1) For this study, the model that characterized cavitation was a chirped, exponentially decaying sinusoid. Using the parameters defined by this model, the parameter list included frequency, decay, and chirp rate. Based on this, the process feature vector has the form: [⟨chirped decaying sinusoid⟩, {frequency = a, decay = b, chirp = c}, cf = 0.80]. (2) In this experiment a reversible catastrophe was examined, so that the same catastrophe could be repeated to ensure the statistical significance of the data.
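As a sketch of Eq. (2), the chirped, exponentially decaying sinusoid can be fitted to a measured signal and packaged into the feature vector; the exact model form and the variance-based confidence factor are assumptions here, not the report's formulation:

```python
# Fit A*exp(-decay*t)*sin(2*pi*(f0 + chirp*t)*t) to a signal and build the
# [model, {parameters}, confidence factor] structure of Eqs. (1)-(2).
import numpy as np
from scipy.optimize import curve_fit

def chirped_sinusoid(t, amp, f0, decay, chirp):
    return amp * np.exp(-decay * t) * np.sin(2 * np.pi * (f0 + chirp * t) * t)

def feature_vector(t, signal):
    p0 = [signal.max(), 1.0 / (t[-1] - t[0]), 1.0, 0.0]     # crude initial guess
    params, _ = curve_fit(chirped_sinusoid, t, signal, p0=p0, maxfev=10000)
    residual = signal - chirped_sinusoid(t, *params)
    cf = max(0.0, 1.0 - np.var(residual) / np.var(signal))  # kept within [0, 1]
    return ["chirped-decaying-sinusoid",
            dict(zip(["frequency", "decay", "chirp"], params[1:])), cf]
```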
Zener Diode Compact Model Parameter Extraction Using Xyce-Dakota Optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buchheit, Thomas E.; Wilcox, Ian Zachary; Sandoval, Andrew J
This report presents a detailed process for compact model parameter extraction for DC circuit Zener diodes. Following the traditional approach of Zener diode parameter extraction, a circuit model representation is defined and then used to capture the different operational regions of a real diode's electrical behavior. The circuit model contains 9 parameters represented by resistors and characteristic diodes as circuit model elements. The process of initial parameter extraction, the identification of parameter values for the circuit model elements, is presented in a way that isolates the dependencies between certain electrical parameters and highlights both the empirical nature of the extraction and the portions of the real diode's physical behavior that the parameters are intended to represent. Optimization of the parameters, a necessary part of a robust parameter extraction process, is demonstrated using a Xyce-Dakota workflow, discussed in more detail in the report. Among other realizations during this systematic approach to electrical model parameter extraction, non-physical solutions are possible and can be difficult to avoid because of the interdependencies between the different parameters. The process steps described are fairly general and can be leveraged for other types of semiconductor device model extractions. Also included in the report are recommendations for experimental setups for generating optimal datasets for model extraction, and the Parameter Identification and Ranking Table (PIRT) for Zener diodes.
NASA Technical Reports Server (NTRS)
Ricko, Martina; Adler, Robert F.; Huffman, George J.
2016-01-01
Climatology and variations of recent mean and intense precipitation over a near-global (50°S-50°N) domain on monthly and annual time scales are analyzed. Daily precipitation, used to examine the effects of spatial and temporal coverage of intense precipitation, is derived from the current Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) 3B42 version 7 precipitation product, with high spatial and temporal resolution during 1998-2013. Intense precipitation is defined by several different parameters, such as a 95th percentile threshold of daily precipitation, a mean precipitation that exceeds that percentile, or a fixed threshold of daily precipitation value (e.g., 25 and 50 mm day⁻¹). All parameters are used to identify the main characteristics of spatial and temporal variation of intense precipitation. High correlations between the examined parameters are observed, especially between climatological monthly mean precipitation and intense precipitation, over both tropical land and ocean. Among the various parameters examined, the one best characterizing intense rainfall is the fraction of daily precipitation greater than or equal to 25 mm day⁻¹, defined as the ratio between the intense precipitation above the chosen threshold and the mean precipitation. Regions that experience an increase in mean precipitation likely experience a similar increase in intense precipitation, especially during El Niño-Southern Oscillation (ENSO) events. Improved knowledge of this intense precipitation regime and its strong connection to mean precipitation given by the fraction parameter can be used for monitoring intense rainfall and its intensity on global to regional scales.
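A minimal sketch of the fraction parameter singled out above, assuming a NumPy array of daily precipitation totals:

```python
# Fraction of total precipitation contributed by days at or above a fixed
# threshold (25 mm/day in the abstract); over the same period this equals the
# ratio of intense to mean precipitation.
import numpy as np

def intense_fraction(daily_precip_mm, threshold=25.0):
    total = daily_precip_mm.sum()
    intense = daily_precip_mm[daily_precip_mm >= threshold].sum()
    return intense / total if total > 0 else 0.0
```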
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arques, P.
1998-07-01
The relationships that seem to exist between energy and man are presented in this paper. Habitually, social coefficients are connected to the gross domestic product; some parameters with correlations are birth rate, infant mortality rate, death rate, literacy, etc. Along with energy these define the optimal energy consumption per capita; the author presents the correlation between these parameters and the energy consumed per capita. There exists a high correlation between energy consumption per capita and gross domestic product per capita. The set of parameters considered is correlated with similar values relative to these two parameters. Using data collected on a group of the different countries of the world, a table of 165 countries and 22 variables has been drawn up. From the [Country x variable] matrix, a correlation table is calculated and a factorial analysis is applied to this matrix. The first factorial plane comprises 57% of the information contained in this table. Results from this first factorial plane are presented. These parameters are analyzed: influence of a country's latitude on its inhabitants' consumption; relationship between consumed energy and gross domestic product; women's fertility rate; birth rate per 1000 population; sex ratio; life expectancy at birth; rate of literacy; death rate; population growth rate. Finally, it is difficult to define precise criteria for an optimal distribution of population according to age (although with a power consumption above 300 W per capita the population becomes younger), the birth rate per 1000 population, the total fertility rate per woman, and the population growth rate. The authors determine that the optimal energy is approximately between 200 W and 677 W per capita inclusive.
On the Concept of Varying Influence Radii for a Successive Corrections Objective Analysis
NASA Technical Reports Server (NTRS)
Achtemeier, Gary L.
1991-01-01
There has been a long-standing concept among those who use successive corrections objective analysis that the way to obtain the most accurate objective analysis is first to analyze for the long wavelengths and then to build in the details of the shorter wavelengths by successively decreasing the influence of the more distant observations upon the interpolated values. Using the Barnes method, the filter characteristics were compared for families of response curves that pass through a common point at a reference wavelength. It was found that the filter cutoff is a maximum if the filter parameters that determine the influence of observations are unchanged between the initial and correction passes. This information was used to define and test the following hypothesis: if accuracy is defined by how well the method retains desired wavelengths and removes undesired wavelengths, then the Barnes method gives the most accurate analyses if the filter parameters on the initial and correction passes are the same. This hypothesis does not follow the usual conceptual approach to successive corrections analysis.
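A minimal sketch of a two-pass Barnes analysis in which the Gaussian filter parameter is kept the same on the initial and correction passes, as the hypothesis above recommends; the grid, station, and kappa inputs are illustrative:

```python
# Two-pass Barnes objective analysis with an unchanged filter parameter kappa:
# an initial weighted-mean pass, then a correction pass on the observation
# residuals using the same Gaussian weights.
import numpy as np

def barnes_pass(grid_xy, obs_xy, obs_val, kappa):
    d2 = ((grid_xy[:, None, :] - obs_xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / kappa)                  # Gaussian distance weights
    return (w * obs_val).sum(1) / w.sum(1)   # weighted mean at each target point

def barnes_two_pass(grid_xy, obs_xy, obs_val, kappa):
    first = barnes_pass(grid_xy, obs_xy, obs_val, kappa)
    # evaluate the first-pass field at the stations, then analyze the residuals
    at_obs = barnes_pass(obs_xy, obs_xy, obs_val, kappa)
    return first + barnes_pass(grid_xy, obs_xy, obs_val - at_obs, kappa)
```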
Spatial Rack Drives Pitch Configurations: Essence and Content
NASA Astrophysics Data System (ADS)
Abadjieva, Emilia; Abadjiev, Valentin; Naganawa, Akihiro
2018-03-01
The practical realization of all types of mechanical motion converters is preceded by solving the task of their kinematic synthesis. In this way, the optimal values of the constant geometrical parameters of the chosen structure of the created mechanical system are determined. The sought result guarantees the preliminarily defined kinematic characteristics of the synthesized transmission and, in the first place, the law of motion transformation. The kinematic synthesis of mechanical transmissions is based on adequate mathematical modelling of the process of motion transformation and of the object realizing this transformation. The basic primitives of the mathematical models for synthesis upon a pitch contact point are geometric and kinematic pitch configurations. Their dimensions and mutual position in space are the input parameters for the processes of design and elaboration of the synthesized mechanical device. The study presented here is a brief review of the theory of pitch configurations. It is an independent scientific branch of the spatial gearing theory (the theory of hyperboloid gears). On this basis, the essence and content of the corresponding primitives, applicable to the synthesis of spatial rack drives, are defined.
The utility of rat jejunal permeability for biopharmaceutics classification system.
Zakeri-Milani, Parvin; Valizadeh, Hadi; Tajerzadeh, Hosnieh; Islambulchilar, Ziba
2009-12-01
The biopharmaceutical classification system has been developed to provide a scientific approach for classifying drug compounds based on their dose/solubility ratio and human intestinal permeability. Therefore in this study a new classification is presented, which is based on a correlation between rat and human intestinal permeability values. An in situ technique in rat jejunum was used to determine the effective intestinal permeability of the tested drugs. Then three dimensionless parameters - dose number, absorption number, and dissolution number (Do, An, and Dn) - were calculated for each drug. Four classes of drugs were defined, that is: class I, Do < 0.5, Peff(rat) > 5.09 × 10⁻⁵ cm/s; class II, Do > 1, Peff(rat) > 5.09 × 10⁻⁵ cm/s; class III, Do < 0.5, Peff(rat) < 4.2 × 10⁻⁵ cm/s; and class IV, Do > 1, Peff(rat) < 4.2 × 10⁻⁵ cm/s. A region of borderline drugs (0.5 < Do < 1, 4.2 × 10⁻⁵ cm/s < Peff(rat) < 5.09 × 10⁻⁵ cm/s) was also defined. According to the obtained results and the proposed classification, it is concluded that drugs can be categorized correctly based on dose number and their intestinal permeability values in the rat model using the single-pass intestinal perfusion technique. This classification enables us to specify defined characteristics for the intestinal absorption of all four classes using suitable cutoff points for both dose number and rat effective intestinal permeability values.
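A minimal sketch of the proposed classification, hard-coding the dose-number and rat-permeability cutoffs quoted above (permeability in cm/s):

```python
# Classify a drug from its dose number and rat effective permeability using
# the cutoffs defined in the abstract; values between the cutoffs fall in the
# borderline region.
def classify(dose_number, peff_rat):
    high_p, low_p = peff_rat > 5.09e-5, peff_rat < 4.2e-5
    if dose_number < 0.5 and high_p: return "I"
    if dose_number > 1 and high_p:   return "II"
    if dose_number < 0.5 and low_p:  return "III"
    if dose_number > 1 and low_p:    return "IV"
    return "borderline"
```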
Bansal, Sanjiv Kumar; Agarwal, Sarita; Daga, Mridul Kumar
2016-01-01
The high prevalence, severity, and prematurity of coronary artery disease (CAD) in the Indian population cannot be completely explained by the conventional lipid parameters and the existing lipid indices. The aims were to calculate the newly defined advanced atherogenic index (AAI) in premature CAD patients, compare it between cases and controls, and correlate its values with the existing indices. One hundred and twenty premature CAD patients and an equal number of age- and sex-matched healthy individuals were included in this study. The lipid profile and nonconventional lipid parameters like oxidized low density lipoprotein (OX LDL), small dense LDL (SD LDL), lipoprotein(a), apolipoprotein B (Apo B), and apolipoprotein A1 (Apo A1) were estimated and their values were used to define the AAI and the existing lipid indices like the AI, lipid tetrad index (LTI) and lipid pentad index (LPI). The mean age of cases and controls was 37.29 ± 4.50 and 36.13 ± 3.53 years, respectively. The value of AAI was highly significant in cases (3461.22 ± 45.20) as compared to controls (305.84 ± 21.80). AAI showed better statistical significance and correlation (P < 0.0001, r = 0.737) as compared to the earlier indices such as AI (P < 0.01, r = 0.52), LTI (P < 0.001, r = 0.677) and LPI (P < 0.001, r = 0.622) in premature CAD. The Kolmogorov D statistic and cumulative distribution function plot showed that AAI can discriminate cases and controls more accurately than the earlier indices. Statistically, AAI appears to be a better marker of consolidated lipid risk in premature CAD patients as compared to the earlier indices.
GilPavas, Edison; Molina-Tirado, Kevin; Gómez-García, Miguel Angel
2009-01-01
An electrocoagulation process was used for the treatment of oily wastewater generated by an automotive industry in Medellín (Colombia). An electrochemical cell consisting of four parallel electrodes (Fe and Al) in bipolar configuration was implemented. A multifactorial experimental design was used for evaluating the influence of several parameters including: type and arrangement of electrodes, pH, and current density. Oil and grease removal was defined as the response variable for the statistical analysis. Additionally, the BOD(5), COD, and TOC were monitored during the treatment process. According to the results, at the optimum parameter values (current density = 4.3 mA/cm(2), distance between electrodes = 1.5 cm, Fe as anode, and pH = 12) it was possible to reach ca. 95% oil and grease removal, with COD removal and mineralization of 87.4% and 70.6%, respectively. A final biodegradability (BOD(5)/COD) of 0.54 was reached.
Singular Instantons and Painlevé VI
NASA Astrophysics Data System (ADS)
Muñiz Manasliski, Richard
2016-06-01
We consider a two-parameter family of instantons, studied in [Sadun L., Comm. Math. Phys. 163 (1994), 257-291], invariant under the irreducible action of SU_2 on S^4 but not globally defined. We will see that these instantons produce solutions to a one-parameter family of Painlevé VI equations (P_{VI}) and we will give an explicit expression of the map between instantons and solutions to P_{VI}. The solutions are algebraic only for those values of the parameters which correspond to the instantons that can be extended to all of S^4. This work is a generalization of [Muñiz Manasliski R., Contemp. Math., Vol. 434, Amer. Math. Soc., Providence, RI, 2007, 215-222] and [Muñiz Manasliski R., J. Geom. Phys. 59 (2009), 1036-1047, arXiv:1602.07221], where instantons without singularities are studied.
A computer program to trace seismic ray distribution in complex two-dimensional geological models
Yacoub, Nazieh K.; Scott, James H.
1970-01-01
A computer program has been developed to trace seismic rays and their amplitudes and energies through complex two-dimensional geological models, for which boundaries between elastic units are defined by a series of digitized X-, Y-coordinate values. Input data for the program include problem identification, control parameters, model coordinates and elastic parameters for the elastic units. The program evaluates the partitioning of ray amplitude and energy at elastic boundaries and computes the total travel time, total travel distance and other parameters for rays arriving at the earth's surface. Instructions are given for punching program control cards and data cards, and for arranging input card decks. An example of printer output for a simple problem is presented. The program is written in FORTRAN IV language. The listing of the program is shown in the Appendix, with an example output from a CDC-6600 computer.
Influenza antiviral therapeutics.
Mayburd, Anatoly L
2010-01-01
In this review we conducted a landscaping study of therapeutic anti-influenza agents, limiting the scope by excluding vaccines. The resulting 2800 patent publications were classified into 23 distinct technological sectors. The mechanism of action and the promise and drawbacks of the corresponding technological sectors were explored on a comparative basis. A set of quantitative parameters was defined based on the landscaping procedure that appears to correlate with the practical success of a given class of therapeutics. Thus, the sectors not considered promising from the mechanistic side also displayed low values of the classifying parameters. The parameters were combined into a probabilistic Marketing Prediction Score, assessing the likelihood of a given sector producing a marketable product. The proposed analytical methodology may be useful for automatic search and assessment of technologies for the goals of acquisition, investment and competitive bidding. While not a substitute for an expert evaluation, it provides an initial assessment suitable for implementation with large-scale automated landscaping.
Experimental analysis of green roof substrate detention characteristics.
Yio, Marcus H N; Stovin, Virginia; Werdin, Jörg; Vesuviano, Gianni
2013-01-01
Green roofs may make an important contribution to urban stormwater management. Rainfall-runoff models are required to evaluate green roof responses to specific rainfall inputs. The roof's hydrological response is a function of its configuration, with the substrate - or growing media - providing both retention and detention of rainfall. The objective of the research described here is to quantify the detention effects due to green roof substrates, and to propose a suitable hydrological modelling approach. Laboratory results from experimental detention tests on green roof substrates are presented. It is shown that detention increases with substrate depth and as a result of increasing substrate organic content. Model structures based on reservoir routing are evaluated, and it is found that a one-parameter reservoir routing model coupled with a parameter that describes the delay to start of runoff best fits the observed data. Preliminary findings support the hypothesis that the reservoir routing parameter values can be defined from the substrate's physical characteristics.
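A minimal sketch of one plausible reading of the favored model structure, a single-parameter linear reservoir combined with a delay to the start of runoff; the routing constant, delay, and discretization are illustrative calibration choices, not the paper's fitted values:

```python
# Route a rainfall series through a linear reservoir (outflow proportional to
# storage) that releases nothing until a fixed delay has elapsed.
import numpy as np

def route(rain_mm, dt_min, k_per_min, delay_min):
    runoff, h = np.zeros_like(rain_mm, dtype=float), 0.0
    start = int(delay_min / dt_min)
    for i, r in enumerate(rain_mm):
        h += r                               # rainfall adds to substrate storage
        q = k_per_min * h * dt_min if i >= start else 0.0
        q = min(q, h)                        # cannot release more than is stored
        h -= q
        runoff[i] = q
    return runoff
```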
MRI Texture Analysis of Background Parenchymal Enhancement of the Breast
Woo, Jun; Amano, Maki; Yanagisawa, Fumi; Yamamoto, Hiroshi; Tani, Mayumi
2017-01-01
Purpose The purpose of this study was to determine texture parameters reflecting the background parenchymal enhancement (BPE) of the breast, which were acquired using texture analysis (TA). Methods We investigated 52 breasts of the 26 subjects who underwent dynamic contrast-enhanced MRI. One experienced reader scored BPE visually (i.e., minimal, mild, moderate, and marked). TA, including 12 texture parameters, was performed to distinguish the BPE scores quantitatively. Relationships between the visual BPE scores and texture parameters were evaluated using analysis of variance and receiver operating characteristic analysis. Results The variance and skewness of signal intensity were useful for differentiating between moderate and mild or minimal BPE, or between mild and minimal BPE, respectively, with a cutoff value of 356.7 for variance and 0.21 for skewness. Some TA features could be useful for distinguishing breast lesions from the BPE. Conclusion TA may be useful for quantifying the BPE of the breast. PMID:28812015
Phase space analysis for anisotropic universe with nonlinear bulk viscosity
NASA Astrophysics Data System (ADS)
Sharif, M.; Mumtaz, Saadia
2018-06-01
In this paper, we discuss the phase space analysis of a locally rotationally symmetric Bianchi type I universe model by taking a noninteracting mixture of dust-like and viscous radiation-like fluids whose viscous pressure satisfies a nonlinear version of the Israel-Stewart transport equation. An autonomous system of equations is established by defining normalized dimensionless variables. In order to investigate the stability of the system, we evaluate the corresponding critical points for different values of the parameters. We also compute the power-law scale factor, whose behavior indicates different phases of the universe model. It is found that our analysis does not provide complete immunity from fine-tuning because the exponentially expanding solution occurs only for a particular range of parameters. We conclude that stable solutions exist in the presence of a nonlinear model for bulk viscosity with different choices of the constant parameter m for the anisotropic universe.
Tokunaga self-similarity arises naturally from time invariance
NASA Astrophysics Data System (ADS)
Kovchegov, Yevgeniy; Zaliapin, Ilya
2018-04-01
The Tokunaga condition is an algebraic rule that provides a detailed description of the branching structure in a self-similar tree. Despite a solid empirical validation and practical convenience, the Tokunaga condition lacks a theoretical justification. Such a justification is suggested in this work. We define a geometric branching process G(s) that generates self-similar rooted trees. The main result establishes the equivalence between the invariance of G(s) with respect to a time shift and a one-parametric version of the Tokunaga condition. In the parameter region where the process satisfies the Tokunaga condition (and hence is time invariant), G(s) enjoys many of the symmetries observed in a critical binary Galton-Watson branching process and reproduces the latter for a particular parameter value.
Genuine non-self-averaging and ultraslow convergence in gelation.
Cho, Y S; Mazza, M G; Kahng, B; Nagler, J
2016-08-01
In irreversible aggregation processes droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, which includes multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.
A step-defined sedentary lifestyle index: <5000 steps/day.
Tudor-Locke, Catrine; Craig, Cora L; Thyfault, John P; Spence, John C
2013-02-01
Step counting (using pedometers or accelerometers) is widely accepted by researchers, practitioners, and the general public. Given the mounting evidence of the link between low steps/day and time spent in sedentary behaviours, how few steps/day some populations actually perform, and the growing interest in the potentially deleterious effects of excessive sedentary behaviours on health, an emerging question is "How many steps/day are too few?" This review examines the utility, appropriateness, and limitations of using a recurring candidate for a step-defined sedentary lifestyle index: <5000 steps/day. Adults taking <5000 steps/day are more likely to have a lower household income and be female, older, of African-American vs. European-American heritage, a current vs. never smoker, and (or) living with chronic disease and (or) disability. Little is known about how contextual factors (e.g., built environment) foster such low levels of step-defined physical activity. Unfavorable indicators of body composition and cardiometabolic risk have been consistently associated with taking <5000 steps/day. The acute transition (3-14 days) of healthy active young people from higher (>10 000) to lower (<5000 or as low as 1500) daily step counts induces reduced insulin sensitivity and glycemic control, increased adiposity, and other negative changes in health parameters. Although few alternative values have been considered, the continued use of <5000 steps/day as a step-defined sedentary lifestyle index for adults is appropriate for researchers and practitioners and for communicating with the general public. There is little evidence to advocate any specific value indicative of a step-defined sedentary lifestyle index in children and adolescents.
NASA Astrophysics Data System (ADS)
Liu, Zhiguo; Yan, Guangyao; Mu, Zhitao; Li, Xudong
2018-01-01
The accelerated pitting corrosion test of a 7B04 aluminum alloy specimen was carried out according to a spectrum simulating the airport environment, and the corresponding pitting corrosion damage was obtained and defined through three parameters A, B and C, which respectively denote the corrosion pit surface length, surface width and depth. The ratios between the three parameters determine the morphology characteristics of the corrosion pits. On this basis, the stress concentration factor of typical corrosion pit morphologies under certain load conditions was quantitatively analyzed. The research shows that the corrosion pits gradually tend to be elliptical in surface shape and moderate in depth; most values of B/A and C/A lie between 1 and 4, and only a few maxima exceed 4. The stress concentration factor Kf of corrosion pits is obviously affected by their morphology: the value of Kf increases with corrosion pit depth for a given pit surface geometry, and decreases with increasing surface width for a given pit depth. These conclusions can provide a theoretical basis for corrosion fatigue life analysis of aircraft aluminum alloy structures.
Leistra, Minze; Wolters, André; van den Berg, Frederik
2008-06-01
Volatilisation of pesticides from crop canopies can be an important emission pathway. In addition to pesticide properties, competing processes in the canopy and environmental conditions play a part. A computation model is being developed to simulate the processes, but only some of the input data can be obtained directly from the literature. Three well-defined experiments on the volatilisation of radiolabelled parathion-methyl (as example compound) from plants in a wind tunnel system were simulated with the computation model. Missing parameter values were estimated by calibration against the experimental results. The resulting thickness of the air boundary layer, rate of plant penetration and rate of phototransformation were compared with a diversity of literature data. The sequence of importance of the canopy processes was: volatilisation > plant penetration > phototransformation. Computer simulation of wind tunnel experiments, with radiolabelled pesticide sprayed on plants, yields values for the rate coefficients of processes at the plant surface. As some input data for simulations are not required in the framework of registration procedures, attempts to estimate missing parameter values on the basis of divergent experimental results have to be continued. Copyright (c) 2008 Society of Chemical Industry.
Relative effects of survival and reproduction on the population dynamics of emperor geese
Schmutz, Joel A.; Rockwell, Robert F.; Petersen, Margaret R.
1997-01-01
Populations of emperor geese (Chen canagica) in Alaska declined sometime between the mid-1960s and the mid-1980s and have increased little since. To promote recovery of this species to former levels, managers need to know how much their perturbations of survival and/or reproduction would affect population growth rate (λ). We constructed an individual-based population model to evaluate the relative effect of altering mean values of various survival and reproductive parameters on λ and fall age structure (AS, defined as the proportion of juveniles), assuming additive rather than compensatory relations among parameters. Altering survival of adults had markedly greater relative effects on λ than did equally proportionate changes in either juvenile survival or reproductive parameters. We found the opposite pattern for relative effects on AS. Due to concerns about bias in the initial parameter estimates used in our model, we used 5 additional sets of parameter estimates with this model structure. We found that estimates of survival based on aerial survey data gathered each fall resulted in models that corresponded more closely to independent estimates of λ than did models that used mark-recapture estimates of survival. This disparity suggests that mark-recapture estimates of survival are biased low. To further explore how parameter estimates affected estimates of λ, we used values of survival and reproduction found in other goose species, and we examined the effect of a hypothesized correlation between an individual's clutch size and the subsequent survival of her young. The rank order of parameters in their relative effects on λ was consistent for all 6 parameter sets we examined. The observed variation in relative effects on λ among the 6 parameter sets is indicative of how relative effects on λ may vary among goose populations. With this knowledge of the relative effects of survival and reproductive parameters on λ, managers can make more informed decisions about which parameters to influence through management or to target for future study.
Starn, J. Jeffrey; Stone, Janet Radway; Mullaney, John R.
2000-01-01
Contributing areas to public-supply wells at the Southbury Training School in Southbury, Connecticut, were mapped by simulating ground-water flow in stratified glacial deposits in the lower Transylvania Brook watershed. The simulation used nonlinear regression methods and informational statistics to estimate parameters of a ground-water flow model using drawdown data from an aquifer test. The goodness of fit of the model and the uncertainty associated with model predictions were statistically measured. A watershed-scale model, depicting large-scale ground-water flow in the Transylvania Brook watershed, was used to estimate the distribution of groundwater recharge. Estimates of recharge from 10 small basins in the watershed differed on the basis of the drainage characteristics of each basin. Small basins having well-defined stream channels contributed less ground-water recharge than basins having no defined channels because potential ground-water recharge was carried away in the stream channel. Estimates of ground-water recharge were used in an aquifer-scale parameter-estimation model. Seven variations of the ground-water-flow system were posed, each representing the ground-water-flow system in slightly different but realistic ways. The model that most closely reproduced measured hydraulic heads and flows with realistic parameter values was selected as the most representative of the ground-water-flow system and was used to delineate boundaries of the contributing areas. The model fit revealed no systematic model error, which indicates that the model is likely to represent the major characteristics of the actual system. Parameter values estimated during the simulation are as follows: horizontal hydraulic conductivity of coarse-grained deposits, 154 feet per day; vertical hydraulic conductivity of coarse-grained deposits, 0.83 feet per day; horizontal hydraulic conductivity of fine-grained deposits, 29 feet per day; specific yield, 0.007; specific storage, 1.6E-05. Average annual recharge was estimated using the watershed-scale model with no parameter estimation and was determined to be 24 inches per year in the valley areas and 9 inches per year in the upland areas. The parameter estimates produced in the model are similar to expected values, with two exceptions. The estimated specific yield of the stratified glacial deposits is lower than expected, which could be caused by the layered nature of the deposits. The recharge estimate produced by the model was also lower, about 32 percent of the average annual rate. This could be caused by the timing of the aquifer test with respect to the annual cycle of ground-water recharge, and by some of the expected recharge going to parts of the flow system that were not simulated. The data used in the calibration were collected during an aquifer test from October 30 to November 4, 1996. The model fit was very good, as indicated by the correlation coefficient (0.999) between the weighted simulated values and weighted observed values. The model also reproduced the general rise in ground-water levels caused by ground-water recharge and the cyclic fluctuations caused by pumping prior to the aquifer test. Contributing areas were delineated using a particle-tracking procedure. Hypothetical particles of water were introduced at each model cell in the top layer and were tracked to determine whether or not they reached the pumped well.
A deterministic contributing area was calculated using the calibrated model, and a probabilistic contributing area was calculated using a Monte Carlo approach along with the calibrated model. The Monte Carlo simulation was done, using the parameter variance/covariance matrix generated by the regression model, to estimate probabilities associated with the contributing area to the wells. The probabilities arise from uncertainty in the estimated parameter values, which in turn arise from the adequacy of the data available to comprehensively describe the ground-water-flow system.
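A minimal sketch of the Monte Carlo step described above: parameter sets are drawn from the multivariate normal implied by the regression estimates and their variance/covariance matrix, and the per-cell probability of contributing to the well is accumulated. Here run_particle_tracking is a hypothetical stand-in for the flow and particle-tracking model:

```python
# Probabilistic contributing area: sample parameters, rerun the model, and
# count how often each top-layer cell's particle reaches the pumped well.
import numpy as np

def contributing_probability(mean_params, cov, run_particle_tracking, n=500):
    rng = np.random.default_rng()
    draws = rng.multivariate_normal(mean_params, cov, size=n)
    counts = None
    for p in draws:
        mask = run_particle_tracking(p)      # boolean grid: reaches the well?
        counts = mask.astype(int) if counts is None else counts + mask
    return counts / n                        # per-cell contribution probability
```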
Manevska, Nevena; Stojanoski, Sinisa; Pop Gjorceva, Daniela; Todorovska, Lidija; Miladinova, Daniela; Zafirova, Beti
2017-09-01
Introduction Muscle perfusion is a physiologic process that can undergo quantitative assessment and thus define the range of normal values of perfusion indexes and perfusion reserve. The investigation of the microcirculation has a crucial role in determining muscle perfusion. Materials and method The study included 30 examinees, 24-74 years of age, without a history of confirmed peripheral artery disease, all with normal findings on Doppler ultrasonography and pedo-brachial index (PBI) of the lower extremities. 99mTc-MIBI tissue muscle perfusion scintigraphy of the lower limbs evaluates tissue perfusion in a resting condition ("rest study") and after workload ("stress study"), through quantitative parameters: the inter-extremity indexes (for both studies) left thigh/right thigh (LT/RT) and left calf/right calf (LC/RC), and the perfusion reserve (PR) for both thighs and calves. Results In our investigated group we assessed the normal values of the quantitative perfusion indexes. The indexes ranged, for LT/RT, from 0.91 to 1.05 in the rest study and from 0.92 to 1.04 in the stress study, and for LC/RC, from 0.93 to 1.07 at rest and from 0.93 to 1.09 in the stress study. The examinees older than 50 years had an insignificantly lower perfusion reserve for these parameters compared with those younger than 50: LC (p=0.98) and RC (p=0.6). Conclusion This non-invasive scintigraphic method allows, in individuals without peripheral artery disease, determination of the range of normal values of muscle perfusion at rest and under stress conditions, and their clinical implementation in the evaluation of patients with peripheral artery disease for differentiating patients with normal from those with impaired lower limb circulation.
Gao, Mingwu; Cheng, Hao-Min; Sung, Shih-Hsien; Chen, Chen-Huan; Olivier, Nicholas Bari; Mukkamala, Ramakrishna
2017-07-01
Pulse transit time (PTT) varies with blood pressure (BP) throughout the cardiac cycle; yet, because of wave reflection, only one PTT value, at the diastolic BP level, is conventionally estimated from proximal and distal BP waveforms. The objective was to establish a technique to estimate multiple PTT values at different BP levels in the cardiac cycle. A technique was developed for estimating PTT as a function of BP (to indicate the PTT value for every BP level) from proximal and distal BP waveforms. First, a mathematical transformation from one waveform to the other is defined in terms of the parameters of a nonlinear arterial tube-load model accounting for BP-dependent arterial compliance and wave reflection. Then, the parameters are estimated by optimally fitting the waveforms to each other via the model-based transformation. Finally, PTT as a function of BP is specified by the parameters. The technique was assessed in animals and patients in several ways, including the ability of its estimated PTT-BP function to serve as a subject-specific curve for calibrating PTT to BP. The calibration curve derived by the technique during a baseline period yielded bias and precision errors in mean BP of 5.1 ± 0.9 and 6.6 ± 1.0 mmHg, respectively, during hemodynamic interventions that varied mean BP widely. The new technique may permit, for the first time, estimation of PTT values throughout the cardiac cycle from proximal and distal waveforms. The technique could potentially be applied to improve arterial stiffness monitoring and help realize cuff-less BP monitoring.
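A minimal sketch of the calibration idea, using a simple monotone exponential PTT-BP form in place of the paper's tube-load model (an assumption made here purely for illustration): fit during a baseline period, then invert to read BP from later PTT values:

```python
# Fit PTT = a*exp(-b*BP) to baseline (BP, PTT) pairs, then return a function
# that inverts the curve to estimate BP from a measured PTT.
import numpy as np
from scipy.optimize import curve_fit

def ptt_model(bp, a, b):
    return a * np.exp(-b * bp)

def calibrate(bp_baseline, ptt_baseline):
    (a, b), _ = curve_fit(ptt_model, bp_baseline, ptt_baseline, p0=(200.0, 0.01))
    return lambda ptt: -np.log(ptt / a) / b   # BP estimated from PTT
```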
Real, Jose T; Folgado, José; Molina Mendez, Mercedes; Martinez-Hervás, Sergio; Peiro, Marta; Ascaso, Juan F
2016-01-01
To study new risk factors for peripheral macroangiopathy (PM) in patients with diabetes, such as oxidative stress (OS) and its interaction with classical risk factors: age, Lp(a), plasma homocysteine values and HbA1c. We studied 204 type 2 diabetic (T2DM) patients, consecutively selected from a reference hospital and a secondary hospital of our community (2009-2010). The design was a case (ABI < 0.89) versus control (ABI 0.9-1.2) study. PM was defined using the ankle brachial index (ABI). Thirty-nine T2DM subjects presented ABI > 1.2 and were excluded. Clinical and biological parameters were determined using standard methods. Comparing the clinical and biological parameters obtained in the two studied groups (T2DM + ABI < 0.9 vs T2DM + ABI 0.9-1.2), we found statistically significant differences in age, evolution time of diabetes, Lp(a) and plasma homocysteine values. No differences were found in the OS parameters: reduced glutathione, oxidized glutathione and malondialdehyde. Plasma homocysteine values were an independent risk factor for the presence of PM and were related to evolution time of diabetes and reduced glutathione. We have confirmed that Lp(a) and, independently, plasma homocysteine values were related to PM in T2DM subjects. No association between PM and OS markers (GSH, GSSG and MDA) was found in T2DM subjects with more than 10 years of evolution of their disease and a high prevalence of chronic complications. Copyright © 2016 Sociedad Española de Arteriosclerosis. Publicado por Elsevier España, S.L.U. All rights reserved.
Reference values for clinical laboratory parameters in young adults in Maputo, Mozambique.
Tembe, Nelson; Joaquim, Orvalho; Alfai, Eunice; Sitoe, Nádia; Viegas, Edna; Macovela, Eulalia; Gonçalves, Emilia; Osman, Nafissa; Andersson, Sören; Jani, Ilesh; Nilsson, Charlotta
2014-01-01
Clinical laboratory reference values from North American and European populations are currently used in most African countries due to the absence of locally derived reference ranges, despite previous studies reporting significant differences between populations. Our aim was to define reference ranges for both genders in 18 to 24 year-old Mozambicans in preparation for clinical vaccine trials. A cross-sectional study including 257 volunteers (102 males and 155 females) between 18 and 24 years was performed at a youth clinic in Maputo, Mozambique. All volunteers were clinically healthy and human immunodeficiency virus, hepatitis B virus and syphilis negative. Median and 95% reference ranges were calculated for immunological, hematological and chemistry parameters. Ranges were compared with those reported based on populations in other African countries and the US. The impact of applying US NIH Division of AIDS (DAIDS) toxicity tables was assessed. The immunology ranges were comparable to those reported for the US and western Kenya. There were significant gender differences in CD4+ T cell values: 713 cells/µL in males versus 824 cells/µL in females (p<0.0001). Hematologic values differed from the US values but were similar to reports of populations in western Kenya and Uganda. The lower and upper limits of the ranges for hemoglobin, hematocrit, red blood cells, white blood cells and lymphocytes were somewhat lower than those from these African countries. The chemistry values were comparable to US values, with few exceptions. The upper limits for ALT, AST, bilirubin, cholesterol and triglycerides were higher than those from the US. DAIDS tables for adverse events predicted 297 adverse events, and 159 (62%) of the volunteers would have been excluded. This study is the first to determine normal laboratory parameters in Mozambique. Our results underscore the necessity of establishing region-specific clinical reference ranges for proper patient management and safe conduct of clinical trials.
Ionic network analysis of tectosilicates: the example of coesite at variable pressure.
Reifenberg, Melina; Thomas, Noel W
2018-04-01
The method of ionic network analysis [Thomas (2017). Acta Cryst. B73, 74-86] is extended to tectosilicates through the example of coesite, the high-pressure polymorph of SiO2. The structural refinements of Černok et al. [Z. Kristallogr. (2014), 229, 761-773] are taken as the starting point for applying the method. Its purpose is to predict the unit-cell parameters and atomic coordinates at (p-T-X) values in-between those of diffraction experiments. The essential development step for tectosilicates is to define a pseudocubic parameterization of the O4 cages of the SiO4 tetrahedra. The six parameters aPC, bPC, cPC, αPC, βPC and γPC allow a full quantification of the tetrahedral structure, i.e. distortion and enclosed volume. Structural predictions for coesite require that two separate quasi-planar networks are defined, one for the silicon ions and the other for the O4 cage midpoints. A set of parametric curves is used to describe the evolution with pressure of these networks and the pseudocubic parameters. These are derived by fitting to the crystallographic data. Application of the method to monoclinic feldspars and to quartz and cristobalite is discussed. Further, a novel two-parameter quantification of the degree of tetrahedral distortion is described. At pressures in excess of ca 20.45 GPa it is not possible to find a self-consistent solution to the parametric curves for coesite, pointing to the likelihood of a phase transition.
GRCop-84 Rolling Parameter Study
NASA Technical Reports Server (NTRS)
Loewenthal, William S.; Ellis, David L.
2008-01-01
This report is a section of the final report on the GRCop-84 task of the Constellation Program and incorporates the results obtained between October 2000 and September 2005, when the program ended. NASA Glenn Research Center (GRC) has developed a new copper alloy, GRCop-84 (Cu-8 at.% Cr-4 at.% Nb), for rocket engine main combustion chamber components that will improve rocket engine life and performance. This work examines the sensitivity of GRCop-84 mechanical properties to rolling parameters as a means to better define rolling parameters for commercial warm rolling. The experimental variables studied were total reduction, rolling temperature, rolling speed, and post-rolling annealing heat treatment. The responses were tensile properties measured at 23 and 500 C, hardness, and creep at three stress-temperature combinations. Understanding these relationships will better define the boundaries for a robust commercial warm rolling process. The four processing parameters were varied within limits consistent with typical commercial production processes. Testing revealed that the selected rolling-related variables have a minimal influence on tensile, hardness, and creep properties over the range of values tested. Annealing had the expected result of lowering room temperature hardness and strength while increasing room temperature elongation, with 600 C (1112 F) having the most effect. These results indicate that the process conditions to warm roll plate and sheet can range over wide levels for these variables without negatively impacting mechanical properties. Incorporating broader process ranges in future rolling campaigns should lower commercial rolling costs through increased productivity.
Direct and Absolute Quantification of over 1800 Yeast Proteins via Selected Reaction Monitoring*
Lawless, Craig; Holman, Stephen W.; Brownridge, Philip; Lanthaler, Karin; Harman, Victoria M.; Watkins, Rachel; Hammond, Dean E.; Miller, Rebecca L.; Sims, Paul F. G.; Grant, Christopher M.; Eyers, Claire E.; Beynon, Robert J.
2016-01-01
Defining intracellular protein concentration is critical in molecular systems biology. Although strategies for determining relative protein changes are available, defining robust absolute values in copies per cell has proven significantly more challenging. Here we present a reference data set quantifying over 1800 Saccharomyces cerevisiae proteins by direct means using protein-specific stable-isotope labeled internal standards and selected reaction monitoring (SRM) mass spectrometry, far exceeding any previous study. This was achieved by careful design of over 100 QconCAT recombinant proteins as standards, defining 1167 proteins in terms of copies per cell and upper limits on a further 668, with robust CVs routinely less than 20%. The selected reaction monitoring-derived proteome is compared with existing quantitative data sets, highlighting the disparities between methodologies. Coupled with a quantification of the transcriptome by RNA-seq taken from the same cells, these data support revised estimates of several fundamental molecular parameters: a total protein count of ∼100 million molecules-per-cell, a median of ∼1000 proteins-per-transcript, and a linear model of protein translation explaining 70% of the variance in translation rate. This work contributes a “gold-standard” reference yeast proteome (including 532 values based on high quality, dual peptide quantification) that can be widely used in systems models and for other comparative studies. PMID:26750110
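A minimal sketch of the absolute quantification arithmetic that underlies such copies-per-cell values, assuming a known spike of stable-isotope labeled standard and light/heavy SRM peak areas; the names and units are illustrative, not the study's pipeline:

```python
# Copies per cell from the light/heavy SRM peak-area ratio, the amount of
# spiked labeled standard, and the number of cells in the analyzed sample.
AVOGADRO = 6.02214076e23

def copies_per_cell(area_light, area_heavy, standard_fmol, n_cells):
    analyte_fmol = (area_light / area_heavy) * standard_fmol
    return analyte_fmol * 1e-15 * AVOGADRO / n_cells   # fmol -> mol -> copies
```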
Barmpoutis, Angelos
2010-01-01
Registration of Diffusion-Weighted MR Images (DW-MRI) can be achieved by registering the corresponding 2nd-order Diffusion Tensor Images (DTI). However, it has been shown that higher-order diffusion tensors (e.g. order-4) outperform the traditional DTI in approximating complex fiber structures such as fiber crossings. In this paper we present a novel method for unbiased group-wise non-rigid registration and atlas construction of 4th-order diffusion tensor fields. To the best of our knowledge there is no other existing method to achieve this task. First we define a metric on the space of positive-valued functions based on the Riemannian metric of the real positive numbers (denoted by ℝ+). Then, we use this metric in a novel functional minimization method for non-rigid 4th-order tensor field registration. We define a cost function that accounts for the 4th-order tensor re-orientation during the registration process and has analytic derivatives with respect to the transformation parameters. Finally, the tensor field atlas is computed as the minimizer of the variance defined using the Riemannian metric. We quantitatively compare the proposed method with other techniques that register scalar-valued or diffusion tensor (rank-2) representations of the DW-MRI. PMID:20436782
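A minimal sketch of the Riemannian structure on ℝ+ that such a metric builds on: the distance is the absolute log-ratio, and the minimizer of the variance (the Fréchet mean) is the geometric mean, shown here for samples of a positive-valued function:

```python
# Log-ratio distance on positive reals, its Frechet mean (geometric mean),
# and the variance that an atlas construction would minimize.
import numpy as np

def dist(a, b):
    return abs(np.log(a) - np.log(b))

def frechet_mean(values):
    return np.exp(np.mean(np.log(values)))    # geometric mean

def variance(values):
    m = frechet_mean(values)
    return np.mean([dist(v, m) ** 2 for v in values])
```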
NASA Astrophysics Data System (ADS)
Yang, Ming-Hsu; Chou, Dean-Yi; Zhao, Hui; Liang, Zhi-Chao
2012-08-01
The solar acoustic waves around a sunspot are modified because of the interaction with the sunspot. The interaction can be viewed as follows: the sunspot, excited by the incident wave, generates the scattered wave, and the scattered wave is added to the incident wave to form the total wave around the sunspot. We define an interaction parameter, which could be complex, describing the interaction between the acoustic waves and the sunspot. The scattered wavefunction on the surface can be expressed as a two-dimensional integral of the product of the Green's function, the wavefunction, and the two-dimensional interaction parameter over the sunspot area for the Born approximation of different orders. We assume a simple model for the two-dimensional interaction parameter distribution: its absolute value is axisymmetric with a Gaussian distribution and its phase is a constant. The measured scattered wavefunctions of various modes for NOAAs 11084 and 11092 are fitted to the theoretical scattered wavefunctions to determine the three model parameters, magnitude, Gaussian radius, and phase, for the Born approximation of different orders. The three model parameters converge to some values at high-order Born approximations. The result of the first-order Born approximation is significantly different from the convergent value in some cases. The rate of convergence depends on the sunspot size and wavelength. It converges more rapidly for the smaller sunspot and longer wavelength. The magnitude increases with mode frequency and degree for each radial order. The Gaussian radius is insensitive to frequency and degree. The spatial range of the interaction parameter is greater than that of the continuum intensity deficit, but smaller than that of the acoustic power deficit of the sunspot. The phase versus phase speed falls into a small range. This suggests that the phase could be a function of phase speed. NOAAs 11084 and 11092 have a similar magnitude and phase, although the ratio of their sizes is 0.75.
NASA Astrophysics Data System (ADS)
Niedermeier, Dennis; Augustin-Bauditz, Stefanie; Hartmann, Susan; Wex, Heike; Ignatius, Karoliina; Stratmann, Frank
2015-05-01
The immersion freezing behavior of droplets containing size-segregated, monodisperse feldspar particles was investigated. For all particle sizes investigated, a leveling off of the frozen droplet fraction was observed, reaching a plateau within the heterogeneous freezing temperature regime (T > -38°C). The frozen fraction in the plateau region was proportional to the particle surface area. Based on these findings, an asymptotic value for the ice active surface site density ns, which we named ns⋆, could be determined for the investigated feldspar sample. The comparison of these results with those of other studies not only elucidates the general feasibility of determining such an asymptotic value but also shows that the value of ns⋆ strongly depends on the method of particle surface area determination. Such an asymptotic value might nevertheless be an important input parameter for atmospheric modeling applications. At the least, it shows that care should be taken when ns is extrapolated to lower or higher temperatures.
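A minimal sketch of the conventional surface-site-density relation that underlies ns, namely f_ice = 1 − exp(−ns·A) (assumed here as the standard form, not quoted from the paper); inverting the plateau frozen fraction for particles of known surface area gives the asymptotic value ns⋆:

```python
# Invert the frozen fraction for the ice active surface site density n_s.
# Valid for frozen_fraction < 1; A is the per-particle surface area in cm^2.
import numpy as np

def n_s(frozen_fraction, surface_area_cm2):
    return -np.log(1.0 - frozen_fraction) / surface_area_cm2

# Reading the asymptotic value: the plateau frozen fraction f_plateau for
# particles of area A gives n_s_star = -ln(1 - f_plateau) / A.
```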
Introduction to the Neutrosophic Quantum Theory
NASA Astrophysics Data System (ADS)
Smarandache, Florentin
2014-10-01
Neutrosophic Quantum Theory (NQT) is the study of the principle that certain physical quantities can assume neutrosophic values, instead of discrete values as in quantum theory. These quantities are thus neutrosophically quantized. A neutrosophic value (neutrosophic amount) is expressed by a set (mostly an interval) that approximates (or includes) a discrete value. An oscillator can lose or gain energy by some neutrosophic amount (meaning neither continuously nor discretely, but as a series of integral sets: S, 2S, 3S, ..., where S is a set). In the most general form, one has an ensemble of sets of sets, i.e. R1S1, R2S2, R3S3, ..., where all Rn and Sn are sets that may vary as functions of time and of other parameters. Several such sets may be equal, or may be reduced to points, or may be empty. (The multiplication of two sets A and B is classically defined as AB = {ab : a ∈ A and b ∈ B}, and similarly a number n times a set A is defined as nA = {na : a ∈ A}.) The unit of neutrosophic energy is Hν, where H is a set (in particular an interval) that includes the Planck constant h, and ν is the frequency. Therefore, an oscillator could change its energy by a neutrosophic number of quanta: Hν, 2Hν, 3Hν, etc. For example, when H is an interval [h1, h2], with 0 ≤ h1 ≤ h2, that contains the Planck constant h, then one has [h1ν, h2ν], [2h1ν, 2h2ν], [3h1ν, 3h2ν], ..., as the series of intervals of energy change of the oscillator. The most general form of the units of neutrosophic energy is Hnνn, where all Hn and νn are sets that, similarly as above, may vary as functions of time and of other oscillator and environment parameters. Neutrosophic quantum theory combines classical mechanics and quantum mechanics.
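A minimal sketch of the interval case of this set arithmetic, with H an interval containing the Planck constant and the series Hν, 2Hν, 3Hν built by scalar multiplication of sets; the numerical interval width and frequency are illustrative:

```python
# Interval arithmetic for nA = {na : a in A} with nonnegative scalars, used to
# build the neutrosophic energy quanta H*nu, 2H*nu, 3H*nu.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __rmul__(self, n: float) -> "Interval":    # n * [lo, hi], n >= 0
        return Interval(n * self.lo, n * self.hi)

h = 6.62607015e-34                                 # Planck constant (J s)
H = Interval(0.99 * h, 1.01 * h)                   # an interval containing h
nu = 5.0e14                                        # frequency (Hz), illustrative
quanta = [n * (nu * H) for n in (1, 2, 3)]         # H*nu, 2H*nu, 3H*nu
```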
Predicting the Cosmological Constant from the CausalEntropic Principle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bousso, Raphael; Harnik, Roni; Kribs, Graham D.
2007-02-20
We compute the expected value of the cosmological constant in our universe from the Causal Entropic Principle. Since observers must obey the laws of thermodynamics and causality, it asserts that physical parameters are most likely to be found in the range of values for which the total entropy production within a causally connected region is maximized. Despite the absence of more explicit anthropic criteria, the resulting probability distribution turns out to be in excellent agreement with observation. In particular, we find that dust heated by stars dominates the entropy production, demonstrating the remarkable power of this thermodynamic selection criterion. The alternative approach, weighting by the number of "observers per baryon", is less well-defined, requires problematic assumptions about the nature of observers, and yet prefers values larger than present experimental bounds.
Keivanian, Farshid; Mehrshad, Nasser; Bijari, Abolfazl
2016-01-01
The D flip-flop, as a digital circuit, can be used as a timing element in many sophisticated circuits; optimum performance with the lowest power consumption and acceptable delay time is therefore a critical issue in electronic circuits. The layout of the newly proposed dual-edge triggered static D flip-flop circuit is defined as a multi-objective optimization problem. For this, an optimum fuzzy inference system with fuzzy rules is proposed to enhance the performance and convergence of the non-dominated sorting Genetic Algorithm-II by adaptive control of the exploration and exploitation parameters. By using the proposed Fuzzy NSGA-II algorithm, more optimal values for the MOSFET channel widths and power supply are discovered in the search space than with ordinary NSGA types. Moreover, the design parameters, involving NMOS and PMOS channel widths and power supply voltage, and the performance parameters, including average power consumption and propagation delay time, are linked. To do this, the required mathematical background is presented in this study. The optimum values for the design parameters of MOSFET channel widths and power supply are discovered. Based on them, the power-delay product (PDP) is 6.32 pJ at 125 MHz clock frequency, L = 0.18 µm, and T = 27 °C.
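A minimal sketch of the power-delay-product figure used to score a candidate design; the unit convention (average power in µW, propagation delay in ns) and the example values are assumptions for illustration only:

```python
# Power-delay product: energy per switching event, here reported in pJ.
def pdp_pj(avg_power_uw: float, delay_ns: float) -> float:
    return avg_power_uw * delay_ns * 1e-3   # uW * ns = fJ, so /1000 gives pJ

print(pdp_pj(700.0, 9.0))   # illustrative inputs yielding 6.3 pJ
```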
Aerosol optical properties in the southeastern United States in summer - Part 1: Hygroscopic growth
NASA Astrophysics Data System (ADS)
Brock, C. A.; Wagner, N. L.; Anderson, B. E.; Attwood, A. R.; Beyersdorf, A.; Campuzano-Jost, P.; Carlton, A. G.; Day, D. A.; Diskin, G. S.; Gordon, T. D.; Jimenez, J. L.; Lack, D. A.; Liao, J.; Markovic, M. Z.; Middlebrook, A. M.; Ng, N. L.; Perring, A. E.; Richardson, M. S.; Schwarz, J. P.; Washenfelder, R. A.; Welti, A.; Xu, L.; Ziemba, L. D.; Murphy, D. M.
2015-09-01
Aircraft observations of meteorological, trace gas, and aerosol properties were made during May-September 2013 in the southeastern United States (US) under fair-weather, afternoon conditions with well-defined planetary boundary layer structure. Optical extinction at 532 nm was directly measured at three relative humidities and compared with extinction calculated from measurements of aerosol composition and size distribution using the κ-Köhler approximation for hygroscopic growth. Using this approach, the hygroscopicity parameter κ for the organic fraction of the aerosol must have been < 0.10 to be consistent with 75 % of the observations within uncertainties. This subsaturated κ value for the organic aerosol in the southeastern US is consistent with several field studies in rural environments. We present a new parameterization of the change in aerosol extinction as a function of relative humidity that better describes the observations than does the widely used power-law (gamma, γ) parameterization. This new single-parameter κext formulation is based upon κ-Köhler and Mie theories and relies upon the well-known approximately linear relationship between particle volume (or mass) and optical extinction (Charlson et al., 1967). The fitted parameter, κext, is nonlinearly related to the chemically derived κ parameter used in κ-Köhler theory. The values of κext we determined from airborne measurements are consistent with independent observations at a nearby ground site.
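A minimal sketch contrasting the two humidity-growth forms discussed above; the single-parameter κext curve is written here as 1 + κext·RH/(100−RH), consistent with the linear volume-extinction argument, though the paper's exact formulation should be consulted:

```python
# Extinction enhancement factor f(RH) = ext(RH)/ext(dry) under the power-law
# (gamma) parameterization and a single-parameter kappa-type formulation.
def f_gamma(rh, gamma, rh_dry=0.0):
    return ((100.0 - rh_dry) / (100.0 - rh)) ** gamma

def f_kappa_ext(rh, k_ext):
    return 1.0 + k_ext * rh / (100.0 - rh)   # RH/(100-RH) = a_w/(1-a_w)
```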
On the Use of the Beta Distribution in Probabilistic Resource Assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olea, Ricardo A., E-mail: olea@usgs.gov
2011-12-15
The triangular distribution is a popular choice when it comes to modeling bounded continuous random variables. Its wide acceptance derives mostly from its simple analytic properties and the ease with which modelers can specify its three parameters through the extremes and the mode. On the negative side, hardly any real process follows a triangular distribution, which from the outset puts at a disadvantage any model employing triangular distributions. At a time when numerical techniques such as the Monte Carlo method are displacing analytic approaches in stochastic resource assessments, easy specification remains the most attractive characteristic of the triangular distribution. The beta distribution is another continuous distribution defined within a finite interval offering wider flexibility in style of variation, thus allowing consideration of models in which the random variables closely follow the observed or expected styles of variation. Despite its more complex definition, generation of values following a beta distribution is as straightforward as generating values following a triangular distribution, leaving the selection of parameters as the main impediment to practically considering beta distributions. This contribution intends to promote the acceptance of the beta distribution by explaining its properties and offering several suggestions to facilitate the specification of its two shape parameters. In general, given the same distributional parameters, use of the beta distribution in stochastic modeling may yield significantly different results, yet better estimates, than the triangular distribution.
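A brief Monte Carlo sketch contrasting the two distributions; the bounds and shape parameters below are illustrative assumptions, not values from the paper.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    lo, mode, hi = 10.0, 25.0, 60.0   # hypothetical resource bounds and mode

    # Triangular: fully specified by minimum, mode and maximum.
    tri = rng.triangular(lo, mode, hi, size=100_000)

    # Beta rescaled to [lo, hi]: shapes a, b control the style of variation;
    # for a, b > 1 the mode sits at lo + (a - 1) / (a + b - 2) * (hi - lo).
    a, b = 2.0, 4.0                   # assumed shape parameters
    bet = stats.beta.rvs(a, b, loc=lo, scale=hi - lo, size=100_000, random_state=0)

    print(tri.mean(), bet.mean())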
Pichler, Josef; Pachinger, Corinna; Pelz, Manuela; Kleiser, Raimund
2013-05-01
To develop a magnetic resonance imaging (MRI) metric that is useful for therapy monitoring in patients with relapsed glioblastoma (GBM) during treatment with the antiangiogenic monoclonal antibody bevacizumab (Bev). We evaluated the feasibility of tumour volume measurement with our software tool in clinical routine and tried to establish reproducible and quantitative parameters for surveillance of patients on treatment with antiangiogenic drugs. In this retrospective institutional pilot study, 18 patients (11 men, 7 women; mean age 53.5) with recurrent GBM received bevacizumab and irinotecan every two weeks as second-line therapy. Follow-up scans were assessed every two to four months. Data were collected on a 1.5 T MR system (Siemens, Symphony) with the standard head coil using our standardized tumour protocol. Volumetric measurement was performed with a commercially available software tool (stroketool) on FLAIR and T1-c imaging with the following procedure: pre-processing involved removing noise and applying a 3 × 3 Gaussian kernel to smooth the images; a ROI (region of interest) was selected in a healthy brain area of the contralateral side and its intensity value quantified; 20% was added to this value to define the threshold level. Only values above this threshold were kept, corresponding to the tumour lesion. For the volumetric measurement the detected tumour area was outlined in all slices, and the values were summed and multiplied by the slice thickness to obtain the whole volume. With the Macdonald criteria, progression was indicated in 14 out of 18 patients. In contrast, volumetric measurement showed an increase of contrast enhancement of >25%, defined as the threshold for progression, in 11 patients (78%) and in 12 patients (85%) in FLAIR volume, respectively. In 6 patients the MRI volumes increased earlier than at the last scan, which had originally been defined as the date of progression with the Macdonald criteria, changing the PFS after re-evaluation of the tumour volumes from 6.8 to 5.6 months. In this pilot study the applied imaging objectively estimates tumour response and progression compared with the bi-dimensional measurement. The quantitative parameters are reproducible and also applicable to diffusely infiltrating lesions. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
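The thresholding step of the procedure can be sketched as below; in the study the tumour area was outlined per slice, so the fully automatic masking and the voxel-area handling here are simplifying assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def lesion_volume_mm3(stack, healthy_roi, voxel_area_mm2, slice_mm):
        # stack: 3-D image (slices, rows, cols); healthy_roi: boolean mask
        # placed in contralateral healthy tissue, as in the protocol above.
        smoothed = gaussian_filter(stack, sigma=1.0)      # stands in for the 3 x 3 smoothing
        threshold = 1.20 * smoothed[healthy_roi].mean()   # healthy-ROI value + 20 %
        lesion_voxels = int((smoothed > threshold).sum())
        return lesion_voxels * voxel_area_mm2 * slice_mm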
NASA Astrophysics Data System (ADS)
Chiu, Y.; Nishikawa, T.
2013-12-01
With the increasing complexity of parameter-structure identification (PSI) in groundwater modeling, there is a need for robust, fast, and accurate optimizers in the groundwater-hydrology field. For this work, PSI is defined as identifying parameter dimension, structure, and value. In this study, Voronoi tessellation and differential evolution (DE) are used to solve the optimal PSI problem. Voronoi tessellation is used for automatic parameterization, whereby stepwise regression and the error covariance matrix are used to determine the optimal parameter dimension. DE is a novel global optimizer that can be used to solve nonlinear, nondifferentiable, and multimodal optimization problems. It can be viewed as an improved version of genetic algorithms and employs a simple cycle of mutation, crossover, and selection operations. DE is used to estimate the optimal parameter structure and its associated values. A synthetic numerical experiment of continuous hydraulic conductivity distribution was conducted to demonstrate the proposed methodology. The results indicate that DE can identify the global optimum effectively and efficiently. A sensitivity analysis of the control parameters (i.e., the population size, mutation scaling factor, crossover rate, and mutation schemes) was performed to examine their influence on the objective function. The proposed DE was then applied to solve a complex parameter-estimation problem for a small desert groundwater basin in Southern California. Hydraulic conductivity, specific yield, specific storage, fault conductance, and recharge components were estimated simultaneously. Comparison of DE and a traditional gradient-based approach (PEST) shows DE to be more robust and efficient. The results of this work not only provide an alternative for PSI in groundwater models, but also extend DE applications towards solving complex, regional-scale water management optimization problems.
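A compact sketch of the mutation-crossover-selection cycle described above (classic DE/rand/1/bin; population size and control parameter values are illustrative).

    import numpy as np

    def differential_evolution(f, bounds, pop=30, F=0.8, CR=0.9, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, float).T
        d = len(bounds)
        x = lo + rng.random((pop, d)) * (hi - lo)
        fx = np.apply_along_axis(f, 1, x)
        for _ in range(iters):
            for i in range(pop):
                others = [j for j in range(pop) if j != i]
                a, b, c = x[rng.choice(others, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)   # mutation
                cross = rng.random(d) < CR                  # binomial crossover
                cross[rng.integers(d)] = True
                trial = np.where(cross, mutant, x[i])
                ft = f(trial)
                if ft <= fx[i]:                             # greedy selection
                    x[i], fx[i] = trial, ft
        return x[fx.argmin()], fx.min()

    # Example: a multimodal (Rastrigin-type) test function in 3 dimensions.
    best = differential_evolution(
        lambda v: float(np.sum(v**2 - 10 * np.cos(2 * np.pi * v) + 10)),
        bounds=[(-5.12, 5.12)] * 3)
    print(best)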
NASA Astrophysics Data System (ADS)
Moret-Fernández, D.; Latorre, B.
2017-01-01
The water retention curve (θ(h)), which defines the relationship between the volumetric water content (θ) and the matric potential (h), is of paramount importance to characterize the hydraulic behaviour of soils. Because current methods to estimate θ(h) are, in general, tedious and time consuming, alternative procedures to determine θ(h) are needed. Using an upward infiltration curve, the main objective of this work is to present a method to determine the parameters of the van Genuchten (1980) water retention curve (α and n) from the sorptivity (S) and the β parameter defined in the 1D infiltration equation proposed by Haverkamp et al. (1994). The first specific objective is to present an equation, based on the Haverkamp et al. (1994) analysis, that describes an upward infiltration process. Second, assuming a known saturated hydraulic conductivity, Ks, calculated on a finite soil column by Darcy's law, a numerical procedure to calculate S and β by inverse analysis of an exfiltration curve is presented. Finally, the α and n values are numerically calculated from Ks, S and β. To accomplish the first specific objective, cumulative upward infiltration curves simulated with HYDRUS-1D for sand, loam, silt and clay soils were compared to those calculated with the proposed equation, after applying the corresponding β and S calculated from the theoretical Ks, α and n. The same curves were used to: (i) study the influence of the exfiltration time on S and β estimations, (ii) evaluate the limits of the inverse analysis, and (iii) validate the feasibility of the method to estimate α and n. Next, the θ(h) parameters estimated with the numerical method on experimental soils were compared to those obtained with pressure cells. The results showed that the upward infiltration curve could be correctly described by the modified Haverkamp et al. (1994) equation. While S was only affected by early-time exfiltration data, the β parameter had a significant influence on the long-time exfiltration curve, and its estimation accuracy increased with time. The 1D infiltration model was only suitable for β < 1.7 (sand, loam and silt). After omitting the clay soil, an excellent relationship (R2 = 0.99, p < 0.005) was observed between the theoretical α and n values of the synthetic soils and those estimated from the inverse analysis. Consistent results, with a significant relationship (p < 0.001) between the n values estimated with the pressure cell and the upward infiltration analysis, were also obtained on the experimental soils.
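For reference, the van Genuchten (1980) retention curve whose α and n are estimated above, with the usual restriction m = 1 - 1/n; a sketch in which the example uses textbook loam-like parameter values, not data from this study.

    import numpy as np

    def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
        # theta(h) = theta_r + (theta_s - theta_r) / (1 + (alpha*|h|)**n)**m
        m = 1.0 - 1.0 / n
        return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m

    h = np.logspace(0, 4, 5)   # suction heads [cm]
    print(van_genuchten_theta(h, theta_r=0.078, theta_s=0.43, alpha=0.036, n=1.56))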
Epidemiologic research using probabilistic outcome definitions.
Cai, Bing; Hennessy, Sean; Lo Re, Vincent; Small, Dylan S
2015-01-01
Epidemiologic studies using electronic healthcare data often define the presence or absence of binary clinical outcomes by using algorithms with imperfect specificity, sensitivity, and positive predictive value. This results in misclassification and bias in study results. We describe and evaluate a new method called probabilistic outcome definition (POD) that uses logistic regression to estimate the probability of a clinical outcome using multiple potential algorithms and then uses multiple imputation to make valid inferences about the risk ratio or other epidemiologic parameters of interest. We conducted a simulation to evaluate the performance of the POD method with two variables that can predict the true outcome and compared the POD method with the conventional method. The simulation results showed that when the true risk ratio is equal to 1.0 (null), the conventional method based on a binary outcome provides unbiased estimates. However, when the risk ratio is not equal to 1.0, the traditional method, either using one predictive variable or both predictive variables to define the outcome, is biased when the positive predictive value is <100%, and the bias is very severe when the sensitivity or positive predictive value is poor (less than 0.75 in our simulation). In contrast, the POD method provides unbiased estimates of the risk ratio both when this measure of effect is equal to 1.0 and not equal to 1.0. Even when the sensitivity and positive predictive value are low, the POD method continues to provide unbiased estimates of the risk ratio. The POD method provides an improved way to define outcomes in database research. This method has a major advantage over the conventional method in that it provided unbiased estimates of risk ratios and it is easy to use. Copyright © 2014 John Wiley & Sons, Ltd.
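An end-to-end sketch of the POD idea on synthetic data, assuming a validation subsample with known outcomes is available to fit the probability model; all error rates and sample sizes are illustrative.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 50_000
    exposed = rng.integers(0, 2, n)
    y = rng.random(n) < np.where(exposed == 1, 0.06, 0.03)   # true risk ratio = 2
    # Two imperfect outcome algorithms with hypothetical error rates:
    a1 = np.where(y, rng.random(n) < 0.80, rng.random(n) < 0.02)
    a2 = np.where(y, rng.random(n) < 0.70, rng.random(n) < 0.05)
    X = np.column_stack([a1, a2, exposed]).astype(float)

    # Estimate P(outcome | algorithms, exposure) in a validation subsample.
    val = rng.random(n) < 0.05
    p = LogisticRegression().fit(X[val], y[val]).predict_proba(X)[:, 1]

    # Multiply impute the outcome and pool the log risk ratio.
    log_rr = []
    for _ in range(20):
        y_imp = rng.random(n) < p
        log_rr.append(np.log(y_imp[exposed == 1].mean() / y_imp[exposed == 0].mean()))
    print(np.exp(np.mean(log_rr)))   # close to 2 in expectation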
Dopamine Receptor-Specific Contributions to the Computation of Value.
Burke, Christopher J; Soutschek, Alexander; Weber, Susanna; Raja Beharelle, Anjali; Fehr, Ernst; Haker, Helene; Tobler, Philippe N
2018-05-01
Dopamine is thought to play a crucial role in value-based decision making. However, the specific contributions of different dopamine receptor subtypes to the computation of subjective value remain unknown. Here we demonstrate how the balance between D1 and D2 dopamine receptor subtypes shapes subjective value computation during risky decision making. We administered the D2 receptor antagonist amisulpride or placebo before participants made choices between risky options. Compared with placebo, D2 receptor blockade resulted in more frequent choice of higher risk and higher expected value options. Using a novel model fitting procedure, we concurrently estimated the three parameters that define individual risk attitude according to an influential theoretical account of risky decision making (prospect theory). This analysis revealed that the observed reduction in risk aversion under amisulpride was driven by increased sensitivity to reward magnitude and decreased distortion of outcome probability, resulting in more linear value coding. Our data suggest that different components that govern individual risk attitude are under dopaminergic control, such that D2 receptor blockade facilitates risk taking and expected value processing.
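For concreteness, a sketch of prospect-theory valuation of a simple gamble showing two of the components discussed above, reward-magnitude sensitivity and probability distortion; the power value function, the Tversky-Kahneman weighting function, and the default parameter values are standard choices we assume for illustration (the study's full three-parameter fit is not reproduced).

    def subjective_value(x, p, alpha=0.88, gamma=0.61):
        # v(x) = x**alpha: sensitivity to reward magnitude (alpha < 1 is concave).
        # w(p): probability-weighting function (gamma < 1 distorts probabilities).
        w = p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)
        return w * x**alpha

    # A 50 % chance of 100 versus a sure 45: compare subjective values.
    print(subjective_value(100.0, 0.5), subjective_value(45.0, 1.0))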
Snyder, David A; Montelione, Gaetano T
2005-06-01
An important open question in the field of NMR-based biomolecular structure determination is how best to characterize the precision of the resulting ensemble of structures. Typically, the RMSD, as minimized in superimposing the ensemble of structures, is the preferred measure of precision. However, the presence of poorly determined atomic coordinates and multiple "RMSD-stable domains"--locally well-defined regions that are not aligned in global superimpositions--complicate RMSD calculations. In this paper, we present a method, based on a novel, structurally defined order parameter, for identifying a set of core atoms to use in determining superimpositions for RMSD calculations. In addition we present a method for deciding whether to partition that core atom set into "RMSD-stable domains" and, if so, how to determine partitioning of the core atom set. We demonstrate our algorithm and its application in calculating statistically sound RMSD values by applying it to a set of NMR-derived structural ensembles, superimposing each RMSD-stable domain (or the entire core atom set, where appropriate) found in each protein structure under consideration. A parameter calculated by our algorithm using a novel, kurtosis-based criterion, the epsilon-value, is a measure of precision of the superimposition that complements the RMSD. In addition, we compare our algorithm with previously described algorithms for determining core atom sets. The methods presented in this paper for biomolecular structure superimposition are quite general, and have application in many areas of structural bioinformatics and structural biology.
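The superimposition step underlying such RMSD calculations can be sketched with the standard Kabsch (SVD-based) algorithm; the paper's actual contribution, selecting the core atom set and partitioning it into RMSD-stable domains, is not reproduced here.

    import numpy as np

    def kabsch_rmsd(P, Q):
        # RMSD of two (N, 3) coordinate sets after optimal rigid superposition.
        P = P - P.mean(axis=0)
        Q = Q - Q.mean(axis=0)
        U, S, Vt = np.linalg.svd(P.T @ Q)      # SVD of the covariance matrix
        d = np.sign(np.linalg.det(U @ Vt))     # guard against improper rotations
        R = U @ np.diag([1.0, 1.0, d]) @ Vt    # optimal rotation mapping P onto Q
        return float(np.sqrt(((P @ R - Q) ** 2).sum() / len(P)))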
Rerksuppaphol, Sanguansak; Rerksuppaphol, Lakkana
2015-11-01
Obesity is considered to be a risk factor for metabolic syndrome; however, data on the prevalence of metabolic syndrome in Thai obese children are scarce. The present study aims to determine the prevalence of metabolic syndrome in Thai obese children. A cross-sectional study was conducted on 113 obese children who were students of a public elementary school in Ongkharak district, Thailand, in 2013. Anthropometric data, blood pressure and biochemical parameters were measured. Metabolic syndrome was defined using modified 'National Cholesterol Education Program/Adult Treatment Panel III (NCEP/ATPIII)' criteria. The prevalence of metabolic syndrome in obese children was 50.4%. Children with metabolic syndrome had significantly higher waist circumference (86.9 vs. 82.4 cm, p-value = 0.049), biceps skinfold thickness (17.2 vs. 14.9 mm, p-value = 0.017), suprailiac skinfold thickness (36.5 vs. 31.8 mm, p-value = 0.019), systolic blood pressure (119.7 vs. 112.6 mmHg, p-value = 0.007), diastolic blood pressure (73.7 vs. 69.0 mmHg, p-value = 0.022), fasting blood glucose (97.4 vs. 93.6 mg/dL, p-value = 0.009) and triglyceride levels (140.0 vs. 85.6 mg/dL, p-value < 0.001) than those without metabolic syndrome. HDL-cholesterol was significantly lower in children with metabolic syndrome than in those without (48.7 vs. 63.1 mg/dL, p-value < 0.001). Approximately half of the obese children in the sample had metabolic syndrome. The prevalence of metabolic syndrome appears to be on the increase. Strategies for childhood obesity and metabolic syndrome prevention are urgently needed for Thai children.
Civil Navigation Signal Status
2015-04-29
8 of 15 defined CNAV message types led to pre-operational use beginning 28 Apr 14. A live-sky event planned for fall of 2015 will incorporate the Midi Almanac... Recoverable fragments of the message-type table: Text, 18 eight-bit ASCII characters; 37 Clock & Midi Almanac, SV Clock Correction Parameters, Midi Almanac parameters; 15 defined message types.
Fidelity deviation in quantum teleportation
NASA Astrophysics Data System (ADS)
Bang, Jeongho; Ryu, Junghee; Kaszlikowski, Dagomir
2018-04-01
We analyze the performance of quantum teleportation in terms of average fidelity and fidelity deviation. The average fidelity is defined as the average value of the fidelities over all possible input states, and the fidelity deviation is their standard deviation, which quantifies the fluctuation of the fidelity or, conversely, the universality of the protocol. In the analysis, we find the condition to optimize both measures under a noisy quantum channel; here we consider the so-called Werner channel. To characterize our results, we introduce a 2D space defined by the aforementioned measures, in which the performance of the teleportation is represented as a point with the channel noise parameter. Through further analysis, we specify some regions drawn for different channel conditions, establishing the connection to the dissimilar contributions of the entanglement to the teleportation and the Bell inequality violation.
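A small numerical sketch of the two measures: for the standard protocol with a Werner resource state, teleportation acts on the input as a depolarizing channel rho -> p*rho + (1 - p)*I/2 (a known result we rely on here), so sampling Haar-random inputs gives average fidelity (1 + p)/2 with zero fidelity deviation, the "universal" case.

    import numpy as np

    rng = np.random.default_rng(0)

    def haar_qubit():
        v = rng.normal(size=2) + 1j * rng.normal(size=2)
        return v / np.linalg.norm(v)

    def teleport_fidelity(psi, p):
        rho_out = p * np.outer(psi, psi.conj()) + (1 - p) * np.eye(2) / 2
        return float(np.real(psi.conj() @ rho_out @ psi))

    fids = [teleport_fidelity(haar_qubit(), p=0.6) for _ in range(10_000)]
    print(np.mean(fids), np.std(fids))   # ~0.8 = (1 + p)/2, deviation ~0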
Batstone, D J; Torrijos, M; Ruiz, C; Schmidt, J E
2004-01-01
The model structure in anaerobic digestion has been clarified following publication of the IWA Anaerobic Digestion Model No. 1 (ADM1). However, parameter values are not well known, and the uncertainty and variability in the given parameter values are almost unknown. Additionally, platforms for identification of parameters, namely continuous-flow laboratory digesters and batch tests, suffer from disadvantages such as long run times and difficulty in defining initial conditions, respectively. Anaerobic sequencing batch reactors (ASBRs) are sequenced into fill-react-settle-decant phases and offer promising possibilities for estimation of parameters, as they are, by nature, dynamic in behaviour and allow repeatable behaviour to establish initial conditions and evaluate parameters. In this study, we estimated parameters describing winery wastewater (most COD as ethanol) degradation using data from sequencing operation, and validated these parameters using unsequenced pulses of ethanol and acetate. The model used was the ADM1, with an extension for ethanol degradation. Parameter confidence spaces were found by non-linear, correlated analysis of the two main Monod parameters: maximum uptake rate (k(m)) and half saturation concentration (K(S)). These parameters could be estimated together using only the measured acetate concentration (20 points per cycle). From interpolating the single-cycle acetate data to multiple cycles, we estimate that a practical "optimal" identifiability could be achieved after two cycles for the acetate parameters, and three cycles for the ethanol parameters. The parameters found performed well in the short term, and represented the pulses of acetate and ethanol (within 4 days of the winery-fed cycles) very well. The main discrepancy was poor prediction of pH dynamics, which could be due to an unidentified buffer with an overall influence the same as a weak base (possibly CaCO3). Based on this work, ASBR systems are effective for parameter estimation, especially for comparative wastewater characterisation. The main disadvantages are heavy computational requirements for multiple cycles, and difficulty in establishing the correct biomass concentration in the reactor, though the latter is also a disadvantage for continuous fixed-film reactors and, especially, batch tests.
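For reference, the Monod uptake expression whose two parameters are estimated above; a sketch only, since the ADM1 embeds this rate in a much larger ODE system, and the example values are hypothetical.

    def monod_uptake(S, X, k_m, K_S):
        # rho = k_m * S / (K_S + S) * X: uptake of substrate S by biomass X.
        return k_m * S / (K_S + S) * X

    # Hypothetical values: k_m [1/d], K_S and S [kg COD/m3], X [kg COD/m3].
    print(monod_uptake(S=0.5, X=1.2, k_m=6.0, K_S=0.3))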
Teubner, Diana; Paulus, Martin; Veith, Michael; Klein, Roland
2015-02-01
Piscifaunal health depends upon the state and quality of the aquatic environment. Variations in physical condition of fish may therefore be attributed to changes in environmental quality. Based on time series of up to 20 years of biometric data of bream from multiple sampling sites of the German environmental specimen bank (ESB), this study assessed whether changes in biometric parameters are able to indicate long-term alterations in fish health and environmental quality. Evaluated biometric parameters of fish health comprised length and weight of individuals of a defined age class, the condition factor, lipid content and hepatosomatic index (HSI). Although there are negative trends of the HSI, the overall development of health parameters can be interpreted as positive. This seems to suggest that health parameters conclusively mirror the long-term improvement of water quality in the selected rivers. However, the applicability of the condition factor as well as lipid content as indicators for fish health remained subject to restrictions. Altogether, the results from the ESB confirmed the high value of biometric parameters for monitoring of long-term changes in state and quality of aquatic ecosystems.
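Two of the biometric parameters named above have standard definitions, sketched here; whether the ESB applies exactly these formulas is our assumption.

    def condition_factor(weight_g, length_cm):
        # Fulton's condition factor: K = 100 * W / L**3 (W in g, L in cm).
        return 100.0 * weight_g / length_cm**3

    def hepatosomatic_index(liver_g, body_g):
        # HSI: liver weight as a percentage of total body weight.
        return 100.0 * liver_g / body_g

    print(condition_factor(450.0, 32.0), hepatosomatic_index(8.5, 450.0))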
The 3D model: explaining densification and deformation mechanisms by using 3D parameter plots.
Picker, Katharina M
2004-04-01
The aim of the study was to analyze very differently deforming materials using 3D parameter plots and consequently to gain deeper insights into the densification and deformation process described with the 3D model in order to define an ideal tableting excipient. The excipients used were dicalcium phosphate dihydrate (DCPD), sodium chloride (NaCl), microcrystalline cellulose (MCC), xylitol, mannitol, alpha-lactose monohydrate, maltose, hydroxypropyl methylcellulose (HPMC), sodium carboxymethylcellulose (NaCMC), cellulose acetate (CAC), maize starch, potato starch, pregelatinized starch, and maltodextrine. All of the materials were tableted to graded maximum relative densities (rhorel, max) using an eccentric tableting machine. The data which resulted, namely force, displacement, and time, were analyzed by the application of 3D modeling. Different particle size fractions of DCPD, CAC, and MCC were analyzed in addition. Brittle deforming materials such as DCPD exhibited a completely different 3D parameter plot, with low time plasticity, d, and low pressure plasticity, e, and a strong decrease in omega values when densification increased, in contrast to the plastically deforming MCC, which had much higher d, e, and omega values. e and omega values changed only slightly when densification increased for MCC. NaCl showed less of a decrease in omega values than DCPD did, and the d and e values were between those of MCC and DCPD. The sugar alcohols, xylitol and mannitol, behaved in a similar fashion to sodium chloride. This is also valid for the crystalline sugars, alpha-lactose monohydrate, and maltose. However, the sugars are more brittle than the sugar alcohols. The cellulose derivatives, HPMC, NaCMC, and CAC, are as plastic as MCC, however, their elasticity depends on substitution indicated by lower (more elastic) or higher (less elastic) omega values. The native starches, maize starch and potato starch, are very elastic, and pregelatinized starch and maltodextrine are less elastic and exhibited higher omega values. Deformation behavior as shown in 3D parameter plots depends on particle size for polymers such as CAC and MCC; however, it does not depend on particle size for brittle materials such as DCPD. An ideally deforming tableting excipient should exhibit high e, d, and omega values with a constant ratio of e and omega at increasing densification.
Total solar eclipse effects on VLF signals: Observations and modeling
NASA Astrophysics Data System (ADS)
Clilverd, Mark A.; Rodger, Craig J.; Thomson, Neil R.; Lichtenberger, János; Steinbach, Péter; Cannon, Paul; Angling, Matthew J.
During the total solar eclipse observed in Europe on August 11, 1999, measurements were made of the amplitude and phase of four VLF transmitters in the frequency range 16-24 kHz. Five receiver sites were set up, and significant variations in phase and amplitude are reported for 17 paths, more than in any previous eclipse study. Distances from transmitter to receiver ranged from 90 to 14,510 km, although the majority were <2000 km. Typically, positive amplitude changes were observed throughout the whole eclipse period on path lengths <2000 km, while negative amplitude changes were observed on paths >10,000 km. Negative phase changes were observed on most paths, independent of path length. Although there was significant variation from path to path, the typical changes observed were ~3 dB and ~50°. The changes observed were modeled using the Long Wave Propagation Capability waveguide code. Maximum eclipse effects occurred when the Wait inverse scale height parameter β was 0.5 km^-1 and the effective ionospheric height parameter H' was 79 km, compared with β = 0.43 km^-1 and H' = 71 km for normal daytime conditions. The resulting changes in modeled amplitude and phase show good agreement with the majority of the observations. The modeling undertaken provides an interpretation of why previous estimates of height change during eclipses have shown such a range of values. A D region gas-chemistry model was compared with electron concentration estimates inferred from the observations made during the solar eclipse. Quiet-day H' and β parameters were used to define the initial ionospheric profile. The gas-chemistry model was then driven only by eclipse-related solar radiation levels. The calculated electron concentration values at 77 km altitude throughout the period of the solar eclipse show good agreement with the values determined from observations at all times, which suggests that a linear variation in electron production rate with solar ionizing radiation is reasonable. At times of minimum electron concentration the chemical model predicts that the D region profile would be parameterized by the same β and H' as the LWPC model values and rocket profiles during totality, which can be considered a validation of the chemical processes defined within the model.
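The β/H' pairs reported above parameterize the Wait-and-Spies exponential D-region electron density profile, sketched here with the reported daytime and eclipse-maximum values.

    import numpy as np

    def wait_electron_density(z_km, h_prime_km, beta_per_km):
        # N_e(z) = 1.43e13 * exp(-0.15*h') * exp((beta - 0.15)*(z - h'))  [m^-3]
        return (1.43e13 * np.exp(-0.15 * h_prime_km)
                * np.exp((beta_per_km - 0.15) * (z_km - h_prime_km)))

    print(wait_electron_density(77.0, h_prime_km=71.0, beta_per_km=0.43))  # normal day
    print(wait_electron_density(77.0, h_prime_km=79.0, beta_per_km=0.50))  # eclipse maximum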
NASA Astrophysics Data System (ADS)
Kitt, R.; Kalda, J.
2006-03-01
The question of the optimal portfolio is addressed. The conventional Markowitz portfolio optimisation is discussed and the shortcomings due to non-Gaussian security returns are outlined. A method is proposed to minimise the likelihood of extreme non-Gaussian drawdowns of the portfolio value. The theory is called leptokurtic because it minimises the effects of the "fat tails" of returns. The leptokurtic portfolio theory provides an optimal portfolio for investors who define their risk-aversion as unwillingness to experience sharp drawdowns in asset prices. Two types of risks in asset returns are defined: a fluctuation risk, which has a Gaussian distribution, and a drawdown risk, which deals with the distribution tails. These risks are quantitatively measured by defining the "noise kernel", an ellipsoidal cloud of points in the space of asset returns. The size of the ellipsoid is controlled with the threshold parameter: the larger the threshold parameter, the larger the returns that are accepted as normal fluctuations. The return vectors falling into the kernel are used for the calculation of the fluctuation risk. Analogously, the data points falling outside the kernel are used for the calculation of drawdown risks. As a result the portfolio optimisation problem becomes three-dimensional: in addition to the return, there are two types of risks involved. The optimal portfolio for drawdown-averse investors is the portfolio minimising variance outside the noise kernel. The theory has been tested with MSCI North America, Europe and Pacific total return stock indices.
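A minimal sketch of the kernel/tail split described above: classify return vectors by Mahalanobis distance and estimate one covariance inside the kernel (fluctuation risk) and one outside it (drawdown risk). The threshold value is illustrative, and enough points are assumed on both sides of the split.

    import numpy as np

    def split_risk_covariances(returns, threshold=2.0):
        # returns: (T, n_assets) array of return vectors.
        mu = returns.mean(axis=0)
        cov = np.cov(returns, rowvar=False)
        dev = returns - mu
        d = np.sqrt(np.einsum('ij,jk,ik->i', dev, np.linalg.inv(cov), dev))
        inside = d <= threshold        # the ellipsoidal noise kernel
        return (np.cov(returns[inside], rowvar=False),    # fluctuation risk
                np.cov(returns[~inside], rowvar=False))   # drawdown risk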
Growth Angle - a Microscopic View
NASA Technical Reports Server (NTRS)
Mazurak, K.; Volz, M. P.; Croll, A.
2017-01-01
The growth angle that is formed between the side of the growing crystal and the melt meniscus is an important parameter in the detached Bridgman crystal growth method, where it determines the extent of the crystal-crucible wall gap, and in the Czochralski and float zone methods, where it influences the size and stability of the crystals. The growth angle is a non-equilibrium parameter, defined for the crystal growth process only. For a melt-crystal interface translating towards the crystal (melting), there is no specific angle defined between the melt and the sidewall of the solid. In this case, the corner at the triple line becomes rounded, and the angle between the sidewall and the incipience of meniscus can take a number of values, depending on the position of the triple line. In this work, a microscopic model is developed in which the fluid interacts with the solid surface through long range van der Waals or Casimir dispersive forces. This growth angle model is applied to Si and Ge and compared with the macroscopic approach of Herring. In the limit of a rounded corner with a large radius of curvature, the wetting of the melt on the crystal is defined by the contact angle. The proposed microscopic approach addresses the interesting issue of the transition from a contact angle to a growth angle as the radius of curvature decreases.
Chaotic and stable perturbed maps: 2-cycles and spatial models
NASA Astrophysics Data System (ADS)
Braverman, E.; Haroutunian, J.
2010-06-01
As the growth rate parameter increases in the Ricker, logistic and some other maps, the models exhibit an irreversible period doubling route to chaos. If a constant positive perturbation is introduced, then the Ricker model (but not the classical logistic map) experiences period doubling reversals; the break of chaos finally gives birth to a stable two-cycle. We outline the maps which demonstrate a similar behavior and also study relevant discrete spatial models where the value in each cell at the next step is defined only by the values at the cell and its nearest neighbors. The stable 2-cycle in a scalar map does not necessarily imply 2-cyclic-type behavior in each cell for the spatial generalization of the map.
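A minimal sketch of the perturbed Ricker map; the parameter values are illustrative rather than taken from the paper, and scanning c while inspecting the orbit tail reveals the period-doubling reversals described.

    import numpy as np

    def perturbed_ricker(x0, r, c, n=2000):
        # Iterate x -> x * exp(r * (1 - x)) + c with constant perturbation c.
        x, orbit = x0, []
        for _ in range(n):
            x = x * np.exp(r * (1.0 - x)) + c
            orbit.append(x)
        return np.array(orbit)

    # Inspect the tail: an alternating pair of values indicates a stable 2-cycle.
    print(np.round(perturbed_ricker(0.5, r=3.0, c=0.2)[-8:], 4))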
Wu, Rongli; Watanabe, Yoshiyuki; Arisawa, Atsuko; Takahashi, Hiroto; Tanaka, Hisashi; Fujimoto, Yasunori; Watabe, Tadashi; Isohashi, Kayako; Hatazawa, Jun; Tomiyama, Noriyuki
2017-10-01
This study aimed to compare the tumor volume definition using conventional magnetic resonance (MR) and 11C-methionine positron emission tomography (MET/PET) images in the differentiation of the pre-operative glioma grade by using whole-tumor histogram analysis of normalized cerebral blood volume (nCBV) maps. Thirty-four patients with histopathologically proven primary brain low-grade gliomas (n = 15) and high-grade gliomas (n = 19) underwent pre-operative or pre-biopsy MET/PET, fluid-attenuated inversion recovery, dynamic susceptibility contrast perfusion-weighted magnetic resonance imaging, and contrast-enhanced T1-weighted at 3.0 T. The histogram distribution derived from the nCBV maps was obtained by co-registering the whole tumor volume delineated on conventional MR or MET/PET images, and eight histogram parameters were assessed. The mean nCBV value had the highest AUC value (0.906) based on MET/PET images. Diagnostic accuracy significantly improved when the tumor volume was measured from MET/PET images compared with conventional MR images for the parameters of mean, 50th, and 75th percentile nCBV value (p = 0.0246, 0.0223, and 0.0150, respectively). Whole-tumor histogram analysis of CBV map provides more valuable histogram parameters and increases diagnostic accuracy in the differentiation of pre-operative cerebral gliomas when the tumor volume is derived from MET/PET images.
Study of eigenfrequencies with the help of Prony's method
NASA Astrophysics Data System (ADS)
Drobakhin, O. O.; Olevskyi, O. V.; Olevskyi, V. I.
2017-10-01
Eigenfrequencies can be crucial in the design of a structure. They determine many limiting parameters of the structure; exceeding these values can lead to structural failure of an object. This is especially important in the design of structures which support heavy equipment or are subjected to the forces of airflow. One of the most effective ways to acquire the frequency values is computer-based numerical simulation. The existing methods do not allow acquisition of the whole range of needed parameters. It is well known that Prony's method is highly effective for the investigation of dynamic processes; thus, it is rational to adapt Prony's method for such an investigation. The Prony method has an advantage over other numerical schemes because it provides the possibility to process not only the results of numerical simulation, but also real experimental data. The research was carried out for a computer model of a steel plate. The input data were obtained by using the Dassault Systems SolidWorks computer package with the Simulation add-on. We investigated the acquired input data with the help of Prony's method. The result of the numerical experiment shows that Prony's method can be used to investigate the mechanical eigenfrequencies with good accuracy. The output of Prony's method not only contains information about the values of the frequencies themselves, but also data regarding the amplitudes, initial phases and decay factors of any given mode of oscillation, which can also be used in engineering.
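A compact implementation of the classical Prony fit described above, recovering damping factors, frequencies and (complex) amplitudes of exponentially damped modes from uniformly sampled data; the test signal is synthetic and the mode count p is assumed known.

    import numpy as np

    def prony(x, p, dt):
        # Step 1: linear-prediction coefficients by least squares.
        N = len(x)
        A = np.column_stack([x[p - 1 - m : N - 1 - m] for m in range(p)])
        a = np.linalg.lstsq(A, -x[p:], rcond=None)[0]
        # Step 2: mode poles are roots of the characteristic polynomial.
        z = np.roots(np.r_[1.0, a])
        # Step 3: complex amplitudes from a Vandermonde least-squares solve.
        V = z[None, :] ** np.arange(N)[:, None]
        amps = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
        return np.log(z) / dt, amps   # s_k = damping + i * angular frequency

    dt = 1e-3
    t = np.arange(500) * dt
    sig = np.exp(-2.0 * t) * np.cos(2 * np.pi * 5.0 * t)   # one damped 5 Hz mode
    s, amps = prony(sig, p=2, dt=dt)
    print(s)   # real parts ~ -2, imaginary parts ~ +/- 2*pi*5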
Analysis of composition-based metagenomic classification.
Higashi, Susan; Barreto, André da Motta Salles; Cantão, Maurício Egidio; de Vasconcelos, Ana Tereza Ribeiro
2012-01-01
An essential step of a metagenomic study is the taxonomic classification, that is, the identification of the taxonomic lineage of the organisms in a given sample. The taxonomic classification process involves a series of decisions. Currently, in the context of metagenomics, such decisions are usually based on empirical studies that consider one specific type of classifier. In this study we propose a general framework for analyzing the impact that several decisions can have on the classification problem. Instead of focusing on any specific classifier, we define a generic score function that provides a measure of the difficulty of the classification task. Using this framework, we analyze the impact of the following parameters on the taxonomic classification problem: (i) the length of n-mers used to encode the metagenomic sequences, (ii) the similarity measure used to compare sequences, and (iii) the type of taxonomic classification, which can be conventional or hierarchical, depending on whether the classification process occurs in a single shot or in several steps according to the taxonomic tree. We defined a score function that measures the degree of separability of the taxonomic classes under a given configuration induced by the parameters above. We conducted an extensive computational experiment and found out that reasonable values for the parameters of interest could be (i) intermediate values of n, the length of the n-mers; (ii) any similarity measure, because all of them resulted in similar scores; and (iii) the hierarchical strategy, which performed better in all of the cases. As expected, short n-mers generate lower configuration scores because they give rise to frequency vectors that represent distinct sequences in a similar way. On the other hand, large values of n result in sparse frequency vectors that represent in different ways metagenomic fragments that are in fact similar, also leading to low configuration scores. Regarding the similarity measure, in contrast to our expectations, the variation of the measures did not change the configuration scores significantly. Finally, the hierarchical strategy was more effective than the conventional strategy, which suggests that, instead of using a single classifier, one should adopt multiple classifiers organized as a hierarchy.
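A minimal sketch of the n-mer encoding and one possible similarity measure (cosine); the function names and the choice of measure are illustrative.

    from itertools import product
    import numpy as np

    def nmer_vector(seq, n=3):
        # Normalized frequency vector over all 4**n DNA n-mers.
        mers = [''.join(p) for p in product('ACGT', repeat=n)]
        idx = {m: i for i, m in enumerate(mers)}
        v = np.zeros(len(mers))
        for i in range(len(seq) - n + 1):
            k = seq[i:i + n]
            if k in idx:
                v[idx[k]] += 1
        return v / max(v.sum(), 1.0)

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    print(cosine(nmer_vector("ACGTACGTGG"), nmer_vector("ACGTACGTCC")))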
Scaling Linguistic Characterization of Precipitation Variability
NASA Astrophysics Data System (ADS)
Primo, C.; Gutierrez, J. M.
2003-04-01
Rainfall variability is influenced by changes in the aggregation of daily rainfall. This problem is of great importance for hydrological, agricultural and ecological applications. Rainfall averages, or accumulations, are widely used as standard climatic parameters. However, different aggregation schemes may lead to the same average or accumulated values. In this paper we present a fractal method to characterize different aggregation schemes. The method provides scaling exponents characterizing weekly or monthly rainfall patterns for a given station. To this aim, we establish an analogy with linguistic analysis, considering precipitation as a discrete variable (e.g., rain, no rain). Each weekly, or monthly, symbolic precipitation sequence of observed precipitation is then considered as a "word" (in this case, a binary word) which defines a specific weekly rainfall pattern. Thus, each site defines a "language" characterized by the words observed at that site during a period representative of the climatology. The more variable the observed weekly precipitation sequences, the more complex the obtained language. To characterize these languages, we first applied Zipf's method, obtaining scaling histograms of rank-ordered frequencies. However, to obtain significant exponents, the scaling must be maintained over some orders of magnitude, requiring long sequences of daily precipitation which are not available at particular stations. Thus, this analysis is not suitable for applications involving particular stations (such as regionalization). We therefore introduce an alternative fractal method applicable to data from local stations. The so-called Chaos-Game method uses Iterated Function Systems (IFS) for graphically representing rainfall languages, in a way that complex languages define complex graphical patterns. The box-counting dimension and the entropy of the resulting patterns are used as linguistic parameters to quantitatively characterize the complexity of the patterns. We illustrate the high climatological discrimination power of the linguistic parameters in the Iberian peninsula, when compared with other standard techniques (such as seasonal mean accumulated precipitation). As an example, standard and linguistic parameters are used as inputs for a clustering regionalization method, comparing the resulting clusters.
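A one-dimensional stand-in for the Chaos-Game/IFS construction and a box-counting estimate; the study uses 2-D IFS patterns, so the map, scales and synthetic data below are illustrative assumptions.

    import numpy as np

    def chaos_game_1d(symbols):
        # x <- (x + s) / 2 sends every observed binary pattern to a distinct
        # dyadic point of [0, 1]; complex "languages" fill the interval densely.
        x, pts = 0.5, []
        for s in symbols:
            x = 0.5 * (x + s)
            pts.append(x)
        return np.array(pts)

    def box_counting_dimension(points, scales=(2, 4, 8, 16, 32, 64)):
        counts = [len(np.unique(np.floor(points * k))) for k in scales]
        slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
        return slope

    rng = np.random.default_rng(0)
    rain = (rng.random(5000) < 0.3).astype(int)   # synthetic rain/no-rain series
    print(box_counting_dimension(chaos_game_1d(rain)))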
Yan, Binjun; Li, Yao; Guo, Zhengtai; Qu, Haibin
2014-01-01
The concept of quality by design (QbD) has been widely accepted and applied in the pharmaceutical manufacturing industry. There are still two key issues to be addressed in the implementation of QbD for herbal drugs. The first issue is the quality variation of herbal raw materials and the second issue is the difficulty in defining the acceptable ranges of critical quality attributes (CQAs). The aim was to propose a feedforward control strategy and a method for defining the acceptable ranges of CQAs to address these two issues. In the case study of the ethanol precipitation process of Danshen (Radix Salvia miltiorrhiza) injection, regression models linking input material attributes and process parameters to CQAs were built first, and an optimisation model for calculating the best process parameters according to the input materials was established. Then, the feasible material space was defined and the acceptable ranges of CQAs for the previous process were determined. In the case study, satisfactory regression models were built with cross-validated regression coefficients (Q2) all above 91 %. The feedforward control strategy was applied successfully to compensate for the quality variation of the input materials and was able to control the CQAs within 90-110 % of the desired values. In addition, the feasible material space for the ethanol precipitation process was built successfully, which showed the acceptable ranges of the CQAs for the concentration process. The proposed methodology can help to promote the implementation of QbD for herbal drugs. Copyright © 2013 John Wiley & Sons, Ltd.
The analysis of soil cores polluted with certain metals using the Box-Cox transformation.
Meloun, Milan; Sánka, Milan; Nemec, Pavel; Krítková, Sona; Kupka, Karel
2005-09-01
To define the soil properties for a given area or country, including the level of pollution, soil survey and inventory programs are essential tools. Soil data transformations enable the expression of the original data on a new scale, more suitable for data analysis. In the computer-aided interactive analysis of large data files of soil characteristics containing outliers, the diagnostic plots of exploratory data analysis (EDA) often reveal that the sample distribution is systematically skewed, or reject sample homogeneity. Under such circumstances the original data should be transformed. The Box-Cox transformation improves sample symmetry and stabilizes spread. The logarithmic plot of a profile likelihood function enables the optimum transformation parameter to be found. Here, the proposed procedure for data transformation in univariate data analysis is illustrated on a determination of cadmium content in the plough zone of agricultural soils. A typical soil pollution survey concerns the determination of the elements Be (16 544 values available), Cd (40 317 values), Co (22 176 values), Cr (40 318 values), Hg (32 344 values), Ni (34 989 values), Pb (40 344 values), V (20 373 values) and Zn (36 123 values) in large samples.
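In practice the optimum transformation parameter can be found by maximizing the profile likelihood, e.g. with scipy's boxcox; sketched here on synthetic lognormal "cadmium content" data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    cd = rng.lognormal(mean=-1.0, sigma=0.8, size=5000)   # synthetic skewed data

    transformed, lam = stats.boxcox(cd)   # lambda maximizing the profile likelihood
    print(lam, stats.skew(cd), stats.skew(transformed))   # skewness shrinks toward 0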
Rainfall or parameter uncertainty? The power of sensitivity analysis on grouped factors
NASA Astrophysics Data System (ADS)
Nossent, Jiri; Pereira, Fernando; Bauwens, Willy
2017-04-01
Hydrological models are typically used to study and represent (a part of) the hydrological cycle. In general, the output of these models depends mostly on their input rainfall and parameter values. Both the model parameters and the input precipitation, however, are characterized by uncertainties and therefore lead to uncertainty in the model output. Sensitivity analysis (SA) allows one to assess and compare the importance of the different factors for this output uncertainty. To this end, the rainfall uncertainty can be incorporated in the SA by representing it as a probabilistic multiplier. Such a multiplier can be defined for the entire time series, or several of these factors can be determined for every recorded rainfall pulse or for hydrologically independent storm events. As a consequence, the number of parameters included in the SA related to the rainfall uncertainty can be (much) lower or (much) higher than the number of model parameters. Although such analyses can yield interesting results, it remains challenging to determine which type of uncertainty will affect the model output most, due to the different weight both types will have within the SA. In this study, we apply the variance-based Sobol' sensitivity analysis method to two different hydrological simulators (NAM and HyMod) for four diverse watersheds. Besides the different number of model parameters (NAM: 11 parameters; HyMod: 5 parameters), the setup of our combined sensitivity and uncertainty analysis is also varied by defining a variety of scenarios including diverse numbers of rainfall multipliers. To overcome the issue of the different number of factors and, thus, the different weights of the two types of uncertainty, we build on one of the advantageous properties of the Sobol' SA, i.e. treating grouped parameters as a single parameter. The latter results in a setup with a single factor for each uncertainty type and allows for a straightforward comparison of their importance. In general, the results show a clear influence of the weights in the different SA scenarios. However, working with grouped factors resolves this issue and leads to clear importance results.
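A toy sketch of the grouped-factor approach with SALib, assuming its support for a "groups" entry in the problem definition; the stand-in model, bounds and grouping below are illustrative and are not the NAM/HyMod setups.

    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # Two "model parameters" and three "rainfall multipliers", grouped so that
    # each uncertainty type is treated as a single factor in the Sobol' SA.
    problem = {
        "num_vars": 5,
        "names": ["p1", "p2", "m1", "m2", "m3"],
        "bounds": [[0.0, 1.0]] * 2 + [[0.5, 1.5]] * 3,
        "groups": ["params", "params", "rain", "rain", "rain"],
    }

    def model(x):
        p1, p2, m1, m2, m3 = x
        return p1 * (m1 + m2 + m3) + 0.5 * p2

    X = saltelli.sample(problem, 1024)
    Y = np.apply_along_axis(model, 1, X)
    print(sobol.analyze(problem, Y)["S1"])   # one first-order index per group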
Lafage, Renaud; Schwab, Frank; Challier, Vincent; Henry, Jensen K; Gum, Jeffrey; Smith, Justin; Hostin, Richard; Shaffrey, Christopher; Kim, Han J; Ames, Christopher; Scheer, Justin; Klineberg, Eric; Bess, Shay; Burton, Douglas; Lafage, Virginie
2016-01-01
Retrospective review of a prospective, multicenter database. The aim of the study was to determine age-specific spino-pelvic parameters, to extrapolate age-specific Oswestry Disability Index (ODI) values from published Short Form (SF)-36 Physical Component Score (PCS) data, and to propose age-specific realignment thresholds for adult spinal deformity (ASD). The Scoliosis Research Society-Schwab classification offers a framework for defining alignment in patients with ASD. Although age-specific changes in spinal alignment and patient-reported outcomes have been established in the literature, their relationship in the setting of ASD operative realignment has not been reported. ASD patients who received operative or nonoperative treatment were consecutively enrolled. Patients were stratified by age, consistent with published US-normative values (Norms) of the SF-36 PCS (<35, 35-44, 45-54, 55-64, 65-74, >75 y old). At baseline, relationships between radiographic spino-pelvic parameters (lumbar-pelvic mismatch [PI-LL], pelvic tilt [PT], sagittal vertical axis [SVA], and T1 pelvic angle [TPA]), age, and PCS were established using linear regression analysis; normative PCS values were then used to establish age-specific targets. Correlation analysis with ODI and PCS was used to determine age-specific ideal alignment. Baseline analysis included 773 patients (mean age 53.7 y, 54% operative, 83% female). There was a strong correlation between ODI and PCS (r = 0.814, P < 0.001), allowing for the extrapolation of US-normative ODI by age group. Linear regression analysis (all with r > 0.510, P < 0.001) combined with US-normative PCS values demonstrated that ideal spino-pelvic values increased with age, ranging from PT = 10.9 degrees, PI-LL = -10.5 degrees, and SVA = 4.1 mm for patients under 35 years to PT = 28.5 degrees, PI-LL = 16.7 degrees, and SVA = 78.1 mm for patients over 75 years. Clinically, older patients had greater compensation, more degenerative loss of lordosis, and were more pitched forward. This study demonstrated that sagittal spino-pelvic alignment varies with age. Thus, operative realignment targets should account for age, with younger patients requiring more rigorous alignment objectives.
NASA Astrophysics Data System (ADS)
Khalili, Ashkan; Jha, Ratneshwar; Samaratunga, Dulip
2016-11-01
Wave propagation analysis in 2-D composite structures is performed efficiently and accurately through the formulation of a User-Defined Element (UEL) based on the wavelet spectral finite element (WSFE) method. The WSFE method is based on the first-order shear deformation theory which yields accurate results for wave motion at high frequencies. The 2-D WSFE model is highly efficient computationally and provides a direct relationship between system input and output in the frequency domain. The UEL is formulated and implemented in Abaqus (commercial finite element software) for wave propagation analysis in 2-D composite structures with complexities. Frequency domain formulation of WSFE leads to complex valued parameters, which are decoupled into real and imaginary parts and presented to Abaqus as real values. The final solution is obtained by forming a complex value using the real number solutions given by Abaqus. Five numerical examples are presented in this article, namely undamaged plate, impacted plate, plate with ply drop, folded plate and plate with stiffener. Wave motions predicted by the developed UEL correlate very well with Abaqus simulations. The results also show that the UEL largely retains computational efficiency of the WSFE method and extends its ability to model complex features.
Sampling ARG of multiple populations under complex configurations of subdivision and admixture.
Carrieri, Anna Paola; Utro, Filippo; Parida, Laxmi
2016-04-01
Simulating complex evolution scenarios of multiple populations is an important task for answering many basic questions relating to population genomics. Apart from the population samples, the underlying Ancestral Recombination Graph (ARG) is an additional important means in hypothesis checking and reconstruction studies. Furthermore, complex simulations require a plethora of interdependent parameters, making even the scenario specification highly non-trivial. We present an algorithm, SimRA, that simulates a generic multiple-population evolution model with admixture. It is based on random graphs that dramatically improve on the time and space requirements of the classical single-population algorithm. Using the underlying random-graphs model, we also derive closed forms of the expected values of the ARG characteristics, i.e., height of the graph, number of recombinations, number of mutations and population diversity, in terms of its defining parameters. This is crucial in aiding the user to specify meaningful parameters for complex scenario simulations, not through trial and error based on raw compute power but through intelligent parameter estimation. To the best of our knowledge this is the first time closed-form expressions have been computed for the ARG properties. We show that the expected values closely match the empirical values through simulations. Finally, we demonstrate that SimRA produces the ARG in compact form without compromising any accuracy. We demonstrate the compactness and accuracy through extensive experiments. SimRA (Simulation based on Random graph Algorithms) source, executable, user manual and sample input-output sets are available for downloading at: https://github.com/ComputationalGenomics/SimRA Contact: parida@us.ibm.com Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Zhang, Shaojie; Zhao, Luqiang; Delgado-Tellez, Ricardo; Bao, Hongjun
2018-03-01
Conventional outputs of physics-based landslide forecasting models are presented as deterministic warnings by calculating the safety factor (Fs) of potentially dangerous slopes. However, these models are highly dependent on variables such as cohesion force and internal friction angle, which are affected by a high degree of uncertainty, especially at a regional scale, resulting in unacceptable uncertainties in Fs. Under such circumstances, the outputs of physical models are more suitable if presented in the form of landslide probability values. In order to develop such models, a method to link the uncertainty of soil parameter values with landslide probability is devised. This paper proposes the use of Monte Carlo methods to quantitatively express uncertainty by assigning random values to physical variables inside a defined interval. The inequality Fs < 1 is tested for each pixel in n simulations which are integrated in a unique parameter. This parameter links the landslide probability to the uncertainties of soil mechanical parameters and is used to create a physics-based probabilistic forecasting model for rainfall-induced shallow landslides. The prediction ability of this model was tested in a case study, in which simulated forecasting of landslide disasters associated with heavy rainfalls on 9 July 2013 in the Wenchuan earthquake region of Sichuan province, China, was performed. The proposed model successfully forecasted landslides in 159 of the 176 disaster points registered by the geo-environmental monitoring station of Sichuan province. Such testing results indicate that the new model can be operated in a highly efficient way and shows more reliable results, attributable to its high prediction accuracy. Accordingly, the new model can potentially be packaged into a forecasting system for shallow landslides, providing technological support for the mitigation of these disasters at regional scale.
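The core Monte Carlo step can be sketched with an infinite-slope stability model: draw soil parameters from assumed intervals, test Fs < 1 for each draw, and report the failure fraction as the landslide probability. All numerical values below are illustrative, not taken from the paper.

    import numpy as np

    def landslide_probability(slope_deg, depth_m, gamma=19e3, n_sim=10_000, seed=0):
        # Infinite-slope model without pore pressure:
        # Fs = c / (gamma * z * sin(t) * cos(t)) + tan(phi) / tan(t)
        rng = np.random.default_rng(seed)
        c = rng.uniform(5e3, 15e3, n_sim)                  # cohesion [Pa], assumed range
        phi = np.radians(rng.uniform(25.0, 35.0, n_sim))   # friction angle, assumed range
        t = np.radians(slope_deg)
        fs = c / (gamma * depth_m * np.sin(t) * np.cos(t)) + np.tan(phi) / np.tan(t)
        return float(np.mean(fs < 1.0))

    print(landslide_probability(slope_deg=40.0, depth_m=2.0))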
N'gattia, A K; Coulibaly, D; Nzussouo, N Talla; Kadjo, H A; Chérif, D; Traoré, Y; Kouakou, B K; Kouassi, P D; Ekra, K D; Dagnan, N S; Williams, T; Tiembré, I
2016-09-13
In temperate regions, influenza epidemics occur in the winter and correlate with certain climatological parameters. In African tropical regions, the effects of climatological parameters on influenza epidemics are not well defined. This study aims to identify and model the effects of climatological parameters on seasonal influenza activity in Abidjan, Cote d'Ivoire. We studied the effects of weekly rainfall, humidity, and temperature on laboratory-confirmed influenza cases in Abidjan from 2007 to 2010. We used the Box-Jenkins method with the autoregressive integrated moving average (ARIMA) process to create models using data from 2007-2010 and to assess the predictive value of the best model on data from 2011 to 2012. The weekly number of influenza cases showed significant cross-correlation with certain prior weeks for both rainfall and relative humidity. The best-fitting multivariate model (ARIMAX(2,0,0)_RF) included the number of influenza cases during 1 week and 2 weeks prior, and the rainfall during the current week and 5 weeks prior. The performance of this model improved by >3 % in terms of the Akaike Information Criterion (AIC) and by 2.5 % in terms of the Bayesian Information Criterion (BIC) compared with the reference univariate ARIMA(2,0,0). Prediction of the weekly number of influenza cases during 2011-2012 with the best-fitting multivariate model (ARIMAX(2,0,0)_RF) showed that the observed values were within the 95 % confidence interval of the predicted values during 97 of 104 weeks. Including rainfall thus improves the performance of both the fitted and predictive models. The timing of influenza in Abidjan can be partially explained by the influence of rainfall, in a setting with little change in temperature throughout the year. These findings can help clinicians to anticipate influenza cases during the rainy season by implementing preventive measures.
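A sketch of fitting an ARIMAX(2,0,0) with rainfall covariates in statsmodels on synthetic data; the lag-0 and lag-5 rainfall terms mirror the description above, while the variable names and data are illustrative.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(0)
    rain = pd.Series(rng.gamma(2.0, 10.0, 200))            # synthetic weekly rainfall
    exog = pd.DataFrame({"rain_t": rain, "rain_t5": rain.shift(5)}).dropna()
    cases = (10 + 0.2 * exog["rain_t"] + 0.1 * exog["rain_t5"]
             + rng.normal(0.0, 2.0, len(exog)))            # synthetic case counts

    fit = SARIMAX(cases, exog=exog, order=(2, 0, 0)).fit(disp=False)
    print(fit.aic, fit.bic)   # compare against a rainfall-free ARIMA(2,0,0)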
NASA Astrophysics Data System (ADS)
Dauser, T.; García, J.; Walton, D. J.; Eikmann, W.; Kallman, T.; McClintock, J.; Wilms, J.
2016-05-01
Aims: The only relativistic reflection model that implements a parameter relating the intensity incident on an accretion disk to the observed intensity is relxill. The parameter used in earlier versions of this model, referred to as the reflection strength, is unsatisfactory; it has been superseded by a parameter that provides insight into the accretion geometry, namely the reflection fraction. The reflection fraction is defined as the ratio of the coronal intensity illuminating the disk to the coronal intensity that reaches the observer. Methods: The relxill model combines a general relativistic ray-tracing code and a photoionization code to compute the component of radiation reflected from an accretion disk that is illuminated by an external source. The reflection fraction is a particularly important parameter for relativistic models with well-defined geometry, such as the lamp post model, which is a focus of this paper. Results: Relativistic spectra are compared for three inclinations and for four values of the key parameter of the lamp post model, namely the height above the black hole of the illuminating, on-axis point source. In all cases, the strongest reflection is produced for low source heights and high spin. A low-spin black hole is shown to be incapable of producing enhanced relativistic reflection. Results for the relxill model are compared to those obtained with other models and a Monte Carlo simulation. Conclusions: Fitting data by using the relxill model and the recently implemented reflection fraction, the geometry of a system can be constrained. The reflection fraction is independent of system parameters such as inclination and black hole spin. The reflection-fraction parameter was implemented with the name refl_frac in all flavours of the relxill model, and the non-relativistic reflection model xillver, in v0.4a (18 January 2016).
NASA Technical Reports Server (NTRS)
Dauser, T.; Garcia, J.; Walton, D. J.; Eikmann, W.; Kallman, T.; McClintock, J.; Wilms, J.
2016-01-01
Aims. The only relativistic reflection model that implements a parameter relating the intensity incident on an accretion disk to the observed intensity is relxill. The parameter used in earlier versions of this model, referred to as the reflection strength, is unsatisfactory; it has been superseded by a parameter that provides insight into the accretion geometry, namely the reflection fraction. The reflection fraction is defined as the ratio of the coronal intensity illuminating the disk to the coronal intensity that reaches the observer. Methods. The relxill model combines a general relativistic ray-tracing code and a photoionization code to compute the component of radiation reflected from an accretion disk that is illuminated by an external source. The reflection fraction is a particularly important parameter for relativistic models with well-defined geometry, such as the lamp post model, which is a focus of this paper. Results. Relativistic spectra are compared for three inclinations and for four values of the key parameter of the lamp post model, namely the height above the black hole of the illuminating, on-axis point source. In all cases, the strongest reflection is produced for low source heights and high spin. A low-spin black hole is shown to be incapable of producing enhanced relativistic reflection. Results for the relxill model are compared to those obtained with other models and a Monte Carlo simulation. Conclusions. Fitting data by using the relxill model and the recently implemented reflection fraction, the geometry of a system can be constrained. The reflection fraction is independent of system parameters such as inclination and black hole spin. The reflection-fraction parameter was implemented with the name refl_frac in all flavours of the relxill model, and the non-relativistic reflection model xillver, in v0.4a (18 January 2016).
Using sensitivity analysis in model calibration efforts
Tiedeman, Claire; Hill, Mary C.
2003-01-01
In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
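As an illustration of the first of the four measures, the sketch below computes composite scaled sensitivities from a sensitivity (Jacobian) matrix. The formula is the standard Hill and Tiedeman definition assumed here; the matrix, parameter values, and weights are hypothetical.

```python
import numpy as np

def composite_scaled_sensitivity(J, b, w):
    """Composite scaled sensitivities (CSS), one value per parameter,
    assuming the standard definition:
        css_j = sqrt( (1/ND) * sum_i ( J_ij * b_j * sqrt(w_i) )^2 )
    J : (ND, NP) sensitivity matrix, dy_i/db_j
    b : (NP,)   parameter values
    w : (ND,)   observation weights (typically 1/variance)
    """
    nd = J.shape[0]
    dss = J * b[np.newaxis, :] * np.sqrt(w)[:, np.newaxis]  # dimensionless scaled sens.
    return np.sqrt(np.sum(dss**2, axis=0) / nd)

# Tiny illustration: 4 observations, 2 hypothetical parameters
J = np.array([[0.8, 0.1], [0.6, 0.3], [0.9, 0.2], [0.7, 0.0]])
b = np.array([10.0, 2.0])     # e.g. a hydraulic conductivity and a recharge rate
w = np.ones(4)
print(composite_scaled_sensitivity(J, b, w))
```

A parameter with a CSS much smaller than the others contributes little to the fit and is a candidate for omission or for targeted data collection.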
Natural Environmental Service Support to NASA Vehicle, Technology, and Sensor Development Programs
NASA Technical Reports Server (NTRS)
1993-01-01
The research performed under this contract involved definition of the natural environmental parameters affecting the design, development, and operation of space and launch vehicles. The Universities Space Research Association (USRA) provided the manpower and resources to accomplish the following tasks: defining environmental parameters critical for design, development, and operation of launch vehicles; defining environmental forecasts required to assure optimal utilization of launch vehicles; and defining orbital environments of operation and developing models on environmental parameters affecting launch vehicle operations.
NASA Astrophysics Data System (ADS)
Lanen, Theo A.; Watt, David W.
1995-10-01
Singular value decomposition has served as a diagnostic tool in optical computed tomography by using its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectrum of their weight matrices. The number of significant singular values in the singular value spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis involves variation of the following five parameters characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. Items such as the presence of noise in the views, the coding accuracy of the weight matrix, and the accuracy of the singular value decomposition procedure itself are also assessed.
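The diagnostic itself is compact: build the weight matrix, take its singular value spectrum, and count the values above a relative threshold. In the sketch below the toy matrix mimics a few-view geometry, and the 1e-3 cutoff is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def significant_singular_values(W, rel_tol=1e-3):
    """Count singular values above rel_tol * s_max: a simple proxy for the
    number of independently resolvable components of the tomographic system
    W x = p (weight matrix W, basis-function coefficients x, projections p)."""
    s = np.linalg.svd(W, compute_uv=False)
    return int(np.count_nonzero(s > rel_tol * s[0])), s

# Toy weight matrix: only 3 distinct view "profiles" repeated -> ill-conditioned
rng = np.random.default_rng(1)
views = rng.normal(size=(3, 64))
W = np.vstack([views] * 10) + 1e-6 * rng.normal(size=(30, 64))
n_sig, spectrum = significant_singular_values(W)
print(f"{n_sig} significant singular values out of {len(spectrum)}")
```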
Liquefaction potential index: Field assessment
Toprak, S.; Holzer, T.L.
2003-01-01
Cone penetration test (CPT) soundings at historic liquefaction sites in California were used to evaluate the predictive capability of the liquefaction potential index (LPI), which was defined by Iwasaki et al. in 1978. LPI combines depth, thickness, and factor of safety of liquefiable material inferred from a CPT sounding into a single parameter. LPI data from the Monterey Bay region indicate that the probability of surface manifestations of liquefaction is 58 and 93%, respectively, when LPI equals or exceeds 5 and 15. LPI values also generally correlate with surface effects of liquefaction: decreasing from a median of 12 for soundings in lateral spreads to 0 for soundings where no surface effects were reported. The index is particularly promising for probabilistic liquefaction hazard mapping, where it may be a useful parameter for characterizing the liquefaction potential of geologic units.
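A minimal sketch of the LPI computation, assuming the Iwasaki et al. (1978) form LPI = integral from 0 to 20 m of F(z) w(z) dz, with depth weight w(z) = 10 - 0.5 z and severity F = 1 - FS where the factor of safety FS < 1 (else F = 0); the factor-of-safety profile below is hypothetical.

```python
import numpy as np

def lpi(depths_m, fs):
    """Liquefaction potential index, assuming the Iwasaki et al. (1978) form.
    depths_m, fs: depth profile (m) and factor of safety from a CPT sounding."""
    z = np.asarray(depths_m, dtype=float)
    f = np.clip(1.0 - np.asarray(fs, dtype=float), 0.0, 1.0)  # F = 1 - FS, capped
    w = np.where(z <= 20.0, 10.0 - 0.5 * z, 0.0)              # linear depth weight
    y = f * w
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z))        # trapezoidal integral

# Hypothetical sounding: liquefiable layer (FS = 0.6) between 3 m and 7 m depth
z = np.linspace(0.0, 20.0, 201)
fs = np.where((z > 3.0) & (z < 7.0), 0.6, 1.5)
print(f"LPI = {lpi(z, fs):.1f}")   # ~0.4 * mean weight over 3-7 m * 4 m thickness
```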
Disordered λ φ4+ρ φ6 Landau-Ginzburg model
NASA Astrophysics Data System (ADS)
Diaz, R. Acosta; Svaiter, N. F.; Krein, G.; Zarro, C. A. D.
2018-03-01
We discuss a disordered λ φ4+ρ φ6 Landau-Ginzburg model defined in a d -dimensional space. First we adopt the standard procedure of averaging the disorder-dependent free energy of the model. The dominant contribution to this quantity is represented by a series of the replica partition functions of the system. Next, using the replica-symmetry ansatz in the saddle-point equations, we prove that the average free energy represents a system with multiple ground states with different order parameters. For low temperatures we show the presence of metastable equilibrium states for some replica fields for a range of values of the physical parameters. Finally, going beyond the mean-field approximation, the one-loop renormalization of this model is performed, in the leading-order replica partition function.
Analysis of counting data: Development of the SATLAS Python package
NASA Astrophysics Data System (ADS)
Gins, W.; de Groote, R. P.; Bissell, M. L.; Granados Buitrago, C.; Ferrer, R.; Lynch, K. M.; Neyens, G.; Sels, S.
2018-01-01
For the analysis of low-statistics counting experiments, a traditional nonlinear least squares minimization routine may not always provide correct parameter and uncertainty estimates due to the assumptions inherent in the algorithm(s). In response to this, a user-friendly Python package (SATLAS) was written to provide an easy interface between the data and a variety of minimization algorithms which are suited for analyzing low- as well as high-statistics data. The advantage of this package is that it allows the user to define their own model function and then compare different minimization routines to determine the optimal parameter values and their respective (correlated) errors. Experimental validation of the different approaches in the package is done through analysis of hyperfine structure data of 203Fr gathered by the CRIS experiment at ISOLDE, CERN.
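The following generic fragment (not the SATLAS API) illustrates the underlying point: for Poisson-distributed counts, minimizing a Poisson negative log-likelihood and minimizing ordinary least squares can return noticeably different parameter estimates at low statistics. The model and data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Hypothetical low-statistics spectrum: a Gaussian peak on a flat background
x = np.linspace(-10, 10, 81)
def model(p, x):
    amp, mu, sigma, bkg = p
    return bkg + amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

truth = (5.0, 1.0, 2.0, 0.5)
counts = rng.poisson(model(truth, x))

def chi2(p):                       # traditional least squares (assumes Gaussian errors)
    return np.sum((counts - model(p, x)) ** 2)

def neg_poisson_loglike(p):        # likelihood appropriate for counting data
    lam = np.clip(model(p, x), 1e-12, None)
    return np.sum(lam - counts * np.log(lam))   # constant log(counts!) term dropped

p0 = (3.0, 0.0, 1.0, 1.0)
for name, fun in [("least squares", chi2), ("Poisson MLE", neg_poisson_loglike)]:
    res = minimize(fun, p0, method="Nelder-Mead")
    print(name, np.round(res.x, 3))
```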
Quantified Event Automata: Towards Expressive and Efficient Runtime Monitors
NASA Technical Reports Server (NTRS)
Barringer, Howard; Falcone, Ylies; Havelund, Klaus; Reger, Giles; Rydeheard, David
2012-01-01
Runtime verification is the process of checking a property on a trace of events produced by the execution of a computational system. Runtime verification techniques have recently focused on parametric specifications where events take data values as parameters. These techniques exist on a spectrum inhabited by both efficient and expressive techniques. These characteristics are usually shown to be conflicting: in state-of-the-art solutions, efficiency is obtained at the cost of loss of expressiveness and vice versa. To seek a solution to this conflict we explore a new point on the spectrum by defining an alternative runtime verification approach. We introduce a new formalism for concisely capturing expressive specifications with parameters. Our technique is more expressive than the currently most efficient techniques while at the same time allowing for optimizations.
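A toy monitor conveys the parametric flavour, though not the QEA formalism itself: the property "every opened resource is eventually closed" is checked per data value by tracking the event parameter across the trace. The event names and trace below are hypothetical.

```python
# Minimal parametric monitor: checks "every open(f) is followed by close(f)"
# by tracking state per data value f. This shows only the core idea of
# parametric events, not the quantified-automaton formalism of the paper.

def monitor(trace):
    open_files = set()
    for event, value in trace:
        if event == "open":
            if value in open_files:
                return f"violation: {value} opened twice"
            open_files.add(value)
        elif event == "close":
            if value not in open_files:
                return f"violation: {value} closed while not open"
            open_files.remove(value)
    return "ok" if not open_files else f"violation: {sorted(open_files)} never closed"

trace = [("open", "a.txt"), ("open", "b.txt"), ("close", "a.txt")]
print(monitor(trace))   # violation: ['b.txt'] never closed
```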
Approximate solution of space and time fractional higher order phase field equation
NASA Astrophysics Data System (ADS)
Shamseldeen, S.
2018-03-01
This paper is concerned with a class of space and time fractional partial differential equation (STFDE) with Riesz derivative in space and Caputo in time. The proposed STFDE is considered as a generalization of a sixth-order partial phase field equation. We describe the application of the optimal homotopy analysis method (OHAM) to obtain an approximate solution for the suggested fractional initial value problem. An averaged-squared residual error function is defined and used to determine the optimal convergence control parameter. Two numerical examples are studied, considering periodic and non-periodic initial conditions, to justify the efficiency and the accuracy of the adopted iterative approach. The dependence of the solution on the order of the fractional derivative in space and time and model parameters is investigated.
Midi-maxi computer interaction in the interpretation of nuclear medicine procedures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schlapper, G.A.
1977-01-01
A study of renal function with an Anger Gamma Camera coupled with a Digital Equipment Corporation Gamma-11 System and an IBM System 370 demonstrates the potential of quantitative determinations of physiological function through the application of midi-maxi computer interaction in the interpretation of nuclear medicine procedures. It is shown that radiotracers can provide an opportunity to assess physiological processes of renal function by noninvasively following the path of a tracer as a function of time. Time-activity relationships obtained over seven anatomically defined regions are related to parameters of a seven-compartment model employed to describe the renal clearance process. The values obtained for clinically significant parameters agree with known renal pathophysiology. Differentiation of acute, chronic, and obstructive forms of failure is indicated.
Transition to collective oscillations in finite Kuramoto ensembles
NASA Astrophysics Data System (ADS)
Peter, Franziska; Pikovsky, Arkady
2018-03-01
We present an alternative approach to finite-size effects around the synchronization transition in the standard Kuramoto model. Our main focus lies on the conditions under which a collective oscillatory mode is well defined. For this purpose, the minimal value of the amplitude of the complex Kuramoto order parameter appears as a proper indicator. The dependence of this minimum on coupling strength varies due to sampling variations and correlates with the sample kurtosis of the natural frequency distribution. The skewness of the frequency sample determines the frequency of the resulting collective mode. The effects of kurtosis and skewness hold in the thermodynamic limit of infinite ensembles. We prove this by integrating a self-consistency equation for the complex Kuramoto order parameter for two families of distributions with controlled kurtosis and skewness, respectively.
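A minimal simulation shows the indicator in action: integrate the mean-field Kuramoto equations for a finite ensemble and record the minimum of the order-parameter amplitude r(t) after transients. Frequencies are drawn from a Gaussian; the ensemble size, coupling values, and step size are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

def kuramoto_min_order_parameter(n=50, coupling=1.5, t_max=200.0, dt=0.01):
    """Euler-step a finite Kuramoto ensemble and return the minimum of
    r(t) = |mean(exp(i*theta))| after transients; a stable, nonzero minimum
    indicates a well-defined collective mode."""
    omega = rng.normal(0.0, 1.0, n)          # one sample of natural frequencies
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    n_steps = int(t_max / dt)
    r_values = []
    for step in range(n_steps):
        z = np.mean(np.exp(1j * theta))      # complex Kuramoto order parameter
        # Mean-field form: d(theta_i)/dt = omega_i + K r sin(psi - theta_i)
        theta += dt * (omega + coupling * np.abs(z) * np.sin(np.angle(z) - theta))
        if step > n_steps // 2:              # discard transient
            r_values.append(np.abs(z))
    return min(r_values)

for k in (0.5, 1.5, 3.0):
    print(f"K = {k:3.1f}  ->  min r(t) = {kuramoto_min_order_parameter(coupling=k):.3f}")
```

Repeating the run with different random seeds mimics the sampling variations the abstract describes; the minimum of r(t) fluctuates from sample to sample near the transition.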
Criticality triggers the emergence of collective intelligence in groups.
De Vincenzo, Ilario; Giannoccaro, Ilaria; Carbone, Giuseppe; Grigolini, Paolo
2017-08-01
A spinlike model mimicking human behavior in groups is employed to investigate the dynamics of the decision-making process. Within the model, the temporal evolution of the state of systems is governed by a time-continuous Markov chain. The transition rates of the resulting master equation are defined in terms of the change of interaction energy between the neighboring agents (change of the level of conflict) and the change of a locally defined agent fitness. Three control parameters can be identified: (i) the social interaction strength β J measured in units of social temperature, (ii) the level of confidence β′ that each individual has on his own expertise, and (iii) the level of knowledge p that identifies the expertise of each member. Based on these three parameters, the phase diagrams of the system show that a critical transition front exists where a sharp and concurrent change in fitness and consensus takes place. We show that at the critical front, the information leakage from the fitness landscape to the agents is maximized. This event triggers the emergence of the collective intelligence of the group, and in the end it leads to a dramatic improvement in the decision-making performance of the group. The effect of size M of the system is also investigated, showing that, depending on the value of the control parameters, increasing M may be either beneficial or detrimental.
Criticality triggers the emergence of collective intelligence in groups
NASA Astrophysics Data System (ADS)
De Vincenzo, Ilario; Giannoccaro, Ilaria; Carbone, Giuseppe; Grigolini, Paolo
2017-08-01
A spinlike model mimicking human behavior in groups is employed to investigate the dynamics of the decision-making process. Within the model, the temporal evolution of the state of systems is governed by a time-continuous Markov chain. The transition rates of the resulting master equation are defined in terms of the change of interaction energy between the neighboring agents (change of the level of conflict) and the change of a locally defined agent fitness. Three control parameters can be identified: (i) the social interaction strength β J measured in units of social temperature, (ii) the level of confidence β' that each individual has on his own expertise, and (iii) the level of knowledge p that identifies the expertise of each member. Based on these three parameters, the phase diagrams of the system show that a critical transition front exists where a sharp and concurrent change in fitness and consensus takes place. We show that at the critical front, the information leakage from the fitness landscape to the agents is maximized. This event triggers the emergence of the collective intelligence of the group, and in the end it leads to a dramatic improvement in the decision-making performance of the group. The effect of size M of the system is also investigated, showing that, depending on the value of the control parameters, increasing M may be either beneficial or detrimental.
Relations for estimating unit-hydrograph parameters in New Mexico
Waltemeyer, Scott D.
2001-01-01
Data collected from 20 U.S. Geological Survey streamflow-gaging stations, most of which were operated in New Mexico between about 1969 and 1977, were used to define hydrograph characteristics for small New Mexico streams. Drainage areas for the gaging stations ranged from 0.23 to 18.2 square miles. Observed values for the hydrograph characteristics were determined for 87 of the most significant rainfall-runoff events at these gaging stations and were used to define regional regression relations with basin characteristics. Regional relations defined lag time (tl), time of concentration (tc), and time to peak (tp) as functions of stream length and basin shape. The regional equation developed for time of concentration for New Mexico agrees well with the Kirpich equation developed for Tennessee. The Kirpich equation is based on stream length and channel slope, whereas the New Mexico equation is based on stream length and basin shape. Both equations, however, underestimate tc when applied to larger basins where tc is greater than about 2 hours. The median ratio between tp and tc for the observed data was 0.66, which equals the value (0.67) recommended by the Natural Resources Conservation Service (formerly the Soil Conservation Service). However, the median ratio between tl and tc was only 0.42, whereas the commonly used ratio is 0.60. A relation also was developed between unit-peak discharge (qu) and time of concentration. The unit-peak discharge relation is similar in slope to the Natural Resources Conservation Service equation, but the equation developed for New Mexico in this study produces estimates of qu that range from two to three times as large as those estimated from the Natural Resources Conservation Service equation. An average value of 833 was determined for the empirical constant Kp. A default value of 484 has been used by the Natural Resources Conservation Service when site-specific data are not available. The use of a lower value of Kp in calculations generally results in a lower peak discharge. A relation between the empirical constant Kp and average channel slope was defined in this study. The predicted Kp values from the equation ranged from 530 to 964 for the 20 flood-hydrograph gaging stations. The standard error of estimate for the equation is 36 percent.
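A small sketch, assuming the standard NRCS unit-hydrograph peak relation qp = Kp * A * Q / Tp (qp in cfs, drainage area A in square miles, runoff Q in inches, time to peak Tp in hours), shows how the study-average Kp of 833 raises the simulated peak by roughly 70% relative to the default 484; the basin numbers are hypothetical.

```python
def peak_discharge_cfs(area_sq_mi, runoff_in, tp_hr, kp=484.0):
    """NRCS-style unit-hydrograph peak discharge (hedged sketch):
        qp = Kp * A * Q / Tp
    Kp = 484 is the NRCS default; the study above found an average Kp of 833
    for the New Mexico basins, predicted from average channel slope."""
    return kp * area_sq_mi * runoff_in / tp_hr

area, runoff, tp = 5.0, 1.0, 1.5   # hypothetical small New Mexico basin
print(peak_discharge_cfs(area, runoff, tp))             # default Kp = 484
print(peak_discharge_cfs(area, runoff, tp, kp=833.0))   # study-average Kp
```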
Miglioranza, Lúcia H S; Breganó, José Wander; Dichi, Isaias; Matsuo, Tiemi; Dichi, Jane Bandeira; Barbosa, Décio Sabbatini
2009-02-01
To find the ideal combination of Fe fortifier and its food vehicle is an essential measure in developing countries. However, its cost also plays an important role. In the present study, the effect on blood parameter values of corn flour-derived products fortified with powdered elemental Fe in the form of H2-reduced Fe was investigated in children and adolescents. One hundred and sixty-two individuals (eighty-six boys and seventy-six girls) from public educational centres in Londrina, Paraná (southern Brazil) participated in the study. Fe-deficiency anaemia (IDA) was defined when Hb and serum ferritin values fell below 12 g/dl and 20 microg/l, respectively; Fe deficiency (ID) was considered when serum ferritin was below 20 microg/l. The prevalence of ID and IDA decreased from 18.0 % and 14.9 %, values found at the beginning of the study, to respectively 5.6 % and 1.2 % after 6 months. Changes from altered to normal values occurred more often than normal to altered values with transferrin saturation (14.2 % v. 6.8 %; P < 0.04) and ferritin (12.4 % v. 0 %; P < 0.001). Hb, transferrin saturation and ferritin showed differences between normal and altered parameters after 6 months (P < 0.001). A pronounced reduction in the prevalence of ID and IDA was observed in children and adolescents following 6 months' ingestion of corn flour-derived products enriched with elemental Fe.
Cary, L.E.
1984-01-01
The U.S. Geological Survey's precipitation-runoff modeling system was tested using 2 years' data for the daily mode and 17 storms for the storm mode from a basin in southeastern Montana. Two hydrologic response unit delineations were studied. The more complex delineation did not provide superior results. In this application, the optimum numbers of hydrologic response units were 16 and 18 for the two alternatives. The first alternative, with 16 units, was modified to facilitate interfacing with the storm mode. A parameter subset was defined for the daily mode using sensitivity analysis. Following optimization, the simulated hydrographs approximated the observed hydrograph during the first year, a year of large snowfall. More runoff was simulated than observed during the second year. There was reasonable correspondence between the observed and simulated snowpack in the first season, but poor correspondence in the second. More soil moisture was withdrawn than was indicated by soil moisture observations. Optimization of parameters in the storm mode resulted in much larger values than originally estimated, commonly larger than published values of the Green and Ampt parameters. Following optimization, variable results were obtained. The results obtained are probably related to inadequate representation of basin infiltration characteristics and to precipitation variability. (USGS)
All Sky Cloud Coverage Monitoring for SONG-China Project
NASA Astrophysics Data System (ADS)
Tian, J. F.; Deng, L. C.; Yan, Z. Z.; Wang, K.; Wu, Y.
2016-05-01
In order to monitor cloud distributions at Qinghai station, a site selected for the SONG (Stellar Observations Network Group)-China node, the design of the all sky camera (ASC) prototype used at Xinglong station is adopted. Both hardware and software improvements have been made to deliver more precise, quantitative measurements. An ARM (Advanced Reduced Instruction Set Computer Machine) MCU (Microcontroller Unit) is used instead of a PC to control the upgraded version of the ASC, giving the current scheme much higher reliability. Because weather conditions change constantly, independent of the positions of the Sun and Moon, it is difficult to choose proper exposure parameters using only the temporal information of the major light sources. Realistic exposure parameters for the ASC can instead be defined using a real-time sky brightness monitor installed at the same site. The night sky brightness value is a very sensitive function of the cloud coverage, and can be accurately measured by the sky quality monitor. We study the correlation between the exposure parameter and the night sky brightness value, and give the mathematical relation. The images of the all sky camera are inserted directly into a database. All sky quality images are archived in FITS format, which can be used for further analysis.
Validity of strong lensing statistics for constraints on the galaxy evolution model
NASA Astrophysics Data System (ADS)
Matsumoto, Akiko; Futamase, Toshifumi
2008-02-01
We examine the usefulness of strong lensing statistics to constrain the evolution of the number density of lensing galaxies by adopting the values of the cosmological parameters determined by recent Wilkinson Microwave Anisotropy Probe observations. For this purpose, we employ the lens-redshift test proposed by Kochanek and constrain the parameters in two evolution models: a simple power-law model characterized by the power-law indexes νn and νv, and the evolution model by Mitchell et al. based on the cold dark matter structure formation scenario. We use the well-defined lens sample from the Sloan Digital Sky Survey (SDSS), which is similar in size to the samples used in previous studies. Furthermore, we adopt the velocity dispersion function of early-type galaxies based on SDSS DR1 and DR5. It turns out that the indexes of the power-law model are consistent with previous studies; thus our results indicate mild evolution in the number and velocity dispersion of early-type galaxies out to z = 1. However, we found that the values for p and q used by Mitchell et al. are inconsistent with the presently available observational data. A more complete sample is necessary to draw a more realistic determination of these parameters.
Tam, James; Ahmad, Imad A Haidar; Blasko, Andrei
2018-06-05
A four-parameter optimization of a stability-indicating method for non-chromophoric degradation products of 1,2-distearoyl-sn-glycero-3-phosphocholine (DSPC), 1-stearoyl-sn-glycero-3-phosphocholine and 2-stearoyl-sn-glycero-3-phosphocholine was achieved using a reverse phase liquid chromatography-charged aerosol detection (RPLC-CAD) technique. Using the hydrophobic subtraction model of selectivity, a core-shell, polar embedded RPLC column was selected, followed by gradient-temperature optimization, resulting in ideal relative peak placements for a robust, stability-indicating separation. The CAD instrument parameters, power function value (PFV) and evaporator temperature, were optimized for lysophosphatidylcholines to give UV absorbance detector-like linearity performance within a defined concentration range. The two lysophosphatidylcholines gave the same response factor in the selected conditions. System-specific power function values needed to be set for the two RPLC-CAD instruments used. A custom flow-divert profile, sending only a portion of the column effluent to the detector, was necessary to mitigate detector response drifting effects. The importance of the PFV optimization for each instrument of identical build, and how to overcome recovery issues brought on by the matrix effects from the lipid-RP stationary phase interaction, are reported. Copyright © 2018 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Semat, M.A.
1960-01-01
Transport and deposit conditions of uraniferous minerals are briefly described. The synthesis of crystallographic, physical, optical, and thermal properties permits defining the main characteristics of this mineralogical group. Tables to facilitate identification of the supergene uranium minerals are given on investigation by anion and cation; system, cleavages, cell parameters, interplanar spacings, refractive indices, optical bearings; classification by decreasing values of the most intense line of the powder diagram; diagram for the three higher interplanar spacings; and diagram of the refractive indices. (auth)
NASA Astrophysics Data System (ADS)
Del Pino, S.; Labourasse, E.; Morel, G.
2018-06-01
We present a multidimensional asymptotic preserving scheme for the approximation of a mixture of compressible flows. Fluids are modelled by two Euler systems of equations coupled with a friction term. The asymptotic preserving property is mandatory for this kind of model, to derive a scheme that behaves well in all regimes (i.e. whatever the friction parameter value is). The method we propose is defined in ALE coordinates, using a Lagrange plus remap approach. This imposes a multidimensional definition and analysis of the scheme.
A non-asymptotic homogenization theory for periodic electromagnetic structures.
Tsukerman, Igor; Markel, Vadim A
2014-08-08
Homogenization of electromagnetic periodic composites is treated as a two-scale problem and solved by approximating the fields on both scales with eigenmodes that satisfy Maxwell's equations and boundary conditions as accurately as possible. Built into this homogenization methodology is an error indicator whose value characterizes the accuracy of homogenization. The proposed theory allows one to define not only bulk, but also position-dependent material parameters (e.g. in proximity to a physical boundary) and to quantify the trade-off between the accuracy of homogenization and its range of applicability to various illumination conditions.
Integrated Coding and Waveform Design Study.
1980-08-01
values of M, CDMA offers an efficiency of around 33%. Comparing these numbers to the fixed assigned system given in Section 2.2, we note that for the...is shown in Figure 3.1. The behavior of the adjustable weight W2 is governed by a first order differential equation which Gabriel solves as W = w...the weight behavior is governed by Eq. (3.1), with the parameters in Eq. (3.1) defined by Eqs. (3.2), (3.5), (3.11), (3.15), (3.17), (3.18), (3.19), and
Curry, B. Brandon
1999-01-01
Continental ostracode occurrences reflect salinity, solute composition, temperature, flow conditions, and other environmental properties of the water they inhabit. Their occurrences also reflect the variability of many of these environmental parameters. Environmental tolerance indices (ETIs) offer a new way to express the nature of an ostracode's environment. As defined herein, ETIs range in value from zero to one, and may be calculated for continuous and binary variables. For continuous variables such as salinity, the ETI is the ratio of the range of values of salinity tolerated by an ostracode to the total range of salinity values from a representative database. In this investigation, the database of continuous variables consists of information from 341 sites located throughout the United States. Binary ETIs indicate whether an environmental variable such as flowing water affects ostracode presence or absence. The binary database consists of information from 784 sites primarily from Illinois, USA. ETIs were developed in this investigation to interpret paleohydrological changes implied by fossil ostracode successions. ETI profiles may be cast in terms of a weighted average, or on presence/absence. The profiles express ostracode tolerance of environmental parameters such as salinity or currents. Tolerance of a wide range of values is taken to indicate shallow water because shallow environments are conducive to thermal variability, short-term water residence, and the development of currents from wind-driven waves.
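For the continuous case, the ETI defined above reduces to a simple ratio, sketched below with hypothetical salinity numbers.

```python
def continuous_eti(tolerated_min, tolerated_max, db_min, db_max):
    """Environmental tolerance index for a continuous variable, as defined
    in the abstract: the range of values tolerated by the ostracode divided
    by the total range in the reference database. Values run from 0 to 1."""
    return (tolerated_max - tolerated_min) / (db_max - db_min)

# Hypothetical example: a species found at salinities 0.2-8.0 g/L in a
# database spanning 0.1-60.0 g/L -> a low ETI, i.e. a salinity specialist
print(f"salinity ETI = {continuous_eti(0.2, 8.0, 0.1, 60.0):.3f}")
```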
Lin, H-Y; Hwang-Gu, S-L; Gau, S S-F
2015-07-01
Intra-individual variability in reaction time (IIV-RT), defined by standard deviation of RT (RTSD), is considered as an endophenotype for attention-deficit/hyperactivity disorder (ADHD). Ex-Gaussian distributions of RT, rather than RTSD, could better characterize moment-to-moment fluctuations in neuropsychological performance. However, data of response variability based on ex-Gaussian parameters as an endophenotypic candidate for ADHD are lacking. We assessed 411 adolescents with clinically diagnosed ADHD based on the DSM-IV-TR criteria as probands, 138 unaffected siblings, and 138 healthy controls. The output parameters, mu, sigma, and tau, of an ex-Gaussian RT distribution were derived from the Conners' continuous performance test. Multi-level models controlling for sex, age, comorbidity, and use of methylphenidate were applied. Compared with unaffected siblings and controls, ADHD probands had elevated sigma value, omissions, commissions, and mean RT. Unaffected siblings formed an intermediate group in-between probands and controls in terms of tau value and RTSD. There was no between-group difference in mu value. Conforming to a context-dependent nature, unaffected siblings still had an intermediate tau value in-between probands and controls across different interstimulus intervals. Our findings suggest IIV-RT represented by tau may be a potential endophenotype for inquiry into genetic underpinnings of ADHD in the context of heterogeneity. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
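A short sketch of the fitting step, using scipy's exponnorm (the ex-Gaussian), which parameterizes the distribution as K = tau/sigma, loc = mu, scale = sigma. The simulated reaction times are hypothetical, and this is a generic illustration rather than the study's analysis pipeline (which used multi-level models on top of the ex-Gaussian parameters).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulate reaction times (ms) from an ex-Gaussian: Gaussian(mu, sigma) + Exp(tau)
mu, sigma, tau = 400.0, 50.0, 150.0
rt = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# scipy's exponnorm uses K = tau / sigma, loc = mu, scale = sigma
K, loc, scale = stats.exponnorm.fit(rt)
print(f"mu ~ {loc:.0f} ms, sigma ~ {scale:.0f} ms, tau ~ {K * scale:.0f} ms")
```

The tau component captures the heavy right tail of occasional very slow responses, which is why it tracks attentional lapses better than the overall RTSD.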
Optimization of the monitoring of landfill gas and leachate in closed methanogenic landfills.
Jovanov, Dejan; Vujić, Bogdana; Vujić, Goran
2018-06-15
Monitoring of the gas and leachate parameters in a closed landfill is a long-term activity defined by national legislation worldwide. The Serbian Waste Disposal Law requires the monitoring of a landfill for at least 30 years after its closing, but its definition of the monitoring extent (number and type of parameters) is incomplete. In order to resolve these uncertainties, this research focuses on the process of monitoring optimization, using the closed landfill in Zrenjanin, Serbia, as the experimental model. The aim of optimization was to find representative parameters which would define the physical, chemical and biological processes in the closed methanogenic landfill and to make the process less expensive. The research included development of five monitoring models with different numbers of gas and leachate parameters, and each model was processed in the open source software GeoGebra, which is often used for solving optimization problems. The results of the optimization process identified the most favorable monitoring model, which fulfills all the defined criteria not only from the point of view of mathematical analysis but also from the point of view of environmental protection. The final outcome of this research is a precise definition of the minimal required parameters which should be included in landfill monitoring. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bedane, T.; Di Maio, L.; Scarfato, P.; Incarnato, L.; Marra, F.
2015-12-01
The barrier performance of multilayer polymeric films for food applications has been significantly improved by incorporating oxygen scavenging materials. The scavenging activity depends on parameters such as diffusion coefficient, solubility, concentration of scavenger loaded and the number of available reactive sites. These parameters influence the barrier performance of the film in different ways. Virtualization of the process is useful to characterize, design and optimize the barrier performance based on the physical configuration of the films. Knowledge of the parameter values is also important for predicting performance. Inverse modeling and sensitivity analysis are the sole way to find reasonable values of poorly defined, unmeasured parameters and to identify the most influential parameters. Thus, the objective of this work was to develop a model to predict barrier properties of a multilayer film incorporating reactive layers and to analyze and characterize its performance. A polymeric film based on three layers of polyethylene terephthalate (PET), with a core reactive layer, at different thickness configurations was considered in the model. A one-dimensional diffusion equation with reaction was solved numerically to predict the concentration of oxygen diffused into the polymer, taking into account the reactive ability of the core layer. The model was solved using commercial software for different film layer configurations, and sensitivity analysis based on inverse modeling was carried out to understand the effect of physical parameters. The results have shown that the use of sensitivity analysis can provide physical understanding of the parameters which most strongly affect gas permeation into the film. Solubility and the number of available reactive sites were the factors mainly influencing the barrier performance of the three-layered polymeric film. Multilayer films slightly modified the steady transport properties in comparison to net PET, giving a small reduction in the permeability and oxygen transfer rate values. The scavenging capacity of the multilayer film increased linearly with the increase of the reactive layer thickness, and the oxygen absorption reaction at short times decreased proportionally with the thickness of the external PET layer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bedane, T.; Di Maio, L.; Scarfato, P.
The barrier performance of multilayer polymeric films for food applications has been significantly improved by incorporating oxygen scavenging materials. The scavenging activity depends on parameters such as diffusion coefficient, solubility, concentration of scavenger loaded and the number of available reactive sites. These parameters influence the barrier performance of the film in different ways. Virtualization of the process is useful to characterize, design and optimize the barrier performance based on the physical configuration of the films. Knowledge of the parameter values is also important for predicting performance. Inverse modeling and sensitivity analysis are the sole way to find reasonable values of poorly defined, unmeasured parameters and to identify the most influential parameters. Thus, the objective of this work was to develop a model to predict barrier properties of a multilayer film incorporating reactive layers and to analyze and characterize its performance. A polymeric film based on three layers of polyethylene terephthalate (PET), with a core reactive layer, at different thickness configurations was considered in the model. A one-dimensional diffusion equation with reaction was solved numerically to predict the concentration of oxygen diffused into the polymer, taking into account the reactive ability of the core layer. The model was solved using commercial software for different film layer configurations, and sensitivity analysis based on inverse modeling was carried out to understand the effect of physical parameters. The results have shown that the use of sensitivity analysis can provide physical understanding of the parameters which most strongly affect gas permeation into the film. Solubility and the number of available reactive sites were the factors mainly influencing the barrier performance of the three-layered polymeric film. Multilayer films slightly modified the steady transport properties in comparison to net PET, giving a small reduction in the permeability and oxygen transfer rate values. The scavenging capacity of the multilayer film increased linearly with the increase of the reactive layer thickness, and the oxygen absorption reaction at short times decreased proportionally with the thickness of the external PET layer.
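A minimal numerical sketch of the kind of model described above: explicit finite differences for one-dimensional oxygen diffusion with a first-order scavenging reaction confined to the core layer, i.e. dc/dt = D d2c/dx2 - k(x) c. All material values are hypothetical, and the authors solved their model with commercial software; this only illustrates the governing equation.

```python
import numpy as np

# 1D oxygen diffusion through a three-layer film with a reactive core layer.
L = 100e-6                      # total film thickness (m), hypothetical
n = 201
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
D = 1e-12                       # oxygen diffusivity (m^2/s), assumed uniform
k = 5e-3                        # first-order scavenging rate (1/s) in the core
core = (x > L / 3) & (x < 2 * L / 3)      # middle third is the reactive layer

c = np.zeros(n)
dt = 0.4 * dx**2 / D            # explicit stability requires dt <= dx^2 / (2 D)
for _ in range(150_000):        # ~1.5 diffusion times L^2/D: near steady state
    lap = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    c[1:-1] += dt * (D * lap - k * np.where(core[1:-1], c[1:-1], 0.0))
    c[0], c[-1] = 1.0, 0.0      # ambient oxygen outside, sink at the food side

# Oxygen flux at the food-side surface: lower flux means a better barrier
flux = -D * (c[-1] - c[-2]) / dx
print(f"oxygen flux at inner surface ~ {flux:.3e} (conc. units * m/s)")
```

Re-running with a thicker core layer or a larger k shows the scavenging capacity and barrier improvement trends the abstract reports.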
NASA Technical Reports Server (NTRS)
Norikane, L.; Freeman, A.; Way, J.; Okonek, S.; Casey, R.
1992-01-01
Recent updates to a geographical information system (GIS) called VICAR (Video Image Communication and Retrieval)/IBIS are described. The system is designed to handle data from many different formats (vector, raster, tabular) and many different sources (models, radar images, ground truth surveys, optical images). All the data are referenced to a single georeference plane, and average or typical values for parameters defined within a polygonal region are stored in a tabular file, called an info file. The info file format allows tracking of data in time, maintenance of links between component data sets and the georeference image, conversion of pixel values to 'actual' values (e.g., radar cross-section, luminance, temperature), graph plotting, data manipulation, generation of training vectors for classification algorithms, and comparison between actual measurements and model predictions (with ground truth data as input).
Prediction of alpha factor values for fine pore aeration systems.
Gillot, S; Héduit, A
2008-01-01
The objective of this work was to analyse the impact of different geometric and operating parameters on the alpha factor value for fine bubble aeration systems equipped with EPDM membrane diffusers. Measurements were performed on 14 nitrifying plants operating under extended aeration and treating mainly domestic wastewater. They showed that, for domestic wastewater treatment under very low F/M ratios, the alpha factor lies between 0.44 and 0.98. A new composite variable (the Equivalent Contact Time, ECT) has been defined; it makes it possible, for a given aeration tank, knowing the MCRT, the clean water oxygen transfer coefficient and the supplied air flow rate, to predict the alpha factor value. ECT combines the effect on mass transfer of all generally accepted factors affecting oxygen transfer performance (air flow rate, diffuser submergence, horizontal flow). (c) IWA Publishing 2008.
Yao, Min; Wang, Wenjing; Zhou, Jieru; Sun, Minghua; Zhu, Jialiang; Chen, Pin; Wang, Xipeng
2017-04-01
This study was conducted to determine a more accurate imaging method for the diagnosis of cesarean scar diverticulum (CSD) and to identify the parameters of CSD strongly associated with prolonged menstrual bleeding. We enrolled 282 women with a history of cesarean section (CS) who presented with prolonged menstrual bleeding between January 2012 and May 2015. Transvaginal ultrasound, general magnetic resonance imaging (MRI) and contrast-enhanced MRI were used to diagnose CSD. Five parameters were compared among the imaging modalities: length, width, depth and thickness of the remaining muscular layer (TRM) of CSD and the depth/TRM ratio. Correlation between the five parameters and days of menstrual bleeding was performed. Finally, multivariate analysis was used to determine the parameters associated with menstrual bleeding longer than 14 days. Contrast-enhanced MRI yielded greater length or width or thinner TRM of CSD compared with MRI and transvaginal ultrasound. CSD size did not significantly differ between women who had undergone one and two CSs. Correlation analysis revealed that CSD (P = 0.038) and TRM (P = 0.003) lengths were significantly associated with days of menstrual bleeding. Longer than 14 days of bleeding was defined by cut-off values of 2.15 mm for TRM and 13.85 mm for length. TRM and number of CSs were strongly associated with menstrual bleeding longer than 14 days. CE-MRI is a relatively accurate and efficient imaging method for the diagnosis of CSD. A cut-off value of TRM of 2.15 mm is the most important parameter associated with menstrual bleeding longer than 14 days. © 2017 Japan Society of Obstetrics and Gynecology.
NASA Astrophysics Data System (ADS)
Grombein, T.; Seitz, K.; Heck, B.
2013-12-01
In general, national height reference systems are related to individual vertical datums defined by specific tide gauges. The discrepancy of these vertical datums causes height system biases on the order of 1-2 m at a global scale. Continental height systems can be connected by spirit leveling and gravity measurements along the leveling lines, as performed for the definition of the European Vertical Reference Frame. In order to unify intercontinental height systems, an indirect connection is needed. For this purpose, global geopotential models derived from recent satellite missions like GOCE provide an important contribution. However, to achieve a highly precise solution, a combination with local terrestrial gravity data is indispensable. Such combinations result in the solution of a Geodetic Boundary Value Problem (GBVP). In contrast to previous studies, mostly related to the traditional (scalar) free GBVP, the present paper discusses the use of the fixed GBVP for height system unification, where gravity disturbances instead of gravity anomalies are applied as boundary values. The basic idea of our approach is a conversion of measured gravity anomalies to gravity disturbances, in which unknown datum parameters occur that can be associated with height system biases. In this way, the fixed GBVP can be extended by datum parameters for each datum zone. By evaluating the GBVP at GNSS/leveling benchmarks, the unknown datum parameters can be estimated in a least squares adjustment. Besides the developed theory, we present numerical results of a case study based on the spherical fixed GBVP and boundary values simulated by the use of the global geopotential model EGM2008. In a further step, the impact of approximations like linearization as well as topographic and ellipsoidal effects is taken into account by suitable reduction and correction terms.
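Only the final adjustment step lends itself to a compact sketch: once the GBVP has been evaluated at GNSS/leveling benchmarks, the per-zone datum biases follow from an ordinary least-squares fit with one indicator column per datum zone. The zone assignment, bias values, and noise level below are hypothetical, and all geodetic reductions are assumed already applied.

```python
import numpy as np

rng = np.random.default_rng(5)

# Misclosure at each benchmark = GBVP-derived height minus levelled height,
# modelled here as one constant bias per vertical-datum zone plus noise.
zones = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2])       # datum zone of each benchmark
true_bias = np.array([0.00, 0.85, -0.42])           # metres (hypothetical)
misclosure = true_bias[zones] + 0.02 * rng.normal(size=zones.size)

A = np.zeros((zones.size, 3))
A[np.arange(zones.size), zones] = 1.0               # indicator design matrix
est, *_ = np.linalg.lstsq(A, misclosure, rcond=None)
print(np.round(est, 3))    # recovered per-zone datum offsets
```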
Wahl, Jochen; Barleon, Lorenz; Morfeld, Peter; Lichtmeß, Andrea; Haas-Brähler, Sibylle; Pfeiffer, Norbert
2016-01-01
Purpose To develop an expert system for glaucoma screening in a working population based on a human expert procedure using images of optic nerve head (ONH), visual field (frequency doubling technology, FDT) and intraocular pressure (IOP). Methods 4167 of 13037 (32%) employees between 40 and 65 years of Evonik Industries were screened. An experienced glaucoma expert (JW) assessed papilla parameters and evaluated all individual screening results. His classification into “no glaucoma”, “possible glaucoma” and “probable glaucoma” was defined as “gold standard”. A screening model was developed which was tested versus the gold-standard. This model took into account the assessment of the ONH. Values and relationships of CDR and IOP and the FDT were considered additionally and a glaucoma score was generated. The structure of the screening model was specified a priori whereas values of the parameters were chosen post-hoc to optimize sensitivity and specificity of the algorithm. Simple screening models based on IOP and / or FDT were investigated for comparison. Results 111 persons (2.66%) were classified as glaucoma suspects, thereof 13 (0.31%) as probable and 98 (2.35%) as possible glaucoma suspects by the expert. Re-evaluation by the screening model revealed a sensitivity of 83.8% and a specificity of 99.6% for all glaucoma suspects. The positive predictive value of the model was 80.2%, the negative predictive value 99.6%. Simple screening models showed insufficient diagnostic accuracy. Conclusion Adjustment of ONH and symmetry parameters with respect to excavation and IOP in an expert system produced sufficiently satisfying diagnostic accuracy. This screening model seems to be applicable in such a working population with relatively low age and low glaucoma prevalence. Different experts should validate the model in different populations. PMID:27479301
NASA Astrophysics Data System (ADS)
Bora, Ram Prasad; Prabhakar, Rajeev
2009-10-01
In this study, diffusion constants [translational (DT) and rotational (DR)], correlation times [rotational (τrot) and internal (τint)], and the intramolecular order parameters (S2) of the Alzheimer amyloid-β peptides Aβ40 and Aβ42 have been calculated from 150 ns molecular dynamics simulations in aqueous solution. The computed parameters have been compared with the experimentally measured values. The calculated DT of 1.61×10-6 cm2/s and 1.43×10-6 cm2/s for Aβ40 and Aβ42, respectively, at 300 K was found to follow the correct trend defined by the Debye-Stokes-Einstein relation that its value should decrease with the increase in the molecular weight. The estimated DR for Aβ40 and Aβ42 at 300 K are 0.085 and 0.071 ns-1, respectively. The rotational (Crot(t)) and internal (Cint(t)) correlation functions of Aβ40 and Aβ42 were observed to decay at nano- and picosecond time scales, respectively. The significantly different time decays of these functions validate the factorization of the total correlation function (Ctot(t)) of Aβ peptides into Crot(t) and Cint(t). At both short and long time scales, the Clore-Szabo model that was used as Cint(t) provided the best behavior of Ctot(t) for both Aβ40 and Aβ42. In addition, an effective rotational correlation time of Aβ40 is also computed at 18 °C and the computed value (2.30 ns) is in close agreement with the experimental value of 2.45 ns. The computed S2 parameters for the central hydrophobic core, the loop region, and C-terminal domains of Aβ40 and Aβ42 are in accord with the previous studies.
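The rotational correlation function at the heart of this analysis can be sketched generically: given a trajectory of unit vectors (e.g., a bond or end-to-end direction), average the second Legendre polynomial of the dot product over time origins. The random-walk trajectory below is a hypothetical stand-in for an MD trajectory, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(6)

def p2_autocorrelation(u, max_lag):
    """Rotational correlation function C_rot(t) = < P2( u(0) . u(t) ) >
    for a trajectory of unit vectors u (shape: n_frames x 3), averaged
    over time origins. P2(x) = 1.5 x^2 - 0.5 is the Legendre polynomial."""
    n = len(u)
    c = np.empty(max_lag)
    for lag in range(max_lag):
        dots = np.sum(u[: n - lag] * u[lag:], axis=1)
        c[lag] = np.mean(1.5 * dots**2 - 0.5)
    return c

# Toy trajectory: a unit vector undergoing small random reorientations
u = np.empty((5000, 3))
u[0] = (0.0, 0.0, 1.0)
for i in range(1, len(u)):
    step = u[i - 1] + 0.05 * rng.normal(size=3)   # diffusive reorientation
    u[i] = step / np.linalg.norm(step)

c = p2_autocorrelation(u, 200)
if np.any(c < np.exp(-1)):
    print(f"C(0) = {c[0]:.2f}; 1/e decay after ~{np.argmax(c < np.exp(-1))} frames")
```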
Evaluating lubricating capacity of vegetal oils using Abbott-Firestone curve
NASA Astrophysics Data System (ADS)
Georgescu, C.; Cristea, G. C.; Dima, C.; Deleanu, L.
2017-02-01
The paper presents the change of functional parameters defined on the Abbott-Firestone curve in order to evaluate the surface quality of the balls from the four-ball tester after tests done with several vegetable oils. The tests were done using two grades of rapeseed oil (degummed and refined), two grades of soybean oil (coarse and degummed) and a common transmission oil (T90). Test parameters were 200 N and 0.576 m/s (1500 rpm) for 60 minutes. For the refined rapeseed oil, the changes in shape of the Abbott-Firestone curves are more dramatic, being characterized by high values of Spk (the average value for the wear scars on the three balls), amounting to 40% of the sum Svk + Sk + Spk; the same percentage was also obtained for the soybean oil, but with a lower Spk value. For the degummed soybean oil, the profile heights of the wear scars are greater than those obtained after testing the coarse soybean oil, meaning that the degumming process has a negative influence on the worn surface quality and the lubricating capacity of this oil. Comparing the surface quality of the wear scars on the fixed tested balls is a reliable method to point out the lubricant properties of the vegetable oils, especially if they are compared to a “classical” lubricant such as a non-additivated transmission mineral oil (T90). The best surface after testing was obtained for the soybean oil, followed by the T90 oil and the degummed grades of the soybean and rapeseed oils (these three giving very close values for the functional parameters), while the refined rapeseed oil generated the poorest quality of the wear scars on the balls under the same testing conditions.
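A sketch of how the Abbott-Firestone (bearing ratio) curve is obtained from measured surface heights: sort the heights from highest to lowest and read off the height at each material ratio. Extracting Spk, Sk, and Svk additionally requires the ISO 13565-style secant construction, which is not implemented here; the surface data below are synthetic.

```python
import numpy as np

def abbott_firestone(heights, n_points=101):
    """Abbott-Firestone (bearing ratio) curve from sampled surface heights:
    the descending-sorted height profile plotted against the cumulative
    material fraction p (0-100 %)."""
    h = np.sort(np.asarray(heights).ravel())[::-1]   # highest peaks first
    p = np.linspace(0.0, 100.0, n_points)
    return p, np.interp(p, np.linspace(0.0, 100.0, h.size), h)

# Toy worn surface: Gaussian roughness plus a few deep scratch valleys
rng = np.random.default_rng(7)
z = rng.normal(0.0, 1.0, 10000)
z[rng.choice(z.size, 200, replace=False)] -= 5.0     # deep valleys inflate Svk
p, curve = abbott_firestone(z)
print(f"height at 10% bearing ratio: {curve[p == 10.0][0]:.2f} (a.u.)")
```

A surface dominated by tall, easily sheared peaks shows a steep initial drop in this curve (large Spk), which is the signature the paper uses to rank the oils.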
Wahl, Jochen; Barleon, Lorenz; Morfeld, Peter; Lichtmeß, Andrea; Haas-Brähler, Sibylle; Pfeiffer, Norbert
2016-01-01
To develop an expert system for glaucoma screening in a working population based on a human expert procedure using images of optic nerve head (ONH), visual field (frequency doubling technology, FDT) and intraocular pressure (IOP). 4167 of 13037 (32%) employees between 40 and 65 years of Evonik Industries were screened. An experienced glaucoma expert (JW) assessed papilla parameters and evaluated all individual screening results. His classification into "no glaucoma", "possible glaucoma" and "probable glaucoma" was defined as "gold standard". A screening model was developed which was tested versus the gold-standard. This model took into account the assessment of the ONH. Values and relationships of CDR and IOP and the FDT were considered additionally and a glaucoma score was generated. The structure of the screening model was specified a priori whereas values of the parameters were chosen post-hoc to optimize sensitivity and specificity of the algorithm. Simple screening models based on IOP and / or FDT were investigated for comparison. 111 persons (2.66%) were classified as glaucoma suspects, thereof 13 (0.31%) as probable and 98 (2.35%) as possible glaucoma suspects by the expert. Re-evaluation by the screening model revealed a sensitivity of 83.8% and a specificity of 99.6% for all glaucoma suspects. The positive predictive value of the model was 80.2%, the negative predictive value 99.6%. Simple screening models showed insufficient diagnostic accuracy. Adjustment of ONH and symmetry parameters with respect to excavation and IOP in an expert system produced sufficiently satisfying diagnostic accuracy. This screening model seems to be applicable in such a working population with relatively low age and low glaucoma prevalence. Different experts should validate the model in different populations.
Roganović, Branka; Perišić, Nenad; Roganović, Ana
2016-07-01
Gastric ulcers may be benign or malignant. In terms of therapy and patient prognosis, early detection of malignancy is very important. The aim of this study was to assess the usefulness of endoscopic ultrasound (EUS) in differentiation between benign and malignant gastric ulcers. A prospective study included 20 consecutive adult patients with malignant gastric ulceration and 20 consecutive adult patients with benign gastric ulceration. All the patients underwent EUS. A total of 6 parameters were analyzed: ulcer width, ulcer depth, the thickness of the gastric wall along the edge of ulceration (T0), the thickness of the gastric wall 2 cm from the edge of ulceration (T2), loss of layering structure of the gastric wall, and the presence of regional lymph nodes. EUS criteria for malignancy and a point-score of malignancy were defined. The critical value of the total point-score showing the best reliability parameters was also calculated. Four criteria indicate malignancy of gastric ulceration: T0 > 10 mm, T2 > 5 mm, EUS visualization of at least one lymph node, and loss of layering structure of the gastric wall. Furthermore, T2 > 5 mm was the only independent EUS predictor of ulcer malignancy. A total point-score of ≥ 4 was the cut-off value which gave the best reliability parameters in the assessment of malignant ulcers: sensitivity of 70%, specificity of 95%, positive predictive value of 93.3%, negative predictive value of 76% and accuracy of 82.5%. According to the results obtained in this study, we can conclude that EUS is useful in differentiation between benign and malignant gastric ulcers.
NASA Technical Reports Server (NTRS)
Tatnall, Christopher R.
1998-01-01
The counter-rotating pair of wake vortices shed by flying aircraft can pose a threat to ensuing aircraft, particularly on landing approach. To allow adequate time for the vortices to disperse/decay, landing aircraft are required to maintain certain fixed separation distances. The Aircraft Vortex Spacing System (AVOSS), under development at NASA, is designed to prescribe safe aircraft landing approach separation distances appropriate to the ambient weather conditions. A key component of the AVOSS is a ground sensor, to ensure safety by making wake observations to verify predicted behavior. This task requires knowledge of a flowfield strength metric which gauges the severity of disturbance an encountering aircraft could potentially experience. Several proposed strength metric concepts are defined and evaluated for various combinations of metric parameters and sensor line-of-sight elevation angles. Representative populations of generating and following aircraft types are selected, and their associated wake flowfields are modeled using various wake geometry definitions. Strength metric candidates are then rated and compared based on the correspondence of their computed values to associated aircraft response values, using basic statistical analyses.
Three-dimensional elastic-plastic finite-element analyses of constraint variations in cracked bodies
NASA Technical Reports Server (NTRS)
Newman, J. C., Jr.; Bigelow, C. A.; Shivakumar, K. N.
1993-01-01
Three-dimensional elastic-plastic (small-strain) finite-element analyses were used to study the stresses, deformations, and constraint variations around a straight-through crack in finite-thickness plates for an elastic-perfectly plastic material under monotonic and cyclic loading. Middle-crack tension specimens were analyzed for thicknesses ranging from 1.25 to 20 mm with various crack lengths. Three local constraint parameters, related to the normal, tangential, and hydrostatic stresses, showed similar variations along the crack front for a given thickness and applied stress level. Numerical analyses indicated that cyclic stress history and crack growth reduced the local constraint parameters in the interior of a plate, especially at high applied stress levels. A global constraint factor alpha(sub g) was defined to simulate three-dimensional effects in two-dimensional crack analyses. The global constraint factor was calculated as an average through-the-thickness value over the crack-front plastic region. Values of alpha(sub g) were found to be nearly independent of crack length and were related to the stress-intensity factor for a given thickness.
Parameter sensitivity analysis of a lumped-parameter model of a chain of lymphangions in series.
Jamalian, Samira; Bertram, Christopher D; Richardson, William J; Moore, James E
2013-12-01
Any disruption of the lymphatic system due to trauma or injury can lead to edema. There is no effective cure for lymphedema, partly because predictive knowledge of lymphatic system reactions to interventions is lacking. A well-developed model of the system could greatly improve our understanding of its function. Lymphangions, defined as the vessel segment between two valves, are the individual pumping units. Based on our previous lumped-parameter model of a chain of lymphangions, this study aimed to identify the parameters that affect the system output the most using a sensitivity analysis. The system was highly sensitive to minimum valve resistance, such that variations in this parameter caused an order-of-magnitude change in time-average flow rate for certain values of imposed pressure difference. Average flow rate doubled when contraction frequency was increased within its physiological range. Optimum lymphangion length was found to be some 13-14.5 diameters. A peak of time-average flow rate occurred when transmural pressure was such that the pressure-diameter loop for active contractions was centered near maximum passive vessel compliance. Increasing the number of lymphangions in the chain improved the pumping in the presence of larger adverse pressure differences. For a given pressure difference, the optimal number of lymphangions increased with the total vessel length. These results indicate that further experiments to estimate valve resistance more accurately are necessary. The existence of an optimal value of transmural pressure may provide additional guidelines for increasing pumping in areas affected by edema.
Verification of the H2O Linelists with Theoretically Developed Tools
NASA Technical Reports Server (NTRS)
Ma, Qiancheng; Tipping, R.; Lavrentieva, N. N.; Dudaryonok, A. S.
2013-01-01
Two basic rules (i.e., the pair identity and the smooth variation rules) resulting from the properties of the energy levels and wave functions of H2O states govern how the spectroscopic parameters vary among the H2O lines within individually defined groups of lines. With these rules, for lines involving high-j states in the same groups, variations of all their spectroscopic parameters (i.e., the transition frequency, intensity, pressure-broadened half-width, pressure-induced shift, and temperature exponent) can be well monitored. Thus, the rules can serve as simple and effective tools to screen the H2O spectroscopic data listed in the HITRAN database and verify the latter's accuracy. By checking violations of the rules occurring among the data within the individual groups, possible errors can be identified, as can missing lines whose intensities are above the threshold of the linelist. We have used these rules to check the accuracies of the spectroscopic parameters and the completeness of the linelists for several important H2O vibrational bands. Based on our results, the accuracy of the line frequencies in HITRAN 2008 is consistent. For the line intensity, we have found a substantial number of lines whose intensity values are questionable. With respect to other parameters, many mistakes have been found. These claims are consistent with the well-known fact that values of these parameters in HITRAN contain larger uncertainties. Furthermore, supplements to the missing-line list, consisting of line assignments and positions, can be developed from the screening results.
Crystal-liquid Fugacity Ratio as a Surrogate Parameter for Intestinal Permeability.
Zakeri-Milani, Parvin; Fasihi, Zohreh; Akbari, Jafar; Jannatabadi, Ensieh; Barzegar-Jalali, Mohammad; Loebenberg, Raimar; Valizadeh, Hadi
We assessed the feasibility of using the crystal-liquid fugacity ratio (CLFR) as an alternative parameter for intestinal permeability in the biopharmaceutical classification system (BCS) of passively absorbed drugs. Dose number, fraction of dose absorbed, intestinal permeability, and intrinsic dissolution rate were used as the input parameters. CLFR was determined using thermodynamic parameters, i.e., melting point, molar fusion enthalpy, and entropy of the drug molecules, obtained using differential scanning calorimetry. The CLFR values were in the range of 0.06-41.76 mole percent. There was a close relationship between CLFR and in vivo intestinal permeability (r > 0.8). CLFR values greater than 2 mole percent corresponded to complete intestinal absorption. Applying CLFR versus dose number or intrinsic dissolution rate, more than 92% of the tested drugs were correctly classified with respect to the reported classification system based on human intestinal permeability and solubility. This investigation revealed that the CLFR might be an appropriate parameter for quantitative biopharmaceutical classification. This could be attributed to the fact that CLFR may be a measure of the solubility of compounds in the lipid bilayer, which was found in this study to be directly proportional to intestinal permeability. This classification enables researchers to define characteristics for intestinal absorption of all four BCS drug classes using suitable cutoff points for both intrinsic dissolution rate and crystal-liquid fugacity ratio. Therefore, it may be used as a surrogate for permeability studies.
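The abstract does not state the working equation, but the crystal-liquid fugacity ratio is conventionally obtained from DSC observables via the ideal-solubility approximation. The sketch below uses that standard relation with heat-capacity terms neglected; both the equation choice and the example numbers are assumptions.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def clfr_mole_percent(delta_h_fus_j_mol, t_melt_k, t_k=310.15):
    """Crystal-liquid fugacity ratio at temperature t_k (default 37 C)
    from the molar fusion enthalpy and melting point, using the standard
    ideal-solubility approximation (heat-capacity terms neglected)."""
    ln_ratio = -(delta_h_fus_j_mol / R) * (1.0 / t_k - 1.0 / t_melt_k)
    return 100.0 * math.exp(ln_ratio)

# Hypothetical drug: Tm = 420 K, dHfus = 28 kJ/mol (illustrative values).
print(f"CLFR = {clfr_mole_percent(28e3, 420.0):.2f} mole percent")
```

With these illustrative inputs the result (about 5.9 mole percent) falls inside the 0.06-41.76 mole percent range reported above.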
NASA Technical Reports Server (NTRS)
Pierson, Willard J., Jr.
1989-01-01
The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma(o), obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value express it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics, given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma(o) are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms; calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.
Dominance-based ranking functions for interval-valued intuitionistic fuzzy sets.
Chen, Liang-Hsuan; Tu, Chien-Cheng
2014-08-01
The ranking of interval-valued intuitionistic fuzzy sets (IvIFSs) is difficult since they include interval values of membership and nonmembership. This paper proposes ranking functions for IvIFSs based on the dominance concept. The proposed ranking functions consider the degree to which an IvIFS dominates, and is not dominated by, other IvIFSs. Based on the bivariate framework and the dominance concept, the functions incorporate not only the boundary values of membership and nonmembership, but also the relative relations among IvIFSs in comparisons. The dominance-based ranking functions include bipolar evaluations with a parameter that allows the decision-maker to reflect their actual attitude in allocating the various kinds of dominance. The relationship for two IvIFSs that satisfy the dual couple is defined based on the four proposed ranking functions. Importantly, the proposed ranking functions can achieve a full ranking for all IvIFSs. Two examples are used to demonstrate the applicability and distinctiveness of the proposed ranking functions.
Braithwaite, Susan S.; Godara, Hemant; Song, Julie; Cairns, Bruce A.; Jones, Samuel W.; Umpierrez, Guillermo E.
2009-01-01
Background Algorithms for intravenous insulin infusion may assign the infusion rate (IR) by a two-step process. First, the previous insulin infusion rate (IRprevious) and the rate of change of blood glucose (BG) from the previous iteration of the algorithm are used to estimate the maintenance rate (MR) of insulin infusion. Second, the insulin IR for the next iteration (IRnext) is assigned to be commensurate with the MR and the distance of the current blood glucose (BGcurrent) from target. With use of a specific set of algorithm parameter values, a family of iso-MR curves is created, each giving IR as a function of MR and BG. Method To test the feasibility of estimating MR from the IRprevious and the previous rate of change of BG, historical hyperglycemic data points were used to compute the “maintenance rate cross step next estimate” (MRcsne). Historical cases had been treated with intravenous insulin infusion using a tabular protocol that estimated MR according to column-change rules. The mean IR on historical stable intervals (MRtrue), an estimate of the biologic value of MR, was compared to MRcsne during the hyperglycemic iteration immediately preceding the stable interval. Hypothetically calculated MRcsne-dependent IRnext was compared to IRnext assigned historically. An expanded theory of an algorithm is developed mathematically. Practical recommendations for computerization are proposed. Results The MRtrue determined on each of 30 stable intervals and the MRcsne during the immediately preceding hyperglycemic iteration differed, having medians with interquartile ranges 2.7 (1.2–3.7) and 3.2 (1.5–4.6) units/h, respectively. However, these estimates of MR were strongly correlated (R2 = 0.88). During hyperglycemia at 941 time points the IRnext assigned historically and the hypothetically calculated MRcsne-dependent IRnext differed, having medians with interquartile ranges 4.0 (3.0–6.0) and 4.6 (3.0–6.8) units/h, respectively, but these paired values again were correlated (R2 = 0.87). This article describes a programmable algorithm for intravenous insulin infusion. The fundamental equation of the algorithm gives the relationship among IR; the biologic parameter MR; and two variables expressing an instantaneous rate of change of BG, one of which must be zero at any given point in time and the other positive, negative, or zero, namely the rate of change of BG from below target (rate of ascent) and the rate of change of BG from above target (rate of descent). In addition to user-definable parameters, three special algorithm parameters discoverable in nature are described: the maximum rate of the spontaneous ascent of blood glucose during nonhypoglycemia, the glucose per daily dose of insulin exogenously mediated, and the MR at given patient time points. User-assignable parameters will facilitate adaptation to different patient populations. Conclusions An algorithm is described that estimates MR prior to the attainment of euglycemia and computes MR-dependent values for IRnext. Design features address glycemic variability, promote safety with respect to hypoglycemia, and define a method for specifying glycemic targets that are allowed to differ according to patient condition. PMID:20144334
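The two-step assignment described above can be illustrated schematically. The sketch below is a deliberately simplified Python rendering: the linear forms, the glucose-per-unit conversion factor, the target, and the gain are all hypothetical placeholders, not the published algorithm's equations or parameter values.

```python
def estimate_mr(ir_previous, bg_rate_of_change, glucose_per_unit=25.0):
    """Step 1: estimate the maintenance rate (MR, units/h) from the
    previous infusion rate and the observed BG rate of change (mg/dL/h).
    glucose_per_unit (BG movement per unit/h of insulin) is a
    hypothetical placeholder, not a published constant."""
    # If BG was rising, IRprevious was below maintenance needs;
    # if BG was falling, part of IRprevious exceeded them.
    return max(0.0, ir_previous + bg_rate_of_change / glucose_per_unit)

def assign_ir_next(mr, bg_current, bg_target=110.0, gain=0.01):
    """Step 2: make IRnext commensurate with MR and with the distance of
    the current BG from target.  The linear scaling is an assumption."""
    return max(0.0, mr * (1.0 + gain * (bg_current - bg_target)))

mr = estimate_mr(ir_previous=3.0, bg_rate_of_change=20.0)  # BG rising
print(f"MR estimate: {mr:.2f} units/h, "
      f"IRnext: {assign_ir_next(mr, bg_current=180.0):.2f} units/h")
```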
Reliability, Risk and Cost Trade-Offs for Composite Designs
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.
1996-01-01
Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties, including those in constituent material properties, fabrication variables, structure geometry, and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of the uncertain variables with respect to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to the requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of the probabilistic sensitivity factors. With optimum design parameters such as the mean and coefficient of variation (representing the range of scatter) of the uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of the fiber modulus (design parameter) in the longitudinal direction.
NASA Technical Reports Server (NTRS)
Li, C. J.; Devries, W. R.; Ludema, K. C.
1983-01-01
Measurements made with a stylus surface tracer, which provides a digitized representation of a surface profile, are discussed. Parameters are defined to characterize the height (e.g., RMS roughness, skewness, and kurtosis) and length (e.g., autocorrelation) of the surface topography. These are applied to the characterization of crankshaft journals manufactured by different grinding and lapping procedures known to give significant differences in crankshaft bearing life. It was found that three parameters (RMS roughness, skewness, and kurtosis) are necessary to adequately distinguish the character of these surfaces. Each surface specimen has a set of values for these three parameters, which can be regarded as coordinates in a space defined by three characteristic axes. The various journal surfaces can be classified, along with the determination of a proper wavelength cutoff (0.25 mm), by using a method of separated subspaces. The finite radius of the stylus used for profile tracing introduces an inherent measurement error as it passes over the fine structure of the surface. A mathematical model is derived to compensate for this error.
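The three height parameters and the autocorrelation are easy to compute from a digitized profile. The sketch below uses their standard statistical definitions on a synthetic profile; it is an illustration of the parameter definitions, not the paper's classification or stylus-compensation procedure.

```python
import numpy as np

def surface_parameters(profile, max_lag=50):
    """Height and length parameters of a digitized surface profile,
    using their standard statistical definitions."""
    z = np.asarray(profile, dtype=float)
    z = z - z.mean()                       # reference to the mean line
    rms = np.sqrt(np.mean(z ** 2))         # RMS roughness (Rq)
    skewness = np.mean(z ** 3) / rms ** 3  # asymmetry of the height distribution
    kurtosis = np.mean(z ** 4) / rms ** 4  # peakedness (3 for a Gaussian surface)
    # Normalized autocorrelation for the first max_lag lags.
    acf = np.array([np.mean(z[: len(z) - k] * z[k:]) for k in range(max_lag)])
    acf /= acf[0]
    return rms, skewness, kurtosis, acf

rng = np.random.default_rng(0)
z = rng.normal(size=2000)                  # synthetic Gaussian profile
rq, sk, ku, acf = surface_parameters(z)
print(f"Rq={rq:.3f}  Sk={sk:.3f}  Ku={ku:.3f}  ACF(lag 1)={acf[1]:.3f}")
```

Each measured profile then yields one (Rq, Sk, Ku) triple, i.e., one point in the three-axis parameter space used for classification.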
Orbital Signature Analyzer (OSA): A spacecraft health/safety monitoring and analysis tool
NASA Technical Reports Server (NTRS)
Weaver, Steven; Degeorges, Charles; Bush, Joy; Shendock, Robert; Mandl, Daniel
1993-01-01
Fixed or static limit sensing is employed in control centers to ensure that spacecraft parameters remain within a nominal range. However, many critical parameters, such as power system telemetry, are time-varying and, as such, their 'nominal' range is necessarily time-varying as well. Predicted data, manual limits checking, and widened limit-checking ranges are often employed in an attempt to monitor these parameters without generating excessive limits violations. Generating predicted data and manual limits checking are both resource intensive, while broadening limit ranges for time-varying parameters is clearly inadequate to detect all but catastrophic problems. OSA provides a low-cost solution by using analytically selected data as a reference upon which to base its limits. These limits are always defined relative to the time-varying reference data, rather than as fixed upper and lower limits. In effect, OSA provides individual limits tailored to each value throughout all the data. A side benefit of using relative limits is that they automatically adjust to new reference data. In addition, OSA provides a wealth of analytical by-products in its execution.
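Relative limit sensing of this kind reduces to comparing each sample against the reference value for the same time step. The sketch below is a minimal illustration of that idea; the reference profile, tolerance, and injected anomaly are all made up, and the real OSA system's reference selection is analytical rather than assumed.

```python
import numpy as np

def relative_limit_check(telemetry, reference, tolerance):
    """Flag samples whose deviation from the time-varying reference
    exceeds the tolerance.  Limits are defined relative to the
    reference at each time step, not as fixed upper/lower bounds."""
    telemetry = np.asarray(telemetry, float)
    reference = np.asarray(reference, float)
    return np.abs(telemetry - reference) > tolerance

t = np.linspace(0, 2 * np.pi, 100)
reference = 10 + 5 * np.sin(t)            # nominal time-varying profile
telemetry = reference + np.random.default_rng(1).normal(0, 0.2, t.size)
telemetry[40] += 3.0                      # injected anomaly
violations = relative_limit_check(telemetry, reference, tolerance=1.0)
print("violations at samples:", np.flatnonzero(violations))
```

Because the limits ride on the reference, swapping in new reference data automatically updates them, which is the side benefit noted above.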
Exploring the free-energy landscape of a short peptide using an average force
NASA Astrophysics Data System (ADS)
Chipot, Christophe; Hénin, Jérôme
2005-12-01
The reversible folding of deca-alanine is chosen as a test case for characterizing a method that uses an adaptive biasing force (ABF) to escape from the minima and overcome the barriers of the free-energy landscape. This approach relies on the continuous estimation of a biasing force that yields a Hamiltonian in which no average force is exerted along the ordering parameter ξ. Optimizing the parameters that control how the ABF is applied, the method is shown to be extremely effective when a nonequivocal ordering parameter can be defined to explore the folding pathway of the peptide. Starting from a β-turn motif and restraining ξ to a region of the conformational space that extends from the α-helical state to an ensemble of extended structures, the ABF scheme is successful in folding the peptide chain into a compact α helix. Sampling of this conformation is, however, marginal when the range of ξ values embraces arrangements of greater compactness, hence demonstrating the inherent limitations of free-energy methods when ambiguous ordering parameters are utilized.
Viñes, Francesc; Lamiel-García, Oriol; Chul Ko, Kyoung; Yong Lee, Jin; Illas, Francesc
2017-04-30
The effect of the amount of Hartree-Fock mixing (parameter α) and of the screening parameter (w) defining the range-separated HSE-type hybrid functional is systematically studied for a series of seven metal oxides: TiO2, ZrO2, CuO2, ZnO, MgO, SnO2, and SrTiO3. First, reliable band gap values were determined by comparing the optimal α reproducing the experiment with the inverse of the experimental dielectric constant. Then, the effect of w in the HSE functional on the calculated band gap was explored in detail. The results evidence the existence of a virtually infinite number of combinations of the two parameters able to reproduce the experimental band gap, without a unique pair able to describe the full studied set of materials. Nevertheless, the results point to the possibility of describing the electronic structure of these materials through a functional including screened HF exchange and an appropriate correlation contribution. © 2017 Wiley Periodicals, Inc.
Ely, D. Matthew
2006-01-01
Recharge is a vital component of the ground-water budget, and methods for estimating it range from extremely complex to relatively simple. The most commonly used techniques, however, are limited by the scale of application. One method for estimating ground-water recharge uses process-based models that compute distributed water budgets on a watershed scale. These models should be evaluated to determine which model parameters are the dominant controls on simulated ground-water recharge. Seven existing watershed models from different humid regions of the United States were chosen to analyze the sensitivity of simulated recharge to model parameters. Parameter sensitivities were determined using a nonlinear regression computer program to generate a suite of diagnostic statistics. The statistics identify the model parameters that have the greatest effect on simulated ground-water recharge and compare and contrast the hydrologic system responses to those parameters. Simulated recharge in the Lost River and Big Creek watersheds in Washington State was sensitive to small changes in air temperature. The Hamden watershed model in west-central Minnesota was developed to investigate the relations of wetlands and other landscape features to runoff processes. Excess soil moisture in the Hamden watershed simulation was preferentially routed to wetlands instead of to the ground-water system, so that simulated recharge showed little sensitivity to any parameter. Simulated recharge in the North Fork Pheasant Branch watershed, Wisconsin, demonstrated the greatest sensitivity to parameters related to evapotranspiration. Three watersheds were simulated as part of the Model Parameter Estimation Experiment (MOPEX). Parameter sensitivities for the MOPEX watersheds (Amite River, Louisiana and Mississippi; English River, Iowa; and South Branch Potomac River, West Virginia) were similar, with simulated recharge most sensitive to small changes in air temperature and a user-defined flow-routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of parameter values to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. A rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) explain simulated results.
Gulel, Okan; Akcay, Murat; Soylu, Korhan; Aksan, Gokhan; Yuksel, Serkan; Zengin, Halit; Meric, Murat; Sahin, Mahmut
2016-05-01
The coronary slow flow phenomenon (CSFP) is defined as delayed distal vessel contrast opacification in the absence of obstructive epicardial coronary artery disease during coronary angiography. There are conflicting data in the medical literature regarding the effects of CSFP on left ventricular function assessed by conventional echocardiography or tissue Doppler imaging. Therefore, we aimed to evaluate whether there is any abnormality in the myocardial deformation parameters (strain, strain rate (SR), rotation, twist) of the left ventricle obtained by speckle tracking echocardiography (STE) in patients with CSFP. Twenty patients with CSFP were included prospectively in the study. Another 20 patients with similar demographics and cardiovascular risk factors as well as normal coronary angiography were used as the control group. Two-dimensional echocardiographic images of the left ventricle from the apical long-axis, two-chamber, four-chamber, and parasternal short-axis views were used for STE analysis. The analysis of left ventricular circumferential deformation parameters showed that the averaged peak systolic strain, systolic SR, and early diastolic SR values were significantly lower in patients with CSFP (P = 0.009, P = 0.02, and P = 0.02, respectively). Among the left ventricular rotation and twist values, apical rotation was significantly lower in patients with CSFP (P = 0.02). Further, the mean thrombolysis in myocardial infarction frame count value was found to be negatively correlated with the averaged peak circumferential early diastolic SR (r = -0.35, P = 0.03). It was positively correlated with the averaged peak circumferential systolic strain (r = 0.47, P = 0.003) and circumferential systolic SR (r = 0.46, P = 0.005). The coronary slow flow phenomenon leads to significant alterations in the myocardial deformation parameters of the left ventricle as assessed by STE. Specifically, circumferential deformation parameters are affected in CSFP patients. © 2015, Wiley Periodicals, Inc.
Fast temporal neural learning using teacher forcing
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad (Inventor); Bahren, Jacob (Inventor)
1992-01-01
A neural network is trained to output a time dependent target vector defined over a predetermined time interval in response to a time dependent input vector defined over the same time interval by applying corresponding elements of the error vector, or difference between the target vector and the actual neuron output vector, to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles is required to complete the learning process. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in repetitive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.
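The feedback scheme lends itself to a compact numerical illustration. In the toy Python sketch below, a scaled error term lam * e(t) is injected into the output neuron's input and lam is reduced as the total error falls; the network size, leak factor, learning rate, and the simple delta-rule weight update are illustrative assumptions, not the patented method (which uses a full gradient-descent update at the end of the interval).

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, out = 50, 8, 0                     # time steps, neurons, output neuron index
t_axis = np.linspace(0.0, 1.0, T)
target = np.sin(2 * np.pi * t_axis)      # time-dependent target (one output)
u = np.cos(2 * np.pi * t_axis)           # time-dependent input
W = rng.normal(0.0, 0.3, (n, n))         # recurrent weights (to be trained)
w_in = rng.normal(0.0, 0.3, n)
lam = 1.0                                # teacher-forcing gain

for cycle in range(200):
    x = np.zeros(n)
    sq_err, states, errs = 0.0, [np.tanh(x)], []
    for k in range(T):
        e = target[k] - x[out]           # error element for the output neuron
        drive = W @ np.tanh(x) + w_in * u[k]
        drive[out] += lam * e            # corrective feedback into the output
        x = 0.8 * x + 0.2 * drive        # leaky integration
        states.append(np.tanh(x))
        errs.append(e)
        sq_err += e * e
    # Crude delta-rule update of the output neuron's incoming weights at the
    # end of the interval (stand-in for the full gradient-descent update).
    for k in range(T):
        W[out] += 0.01 * errs[k] * states[k]
    lam = min(1.0, sq_err / T)           # reduce forcing as the error falls
print(f"mean-squared error in last cycle: {sq_err / T:.4f}")
```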
Fast temporal neural learning using teacher forcing
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad (Inventor); Bahren, Jacob (Inventor)
1995-01-01
A neural network is trained to output a time dependent target vector defined over a predetermined time interval in response to a time dependent input vector defined over the same time interval by applying corresponding elements of the error vector, or difference between the target vector and the actual neuron output vector, to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles is required to complete the learning process. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in repetitive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.
Evaluation of passive transfer in captive greater kudu (Tragelaphus strepsiceros).
Hammond, Elizabeth E; Fiorello, Christine V
2011-12-01
Failure of passive transfer (FPT) in captive greater kudu (Tragelaphus strepsiceros) calves can lead to increased morbidity and mortality. In this retrospective study, serum samples from neonatal kudu calves were tested for immunoglobulin using several tests validated for domestic ruminants: gamma globulin (GG) measured by protein electrophoresis, total solids (TS) measured by calibrated refractometry, total protein (TP) and globulins measured by colorimetry, gamma glutamyltransferase (GGT), and the zinc sulfate turbidity test (ZSTT). In a logistic regression model, TP, TS, globulins, and the natural log transform of GGT were the only significant parameters associated with FPT. Various historic parameters related to the dam, as well as calf weight, sex, glucose, and packed cell volume, were not significant. Based on the results, FPT in greater kudu is defined as GG < 0.5 g/dl, a value lower than that in domestic cattle. TS measured by refractometry has 80% sensitivity and 100% specificity for FPT in greater kudu. With FPT defined as GG < 0.5 g/dl, kudu calves with TS < 4.8 g/dl and a negative ZSTT have an increased probability of requiring medical intervention, and additional diagnostics may be warranted.
Penalized nonparametric scalar-on-function regression via principal coordinates
Reiss, Philip T.; Miller, David L.; Wu, Pei-Shien; Hua, Wen-Yu
2016-01-01
A number of classical approaches to nonparametric regression have recently been extended to the case of functional predictors. This paper introduces a new method of this type, which extends intermediate-rank penalized smoothing to scalar-on-function regression. In the proposed method, which we call principal coordinate ridge regression, one regresses the response on leading principal coordinates defined by a relevant distance among the functional predictors, while applying a ridge penalty. Our publicly available implementation, based on generalized additive modeling software, allows for fast optimal tuning parameter selection and for extensions to multiple functional predictors, exponential family-valued responses, and mixed-effects models. In an application to signature verification data, principal coordinate ridge regression, with dynamic time warping distance used to define the principal coordinates, is shown to outperform a functional generalized linear model. PMID:29217963
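The two ingredients of the method combine naturally in a few lines of NumPy: classical multidimensional scaling turns the distance matrix into principal coordinates, and a ridge fit regresses the response on the leading ones. The sketch below uses Euclidean distance and a fixed penalty on synthetic data; the paper's implementation additionally handles optimal tuning-parameter selection, multiple predictors, and mixed effects via generalized additive modeling software.

```python
import numpy as np

def principal_coordinates(D, k):
    """Classical multidimensional scaling: leading k principal
    coordinates from an n-by-n distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J              # double-centred squared distances
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]       # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

def ridge_fit(X, y, lam):
    """Ridge regression coefficients (intercept handled by centring)."""
    Xc, yc = X - X.mean(0), y - y.mean()
    beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(X.shape[1]), Xc.T @ yc)
    return beta, y.mean() - X.mean(0) @ beta

# Toy example: Euclidean distances among random "functional" predictors.
rng = np.random.default_rng(0)
curves = rng.normal(size=(40, 100))          # 40 curves on a 100-point grid
y = curves[:, :10].mean(1) + rng.normal(0, 0.1, 40)
D = np.linalg.norm(curves[:, None] - curves[None], axis=2)
Z = principal_coordinates(D, k=5)
beta, intercept = ridge_fit(Z, y, lam=1.0)
print("fitted coefficients:", np.round(beta, 3))
```

Swapping the Euclidean distance for, e.g., a dynamic time warping distance changes only the construction of D, which is what the signature-verification application above exploits.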
Leen, Wilhelmina G.; Willemsen, Michèl A.; Wevers, Ron A.; Verbeek, Marcel M.
2012-01-01
Cerebrospinal fluid (CSF) analysis is an important tool in the diagnostic work-up of many neurological disorders, but reference ranges for CSF glucose, CSF/plasma glucose ratio, and CSF lactate based on studies with large numbers of CSF samples are not available. Our aim was to define age-specific reference values. In 1993 the Nijmegen Observational CSF Study was started. Results of all CSF samples analyzed between 1993 and 2008 at our laboratory were systematically collected and stored in our computerized database. After exclusion of CSF samples with an unknown or elevated erythrocyte count, an elevated leucocyte count, or elevated concentrations of bilirubin, free hemoglobin, or total protein, 9,036 CSF samples were further studied for CSF glucose (n = 8,871), CSF/plasma glucose ratio (n = 4,516), and CSF lactate values (n = 7,614). CSF glucose, CSF/plasma glucose ratio, and CSF lactate were age-dependent, but not sex-dependent. Age-specific reference ranges were defined as 5th-95th percentile ranges. CSF glucose 5th percentile values ranged from 1.8 to 2.9 mmol/L and 95th percentile values from 3.8 to 5.6 mmol/L. CSF/plasma glucose ratio 5th percentile values ranged from 0.41 to 0.53 and 95th percentile values from 0.82 to 1.19. CSF lactate 5th percentile values ranged from 0.88 to 1.41 mmol/L and 95th percentile values from 2.00 to 2.71 mmol/L. Reference ranges for all three parameters were widest in neonates and narrowest in toddlers, with lower and upper limits increasing with age. These reference values allow a reliable interpretation of CSF results in everyday clinical practice. Furthermore, hypoglycemia was associated with an increased CSF/plasma glucose ratio, whereas hyperglycemia did not affect the CSF/plasma glucose ratio. PMID:22880096
Blood flow quantification using 1D CFD parameter identification
NASA Astrophysics Data System (ADS)
Brosig, Richard; Kowarschik, Markus; Maday, Peter; Katouzian, Amin; Demirci, Stefanie; Navab, Nassir
2014-03-01
Patient-specific measurements of cerebral blood flow provide valuable quantitative diagnostic information concerning cerebrovascular diseases, beyond visually driven qualitative evaluation. In this paper, we present a quantitative method to estimate blood flow parameters with high temporal resolution from digital subtraction angiography (DSA) image sequences. Using a 3D DSA dataset and a 2D+t DSA sequence, the proposed algorithm employs a 1D computational fluid dynamics (CFD) model for estimation of time-dependent flow values along a cerebral vessel, combined with an additional advection-diffusion equation (ADE) for contrast agent propagation. The CFD system, followed by the ADE, is solved with a finite volume approximation, which ensures the conservation of mass. Instead of defining a new imaging protocol to obtain relevant data, our cost function optimizes the bolus arrival time (BAT) of the contrast agent in 2D+t DSA sequences. The visual determination of BAT is common clinical practice and can easily be derived from, and compared to, values generated by a 1D CFD simulation. Using this strategy, we ensure that the proposed method fits clinical practice and does not require any changes to the medical workflow. Synthetic experiments show that the recovered flow estimates match the ground truth values with less than 12% error in the mean flow rates.
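The mass-conserving finite-volume treatment of the contrast transport can be sketched in a few lines. Below is a minimal 1D upwind advection-diffusion step with a crude bolus-arrival-time readout; the grid, speed, diffusivity, and bolus shape are illustrative, and the full method couples this transport to a time-dependent 1D CFD solution rather than a fixed speed.

```python
import numpy as np

def advect_diffuse_step(c, u, D, dx, dt):
    """One conservative finite-volume update of contrast concentration c
    on a uniform 1D grid: upwind advective flux plus central diffusive
    flux at each interior face (zero-flux boundaries)."""
    flux = np.zeros(len(c) + 1)
    for i in range(1, len(c)):               # interior faces
        upwind = c[i - 1] if u > 0 else c[i]
        flux[i] = u * upwind - D * (c[i] - c[i - 1]) / dx
    return c + dt / dx * (flux[:-1] - flux[1:])   # mass-conserving update

nx, dx, dt = 200, 0.05, 0.01                 # cells, cm, s (illustrative)
u, D = 2.0, 0.01                             # flow speed (cm/s), diffusivity
c = np.exp(-((np.arange(nx) * dx - 1.0) ** 2) / 0.01)  # injected bolus
for _ in range(300):
    c = advect_diffuse_step(c, u, D, dx, dt)
bat_index = int(np.argmax(c > 0.5 * c.max()))          # crude BAT proxy
print(f"bolus arrival (first half-max cell): x = {bat_index * dx:.2f} cm")
```

In the paper's setting, the simulated arrival times at points along the vessel are what the cost function matches against the BATs observed in the 2D+t DSA sequence.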
Evaluation of Magnetic Diagnostics for MHD Equilibrium Reconstruction of LHD Discharges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sontag, Aaron C; Hanson, James D.; Lazerson, Sam
2011-01-01
Equilibrium reconstruction is the process of determining the set of parameters of an MHD equilibrium that minimize the difference between expected and experimentally observed signals. This is routinely performed in axisymmetric devices, such as tokamaks, and the reconstructed equilibrium solution is then the basis for analysis of stability and transport properties. The V3FIT code [1] has been developed to perform equilibrium reconstruction in cases where axisymmetry cannot be assumed, such as in stellarators. The present work is focused on using V3FIT to analyze plasmas in the Large Helical Device (LHD) [2], a superconducting, heliotron-type device with over 25 MW of heating power that is capable of achieving both high beta (≈5%) and high density (>1 × 10^21 m^-3). This high performance, as well as the ability to drive tens of kiloamperes of toroidal plasma current, leads to deviations of the equilibrium state from the vacuum flux surfaces. This initial study examines the effectiveness of using magnetic diagnostics as the observed signals in reconstructing experimental plasma parameters for LHD discharges. V3FIT uses the VMEC [3] 3D equilibrium solver to calculate an initial equilibrium solution with closed, nested flux surfaces based on user-specified plasma parameters. This equilibrium solution is then used to calculate the expected signals for specified diagnostics. The differences between these expected signal values and the observed values provide a starting χ² value. V3FIT then varies all of the fit parameters independently, calculating a new equilibrium and corresponding χ² for each variation. A quasi-Newton algorithm [1] is used to find the path in parameter space that leads to a minimum in χ². Effective diagnostic signals must vary in a predictable manner with variations of the plasma parameters, and this signal variation must be of sufficient amplitude to be resolved from the signal noise. Signal effectiveness can be defined, for a specific signal and a specific reconstruction parameter, as the dimensionless fractional reduction in the posterior parameter variance with respect to the signal variance. Here, σ_i^sig is the variance of the ith signal and σ_j^param is the posterior variance of the jth fit parameter. The sum of all signal effectiveness values for a given reconstruction parameter is normalized to one. This quantity will be used to determine signal effectiveness for various reconstruction cases. The next section examines the variation of the expected signals with changes in plasma pressure, and the following section shows results for reconstructing model plasmas using these signals.
Convergence properties of η → 3π decays in chiral perturbation theory
NASA Astrophysics Data System (ADS)
Kolesár, Marián; Novotný, Jiří
2017-01-01
The convergence of the decay widths and some of the Dalitz plot parameters of the decay η → 3π seems problematic in low energy QCD. In the framework of resummed chiral perturbation theory, we explore the question of compatibility of experimental data with a reasonable convergence of a carefully defined chiral series. By treating the uncertainties in the higher orders statistically, we numerically generate a large set of theoretical predictions, which are then confronted with experimental information. In the case of the decay widths, the experimental values can be reconstructed for a reasonable range of the free parameters and thus no tension is observed, in spite of what some of the traditional calculations suggest. The Dalitz plot parameters a and d can be described very well too. When the parameters b and α are concerned, we find a mild tension for the whole range of the free parameters, at less than 2σ C.L. This can be interpreted in two ways - either some of the higher order corrections are indeed unexpectedly large or there is a specific configuration of the remainders, which is, however, not completely improbable.
System and method for motor parameter estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
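The abstract does not spell out how the unknown value is determined from the two inputs. One plausible reading, which the sketch below illustrates, is a nearest-neighbour estimate over the reference motors; the parameter names and all numbers are made up for illustration and are not from the patent.

```python
import numpy as np

def estimate_unknown_parameter(known, reference_known, reference_unknown, k=3):
    """Estimate an unknown motor parameter by averaging its value over
    the k reference motors whose known parameters are closest to the
    target motor's (a hypothetical stand-in for the patented
    determination step, which the abstract does not describe)."""
    known = np.asarray(known, float)
    ref = np.asarray(reference_known, float)
    scale = ref.std(0) + 1e-12               # normalize each parameter
    d = np.linalg.norm((ref - known) / scale, axis=1)
    nearest = np.argsort(d)[:k]
    return float(np.mean(np.asarray(reference_unknown)[nearest]))

# Known parameters: [rated power (kW), rated speed (rpm)]; unknown: rotor
# inertia (kg*m^2).  All values are illustrative.
reference_known = [[5, 1450], [7.5, 1455], [11, 1460], [15, 2900], [22, 2930]]
reference_unknown = [0.02, 0.03, 0.05, 0.04, 0.08]
print(estimate_unknown_parameter([9, 1458], reference_known, reference_unknown))
```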
Analytical performance evaluation of SAR ATR with inaccurate or estimated models
NASA Astrophysics Data System (ADS)
DeVore, Michael D.
2004-09-01
Hypothesis testing algorithms for automatic target recognition (ATR) are often formulated in terms of some assumed distribution family. The parameter values corresponding to a particular target class together with the distribution family constitute a model for the target's signature. In practice such models exhibit inaccuracy because of incorrect assumptions about the distribution family and/or because of errors in the assumed parameter values, which are often determined experimentally. Model inaccuracy can have a significant impact on performance predictions for target recognition systems. Such inaccuracy often causes model-based predictions that ignore the difference between assumed and actual distributions to be overly optimistic. This paper reports on research to quantify the effect of inaccurate models on performance prediction and to estimate the effect using only trained parameters. We demonstrate that for large observation vectors the class-conditional probabilities of error can be expressed as a simple function of the difference between two relative entropies. These relative entropies quantify the discrepancies between the actual and assumed distributions and can be used to express the difference between actual and predicted error rates. Focusing on the problem of ATR from synthetic aperture radar (SAR) imagery, we present estimators of the probabilities of error in both ideal and plug-in tests expressed in terms of the trained model parameters. These estimators are defined in terms of unbiased estimates for the first two moments of the sample statistic. We present an analytical treatment of these results and include demonstrations from simulated radar data.
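The two relative entropies in question compare the actual feature distribution with the assumed models for the true and competing classes. The sketch below computes them for univariate Gaussians (the paper works with large observation vectors; this is a per-component illustration, and all numbers are made up).

```python
import math

def kl_gauss(mu_p, var_p, mu_q, var_q):
    """Relative entropy D(p||q) between two univariate Gaussians."""
    return 0.5 * (math.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

# Actual distribution of a feature under class A, and the (inaccurate)
# trained models assumed for classes A and B -- illustrative numbers.
actual = (0.0, 1.0)
assumed_A = (0.1, 1.2)       # model for the true class, slightly off
assumed_B = (1.0, 1.0)       # model for the competing class

d_true = kl_gauss(*actual, *assumed_A)    # actual vs. model of true class
d_alt = kl_gauss(*actual, *assumed_B)     # actual vs. model of other class
print(f"D(actual||A)={d_true:.4f}  D(actual||B)={d_alt:.4f}  "
      f"difference={d_alt - d_true:.4f}")  # larger gap favours correct decisions
```

Shrinking the difference toward zero, by making the true-class model more inaccurate, is exactly the mechanism by which model inaccuracy inflates the error rates that idealized predictions miss.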
On the Use of the Beta Distribution in Probabilistic Resource Assessments
Olea, R.A.
2011-01-01
The triangular distribution is a popular choice when it comes to modeling bounded continuous random variables. Its wide acceptance derives mostly from its simple analytic properties and the ease with which modelers can specify its three parameters through the extremes and the mode. On the negative side, hardly any real process follows a triangular distribution, which from the outset puts at a disadvantage any model employing triangular distributions. At a time when numerical techniques such as the Monte Carlo method are displacing analytic approaches in stochastic resource assessments, easy specification remains the most attractive characteristic of the triangular distribution. The beta distribution is another continuous distribution defined within a finite interval offering wider flexibility in style of variation, thus allowing consideration of models in which the random variables closely follow the observed or expected styles of variation. Despite its more complex definition, generation of values following a beta distribution is as straightforward as generating values following a triangular distribution, leaving the selection of parameters as the main impediment to practically considering beta distributions. This contribution intends to promote the acceptance of the beta distribution by explaining its properties and offering several suggestions to facilitate the specification of its two shape parameters. In general, given the same distributional parameters, use of the beta distributions in stochastic modeling may yield significantly different results, yet better estimates, than the triangular distribution. © 2011 International Association for Mathematical Geology (outside the USA).
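One widely used way to turn the same three inputs used for a triangular distribution (extremes and mode) into beta shape parameters is the PERT convention. The article offers several suggestions for specifying the shape parameters; the sketch below shows only this one, and the bounds and mode are illustrative.

```python
import numpy as np

def pert_beta_params(minimum, mode, maximum, lam=4.0):
    """PERT-style heuristic: beta shape parameters on [minimum, maximum]
    from the extremes and the mode (lam=4 is the conventional weight)."""
    alpha = 1.0 + lam * (mode - minimum) / (maximum - minimum)
    beta = 1.0 + lam * (maximum - mode) / (maximum - minimum)
    return alpha, beta

lo, mode, hi = 10.0, 25.0, 80.0            # illustrative resource bounds
a, b = pert_beta_params(lo, mode, hi)
rng = np.random.default_rng(0)
samples = lo + (hi - lo) * rng.beta(a, b, size=100_000)  # Monte Carlo draws
print(f"alpha={a:.2f} beta={b:.2f} mean={samples.mean():.2f} "
      f"(triangular mean would be {(lo + mode + hi) / 3:.2f})")
```

As the output illustrates, beta and triangular models fed the same three inputs generally yield different summary statistics, which is the article's point about the choice mattering in stochastic assessments.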
Parameter Estimation for Thurstone Choice Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vojnovic, Milan; Yun, Seyoung
We consider the estimation accuracy of individual strength parameters of a Thurstone choice model when each input observation consists of a choice of one item from a set of two or more items (so called top-1 lists). This model accommodates the well-known choice models such as the Luce choice model for comparison sets of two or more items and the Bradley-Terry model for pair comparisons. We provide a tight characterization of the mean squared error of the maximum likelihood parameter estimator. We also provide similar characterizations for parameter estimators defined by a rank-breaking method, which amounts to deducing one or more pair comparisons from a comparison of two or more items, assuming independence of these pair comparisons, and maximizing a likelihood function derived under these assumptions. We also consider a related binary classification problem where each individual parameter takes value from a set of two possible values and the goal is to correctly classify all items within a prescribed classification error. The results of this paper shed light on how the parameter estimation accuracy depends on given Thurstone choice model and the structure of comparison sets. In particular, we found that for unbiased input comparison sets of a given cardinality, when in expectation each comparison set of given cardinality occurs the same number of times, for a broad class of Thurstone choice models, the mean squared error decreases with the cardinality of comparison sets, but only marginally according to a diminishing returns relation. On the other hand, we found that there exist Thurstone choice models for which the mean squared error of the maximum likelihood parameter estimator can decrease much faster with the cardinality of comparison sets. We report empirical evaluation of some claims and key parameters revealed by theory using both synthetic and real-world input data from some popular sport competitions and online labor platforms.
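For the Luce special case, the maximum likelihood estimator from top-1 lists is easy to implement directly. The sketch below uses plain gradient ascent on the log-likelihood with synthetic data; it illustrates the estimator whose accuracy the paper characterizes, not the paper's analysis itself.

```python
import numpy as np

def luce_mle(choices, n_items, steps=500, lr=0.1):
    """Maximum-likelihood strength parameters for a Luce choice model
    from top-1 observations.  Each observation is (winner, comparison
    set).  Plain gradient ascent on the log-likelihood; theta is
    identified only up to an additive constant, so it is re-centred."""
    theta = np.zeros(n_items)
    for _ in range(steps):
        grad = np.zeros(n_items)
        for winner, items in choices:
            items = list(items)
            w = np.exp(theta[items])
            p = w / w.sum()                 # choice probabilities in the set
            grad[winner] += 1.0
            grad[items] -= p                # gradient of the log-normaliser
        theta += lr * grad / len(choices)
        theta -= theta.mean()
    return theta

rng = np.random.default_rng(0)
true = np.array([1.0, 0.5, 0.0, -0.5])
obs = []
for _ in range(2000):
    items = rng.choice(4, size=3, replace=False)     # comparison set of size 3
    p = np.exp(true[items]); p /= p.sum()
    obs.append((items[rng.choice(3, p=p)], items))
print(np.round(luce_mle(obs, 4), 2), "vs true", true - true.mean())
```

Re-running this with comparison sets of different cardinalities gives an empirical feel for the diminishing-returns relation the paper proves.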
Land-surface parameter optimisation using data assimilation techniques: the adJULES system V1.0
NASA Astrophysics Data System (ADS)
Raoult, Nina M.; Jupp, Tim E.; Cox, Peter M.; Luke, Catherine M.
2016-08-01
Land-surface models (LSMs) are crucial components of the Earth system models (ESMs) that are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. JULES is also extensively used offline as a land-surface impacts tool, forced with climatologies into the future. In this study, JULES is automatically differentiated with respect to JULES parameters using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameters by calibrating against observations. This paper describes adJULES in a data assimilation framework and demonstrates its ability to improve the model-data fit using eddy-covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 85 % of the sites used in the study, at both the calibration and evaluation stages. The new improved parameters for JULES are presented along with the associated uncertainties for each parameter.
EPA worst case water microcosms for testing phage biocontrol of Salmonella.
McLaughlin, Michael R; Brooks, John P
2008-01-01
A microplate method was developed as a tool to test phages for their ability to control Salmonella in aqueous environments. The method used EPA (U.S. Environmental Protection Agency) worst case water (WCW) in 96-well plates. The WCW provided a consistent and relatively simple defined turbid aqueous matrix, high in total organic carbon (TOC) and total dissolved salts (TDS), to simulate swine lagoon effluent, without the inconvenience of malodor and confounding effects from other biological factors. The WCW was originally defined to simulate high turbidity and organic matter in water for testing point-of-use filtration devices. Use of WCW to simulate lagoon effluent for phage testing is a new and innovative application of this matrix. Control of physical and chemical parameters (TOC, TDS, turbidity, temperature, and pH) allowed precise evaluation of microbiological parameters (Salmonella and phages). In a typical application, wells containing WCW were loaded with Salmonella enterica subsp. enterica serovar Typhimurium (ATCC 14028) and treated with phages alone and in cocktail combinations. Mean Salmonella inactivation rates (k, where the lower the value, the greater the inactivation) of phage treatments ranged from -0.32 to -1.60, versus -0.004 for Salmonella controls. Mean log(10) reductions (the lower the value, the greater the reduction) of Salmonella phage treatments were -1.60 for phage PR04-1, -2.14 for phage PR37-96, and -2.14 for both phages in a sequential cocktail, versus -0.08 for Salmonella controls. The WCW microcosm system was an effective tool for evaluating the biocontrol potential of Salmonella phages.
Husak, Gregory J.; Michaelsen, Joel; Kyriakidis, P.; Verdin, James P.; Funk, Chris; Galu, Gideon
2011-01-01
Probabilistic forecasts are produced by a variety of outlets to help predict rainfall and other meteorological events for periods of one month or more. Such forecasts are expressed as probabilities of a rainfall event, e.g. rainfall falling in the upper, middle, or lower third of the relevant distribution for the region. The impact of these forecasts on the expectation for the event is not always clear or easily conveyed. This article proposes a technique based on Monte Carlo simulation for adjusting existing climatological statistical parameters to match forecast information, resulting in new parameters defining the probability of events for the forecast interval. The resulting parameters are shown to approximate the forecasts with reasonable accuracy. To show the value of the technique as an application for seasonal rainfall, it is used with the consensus forecast developed for the Greater Horn of Africa for the 2009 March-April-May season. An alternative, analytical approach is also proposed and discussed in comparison with the simulation-based technique.
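The parameter-adjustment idea can be sketched compactly. The article's own method is simulation-based; the sketch below corresponds more closely to the analytical alternative it mentions, matching tercile probabilities by least squares, and the gamma rainfall assumption and all numbers are supplied by us for illustration.

```python
import numpy as np
from scipy import stats, optimize

def adjust_gamma_to_forecast(shape0, scale0, p_below, p_above):
    """Find gamma parameters whose tercile probabilities match a
    consensus forecast.  Terciles are set by the climatological
    distribution (shape0, scale0); the gamma assumption and the
    least-squares matching are illustrative choices."""
    t1 = stats.gamma.ppf(1 / 3, shape0, scale=scale0)   # climatological terciles
    t2 = stats.gamma.ppf(2 / 3, shape0, scale=scale0)

    def mismatch(params):
        shape, scale = np.exp(params)                   # keep parameters positive
        lo = stats.gamma.cdf(t1, shape, scale=scale)
        hi = 1.0 - stats.gamma.cdf(t2, shape, scale=scale)
        return (lo - p_below) ** 2 + (hi - p_above) ** 2

    res = optimize.minimize(mismatch, np.log([shape0, scale0]),
                            method="Nelder-Mead")
    return np.exp(res.x)

# Climatology: shape 2, scale 50 mm; forecast shifts the odds wet (40/35/25).
shape, scale = adjust_gamma_to_forecast(2.0, 50.0, p_below=0.25, p_above=0.40)
print(f"adjusted shape={shape:.2f} scale={scale:.2f}")
```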
Optimized theory for simple and molecular fluids.
Marucho, M; Montgomery Pettitt, B
2007-03-28
An optimized closure approximation for both simple and molecular fluids is presented. A smooth interpolation between the Percus-Yevick and hypernetted-chain closures is optimized by minimizing the free energy self-consistently with respect to the interpolation parameter(s). The molecular version is derived from a refinement of the method for simple fluids. In doing so, a method is proposed which appropriately couples an optimized closure with the variant of the diagrammatically proper integral equation recently introduced by this laboratory [K. M. Dyer et al., J. Chem. Phys. 123, 204512 (2005)]. The simplicity of the expressions involved in this proposed theory has allowed the authors to obtain an analytic expression for the approximate excess chemical potential. This is shown to be an efficient tool to estimate, from first principles, the numerical values of the interpolation parameters defining the aforementioned closure. As a preliminary test, representative models for simple fluids and homonuclear diatomic Lennard-Jones fluids were analyzed, yielding site-site correlation functions in excellent agreement with simulation data.
Stacking fault density and bond orientational order of fcc ruthenium nanoparticles
NASA Astrophysics Data System (ADS)
Seo, Okkyun; Sakata, Osami; Kim, Jae Myung; Hiroi, Satoshi; Song, Chulho; Kumara, Loku Singgappulige Rosantha; Ohara, Koji; Dekura, Shun; Kusada, Kohei; Kobayashi, Hirokazu; Kitagawa, Hiroshi
2017-12-01
We investigated crystal structure deviations of catalytic nanoparticles (NPs) using synchrotron powder X-ray diffraction. The samples were fcc ruthenium (Ru) NPs with diameters of 2.4, 3.5, 3.9, and 5.4 nm. We analyzed average crystal structures by applying the line profile method to a stacking fault model, and local crystal structures using bond orientational order (BOO) parameters. The reflection peaks shifted depending on rules that apply to each type of stacking fault. We evaluated quantitative stacking fault densities for the fcc Ru NPs, finding roughly one stacking fault per 2-4 layers, which is quite large. Our analysis shows that the 2.4 nm-diameter fcc Ru NPs have a considerably high stacking fault density. The B factor tends to increase with increasing stacking fault density. A structural parameter that we define from the BOO parameters deviates significantly from the ideal value for the fcc structure. This indicates that the fcc Ru NPs are highly disordered.
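A common concrete choice of bond orientational order parameters is the Steinhardt q_l family; the abstract does not specify which BOO parameters the authors use, so the sketch below should be read as one standard possibility rather than their definition. For an ideal fcc coordination shell, the reference values are q4 ≈ 0.191 and q6 ≈ 0.575, and deviations from them signal disorder.

```python
import numpy as np
from scipy.special import sph_harm   # renamed sph_harm_y in newer SciPy

def steinhardt_q(l, bond_vectors):
    """Global bond orientational order parameter
    q_l = sqrt( 4*pi/(2l+1) * sum_m |<Y_lm>|^2 ),
    averaged over the given bond (neighbour) unit vectors."""
    v = np.asarray(bond_vectors, float)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    theta = np.arctan2(v[:, 1], v[:, 0])          # azimuth in [0, 2*pi)
    phi = np.arccos(np.clip(v[:, 2], -1, 1))      # polar angle in [0, pi]
    q2 = 0.0
    for m in range(-l, l + 1):
        q2 += abs(np.mean(sph_harm(m, l, theta, phi))) ** 2
    return np.sqrt(4 * np.pi / (2 * l + 1) * q2)

# The 12 nearest-neighbour bond directions of an ideal fcc lattice.
fcc = np.array([[1, 1, 0], [1, -1, 0], [-1, 1, 0], [-1, -1, 0],
                [1, 0, 1], [1, 0, -1], [-1, 0, 1], [-1, 0, -1],
                [0, 1, 1], [0, 1, -1], [0, -1, 1], [0, -1, -1]])
print(f"ideal fcc: q4={steinhardt_q(4, fcc):.4f}, q6={steinhardt_q(6, fcc):.4f}")
```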
van de Geijn, J; Fraass, B A
1984-01-01
The net fractional depth dose (NFD) is defined as the fractional depth dose (FDD) corrected for inverse square law. Analysis of its behavior as a function of depth, field size, and source-surface distance has led to an analytical description with only seven model parameters related to straightforward physical properties. The determination of the characteristic parameter values requires only seven experimentally determined FDDs. The validity of the description has been tested for beam qualities ranging from 60Co gamma rays to 18-MV x rays, using published data from several different sources as well as locally measured data sets. The small number of model parameters is attractive for computer or hand-held calculator applications. The small amount of required measured data is important in view of practical data acquisition for implementation of a computer-based dose calculation system. The generating function allows easy and accurate generation of FDD, tissue-air ratio, tissue-maximum ratio, and tissue-phantom ratio tables.
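The definition lends itself to a one-line computation. The sketch below assumes the usual inverse-square factor ((SSD + d)/(SSD + d_ref))² with the reference depth at dose maximum; neither the exact form of the factor nor the reference depth is given in the abstract, so both are assumptions, as are the example numbers.

```python
def net_fractional_depth_dose(fdd, depth_cm, ssd_cm, d_ref_cm=0.5):
    """NFD = FDD corrected for inverse square law.  Minimal sketch
    assuming the factor ((SSD + d)/(SSD + d_ref))**2 with d_ref at dose
    maximum (0.5 cm, typical for 60Co); both choices are assumptions."""
    return fdd * ((ssd_cm + depth_cm) / (ssd_cm + d_ref_cm)) ** 2

# Illustrative numbers: FDD of 0.65 at 10 cm depth, 80 cm SSD.
print(f"NFD = {net_fractional_depth_dose(0.65, 10.0, 80.0):.3f}")
```

Because NFD strips out the geometric falloff, a single analytic fit to it can regenerate FDD, TAR, TMR, and TPR tables, which is the economy the paper emphasizes.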
Net fractional depth dose: a basis for a unified analytical description of FDD, TAR, TMR, and TPR
DOE Office of Scientific and Technical Information (OSTI.GOV)
van de Geijn, J.; Fraass, B.A.
The net fractional depth dose (NFD) is defined as the fractional depth dose (FDD) corrected for inverse square law. Analysis of its behavior as a function of depth, field size, and source-surface distance has led to an analytical description with only seven model parameters related to straightforward physical properties. The determination of the characteristic parameter values requires only seven experimentally determined FDDs. The validity of the description has been tested for beam qualities ranging from 60Co gamma rays to 18-MV x rays, using published data from several different sources as well as locally measured data sets. The small number of model parameters is attractive for computer or hand-held calculator applications. The small amount of required measured data is important in view of practical data acquisition for implementation of a computer-based dose calculation system. The generating function allows easy and accurate generation of FDD, tissue-air ratio, tissue-maximum ratio, and tissue-phantom ratio tables.
Exploiting Bounded Signal Flow for Graph Orientation Based on Cause-Effect Pairs
NASA Astrophysics Data System (ADS)
Dorn, Britta; Hüffner, Falk; Krüger, Dominikus; Niedermeier, Rolf; Uhlmann, Johannes
We consider the following problem: Given an undirected network and a set of sender-receiver pairs, direct all edges such that the maximum number of "signal flows" defined by the pairs can be routed respecting edge directions. This problem has applications in communication networks and in understanding protein interaction based cell regulation mechanisms. Since this problem is NP-hard, research so far concentrated on polynomial-time approximation algorithms and tractable special cases. We take the viewpoint of parameterized algorithmics and examine several parameters related to the maximum signal flow over vertices or edges. We provide several fixed-parameter tractability results, and in one case a sharp complexity dichotomy between a linear-time solvable case and a slightly more general NP-hard case. We examine the value of these parameters for several real-world network instances. For many relevant cases, the NP-hard problem can be solved to optimality. In this way, parameterized analysis yields both deeper insight into the computational complexity and practical solving strategies.
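For very small instances the problem can be checked against an exhaustive reference implementation. The sketch below enumerates all 2^|E| orientations and counts routable pairs; it is only a brute-force baseline on a made-up instance, not one of the fixed-parameter algorithms the paper develops.

```python
from itertools import product
from collections import deque

def max_routable_pairs(edges, pairs):
    """Orient every undirected edge both ways exhaustively and return
    the maximum number of sender-receiver pairs connected by a directed
    path.  Exponential in |edges|; a reference for tiny instances only."""
    def reaches(adj, s, t):
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                return True
            for w in adj.get(u, ()):
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return False

    best = 0
    for flips in product([False, True], repeat=len(edges)):
        adj = {}
        for (u, v), flip in zip(edges, flips):
            a, b = (v, u) if flip else (u, v)
            adj.setdefault(a, []).append(b)
        best = max(best, sum(reaches(adj, s, t) for s, t in pairs))
    return best

edges = [(1, 2), (2, 3), (2, 4), (4, 5)]
pairs = [(1, 3), (5, 3), (1, 5)]
print("max routable pairs:", max_routable_pairs(edges, pairs))   # -> 2
```

On this instance no orientation serves all three pairs (the edge 4-5 is needed in both directions), which is the kind of conflict the signal-flow parameters quantify.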
Determination of functions of controlling drives of main executive mechanisms of mining excavators
NASA Astrophysics Data System (ADS)
Lagunova, Yu A.; Komissarov, A. P.; Lukashuk, O. A.
2018-03-01
It is shown that a distinctive structural feature of the drives of the main mechanisms (the hoisting and crowding mechanisms) of open-pit shovel excavators is the presence, in the power-transmitting device of the working equipment, of a two-crank lever mechanism that connects the main mechanisms with the working body (the bucket). In this case, the transformation of the mechanical energy parameters of the motors into the energy-force parameters realized at the cutting edge (teeth) of the bucket depends on the type of kinematic scheme of the two-crank lever mechanism. The concept of a "control function" is introduced, defining the relationship between the parameters characterizing the position of the bucket in the face (the coordinates of the tip of the cutting edge of the bucket and the digging speed) and the required control actions, namely the values of the hoisting and crowding speeds that ensure movement of the bucket along a given trajectory.
Electron energy spectrum and magnetic interactions in high-Tc superconductors
NASA Technical Reports Server (NTRS)
Turshevski, S. A.; Liechtenstein, A. I.; Antropov, V. P.; Gubanov, V. A.
1991-01-01
The character of magnetic interactions in La-Sr-Cu-O and Y-Ba-Cu-O systems is of primary importance for the analysis of high-T(sub c) superconductivity in these compounds. Neutron diffraction experiments showed an antiferromagnetic ground state for nonsuperconducting La2CuO4 and YBa2Cu3O6, with the strongest antiferromagnetic superexchange lying in the ab plane. The nonsuperconducting '1-2-3' system has two Neel temperatures, T(sub N1) and T(sub N2). The first corresponds to the ordering of Cu atoms in the CuO2 planes; T(sub N2) reflects the antiferromagnetic ordering of magnetic moments in the CuO chains relative to the moments in the planes. Both T(sub N1) and T(sub N2) depend strongly on the oxygen content. The researchers describe magnetic interactions in high-T(sub c) superconductors based on Linear Muffin-Tin Orbitals (LMTO) band structure calculations. Exchange interaction parameters can be defined from the effective Heisenberg Hamiltonian. When the magnetic moments are not too large, as is the case for copper magnetic moments in superconducting oxides, the J(sub ij) parameters can be defined through the nonlocal magnetic susceptibility of the spin-restricted solution for the crystal. The results of nonlocal magnetic susceptibility calculations and the values of exchange interaction parameters for the La2CuO4 and YBa2Cu3O7 systems are given in tabular form. Strong anisotropy of the exchange interactions in the ab plane and along the c axis in La2CuO4 is clearly seen. The calculated Neel temperature agrees well with the available experimental data. In the planes of the '1-2-3' system there are quite strong antiferromagnetic Cu-O and O-O interactions, which appear due to holes in the oxygen subbands. These results are in line with the magnetic model of oxygen-hole pairing in high-T(sub c) superconductors.
Electron energy spectrum and magnetic interactions in high-T(sub c) superconductors
NASA Technical Reports Server (NTRS)
Turshevski, S. A.; Liechtenstein, A. I.; Antropov, V. P.; Gubanov, V. A.
1990-01-01
The character of magnetic interactions in La-Sr-Cu-O and Y-Ba-Cu-O systems is of primary importance for the analysis of high-T(sub c) superconductivity in these compounds. Neutron diffraction experiments showed an antiferromagnetic ground state for nonsuperconducting La2CuO4 and YBa2Cu3O6, with the strongest antiferromagnetic superexchange lying in the ab plane. The nonsuperconducting '1-2-3' system has two Neel temperatures, T(sub N1) and T(sub N2). The first corresponds to the ordering of Cu atoms in the CuO2 planes; T(sub N2) reflects the antiferromagnetic ordering of magnetic moments in the CuO chains relative to the moments in the planes. Both T(sub N1) and T(sub N2) depend strongly on the oxygen content. The researchers describe magnetic interactions in high-T(sub c) superconductors based on Linear Muffin-Tin Orbitals (LMTO) band structure calculations. Exchange interaction parameters can be defined from the effective Heisenberg Hamiltonian. When the magnetic moments are not too large, as is the case for copper magnetic moments in superconducting oxides, the J(sub ij) parameters can be defined through the nonlocal magnetic susceptibility of the spin-restricted solution for the crystal. The results of nonlocal magnetic susceptibility calculations and the values of exchange interaction parameters for the La2CuO4 and YBa2Cu3O7 systems are given in tabular form. Strong anisotropy of the exchange interactions in the ab plane and along the c axis in La2CuO4 is clearly seen. The calculated Neel temperature agrees well with the available experimental data. In the planes of the '1-2-3' system there are quite strong antiferromagnetic Cu-O and O-O interactions, which appear due to holes in the oxygen subbands. These results are in line with the magnetic model of oxygen-hole pairing in high-T(sub c) superconductors.
Continental-scale river flow in climate models
NASA Technical Reports Server (NTRS)
Miller, James R.; Russell, Gary L.; Caliri, Guilherme
1994-01-01
The hydrologic cycle is a major part of the global climate system. There is an atmospheric flux of water from the ocean surface to the continents. The cycle is closed by return flow in rivers. In this paper a river routing model is developed for use with grid-box climate models covering the whole earth. The routing model needs an algorithm for the river mass flow and a river direction file, which has been compiled at 4 deg x 5 deg and 2 deg x 2.5 deg resolutions. River basins are defined by the direction files. The river flow leaving each grid box depends on river and lake mass, downstream distance, and an effective flow speed that depends on topography. As input the routing model uses monthly land source runoff from a 5-yr simulation of the NASA/GISS atmospheric climate model (Hansen et al.). The land source runoff from the 4 deg x 5 deg resolution model is quartered onto a 2 deg x 2.5 deg grid, and the effect of grid resolution is examined. Monthly flow at the mouths of the world's major rivers is compared with observations, and a global error function for river flow is used to evaluate the routing model and its sensitivity to physical parameters. Three basinwide parameters are introduced: the river length weighted by source runoff, the turnover rate, and the basinwide speed. Although the values of these parameters depend on the resolution at which the rivers are defined, the values should converge as the grid resolution becomes finer. When the routing scheme described here is coupled with a climate model's source runoff, it provides the basis for closing the hydrologic cycle in coupled atmosphere-ocean models by realistically returning water to the ocean at the correct location and with the proper magnitude and timing.
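The flow rule (outflow proportional to mass times effective speed over downstream distance) makes the routing step simple to sketch. Below is a toy one-dimensional chain of grid boxes; the direction file, distances, speeds, and runoff values are all illustrative, not the GISS configuration.

```python
import numpy as np

def route_rivers(mass, runoff, downstream, distance, speed, dt):
    """One time step of a linear-reservoir routing scheme: each box
    discharges mass * speed * dt / distance to its downstream box.
    downstream[i] is the index of box i's downstream neighbour, or -1
    at a river mouth (outflow to the ocean)."""
    outflow = mass * speed * dt / distance
    new_mass = mass + runoff * dt - outflow
    for i, j in enumerate(downstream):
        if j >= 0:
            new_mass[j] += outflow[i]
    return new_mass, outflow

# A toy 4-box river: 0 -> 1 -> 2 -> 3 -> ocean.
downstream = np.array([1, 2, 3, -1])
distance = np.full(4, 2.0e5)              # m between box centres
speed = np.full(4, 0.35)                  # m/s effective flow speed
mass = np.zeros(4)
runoff = np.array([5.0, 3.0, 1.0, 0.5])   # kg/s source runoff per box
for _ in range(1000):
    mass, outflow = route_rivers(mass, runoff, downstream, distance,
                                 speed, 3600.0)
print("discharge at mouth (kg/s):", outflow[3] / 3600.0)
```

At steady state the mouth discharge approaches the summed basin runoff (9.5 kg/s here), illustrating how the scheme returns water to the ocean with the right magnitude while the mass/speed/distance rule sets the timing.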
Clonal growth and plant species abundance
Herben, Tomáš; Nováková, Zuzana; Klimešová, Jitka
2014-01-01
Background and Aims Both regional and local plant abundances are driven by species' dispersal capacities and their abilities to exploit new habitats and persist there. These processes are affected by clonal growth, which is difficult to evaluate and compare across large numbers of species. This study assessed the influence of clonal reproduction on local and regional abundances of a large set of species and compared the predictive power of morphologically defined traits of clonal growth with data on actual clonal growth from a botanical garden. The role of clonal growth was compared with the effects of seed reproduction, habitat requirements and growth, proxied both by LHS (leaf–height–seed) traits and by actual performance in the botanical garden. Methods Morphological parameters of clonal growth, actual clonal reproduction in the garden and LHS traits (leaf-specific area – height – seed mass) were used as predictors of species abundance, both regional (number of species records in the Czech Republic) and local (mean species cover in vegetation records) for 836 perennial herbaceous species. Species differences in habitat requirements were accounted for by classifying the dataset by habitat type and also by using Ellenberg indicator values as covariates. Key Results After habitat differences were accounted for, clonal growth parameters explained an important part of variation in species abundance, both at regional and at local levels. At both levels, both greater vegetative growth in cultivation and greater lateral expansion trait values were correlated with higher abundance. Seed reproduction had weaker effects, being positive at the regional level and negative at the local level. Conclusions Morphologically defined traits are predictive of species abundance, and it is concluded that simultaneous investigation of several such traits can help develop hypotheses on specific processes (e.g. avoidance of self-competition, support of offspring) potentially underlying clonal growth effects on abundance. Garden performance parameters provide a practical approach to assessing the roles of clonal growth morphological traits (and LHS traits) for large sets of species. PMID:24482153
Akhmetov, Ildar; Bubnov, Rostyslav V
2015-01-01
Molecular diagnostic tests drive the scientific and technological uplift in the field of predictive, preventive, and personalized medicine, offering invaluable clinical and socioeconomic benefits to the key stakeholders. Although the results of diagnostic tests are immensely influential, molecular diagnostic tests (MDx) are still grudgingly reimbursed by payers and account for less than 5% of overall healthcare costs. This paper aims at defining the value of a molecular diagnostic test and at outlining the most important components of "value" from miscellaneous assessment frameworks, components which go beyond accuracy and feasibility and which impact clinical adoption, informing healthcare resource allocation decisions. The authors suggest that the industry should facilitate discussions with the various stakeholders throughout the entire assessment process in order to arrive at a consensus about the depth of evidence required for positive marketing authorization or reimbursement decisions. In light of the evolving "value-based healthcare" delivery practices, it is also recommended to account for the social and ethical parameters of value, since these are anticipated to become as critical for reimbursement decisions and test acceptance as economic and clinical criteria.
Nonlinearly Activated Neural Network for Solving Time-Varying Complex Sylvester Equation.
Li, Shuai; Li, Yangming
2013-10-28
The Sylvester equation is often encountered in mathematics and control theory. For the general time-invariant Sylvester equation problem, which is defined in the domain of complex numbers, the Bartels-Stewart algorithm and its extensions are effective and widely used, with an O(n³) time complexity. When applied to solving the time-varying Sylvester equation, however, the computational burden increases sharply as the sampling period decreases, and cannot satisfy continuous real-time calculation requirements. For the special case of the general Sylvester equation problem defined in the domain of real numbers, gradient-based recurrent neural networks are able to solve the time-varying Sylvester equation in real time, but there always exists an estimation error, whereas the recurrent neural network recently proposed by Zhang et al. [this type of neural network is called the Zhang neural network (ZNN)] converges to the solution ideally. The advancements in complex-valued neural networks make it natural to extend the existing real-valued ZNN, which solves the time-varying real-valued Sylvester equation, to its counterpart in the domain of complex numbers. In this paper, a complex-valued ZNN for solving the complex-valued Sylvester equation problem is investigated, and the global convergence of the neural network is proven with the proposed nonlinear complex-valued activation functions. Moreover, a special type of activation function with a core function, called the sign-bi-power function, is proven to enable the ZNN to converge in finite time, which further enhances its advantage in online processing. In this case, the upper bound of the convergence time is also derived analytically. Simulations are performed to evaluate and compare the performance of the neural network with different parameters and activation functions. Both theoretical analysis and numerical simulations validate the effectiveness of the proposed method.
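For the time-invariant case that the abstract contrasts with, the Bartels-Stewart algorithm is available in SciPy; the sketch below shows that classical O(n³) solve, not the proposed ZNN (the routine also accepts complex arrays, matching the complex-valued setting discussed above):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
Q = rng.standard_normal((n, n))

X = solve_sylvester(A, B, Q)            # Bartels-Stewart: solves A X + X B = Q
print(np.allclose(A @ X + X @ B, Q))    # True: residual is numerically zero
```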
FIT-MART: Quantum Magnetism with a Gentle Learning Curve
NASA Astrophysics Data System (ADS)
Engelhardt, Larry; Garland, Scott C.; Rainey, Cameron; Freeman, Ray A.
We present a new open-source software package, FIT-MART, that allows non-experts to quickly get started simulating quantum magnetism. FIT-MART can be downloaded as a platform-independent executable Java (JAR) file. It allows the user to define (Heisenberg) Hamiltonians by electronically drawing pictures that represent quantum spins and operators. Sliders are automatically generated to control the values of the parameters in the model, and when the values change, several plots are updated in real time to display both the resulting energy spectra and the equilibrium magnetic properties. Several experimental data sets for real magnetic molecules are included in FIT-MART to allow easy comparison between simulated and experimental data, and FIT-MART users can also import their own data for analysis and compare the goodness of fit for different models.
NASA Astrophysics Data System (ADS)
Alexeyev, C. N.; Lapin, B. P.; Yavorsky, M. A.
2018-01-01
We have studied the influence of a spacer introduced into a Bragg multihelicoidal fiber with a twist defect on the existence of defect-localized states. We have shown that in the presence of a Gaussian pump the energy of the electromagnetic field stored in topologically charged defect-localized modes essentially depends on the length of the spacer. We have demonstrated that by changing this length on the wavelength scale it is possible to strongly modulate this energy. This property can be used for generation and controlled emission of topologically charged light. We have also shown that if the value of an isotropic spacer's refractive index deviates from the optimal value defined by the parameters of the multihelicoidal fiber parts, the effect of localization disappears.
Numerical and Experimental Validation of a New Damage Initiation Criterion
NASA Astrophysics Data System (ADS)
Sadhinoch, M.; Atzema, E. H.; Perdahcioglu, E. S.; van den Boogaard, A. H.
2017-09-01
Most commercial finite element software packages, like Abaqus, have a built-in coupled damage model where a damage evolution needs to be defined in terms of a single fracture energy value for all stress states. The Johnson-Cook criterion has been modified to be Lode parameter dependent, and this Modified Johnson-Cook (MJC) criterion is used as a Damage Initiation Surface (DIS) in combination with the built-in Abaqus ductile damage model. An exponential damage evolution law has been used with a single fracture energy value. Ultimately, the simulated force-displacement curves are compared with experiments to validate the MJC criterion. Seven out of nine fracture experiments were predicted accurately. The limitations and accuracy of the failure predictions of the newly developed damage initiation criterion are discussed briefly.
Monte Carlo simulation: Its status and future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murtha, J.A.
1997-04-01
Monte Carlo simulation is a statistics-based analysis tool that yields probability-vs.-value relationships for key parameters, including oil and gas reserves, capital exposure, and various economic yardsticks, such as net present value (NPV) and return on investment (ROI). Monte Carlo simulation is a part of risk analysis and is sometimes performed in conjunction with or as an alternative to decision [tree] analysis. The objectives are (1) to define Monte Carlo simulation in a more general context of risk and decision analysis; (2) to provide some specific applications, which can be interrelated; (3) to respond to some of the criticisms; (4) to offer some cautions about abuses of the method and recommend how to avoid the pitfalls; and (5) to predict what the future has in store.
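A toy Monte Carlo NPV simulation in Python, illustrating the probability-vs.-value relationships described above; every distribution and number below is invented for illustration and is not from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000
rate = 0.10                                   # discount rate

capex = rng.triangular(8e6, 10e6, 15e6, n_trials)                     # capital exposure [$]
reserves = rng.lognormal(mean=np.log(2e6), sigma=0.4, size=n_trials)  # reserves [bbl]
price = rng.normal(60.0, 8.0, n_trials)                               # oil price [$/bbl]

years = np.arange(1, 11)
# spread production evenly over 10 years (a deliberate simplification)
cashflows = (reserves[:, None] / 10) * price[:, None]
npv = (cashflows / (1 + rate) ** years).sum(axis=1) - capex

for p in (10, 50, 90):                        # probability-vs.-value readout
    print(f"P{p} NPV: {np.percentile(npv, p):,.0f} $")
print("P(NPV < 0):", (npv < 0).mean())
```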
Net alkalinity and net acidity 1: Theoretical considerations
Kirby, C.S.; Cravotta, C.A.
2005-01-01
Net acidity and net alkalinity are widely used, poorly defined, and commonly misunderstood parameters for the characterization of mine drainage. The authors explain theoretical expressions of 3 types of alkalinity (caustic, phenolphthalein, and total) and acidity (mineral, CO2, and total). Except for rarely-invoked negative alkalinity, theoretically defined total alkalinity is closely analogous to measured alkalinity and presents few practical interpretation problems. Theoretically defined "CO2-acidity" is closely related to most standard titration methods with an endpoint pH of 8.3 used for determining acidity in mine drainage, but it is unfortunately named because CO2 is intentionally driven off during titration of mine-drainage samples. Using the proton condition/mass-action approach and employing graphs to illustrate speciation with changes in pH, the authors explore the concept of principal components and how to assign acidity contributions to aqueous species commonly present in mine drainage. Acidity is defined in mine drainage based on aqueous speciation at the sample pH and on the capacity of these species to undergo hydrolysis to pH 8.3. Application of this definition shows that the computed acidity in mg/L as CaCO3 (based on pH and analytical concentrations of dissolved FeII, FeIII, Mn, and Al in mg/L),

$$ \mathrm{acidity}_{\mathrm{calculated}} = 50\left\{ 1000\left(10^{-\mathrm{pH}}\right) + \frac{2(\mathrm{Fe^{II}}) + 3(\mathrm{Fe^{III}})}{56} + \frac{2(\mathrm{Mn})}{55} + \frac{3(\mathrm{Al})}{27} \right\}, $$

underestimates contributions from HSO4- and H+, but overestimates the acidity due to Fe3+ and Al3+. However, these errors tend to approximately cancel each other. It is demonstrated that "net alkalinity" is a valid mathematical construction based on theoretical definitions of alkalinity and acidity. Further, it is shown that, for most mine-drainage solutions, a useful net alkalinity value can be derived from: (1) alkalinity and acidity values based on aqueous speciation, (2) measured alkalinity minus calculated acidity, or (3) taking the negative of the value obtained in a standard-method "hot peroxide" acidity titration, provided that labs report negative values. The authors recommend the third approach; i.e., net alkalinity = -Hot Acidity. © 2005 Elsevier Ltd. All rights reserved.
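The calculated-acidity expression above translates directly into code; a minimal sketch (variable names are ours):

```python
def acidity_calculated(pH, fe2, fe3, mn, al):
    """Calculated acidity in mg/L as CaCO3, per the expression quoted above.

    fe2, fe3, mn, al: dissolved Fe(II), Fe(III), Mn, Al concentrations in mg/L.
    """
    return 50.0 * (1000.0 * 10.0 ** (-pH)
                   + (2.0 * fe2 + 3.0 * fe3) / 56.0
                   + 2.0 * mn / 55.0
                   + 3.0 * al / 27.0)

# Net alkalinity as "measured alkalinity minus calculated acidity" (option 2);
# the numbers are hypothetical sample values, not from the paper:
alkalinity = 120.0   # mg/L as CaCO3, measured
net_alkalinity = alkalinity - acidity_calculated(5.5, fe2=20.0, fe3=5.0, mn=3.0, al=2.0)
print(net_alkalinity)
```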
NASA Astrophysics Data System (ADS)
Stopyra, Wojciech; Kurzac, Jarosław; Gruber, Konrad; Kurzynowski, Tomasz; Chlebus, Edward
2016-12-01
SLM technology allows production of fully functional objects from metal and ceramic powders, with a true density of more than 99.9%. The quality of items manufactured with the SLM method is affected by more than 100 parameters, which can be divided into fixed and variable. Fixed parameters are those whose values should be defined before the process and maintained in an appropriate range during the process, e.g. the chemical composition and morphology of the powder, the oxygen level in the working chamber, and the heating temperature of the substrate plate. In SLM technology, five parameters are variables whose optimal set allows parts to be produced without defects (pores, cracks) and at an acceptable speed. These parameters are: laser power, distance between points, time of exposure, distance between lines, and layer thickness. To develop optimal parameters, thin-wall or single-track experiments are performed; selection of the best sets is narrowed to three parameters: laser power, exposure time, and distance between points. In this paper, the effect of laser power on the penetration depth and geometry of a scanned single track is shown. In this experiment, a titanium (grade 2) substrate plate was used and scanned by a fibre laser of 1064 nm wavelength. For each track, the width, height, and penetration depth of the laser beam were measured.
The reliability of an instrumented start block analysis system.
Tor, Elaine; Pease, David L; Ball, Kevin A
2015-02-01
The swimming start is highly influential to overall competition performance. Therefore, it is paramount to develop reliable methods to perform accurate biomechanical analysis of start performance for training and research. The Wetplate Analysis System is a custom-made force plate system developed by the Australian Institute of Sport - Aquatic Testing, Training and Research Unit (AIS ATTRU). This sophisticated system combines both force data and 2D digitization to measure a number of kinetic and kinematic parameter values in an attempt to evaluate start performance. Fourteen elite swimmers performed two maximal effort dives (performance was defined as time from start signal to 15 m) over two separate testing sessions. Intraclass correlation coefficients (ICC) were used to determine each parameter's reliability. The kinetic parameters all had ICC greater than 0.9 except for the time of peak vertical force (0.742). This may have been due to variations in movement initiation after the starting signal between trials. The kinematic and time parameters also had ICC greater than 0.9, apart from the time of maximum depth (0.719). This parameter was lower because the swimmers varied their depth between trials. Based on the high ICC scores for all parameters, the Wetplate Analysis System is suitable for biomechanical analysis of swimming starts.
Ferrari, M; Mistura, L; Patterson, E; Sjöström, M; Díaz, L E; Stehle, P; Gonzalez-Gross, M; Kersting, M; Widhalm, K; Molnár, D; Gottrand, F; De Henauw, S; Manios, Y; Kafatos, A; Moreno, L A; Leclercq, C
2011-01-01
Background/Objectives: To assess iron status among European adolescents through selected biochemical parameters in a cross-sectional study performed in 10 European cities. Subjects/Methods: Iron status was defined utilising biochemical indicators. Iron depletion was defined as low serum ferritin (SF<15 μg/l). Iron deficiency (ID) was defined as high soluble transferrin receptor (sTfR>8.5 mg/l) plus iron depletion. Iron deficiency anaemia (IDA) was defined as ID with haemoglobin (Hb) below the WHO cutoff for age and sex: 12.0 g/dl for girls and for boys aged 12.5–14.99 years, and 13.0 g/dl for boys aged ⩾15 years. Enzyme-linked immunosorbent assay was used as the analytical method for SF, sTfR and C-reactive protein (CRP). Subjects with an indication of inflammation (CRP>5 mg/l) were excluded from the analyses. A total of 940 adolescents aged 12.5–17.49 years (438 boys and 502 girls) were involved. Results: The percentage of iron depletion was 17.6%, significantly higher in girls (21.0%) than in boys (13.8%). The overall percentages of ID and IDA were 4.7 and 1.3%, respectively, with no significant differences between boys and girls. A correlation was observed between log(SF) and Hb (r=0.36, P<0.01), and between log(sTfR) and mean corpuscular haemoglobin (r=−0.30, P<0.01). Iron body stores were estimated on the basis of log(sTfR/SF). A higher percentage of negative values of body iron was recorded in girls (16.5%) than in boys (8.3%), and body iron values tended to increase with age in boys, whereas the values remained stable in girls. Conclusions: To ensure adequate iron stores, specific attention should be given to girls at the European level to ensure that their dietary intake of iron is adequate. PMID:21245877
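The case definitions above are easily expressed as a classification function; a sketch using exactly the cutoffs quoted in the abstract (function and argument names are ours):

```python
def iron_status(sf, stfr, hb, sex, age):
    """Classify iron status with the cutoffs given above.

    sf:   serum ferritin [ug/L]    stfr: soluble transferrin receptor [mg/L]
    hb:   haemoglobin [g/dL]       sex:  'M' or 'F'       age: years
    """
    depletion = sf < 15.0                               # iron depletion
    deficiency = depletion and stfr > 8.5               # iron deficiency (ID)
    hb_cutoff = 13.0 if (sex == 'M' and age >= 15.0) else 12.0
    if deficiency and hb < hb_cutoff:                   # iron deficiency anaemia
        return "IDA"
    if deficiency:
        return "ID"
    if depletion:
        return "iron depletion"
    return "normal"

print(iron_status(sf=12.0, stfr=9.1, hb=11.5, sex='F', age=14.0))  # -> "IDA"
```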
The history of the Universe is an elliptic curve
NASA Astrophysics Data System (ADS)
Coquereaux, Robert
2015-06-01
Friedmann-Lemaître equations with contributions coming from matter, curvature, cosmological constant, and radiation, when written in terms of conformal time u rather than in terms of cosmic time t, can be solved explicitly in terms of standard Weierstrass elliptic functions. The spatial scale factor, the temperature, the densities, the Hubble function, and almost all quantities of cosmological interest (with the exception of t itself) are elliptic functions of u; in particular they are bi-periodic with respect to a lattice of the complex plane when one takes u complex. After recalling the basics of the theory, we use these explicit expressions, as well as the experimental constraints on the present values of density parameters (we choose for the curvature density a small value in agreement with experimental bounds), to display the evolution of the main cosmological quantities for one real period $2\omega_r$ of conformal time (the cosmic time t 'never ends', but it goes to infinity for a finite value $u_f < 2\omega_r$ of u). A given history of the Universe, specified by the measured values of present-day densities, is associated with a lattice in the complex plane, or with an elliptic curve, and therefore with two Weierstrass invariants $g_2, g_3$. Using the same experimental data we calculate the values of these invariants, as well as the associated modular parameter and the corresponding Klein j-invariant. If one takes the flat case k = 0, the lattice is only defined up to homotheties, and if one, moreover, neglects the radiation contribution, the j-invariant vanishes and the corresponding modular parameter τ can be chosen in one corner of the standard fundamental domain of the modular group (equianharmonic case: $\tau = \exp(2i\pi/3)$). Several exact (i.e., non-numerical) results of independent interest are obtained in that case.
Rapant, S; Cvečková, V; Fajčíková, K; Dietzová, Z; Stehlíková, B
2017-02-01
This study deals with the analysis of the relationship between the chemical composition of groundwater/drinking water and data on mortality from oncological diseases (MOD) in the Slovak Republic. Primary data consist of the Slovak national database of groundwater analyses (20,339 chemical analyses, 34 chemical elements/compounds) and data on MOD (17 health indicators) collected over a 10-year period (1994-2003). The chemical and health data were unified in the same form and expressed as mean values for each of the 2883 municipalities within the Slovak Republic. Pearson and Spearman correlation as well as artificial neural network (ANN) methods were used for the analysis of the relationship between the chemical composition of groundwater/drinking water and MOD. The most significant chemical elements influencing MOD were identified, together with their limit values (limit and optimal contents). Based on the results of calculations made with the neural networks, the following eight chemical elements/parameters in the groundwater were defined as the most significant for MOD: Ca + Mg (mmol/l), Ca, Mg, TDS, Cl, HCO3, SO4 and NO3. The results document the strongest relationship between MOD and the groundwater contents of Ca + Mg (mmol/l), Ca and Mg. We observe increased MOD with low (deficient) contents of these three parameters in groundwater/drinking water. The following limit values were set for the most significant groundwater chemicals/parameters: Ca + Mg 1.73-5.85 mmol/l, Ca 60.5-196.8 mg/l and Mg 25.6-35.8 mg/l. Within these concentration ranges, mortality from oncological diseases in the Slovak Republic is at its lowest levels. These limit values are about twice as high as the currently valid Slovak guideline values for drinking water.
Telling apart Felidae and Ursidae from the distribution of nucleotides in mitochondrial DNA
NASA Astrophysics Data System (ADS)
Rovenchak, Andrij
2018-02-01
Rank-frequency distributions of nucleotide sequences in mitochondrial DNA are defined in a way analogous to the linguistic approach, with the most frequent nucleobase serving as a whitespace. For the resulting sequences, entropy and mean length are calculated. These parameters are shown to discriminate the species of the Felidae (cats) and Ursidae (bears) families. From purely numerical values we are able to see, in particular, that giant pandas are bears while koalas are not. The observed linear relation between the parameters is explained using a simple probabilistic model. An approach based on the non-additive generalization of the Bose distribution is used to analyze the frequency spectra of the nucleotide sequences. In this case, the separation of families is not very sharp. Nevertheless, the distributions for Felidae have on average longer tails compared with those for Ursidae.
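A sketch of the tokenization and statistics described above: the most frequent base acts as a whitespace, the sequence is split into "words", and rank-frequency, mean word length and entropy are computed. The entropy base (2) and the helper names are our assumptions:

```python
from collections import Counter
import math

def word_statistics(seq):
    """Rank-frequency statistics of a nucleotide sequence, linguistics-style."""
    whitespace = Counter(seq).most_common(1)[0][0]   # most frequent nucleobase
    words = [w for w in seq.split(whitespace) if w]  # "words" between whitespaces
    freqs = Counter(words)
    total = sum(freqs.values())
    probs = [n / total for n in freqs.values()]
    entropy = -sum(p * math.log2(p) for p in probs)  # Shannon entropy [bits]
    mean_len = sum(len(w) for w in words) / len(words)
    ranked = freqs.most_common()                     # rank-frequency list
    return whitespace, entropy, mean_len, ranked

ws, H, mean_len, rf = word_statistics("ACGTTACGAATTGCGTAACCGTA")  # toy sequence
print(ws, H, mean_len, rf[:3])
```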
Fractal Properties of Some Machined Surfaces
NASA Astrophysics Data System (ADS)
Thomas, T. R.; Rosén, B.-G.
Many surface profiles are self-affine fractals defined by fractal dimension D and topothesy Λ. Traditionally these parameters are derived laboriously from the slope and intercept of the profile's structure function. Recently a quicker and more convenient derivation from standard roughness parameters has been suggested. Based on this derivation, it is shown that D and Λ depend on two dimensionless numbers: the ratio of the mean peak spacing to the rms roughness, and the ratio of the mean local peak spacing to the sampling interval. Using this approach, values of D and Λ are calculated for 125 profiles produced by polishing, plateau honing and various single-point machining processes. Different processes are shown to occupy different regions in D-Λ space, and polished surfaces show a relationship between D and Λ which is independent of the surface material.
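In a common convention for self-affine profiles (stated here as background; the abstract does not reproduce the formula), the structure function is

$$ S(\tau) = \left\langle \left[ z(x+\tau) - z(x) \right]^{2} \right\rangle = \Lambda^{2D-2}\, \tau^{4-2D}, $$

so the fractal dimension D follows from the log-log slope 4 - 2D, and the topothesy Λ is the lag at which S(Λ) = Λ²; the quicker derivation discussed above replaces this regression with standard roughness parameters.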
Thermal inactivation kinetics of Lactococcus lactis subsp. lactis bacteriophage pll98-22.
Sanlibaba, Pinar; Buzrul, S; Akkoç, Nefise; Alpas, H; Akçelik, M
2009-03-01
Survival curves of Lactococcus lactis subsp. lactis bacteriophage pll98 inactivated by heat were obtained at seven temperature values (50-80 degrees C) in M17 broth and skim milk. Deviations from first-order kinetics in both media were observed as sigmoidal shapes in the survival curves of pll98. An empirical model with four parameters was used to define the thermal inactivation. The number of parameters of the model was reduced from four to two in order to increase the robustness of the model. The reduced model produced fits comparable to the full model. Both the survival data and the calculations done using the reduced model (the time necessary to reduce the number of phage pll98 by six or seven log10) indicated that skim milk is a more protective medium than M17 broth within the assayed temperature range.
Perandini, Alessio; Perandini, Simone; Montemezzi, Stefania; Bonin, Cecilia; Bellini, Gaia; Bergamini, Valentino
2018-02-01
Deep endometriosis of the rectum is a highly challenging disease, and a surgical approach is often needed to restore anatomy and function. Two kinds of surgery may be performed: radical, with segmental bowel resection, or conservative, without resection. Most patients undergo magnetic resonance imaging (MRI) before surgery, but there is currently no method to predict whether conservative surgery is feasible or whether bowel resection is required. The aim of this study was to create an algorithm that could predict bowel resection using MRI images, that was easy to apply, and that could be useful in a clinical setting, in order to adequately discuss informed consent with the patient and plan an appropriate and efficient surgical session. We collected medical records from 2010 to 2016 and reviewed the MRI results of 52 patients to detect any parameters that could predict bowel resection. Parameters that were reproducible and significantly correlated with radical surgery were investigated by statistical regression and combined in an algorithm to give the best prediction of resection. The calculation of two parameters in MRI, impact angle and lesion size, and their use in a mathematical algorithm permit us to predict bowel resection with a positive predictive value of 87% and a negative predictive value of 83%. MRI could be of value in predicting the need for bowel resection in deep endometriosis of the rectum. Further research is required to assess the possibility of a wider application of this algorithm outside our single-center study. © 2017 Japan Society of Obstetrics and Gynecology.
Self selected speed and maximal lactate steady state speed in swimming.
Baron, B; Dekerle, J; Depretz, S; Lefevre, T; Pelayo, P
2005-03-01
The purposes of this study were to ascertain whether physiological and stroking parameters remain stable during a 2-hour exercise performed at the self-selected swimming speed (S4) and whether this speed corresponds to that associated with the maximal lactate steady state (SMLSS). Ten well-trained competitive swimmers performed a maximal 400-m front crawl test and 4 30-min swimming tests in order to determine SMLSS, as well as a 2-hour test swum at their preferred pace to determine the self-selected swimming speed (S4), stroke rate (SR4), and stroke length (SL4), defined as the mean values observed between the 5th and the 15th min of this test. Stroking, metabolic and respiratory parameters, and ratings of perceived exertion (CR10) were recorded throughout the 2-hour test. S4 and SMLSS were not significantly different and were highly correlated (r=0.891). S4 and SL4 decreased significantly after a steady state of 68 min and 100 min, respectively, whereas SR4 remained constant. Mean oxygen uptake (VO2), carbon dioxide output, and heart rate values did not change significantly between the 10th and 120th minute of the test, whereas capillary blood lactate concentration (La) decreased significantly (p<0.05). Moreover, respiratory CR10 did not change significantly between the 10th and the 120th minute of the test, whereas general CR10 and muscular CR10 increased significantly. Considering the variations in (La), SL4 and CR10 values, muscular parameters and probable glycogen depletion seem to be the main limiting factors that prevent maintaining the self-selected swimming speed.
Evaluation and Validation of the Messinger Freezing Fraction
NASA Technical Reports Server (NTRS)
Anderson, David N.; Tsao, Jen-Ching
2005-01-01
One of the most important non-dimensional parameters used in ice-accretion modeling and scaling studies is the freezing fraction defined by the heat-balance analysis of Messinger. For fifty years this parameter has been used to indicate how rapidly freezing takes place when super-cooled water strikes a solid body. The value ranges from 0 (no freezing) to 1 (water freezes immediately on impact), and the magnitude has been shown to play a major role in determining the physical appearance of the accreted ice. Because of its importance to ice shape, this parameter and the physics underlying the expressions used to calculate it have been questioned from time to time. Until now, there has been no strong evidence either validating or casting doubt on the current expressions. This paper presents experimental measurements of the leading-edge thickness of a number of ice shapes for a variety of test conditions with nominal freezing fractions from 0.3 to 1.0. From these thickness measurements, experimental freezing fractions were calculated and compared with values found from the Messinger analysis as applied by Ruff. Within the experimental uncertainty of measuring the leading-edge thickness, agreement of the experimental and analytical freezing fraction was very good. It is also shown that values of analytical freezing fraction were entirely consistent with observed ice shapes at and near rime conditions: At an analytical freezing fraction of unity, experimental ice shapes displayed the classic rime shape, while for conditions producing analytical freezing fractions slightly lower than unity, glaze features started to appear.
New Quality Standards of Testing Idlers for Highly Effective Belt Conveyors
NASA Astrophysics Data System (ADS)
Król, Robert; Gladysiewicz, Lech; Kaszuba, Damian; Kisielewski, Waldemar
2017-12-01
The paper presents results of research and analyses of belt conveyor idlers' rotational resistance, which is one of the key factors indicating idler quality. Moreover, idlers' rotational resistance is an important component of the total resistance to motion of a belt conveyor. The evaluation of the technical condition of belt conveyor idlers is carried out in accordance with current national and international standards, which determine the methodology of measurements and the acceptable values of measured idler parameters. The requirements defined by these standards, which determine the suitability of idlers for a specific application, have maintained the same parameter values over long periods of time, despite the development of knowledge on idlers and the quality of presently manufactured idlers. Nowadays the need to implement new, efficient and economically justified solutions for belt conveyor transportation systems characterized by long routes and energy efficiency is often discussed as one of the goals for belt conveyors' future. One of the basic conditions for achieving this goal is to use only carefully selected idlers with low rotational resistance under the full range of operational loads and with high durability. It is therefore necessary to develop new guidelines for evaluating the technical condition of belt conveyor idlers in accordance with current standards, and to perfect existing and develop new methods of idler testing. The changes should in particular concern updating the parameter values used for evaluation of the technical condition of belt conveyor idlers in relation to operational challenges and the growing demands on belt conveyors' energy efficiency.
NASA Astrophysics Data System (ADS)
Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor
2013-04-01
Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms, with the fitted planes as planar facets. In our study we aim to analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) the SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from the HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied to the residual values, and an evaluation of the dependence of the residual values on the input parameters. These tests were repeated on the real data, supplemented with a categorization of the segmentation result depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted, the residual value distribution being also normal, but in the tests on the real data the residual value distribution is often mixed or unknown. The residual values are found to depend mainly on two input parameters (standard deviation and maximum point-plane distance, both defining distance thresholds for assigning points to a segment), and the curvature of the surface affected the distributions most. The results of the analysis helped to decide which parameter set is best for further modelling and provides the highest accuracy. With these results in mind, quasi-automatic modelling of planar (for example plateau-like) features became more successful and often more accurate. These studies were carried out partly in the framework of the TMIS.ascrea project (Nr. 2001978), financed by the Austrian Research Promotion Agency (FFG); the contribution of ZsK was partly funded by Campus Hungary Internship TÁMOP-424B1.
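For orientation, a minimal least-squares plane fit to a 3D point cloud via SVD is sketched below; this is not the Vienna University of Technology robust segmentation algorithm, only an illustration of plane-parameter estimation and of the point-plane residuals analyzed in the study:

```python
import numpy as np

def fit_plane(points):
    """points: (n, 3) array. Returns unit normal, centroid, signed residuals."""
    centroid = points.mean(axis=0)
    # the right singular vector with the smallest singular value is the normal
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    residuals = (points - centroid) @ normal   # signed point-plane distances
    return normal, centroid, residuals

rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(500, 2))
z = 0.3 * xy[:, 0] - 0.1 * xy[:, 1] + rng.normal(0, 0.1, 500)   # noisy plane
n, c, r = fit_plane(np.column_stack([xy, z]))
print(r.std())   # residual spread, cf. the distance thresholds in the text
```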
NASA Astrophysics Data System (ADS)
Robinson, Bruce H.; Dalton, Larry R.
1980-01-01
The stochastic Liouville equation for the spin density matrix is modified to consider the effects of Brownian anisotropic rotational diffusion upon electron paramagnetic resonance (EPR) and saturation transfer electron paramagnetic resonance (ST-EPR) spectra. Spectral shapes and the ST-EPR parameters L″/L, C'/C, and H″/H defined by Thomas, Dalton, and Hyde at X-band microwave frequencies [J. Chem. Phys. 65, 3006 (1976)] are examined and discussed in terms of the rotational times τ∥ and τ⊥ and in terms of other defined correlation times, for systems characterized by magnetic tensors of axial symmetry and for systems characterized by nonaxially symmetric magnetic tensors. For nearly axially symmetric magnetic tensors, such as nitroxide spin labels studied employing 1-3 GHz microwaves, ST-EPR spectra for systems undergoing anisotropic rotational diffusion are virtually indistinguishable from spectra for systems characterized by isotropic diffusion. For nonaxially symmetric magnetic tensors, such as nitroxide spin labels studied employing 8-35 GHz microwaves, the high-field region of the ST-EPR spectra, and hence the H″/H parameter, will be virtually indistinguishable from the spectra, and parameter values, obtained for isotropic diffusion. On the other hand, the central spectral region at X-band microwave frequencies, and hence the C'/C parameter, is sensitive to the anisotropic diffusion model provided that a unique and static relationship exists between the magnetic and diffusion tensors. Random labeling, or motion of the spin label relative to the biomolecule whose hydrodynamic properties are to be investigated, will destroy spectral sensitivity to anisotropic motion. The sensitivity to anisotropic motion is enhanced in proceeding to 35 GHz, with the increased sensitivity evident in the low-field half of the EPR and ST-EPR spectra. The L″/L parameter is thus a meaningful indicator of anisotropic motion when compared with H″/H parameter analysis. However, consideration of spectral shapes suggests that the C'/C parameter definition is not meaningfully extended from 9.5 to 35 GHz. Alternative definitions of the L″/L and C'/C parameters are proposed for those microwave frequencies at which the electron Zeeman anisotropy is comparable to or greater than the electron-nitrogen nuclear hyperfine anisotropy.
Cihangiroglu, M M; Ozturk-Isik, E; Firat, Z; Kilickesmez, O; Ulug, A M; Ture, U
2017-03-01
The goal of this study was to compare diffusion-weighted magnetic resonance imaging (DW-MRI) using a high b-value (b = 3000 s/mm²) with DW-MRI using a standard b-value (b = 1000 s/mm²) in the preoperative grading of supratentorial gliomas. Fifty-three patients with glioma had brain DW-MRI at 3 T using two different b-values (b = 1000 s/mm² and b = 3000 s/mm²). There were 35 men and 18 women with a mean age of 40.5±17.1 years (range: 18-79 years). Mean, minimum, maximum, and range of apparent diffusion coefficient (ADC) values for solid tumor ROIs (ADCmean, ADCmin, ADCmax, and ADCdiff), and the normalized ADC (ADCratio), were calculated. A Kruskal-Wallis statistic with Bonferroni correction for multiple comparisons was applied to detect significant ADC parameter differences between tumor grades, by including or excluding 19 patients with an oligodendroglioma. Receiver operating characteristic curve analysis was conducted to define appropriate cutoff values for grading gliomas. No differences in ADC-derived parameters were found between grade II and grade III gliomas. Mean ADC values using the standard b-value were 1.17±0.27×10⁻³ mm²/s [range: 0.63-1.61], 1.05±0.22×10⁻³ mm²/s [range: 0.73-1.33], and 0.86±0.23×10⁻³ mm²/s [range: 0.52-1.46] for grades II, III and IV gliomas, respectively. Using the high b-value, mean ADC values were 0.89±0.24×10⁻³ mm²/s [range: 0.42-1.25], 0.82±0.20×10⁻³ mm²/s [range: 0.56-1.10], and 0.59±0.17×10⁻³ mm²/s [range: 0.40-1.01] for grades II, III and IV gliomas, respectively. ADCmean, ADCratio, ADCmax, and ADCmin differed between grade II and grade IV gliomas at both standard and high b-values. Differences in ADCmean, ADCmax, and ADCdiff were found between grade III and grade IV only using the high b-value. ADC parameters derived from DW-MRI using a high b-value allow a better differential diagnosis of gliomas, especially for differentiating grades III and IV, than those derived from DW-MRI using a standard b-value. Copyright © 2016 Éditions françaises de radiologie. Published by Elsevier Masson SAS. All rights reserved.
Integrating economic parameters into genetic selection for Large White pigs.
Dube, Bekezela; Mulugeta, Sendros D; Dzama, Kennedy
2013-08-01
The objective of the study was to integrate economic parameters into genetic selection for sow productivity, growth performance and carcass characteristics in South African Large White pigs. Simulation models for sow productivity and terminal production systems were performed based on a hypothetical 100-sow herd, to derive economic values for the economically relevant traits. The traits included in the study were number born alive (NBA), 21-day litter size (D21LS), 21-day litter weight (D21LWT), average daily gain (ADG), feed conversion ratio (FCR), age at slaughter (AGES), dressing percentage (DRESS), lean content (LEAN) and backfat thickness (BFAT). Growth of a pig was described by the Gompertz growth function, while feed intake was derived from the nutrient requirements of pigs at the respective ages. Partial budgeting and partial differentiation of the profit function were used to derive economic values, which were defined as the change in profit per unit genetic change in a given trait. The respective economic values (ZAR) were: 61.26, 38.02, 210.15, 33.34, -21.81, -68.18, 5.78, 4.69 and -1.48. These economic values indicated the direction and emphases of selection, and were sensitive to changes in feed prices and marketing prices for carcasses and maiden gilts. Economic values for NBA, D21LS, DRESS and LEAN decreased with increasing feed prices, suggesting a point where genetic improvement would be a loss, if feed prices continued to increase. The economic values for DRESS and LEAN increased as the marketing prices for carcasses increased, while the economic value for BFAT was not sensitive to changes in all prices. Reductions in economic values can be counterbalanced by simultaneous increases in marketing prices of carcasses and maiden gilts. Economic values facilitate genetic improvement by translating it to proportionate profitability. Breeders should, however, continually recalculate economic values to place the most appropriate emphases on the respective traits during genetic selection.
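In symbols (our notation, not the authors'), the definition used above, the change in profit per unit genetic change in a trait, is the partial derivative of the profit function evaluated at the population mean:

$$ a_i = \left. \frac{\partial P(t_1, \ldots, t_k)}{\partial t_i} \right|_{\mathbf{t} = \bar{\mathbf{t}}} , $$

where P is herd profit and t_i is the i-th trait (e.g., NBA or ADG); recomputing the a_i as feed and marketing prices move is exactly the sensitivity analysis the study reports.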
NASA Astrophysics Data System (ADS)
Li, Yue; Bai, Xiao Yong; Jie Wang, Shi; Qin, Luo Yi; Chao Tian, Yi; Jie Luo, Guang
2017-05-01
Soil loss tolerance (T value) is one of the criteria used in determining the necessity of erosion control measures and ecological restoration strategy. However, the validity of this criterion in subtropical karst regions is strongly disputed. In this study, the T value is calculated based on the soil formation rate by using a digital distribution map of carbonate rock assemblage types. Results indicated a spatial heterogeneity and diversity in soil loss tolerance. Instead of only one criterion, a minimum of three criteria should be considered when investigating the carbonate areas of southern China, because the "one region, one T value" concept may not be applicable to this region. The T value is proportionate to the amount of argillaceous material, which determines the surface soil thickness of the formations in homogenous carbonate rock areas. Homogenous carbonate rock areas, carbonate rock intercalated with clastic rock areas, and carbonate/clastic rock alternation areas have T values of 20, 50 and 100 t/(km² a), and they are extremely, severely and moderately sensitive to soil erosion, respectively. Karst rocky desertification (KRD) is defined as extreme soil erosion and reflects the risks of erosion. Thus, the relationship between T value and erosion risk is determined using KRD as a parameter. The existence of KRD land is unrelated to the T value, although this parameter indicates erosion sensitivity. Erosion risk is strongly dependent on the relationship between real soil loss (RL) and the T value, rather than on either erosion intensity or the T value itself. If RL >> T, then the erosion risk is high even if RL itself is low. Conversely, if T >> RL, then the soil is safe even though RL is high. Overall, these findings may clarify the heterogeneity of the T value and its effect on erosion risk in a karst environment.
Wáng, Yì Xiáng J; Li, Yáo T; Chevallier, Olivier; Huang, Hua; Leung, Jason Chi Shun; Chen, Weitian; Lu, Pu-Xuan
2018-01-01
Background: Intravoxel incoherent motion (IVIM) tissue parameters depend on the threshold b-value. Purpose: To explore how the threshold b-value impacts PF (f), Dslow (D), and Dfast (D*) values and their performance for liver fibrosis detection. Material and Methods: Fifteen healthy volunteers and 33 hepatitis B patients were included. With a 1.5-T magnetic resonance (MR) scanner and respiration gating, IVIM data were acquired with ten b-values of 10, 20, 40, 60, 80, 100, 150, 200, 400, and 800 s/mm². Signal measurement was performed on the right liver. Segmented-unconstrained analysis was used to compute IVIM parameters, and six threshold b-values in the range of 40-200 s/mm² were compared. PF, Dslow, and Dfast values were placed along the x-axis, y-axis, and z-axis, and a plane was defined to separate volunteers from patients. Results: Higher threshold b-values were associated with higher PF measurements, while lower threshold b-values led to higher Dslow and Dfast measurements. The dependence of PF, Dslow, and Dfast on the threshold b-value differed between healthy livers and fibrotic livers, with the healthy livers showing a higher dependence. A threshold b-value of 60 s/mm² showed the largest mean distance between healthy-liver datapoints and fibrotic-liver datapoints, and a classification and regression tree showed that a combination of PF (PF < 9.5%), Dslow (Dslow < 1.239×10⁻³ mm²/s), and Dfast (Dfast < 20.85×10⁻³ mm²/s) differentiated healthy individuals from all individual fibrotic livers with an area under the curve of logistic regression (AUC) of 1. Conclusion: For segmented-unconstrained analysis, the selection of a threshold b-value of 60 s/mm² improves IVIM differentiation between healthy livers and fibrotic livers.
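A two-step segmented IVIM fit in the spirit described above (an assumed variant; the paper's exact "segmented-unconstrained" implementation may differ): above the threshold b-value the signal is treated as mono-exponential in Dslow, then the perfusion compartment is fitted on the full curve:

```python
import numpy as np
from scipy.optimize import curve_fit

b = np.array([10, 20, 40, 60, 80, 100, 150, 200, 400, 800.0])  # s/mm^2

def fit_ivim(signal, b, threshold=60.0):
    hi = b > threshold
    # Step 1: log-linear fit of the high-b tail: S ~ C * exp(-b * D_slow)
    slope, logC = np.polyfit(b[hi], np.log(signal[hi]), 1)
    d_slow, C = -slope, np.exp(logC)
    # Step 2: fit the perfusion compartment P * exp(-b * D_fast) on all b
    def model(bv, P, d_fast):
        return C * np.exp(-bv * d_slow) + P * np.exp(-bv * d_fast)
    (P, d_fast), _ = curve_fit(model, b, signal, p0=(0.1 * C, 0.02),
                               bounds=([0.0, 0.0], [np.inf, 1.0]))
    pf = P / (C + P)                      # perfusion fraction at b = 0
    return pf, d_slow, d_fast

# synthetic check: PF = 0.12, D_slow = 1.1e-3, D_fast = 0.03 (mm^2/s)
signal = 0.88 * np.exp(-b * 1.1e-3) + 0.12 * np.exp(-b * 0.03)
print(fit_ivim(signal, b))   # recovers the three parameters approximately
```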
Omori's Law Applied to Mining-Induced Seismicity and Re-entry Protocol Development
NASA Astrophysics Data System (ADS)
Vallejos, J. A.; McKinnon, S. D.
2010-02-01
This paper describes a detailed study of the Modified Omori law n(t) = K/(c + t)^p applied to 163 mining-induced aftershock sequences from four different mine environments in Ontario, Canada. We demonstrate, using a rigorous statistical analysis, that this equation can be adequately used to describe the decay rate of mining-induced aftershock sequences. The parameters K, p and c are estimated using a uniform method that employs the maximum likelihood procedure and the Anderson-Darling statistic. To estimate consistent decay parameters, the method considers only the time interval that satisfies power-law behavior. The p value differs from sequence to sequence, with most (98%) ranging from 0.4 to 1.6. The parameter K can be satisfactorily expressed by K = κN1, where κ is an activity ratio and N1 is the measured number of events occurring during the first hour after the principal event. The average κ values are in a well-defined range: theoretically κ ≤ 0.8, and empirically κ ∈ [0.3-0.5]. These two findings enable us to develop a real-time event rate re-entry protocol 1 h after the principal event. Despite the fact that the Omori formula is temporally self-similar, we found a characteristic time T_MC at the maximum curvature point, which is a function of Omori's law parameters. For a time sequence obeying an Omori process, T_MC marks the transition from the highest to the lowest event rate change. Using solely the aftershock decay rate, therefore, we recommend T_MC as a preliminary estimate of the time at which it may be considered appropriate to re-enter an area affected by a blast or large event. We found that T_MC can be estimated without specifying a p value by the expression T_MC = aN1^{1/b}, where a and b are two parameters dependent on local conditions. Both parameters presented well-constrained empirical ranges for the sites analyzed: a ∈ [0.3-0.5] and b ∈ [0.5-0.7]. These findings provide concise and well-justified guidelines for event rate re-entry protocol development.
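A small numerical sketch of the decay law and the proposed re-entry estimate; the parameter values below are mid-range picks from the empirical intervals quoted above, chosen for illustration only:

```python
def omori_rate(t, K, c, p):
    """Modified Omori aftershock rate n(t) = K / (c + t)**p."""
    return K / (c + t) ** p

N1 = 10                      # events observed in the first hour (illustrative)
kappa = 0.4                  # activity ratio, empirically in [0.3, 0.5]
K = kappa * N1               # K = kappa * N1, as reported above
a, b = 0.4, 0.6              # site parameters: a in [0.3-0.5], b in [0.5-0.7]
T_MC = a * N1 ** (1 / b)     # preliminary re-entry time [h], no p required
print(f"T_MC ~ {T_MC:.1f} h; rate then ~ {omori_rate(T_MC, K, c=0.1, p=1.0):.2f}/h")
```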
Soil conservation service curve number: How to take into account spatial and temporal variability
NASA Astrophysics Data System (ADS)
Rianna, M.; Orlando, D.; Montesarchio, V.; Russo, F.; Napolitano, F.
2012-09-01
The most commonly used method to evaluate rainfall excess is the Soil Conservation Service (SCS) runoff curve number model. This method is based on the determination of the CN value, which is linked with a hydrological soil group, cover type, treatment, hydrologic condition and antecedent runoff condition. To calculate the antecedent runoff condition, the standard procedure calculates the rainfall over the entire basin during the five days preceding the beginning of the event to be simulated, and then uses that volume of rainfall to calculate the antecedent moisture condition (AMC). This is necessary in order to obtain the correct curve number value. The value of the modified parameter is then kept constant throughout the whole event. The aim of this work is to evaluate the possibility of improving the curve number method. The various assumptions focus on modifying those related to rainfall and to the determination of the AMC condition, and on their role in determining the value of the curve number parameter. In order to account for spatial variability, we assumed that the rainfall which influences the AMC and the CN value is not the rainfall over the entire basin, but the rainfall within each single cell into which the basin domain is discretized. Furthermore, in order to account for the temporal variability of rainfall, we assumed that the CN value of a single cell is not kept constant during the whole event, but instead varies throughout it according to the time interval used to define the AMC conditions.
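For reference, the textbook SCS curve number relation that the method rests on (standard form with the usual initial abstraction Ia = 0.2S; stated here as background, not taken from this study):

```python
def scs_runoff(p_mm, cn):
    """Rainfall excess Q [mm] from event rainfall P [mm] and curve number CN."""
    s = 25400.0 / cn - 254.0          # potential maximum retention [mm]
    ia = 0.2 * s                      # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: the same 50 mm storm on a dry (low CN) vs. wet (high CN) catchment
print(scs_runoff(50.0, cn=60), scs_runoff(50.0, cn=78))
```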
Tang, Qi; Li, Qiang; Xie, Dong; Chu, Ketao; Liu, Lidong; Liao, Chengcheng; Qin, Yunying; Wang, Zheng; Su, Danke
2018-05-21
This study aimed to investigate the utility of a volumetric apparent diffusion coefficient (ADC) histogram method for distinguishing non-puerperal mastitis (NPM) from breast cancer (BC), and to compare this method with a traditional 2-dimensional measurement method. Pretreatment diffusion-weighted imaging data at 3.0 T were obtained for 80 patients (NPM, n = 27; BC, n = 53) and were retrospectively assessed. Two readers measured ADC values according to 2 distinct region-of-interest (ROI) protocols. The first protocol included the generation of ADC histograms for each lesion, and various parameters were examined. In the second protocol, 3 freehand (TF) ROIs for local lesions were generated to obtain a mean ADC value (defined as ADC-ROITF). All of the ADC values were compared by an independent-samples t test or the Mann-Whitney U test. Receiver operating characteristic curves and a leave-one-out cross-validation method were also used to determine the diagnostic performance of the significant parameters. The ADC values for NPM were characterized by significantly higher mean, 5th to 95th percentile, and maximum and mode ADCs compared with the corresponding ADCs for BC (all P < 0.05). However, the minimum, skewness, and kurtosis ADC values, as well as ADC-ROITF, did not differ significantly between the NPM and BC cases. Thus, the generation of volumetric ADC histograms appears to be superior to the traditional 2-dimensional method examined here, and it represents a promising image analysis method for distinguishing NPM from BC.
Technical note: Design flood under hydrological uncertainty
NASA Astrophysics Data System (ADS)
Botto, Anna; Ganora, Daniele; Claps, Pierluigi; Laio, Francesco
2017-07-01
Planning and verification of hydraulic infrastructures require a design estimate of hydrologic variables, usually provided by frequency analysis, and neglecting hydrologic uncertainty. However, when hydrologic uncertainty is accounted for, the design flood value for a specific return period is no longer a unique value, but is represented by a distribution of values. As a consequence, the design flood is no longer univocally defined, making the design process undetermined. The Uncertainty Compliant Design Flood Estimation (UNCODE) procedure is a novel approach that, starting from a range of possible design flood estimates obtained in uncertain conditions, converges to a single design value. This is obtained through a cost-benefit criterion with additional constraints that is numerically solved in a simulation framework. This paper contributes to promoting a practical use of the UNCODE procedure without resorting to numerical computation. A modified procedure is proposed by using a correction coefficient that modifies the standard (i.e., uncertainty-free) design value on the basis of sample length and return period only. The procedure is robust and parsimonious, as it does not require additional parameters with respect to the traditional uncertainty-free analysis. Simple equations to compute the correction term are provided for a number of probability distributions commonly used to represent the flood frequency curve. The UNCODE procedure, when coupled with this simple correction factor, provides a robust way to manage the hydrologic uncertainty and to go beyond the use of traditional safety factors. With all the other parameters being equal, an increase in the sample length reduces the correction factor, and thus the construction costs, while still keeping the same safety level.
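In symbols (our notation, hedged), the modified procedure amounts to

$$ q_d(T) = C(n, T)\, \hat{q}(T), $$

where q̂(T) is the uncertainty-free design flood for return period T, n is the sample length, and C(n, T) is the correction coefficient given by the paper's simple equations, decreasing toward 1 as n grows.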
NASA Astrophysics Data System (ADS)
Decker, K. T.; Everett, M. E.
2009-12-01
The Edwards aquifer lies in the structurally complex Balcones fault zone and supplies water to the growing city of San Antonio. To ensure that future demands for water are met, the hydrological and geophysical properties of the aquifer must be well understood. In most settings, fracture lengths and displacements occur in power-law distributions. Fracture distribution plays an important role in determining electrical and hydraulic current flowpaths. 1-D synthetic models are developed of the controlled-source electromagnetic (CSEM) response for layered models with a fractured layer at depth, described by the roughness parameter βV, with 0 ≤ βV < 1, associated with the power-law length-scale dependence of electrical conductivity. A value of βV = 0 represents homogeneous, continuous media, while a value of 0 < βV < 1 indicates that roughness exists. The Seco Creek frequency-domain helicopter electromagnetic survey data set is analyzed by introducing the similarly defined roughness parameter βH to detect lateral roughness along survey lines. Fourier transforming the apparent resistivity as a function of position along each flight line into the wavenumber domain, using a 256-point sliding window, gives the power spectral density (PSD) plot for each line. The value of βH is the slope of the least squares regression for the PSD in each 256-point window. Changes in βH with distance along the flight line are plotted. Large values of βH are found near well-known large fractures, and maps of βH produced by interpolating values of βH along survey lines suggest previously undetected structure at depth.
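A sketch of the sliding-window βH estimate described above; the window handling and the sign convention (βH taken as the negative of the log-log PSD slope so that βH ≥ 0 for power-law decay) are our assumptions:

```python
import numpy as np

def beta_h(profile, window=256, step=64):
    """profile: 1-D numpy array of apparent resistivity along a flight line."""
    betas = []
    for start in range(0, len(profile) - window + 1, step):
        seg = profile[start:start + window]
        seg = seg - seg.mean()                   # remove the zero-wavenumber level
        psd = np.abs(np.fft.rfft(seg)) ** 2      # power spectral density
        k = np.fft.rfftfreq(window)[1:]          # drop the zero wavenumber
        slope, _ = np.polyfit(np.log(k), np.log(psd[1:]), 1)
        betas.append(-slope)                     # log-log regression slope
    return np.array(betas)

rng = np.random.default_rng(3)
print(beta_h(rng.standard_normal(1024)).round(2))  # toy line, ~0 for white noise
```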
Reznicek, Lukas; Muth, Daniel; Vogel, Michaela; Hirneiß, Christoph
2017-03-01
To evaluate the relationship between functional parameters of repeated flicker-defined form perimetry (FDF) and structural parameters of spectral-domain optical coherence tomography (SD-OCT) in glaucoma suspects with normal findings in achromatic standard automated perimetry (SAP). Patients with optic nerve heads (ONH) clinically suspicious for glaucoma and normal SAP findings were enrolled in this prospective study. Each participant underwent visual field (VF) testing with FDF perimetry, using the Heidelberg Edge Perimeter (HEP, Heidelberg Engineering, Heidelberg, Germany), at two consecutive visits. Peripapillary RNFL thickness was obtained by SD-OCT (Spectralis, Heidelberg Engineering, Heidelberg, Germany). Correlations and regression analyses of global and sectoral peripapillary RNFL thickness with the corresponding global and regional VF sensitivities were investigated. A consecutive series of 65 study eyes of 36 patients was prospectively included. The second FDF test (HEP II) was used for analysis. Cluster-point-based suspicious VF defects were found in 34 eyes (52%). Significant correlations were observed between the mean global MD and PSD of HEP II and SD-OCT-based global peripapillary RNFL thickness (r = 0.380, p = 0.003 for MD and r = -0.516, p < 0.001 for PSD) and RNFL classification scores (R² = 0.157, p = 0.002 for MD and R² = 0.172, p = 0.001 for PSD). Correlations between the mean global MD and PSD of HEP II and sectoral peripapillary RNFL thickness and classification scores showed the highest correlations between function and structure for the temporal superior and temporal inferior sectors, whereas sectoral MD and PSD correlated more weakly with sectoral RNFL thickness. Correlations between linear RNFL values and untransformed logarithmic MD values for each segment were less significant than correlations between logarithmic MD values and RNFL thickness. In glaucoma suspects with normal SAP, global and sectoral peripapillary RNFL thickness is correlated with sensitivity and VF defects in FDF perimetry.
Kline, David I; Teneva, Lida; Hauri, Claudine; Schneider, Kenneth; Miard, Thomas; Chai, Aaron; Marker, Malcolm; Dunbar, Rob; Caldeira, Ken; Lazar, Boaz; Rivlin, Tanya; Mitchell, Brian Gregory; Dove, Sophie; Hoegh-Guldberg, Ove
2015-01-01
Understanding the temporal dynamics of present thermal and pH exposure on coral reefs is crucial for elucidating reef response to future global change. Diel ranges in temperature and carbonate chemistry parameters, coupled with seasonal changes in the mean conditions, define periods during the year when a reef habitat is exposed to anomalous thermal and/or pH exposure. Anomalous conditions are defined as values that exceed an empirically estimated threshold for each variable. We present a 200-day time series from June through December 2010 of carbonate chemistry and environmental parameters measured on the Heron Island reef flat. These data reveal that aragonite saturation state, pH, and pCO2 were primarily modulated by biologically driven changes in dissolved inorganic carbon (DIC) and total alkalinity (TA), rather than salinity and temperature. The largest diel temperature ranges occurred in austral spring, in October (1.5-6.6°C), and the lowest diel ranges (0.9-3.2°C) were observed in July, at the peak of winter. We observed large diel total pH variability, with a maximum range of 7.7-8.5 total pH units, with minimum diel average pH values occurring during spring and maximum during fall. As with many other reefs, the nighttime pH minima on the reef flat were far lower than pH values predicted for the open ocean by 2100. DIC and TA both increased from June (end of fall) to December (end of spring). Using this high-resolution dataset, we developed exposure metrics of pH and temperature individually for intensity, duration, and severity of low pH and high temperature events, as well as a combined metric. Periods of anomalous temperature and pH exposure were asynchronous on the Heron Island reef flat, which underlines the importance of understanding the dynamics of co-occurrence of multiple stressors on coastal ecosystems.
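A sketch of threshold-based exposure metrics of the kind described (intensity, duration, and severity of excursions above an anomaly threshold); the exact definitions below are paraphrased assumptions, not the authors' code:

```python
import numpy as np

def exposure_metrics(series, threshold, dt_hours=1.0):
    """Intensity, duration and severity of excursions above `threshold`."""
    exceed = np.clip(series - threshold, 0.0, None)   # exceedance magnitude
    above = exceed > 0
    duration = above.sum() * dt_hours                 # hours above threshold
    intensity = exceed[above].mean() if above.any() else 0.0
    severity = exceed.sum() * dt_hours                # degree-hours analogue
    return intensity, duration, severity

temp = 26 + 2 * np.sin(np.linspace(0, 40 * np.pi, 4800))  # synthetic diel cycle
print(exposure_metrics(temp, threshold=27.5))
```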
Kline, David I.; Teneva, Lida; Hauri, Claudine; Schneider, Kenneth; Miard, Thomas; Chai, Aaron; Marker, Malcolm; Dunbar, Rob; Caldeira, Ken; Lazar, Boaz; Rivlin, Tanya; Mitchell, Brian Gregory; Dove, Sophie; Hoegh-Guldberg, Ove
2015-01-01
Understanding the temporal dynamics of present thermal and pH exposure on coral reefs is crucial for elucidating reef response to future global change. Diel ranges in temperature and carbonate chemistry parameters, coupled with seasonal changes in the mean conditions, define periods during the year when a reef habitat is exposed to anomalous thermal and/or pH exposure. Anomalous conditions are defined as values that exceed an empirically estimated threshold for each variable. We present a 200-day time series, from June through December 2010, of carbonate chemistry and environmental parameters measured on the Heron Island reef flat. These data reveal that aragonite saturation state, pH, and pCO2 were primarily modulated by biologically driven changes in dissolved inorganic carbon (DIC) and total alkalinity (TA), rather than by salinity and temperature. The largest diel temperature ranges occurred in austral spring, in October (1.5–6.6°C), and the lowest diel ranges (0.9–3.2°C) were observed in July, at the peak of winter. We observed large diel total pH variability, with a maximum range of 7.7–8.5 total pH units, with minimum diel average pH values occurring during spring and maximum during fall. As with many other reefs, the nighttime pH minima on the reef flat were far lower than pH values predicted for the open ocean by 2100. DIC and TA both increased from June (end of fall) to December (end of spring). Using this high-resolution dataset, we developed exposure metrics of pH and temperature individually for the intensity, duration, and severity of low-pH and high-temperature events, as well as a combined metric. Periods of anomalous temperature and pH exposure were asynchronous on the Heron Island reef flat, which underlines the importance of understanding the dynamics of co-occurring multiple stressors on coastal ecosystems. PMID:26039687
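A minimal sketch of threshold-exceedance exposure metrics of the kind described, for a single variable: intensity (worst anomaly), duration (time above threshold), and severity (time-integrated exceedance). The hourly series, the threshold, and the exact severity definition are illustrative assumptions, not the authors' formulation:

import numpy as np

def exposure_metrics(series, threshold):
    """Intensity, duration, and severity of exceedance events in a time series.

    series    : 1-D array of observations at regular intervals
    threshold : values above this count as anomalous exposure
    Returns (max_intensity, duration_steps, severity), where severity
    integrates the exceedance over time (e.g. degree-hours).
    """
    exceed = np.clip(series - threshold, 0.0, None)  # exceedance above threshold
    max_intensity = exceed.max()                     # worst single-step anomaly
    duration = int((exceed > 0).sum())               # number of anomalous steps
    severity = exceed.sum()                          # time-integrated exceedance
    return max_intensity, duration, severity

# Hypothetical hourly reef-flat temperatures (°C) and a 30.0 °C threshold
temps = np.array([28.5, 29.8, 30.6, 31.2, 30.1, 29.4])
print(exposure_metrics(temps, 30.0))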
Kopans, Daniel B
2008-02-01
Numerous studies have suggested a link between breast tissue patterns, as defined with mammography, and risk for breast cancer. There may be a relationship, but the author believes all of these studies have methodological flaws. It is impossible, with the parameters used in these studies, to accurately measure the percentage of tissues by volume when two-dimensional x-ray mammographic images are used. Without exposure values, half-value layer information, and knowledge of the compressed thickness of the breast, an accurate volume of tissue cannot be calculated. The great variability in positioning the breast for a mammogram is also an uncontrollable factor in measuring tissue density. Computerized segmentation algorithms can accurately assess the percentage of the x-ray image that is "dense," but this does not accurately measure the true volume of tissue. Since the percentage of dense tissue is ultimately measured in relation to the complete volume of the breast, defining the true boundaries of the breast is also a problem. Studies that purport to show small percentage differences between groups are likely inaccurate. Future investigations need to use three-dimensional information. (c) RSNA, 2008.
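A toy calculation illustrating the projection problem raised here: the same percent dense area on a two-dimensional image is compatible with very different volumetric densities, depending on how much of the compressed thickness the dense tissue spans (all numbers invented for illustration):

# A region can look "dense" on the projection whether the dense tissue spans
# the full compressed thickness or only part of it; the 2-D image cannot tell.
area_total, area_dense, thickness = 100.0, 30.0, 5.0   # cm², cm², cm
for depth_fraction in (1.0, 0.4):      # unknown without exposure / HVL data
    dense_volume = area_dense * thickness * depth_fraction
    volume_percent = dense_volume / (area_total * thickness)
    print(f"area percent = 30%, volume percent = {volume_percent:.0%}")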
NASA Astrophysics Data System (ADS)
Ralea, Daniel; Marginean, Raluca-Maria; Marzu, Marinica
1998-07-01
The algorithm presented in this paper proposes a way to find the optimum glasses that assure a better correction for optical apparatus with the human eye as the final receiver. The model (Ne, v1, v2), based on the Buchdahl formula, gives an approximation error for the refractive index of less than 5×10⁻⁵ over the visible domain. We introduced into the merit function used for optimizing the optical system an operand that describes the existence of an optical glass. This operand was defined so that the obtained values of Ne, v1, and v2 are close to those of a real glass. One definition of this operand uses PNe, Pv1, and Pv2, the probabilities of existence of a glass with a given parameter Ne, v1, or v2. Another possibility is to describe the volume occupied by the optical glasses in (Ne, v1, v2) space with elliptical functions. The probabilities and the elliptical functions were found by analyzing all optical glasses listed in the Schott catalogues.
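A minimal sketch of a glass-existence operand of the kind described, assuming a hypothetical catalogue of real glasses as points in (Ne, v1, v2) space and penalizing the weighted distance to the nearest entry (catalogue values and weights are invented for illustration, and this distance form is one option, not the paper's exact definition):

import numpy as np

# Hypothetical catalogue of real glasses as points in (Ne, v1, v2) space
catalogue = np.array([
    [1.5168, 64.2, 0.53],   # illustrative entries, not real Schott data
    [1.6200, 36.3, 0.58],
    [1.7550, 27.5, 0.61],
])

def existence_operand(ne, v1, v2, weights=(1.0, 0.01, 1.0)):
    """Penalty that is small when (ne, v1, v2) lies near a real glass.

    Weighted Euclidean distance to the nearest catalogue glass; adding this
    operand to the merit function pulls the optimized indices toward glasses
    that actually exist.
    """
    w = np.asarray(weights)
    point = np.array([ne, v1, v2])
    d = np.sqrt((((catalogue - point) * w) ** 2).sum(axis=1))
    return d.min()

print(existence_operand(1.52, 63.0, 0.54))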
Adjusted neutropenia is associated with early serious infection in systemic lupus erythematosus.
Lee, Sang-Won; Park, Min-Chan; Lee, Soo-Kon; Park, Yong-Beom
2013-05-01
The susceptibility to infection increases in systemic lupus erythematosus (SLE) patients with neutropenia, but the link between infection risk and the cutoff neutrophil count remains controversial. In this study, we investigated a parameter associated with early serious infection in SLE patients during the first follow-up year. We reviewed the medical records of 160 patients with SLE. The initial levels were defined as the mean of the results of the first two consecutive tests. The adjusted levels were defined as the accumulated area under the curve divided by the interval follow-up days. Patients were grouped according to early serious infection and to initial and adjusted neutropenia, and the groups were compared. Immunosuppressive-naïve SLE patients with early serious infection more frequently had initial, latest, and adjusted leukopenia and neutropenia (<2,500/mm³) and hypocomplementemia than those without. Adjusted neutropenia was the only independent predictor of early serious infection [odds ratio (OR) 11.366]. Initial neutropenia was an independent predictor of adjusted neutropenia (OR 6.504). We suggest that adjusted neutropenia is useful for predicting early serious infection in SLE patients during the first follow-up year.
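A minimal sketch of the "adjusted level" as defined above: the accumulated (trapezoidal) area under serial measurements divided by the follow-up interval in days. Dates and counts are hypothetical:

import numpy as np

def adjusted_level(days, counts):
    """Time-weighted mean: trapezoidal AUC divided by the follow-up span.

    days   : observation times in days since the first test, ascending
    counts : measured values (e.g. neutrophil counts per mm³) at those times
    """
    # accumulated area under the curve (trapezoid rule)
    auc = np.sum(0.5 * (counts[1:] + counts[:-1]) * np.diff(days))
    return auc / (days[-1] - days[0])   # divide by interval follow-up days

# Hypothetical serial neutrophil counts over the first follow-up year
days   = np.array([0, 30, 120, 250, 365])
counts = np.array([2800, 2100, 1900, 2400, 2600])
print(f"adjusted count = {adjusted_level(days, counts):.0f}/mm³")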
Fusion of AIRSAR and TM Data for Parameter Classification and Estimation in Dense and Hilly Forests
NASA Technical Reports Server (NTRS)
Moghaddam, Mahta; Dungan, J. L.; Coughlan, J. C.
2000-01-01
The expanded remotely sensed data space consisting of coincident radar backscatter and optical reflectance data provides for a more complete description of the Earth surface. This is especially useful where many parameters are needed to describe a certain scene, such as in the presence of dense and complex-structured vegetation or where there is considerable underlying topography. The goal of this paper is to use a combination of radar and optical data to develop a methodology for parameter classification for dense and hilly forests, and further, class-specific parameter estimation. The area to be used in this study is the H. J. Andrews Forest in Oregon, one of the Long-Term Ecological Research (LTER) sites in the US. This area consists of various dense old-growth conifer stands, and contains significant topographic relief. The Andrews forest has been the subject of many ecological studies over several decades, resulting in an abundance of ground measurements. Recently, biomass and leaf-area index (LAI) values for approximately 30 reference stands have also become available which span a large range of those parameters. The remote sensing data types to be used are the C-, L-, and P-band polarimetric radar data from the JPL airborne SAR (AIRSAR), the C-band single-polarization data from the JPL topographic SAR (TOPSAR), and the Thematic Mapper (TM) data from Landsat, all acquired in late April 1998. The total number of useful independent data channels from the AIRSAR is 15 (three frequencies, each with three unique polarizations and amplitude and phase of the like-polarized correlation), from the TOPSAR is 2 (amplitude and phase of the interferometric correlation), and from the TM is 6 (the thermal band is not used). The range pixel spacing of the AIRSAR is 3.3m for C- and L-bands and 6.6m for P-band. The TOPSAR pixel spacing is 10m, and the TM pixel size is 30m. To achieve parameter classification, first a number of parameters are defined which are of interest to ecologists for forest process modeling. These parameters include total biomass, leaf biomass, LAI, and tree height. The remote sensing data from radar and TM are used to formulate a multivariate analysis problem given the ground measurements of the parameters. Each class of each parameter is defined by a probability density function (pdf), the spread of which defines the range of that class. High classification accuracy results from situations in which little overlap occurs between pdfs. Classification results provide the basis for the future work of class-specific parameter estimation using radar and optical data. This work was performed in part by the Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, and in part by the NASA Ames Research Center, Moffett Field, CA, both under contract from the National Aeronautics and Space Administration.
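A minimal sketch of pdf-based class assignment in the spirit described: each parameter class is represented by a probability density over the stacked radar/optical channels, and a pixel goes to the class of highest density. The Gaussian form, channel count, and class statistics are illustrative assumptions, not the paper's fitted pdfs:

import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical class statistics over d stacked AIRSAR/TOPSAR/TM channels,
# estimated from ground-measured reference stands (illustrative values)
d = 3
classes = {
    "low_biomass":  (np.array([0.1, 0.2, 0.3]), np.eye(d) * 0.02),
    "high_biomass": (np.array([0.4, 0.5, 0.6]), np.eye(d) * 0.05),
}

def classify(pixel):
    """Assign a pixel vector to the class whose pdf gives it highest density.
    Heavy overlap between class pdfs corresponds to low classification accuracy."""
    scores = {name: multivariate_normal(mean, cov).pdf(pixel)
              for name, (mean, cov) in classes.items()}
    return max(scores, key=scores.get)

print(classify(np.array([0.35, 0.45, 0.55])))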
Safayi, Sina; Jeffery, Nick D; Shivapour, Sara K; Zamanighomi, Mahdi; Zylstra, Tyler J; Bratsch-Prince, Joshua; Wilson, Saul; Reddy, Chandan G; Fredericks, Douglas C; Gillies, George T; Howard, Matthew A
2015-11-15
We are developing a novel intradural spinal cord (SC) stimulator designed to improve the treatment of intractable pain and the sequelae of SC injury. In vivo ovine models of neuropathic pain and moderate SC injury are being implemented for pre-clinical evaluations of this device, to be carried out via gait analysis before and after induction of the relevant condition. We extend previous studies on other quadrupeds to extract the three-dimensional kinematics of the limbs over the gait cycle of sheep walking on a treadmill. Quantitative measures of thoracic and pelvic limb movements were obtained from 17 animals. We calculated total-error values to define the analytical performance of our motion capture system for these kinematic variables. After SC injury, the time delay between contralateral thoracic and pelvic-limb steps increased by ~24 s over 100 steps relative to pre-injury values. The pelvic limb hoof velocity during the swing phase decreased, while the range of pelvic hoof elevation and the distance between lateral pelvic hoof placements increased after SC injury. The kinematic measures in a single SC-injured sheep can thus be objectively shown to differ from the corresponding pre-injury values, implying the utility of this method for assessing new neuromodulation strategies against the specific deficits exhibited by an individual. Copyright © 2015 Elsevier B.V. All rights reserved.
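A minimal sketch of one of the gait measures above, the time delay between contralateral thoracic and pelvic-limb steps, computed from hypothetical hoof ground-contact times (values invented; the study's total-error analysis is not reproduced here):

import numpy as np

def step_delay(thoracic_contacts, pelvic_contacts):
    """Mean delay (s) between contralateral thoracic and pelvic hoof contacts.

    Inputs are arrays of ground-contact times for corresponding steps,
    as extracted from motion-capture trajectories.
    """
    n = min(len(thoracic_contacts), len(pelvic_contacts))
    return float(np.mean(pelvic_contacts[:n] - thoracic_contacts[:n]))

# Hypothetical contact times (s) for corresponding steps
thoracic = np.array([0.00, 0.62, 1.25, 1.86])
pelvic   = np.array([0.31, 0.95, 1.58, 2.20])
print(f"mean delay = {step_delay(thoracic, pelvic):.2f} s")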
The optical method for investigation of the peritonitis progressing process
NASA Astrophysics Data System (ADS)
Guminetskiy, S. H.; Ushenko, O. G.; Polyanskiy, I. P.; Motrych, A. V.; Grynchuk, F. V.
2008-05-01
We present the results of spectrophotometric examination of venous blood plasma and whole blood from dogs and rats, sampled as peritonitis progressed, within the spectral intervals λ = 220–320 nm (plasma) and λ = 350–610 nm (whole blood). The optical density (D) values at the long-wavelength maximum of plasma absorption of venous blood (λ = 280 nm) were found to depend on the intensity of the inflammatory process and on the background conditions against which it developed. The dynamics of D values at λ = 540 (or 570) nm during the progression of peritonitis in whole blood taken from the portal vein were found to mirror the corresponding dynamics for blood from the vena cava inferior. These regularities may have diagnostic value. The dynamics of D values at λ = 280 nm in venous blood plasma were influenced most strongly by the levels of circulating immune complexes, tumor necrosis factor-α, and interleukin-2, changes in which account for almost 100% of the variation in the optical density parameters, supporting an immunological explanation of the observed changes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raoult, Nina M.; Jupp, Tim E.; Cox, Peter M.
Land-surface models (LSMs) are crucial components of the Earth system models (ESMs) that are used to make coupled climate–carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. JULES is also extensively used offline as a land-surface impacts tool, forced with climatologies into the future. In this study, JULES is automatically differentiated with respect to JULES parameters using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameters by calibrating against observations. This paper describes adJULES in a data assimilation framework and demonstrates its ability to improve the model–data fit using eddy-covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the five plant functional types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES at over 85 % of the sites used in the study, at both the calibration and evaluation stages. Furthermore, the new improved parameters for JULES are presented along with the associated uncertainties for each parameter.
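A minimal sketch of adjoint-style calibration on a toy problem: gradient descent on a sum-of-squares model–data misfit, with the gradient supplied analytically, as an adjoint would supply it. The two-parameter model, observations, and learning rate are hypothetical; the real system differentiates JULES itself:

import numpy as np

# Hypothetical toy "land-surface model": two parameters scaling a forcing series
forcing = np.linspace(0.0, 1.0, 50)

def model(p):
    return p[0] * forcing + p[1]

def misfit_gradient(p, obs):
    """Gradient of the 0.5 * sum((model - obs)**2) cost; for JULES this
    gradient comes from the automatically generated adjoint."""
    r = model(p) - obs
    return np.array([np.sum(r * forcing), np.sum(r)])

def calibrate(p0, obs, lr=0.01, steps=500):
    """Plain gradient descent toward locally optimum parameters."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        p -= lr * misfit_gradient(p, obs)
    return p

obs = 2.0 * forcing + 0.5            # synthetic "eddy-covariance" observations
print(calibrate([0.0, 0.0], obs))    # converges near [2.0, 0.5]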
Equation of state for dense nucleonic matter from metamodeling. I. Foundational aspects
NASA Astrophysics Data System (ADS)
Margueron, Jérôme; Hoffmann Casali, Rudiney; Gulminelli, Francesca
2018-02-01
Metamodeling for the nucleonic equation of state (EOS), inspired from a Taylor expansion around the saturation density of symmetric nuclear matter, is proposed and parameterized in terms of the empirical parameters. The present knowledge of nuclear empirical parameters is first reviewed in order to estimate their average values and associated uncertainties, thus defining the parameter space of the metamodeling. They are divided into isoscalar and isovector types, and ordered according to their power in the density expansion. The goodness of the metamodeling is analyzed against the predictions of the original models. In addition, since no correlation among the empirical parameters is assumed a priori, all arbitrary density dependences can be explored, which might not be accessible in existing functionals. Spurious correlations due to the assumed functional form are also removed. This meta-EOS allows direct relations between the uncertainties on the empirical parameters and the density dependence of the nuclear equation of state and its derivatives, and the mapping between the two can be done with standard Bayesian techniques. A sensitivity analysis shows that the most influential empirical parameters are the isovector parameters Lsym and Ksym, and that laboratory constraints at supersaturation densities are essential to reduce the present uncertainties. The present metamodeling for the EOS for nuclear matter is proposed for further applications in neutron stars and supernova matter.
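For orientation, a common form of such an expansion (our notation; the truncation order shown is illustrative) writes the energy per nucleon of asymmetric matter as

\[
e(n,\delta) \simeq e_{\mathrm{sat}}(n) + e_{\mathrm{sym}}(n)\,\delta^{2},
\qquad \delta = \frac{n_n - n_p}{n},
\qquad x = \frac{n - n_{\mathrm{sat}}}{3\,n_{\mathrm{sat}}},
\]
\[
e_{\mathrm{sat}}(n) = E_{\mathrm{sat}} + \tfrac{1}{2}K_{\mathrm{sat}}x^{2} + \tfrac{1}{6}Q_{\mathrm{sat}}x^{3} + \dots, \qquad
e_{\mathrm{sym}}(n) = E_{\mathrm{sym}} + L_{\mathrm{sym}}x + \tfrac{1}{2}K_{\mathrm{sym}}x^{2} + \dots,
\]

so the empirical parameters (Esat, Ksat, Esym, Lsym, Ksym, ...) are exactly the expansion coefficients whose estimated averages and uncertainties span the metamodel parameter space.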
Chemical freezeout parameters within generic nonextensive statistics
NASA Astrophysics Data System (ADS)
Tawfik, Abdel; Yassin, Hayam; Abo Elyazeed, Eman R.
2018-06-01
The particle production in relativistic heavy-ion collisions seems to occur in a dynamically disordered system that is best described by an extended exponential entropy. To distinguish the applicability of this statistics from that of Boltzmann-Gibbs (BG) in reproducing various particle ratios, generic (non)extensive statistics is introduced to the hadron resonance gas model. Accordingly, the degree of (non)extensivity is determined by the possible modifications in the phase space. Both BG extensivity and Tsallis nonextensivity are included as special cases defined by specific values of the equivalence classes (c, d). We found that the particle ratios at energies ranging between 3.8 and 2760 GeV are best reproduced by nonextensive statistics, where c and d range between ~0.9 and ~1. The present work aims at illustrating that the proposed approach is well capable of manifesting the statistical nature of the system of interest; we do not aim at deeper physical insights. In other words, while the resulting nonextensivity is neither BG nor Tsallis, the freezeout parameters are found to be very compatible with BG and accordingly with the well-known freezeout phase diagram, which is in excellent agreement with recent lattice calculations. We conclude that the particle production is nonextensive but need not be accompanied by a radical change in the intensive or extensive thermodynamic quantities, such as internal energy and temperature. Only the two critical exponents defining the equivalence classes (c, d) are the physical parameters characterizing the (non)extensivity.
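For orientation, in the standard generalized-entropy classification (stated here as an assumption about the authors' usage), Boltzmann-Gibbs statistics corresponds to the equivalence class (c, d) = (1, 1) and Tsallis statistics to (c, d) = (q, 0), the latter built on the entropy

\[
S_q[p] = \frac{1 - \sum_i p_i^{\,q}}{q - 1}, \qquad \lim_{q \to 1} S_q[p] = -\sum_i p_i \ln p_i ,
\]

so the fitted ranges c, d ~ 0.9-1 locate the data between, but distinct from, these two special cases.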
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. S. Schroeder; R. W. Youngblood
The Risk-Informed Safety Margin Characterization (RISMC) pathway of the Light Water Reactor Sustainability Program is developing simulation-based methods and tools for analyzing safety margin from a modern perspective. [1] There are multiple definitions of 'margin.' One class of definitions defines margin in terms of the distance between a point estimate of a given performance parameter (such as peak clad temperature) and a point-value acceptance criterion defined for that parameter (such as 2200 °F). The present perspective on margin is that it relates to the probability of failure, and not just the distance between a nominal operating point and a criterion. In this work, margin is characterized through a probabilistic analysis of the 'loads' imposed on systems, structures, and components, and their 'capacity' to resist those loads without failing. Given the probabilistic load and capacity spectra, one can assess the probability that load exceeds capacity, leading to component failure. Within the project, we refer to a plot of these probabilistic spectra as 'the logo.' Refer to Figure 1 for a notional illustration. The implications of referring to 'the logo' are (1) RISMC is focused on being able to analyze loads and capacities probabilistically, and (2) calling it 'the logo' tacitly acknowledges that it is a highly simplified picture: meaningful analysis of a given component failure mode may require development of probabilistic spectra for multiple physical parameters, and in many practical cases, 'load' and 'capacity' will not vary independently.
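A minimal Monte Carlo sketch of the load-capacity picture: given probabilistic load and capacity spectra, the margin-related quantity is the probability that load exceeds capacity. The lognormal forms and parameters below are illustrative, not from the report:

import numpy as np

rng = np.random.default_rng(0)

# Illustrative probabilistic spectra (hypothetical lognormal parameters)
load     = rng.lognormal(mean=7.0, sigma=0.10, size=1_000_000)  # imposed load
capacity = rng.lognormal(mean=7.2, sigma=0.05, size=1_000_000)  # resistance

# Probability that load exceeds capacity, i.e. component failure
p_fail = np.mean(load > capacity)
print(f"P(load > capacity) ~ {p_fail:.2e}")

Note that the draw above treats load and capacity as independent, which, as the text cautions, will often not hold in practice.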
Spontaneous generation of singularities in paraxial optical fields.
Aiello, Andrea
2016-04-01
In nonrelativistic quantum mechanics, the spontaneous generation of singularities in smooth and finite wave functions is a well understood phenomenon also occurring for free particles. We use the familiar analogy between the two-dimensional Schrödinger equation and the optical paraxial wave equation to define a new class of square-integrable paraxial optical fields that develop a spatial singularity in the focal point of a weakly focusing thin lens. These fields are characterized by a single real parameter whose value determines the nature of the singularity. This novel field enhancement mechanism may stimulate fruitful research for diverse technological and scientific applications.
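The analogy invoked above can be made explicit (signs and prefactor conventions vary between authors):

\[
i\,\frac{\partial A}{\partial z} = -\frac{1}{2k}\,\nabla_{\perp}^{2} A
\qquad \longleftrightarrow \qquad
i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi ,
\]

so the propagation distance z plays the role of time and the wavenumber k that of m/ħ; singularity formation in free quantum evolution therefore has a direct counterpart in free paraxial propagation.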
NASA Astrophysics Data System (ADS)
Jannson, Tomasz; Kostrzewski, Andrew; Patton, Edward; Pradhan, Ranjit; Shih, Min-Yi; Walter, Kevin; Savant, Gajendra; Shie, Rick; Forrester, Thomas
2010-04-01
In this paper, Bayesian inference is applied to defining performance metrics for an important class of recent Homeland Security and defense systems called binary sensors, covering both (internal) system performance and (external) CONOPS. The medical analogy is used to define the PPV (Positive Predictive Value), the basic Bayesian metric of binary sensors. Small System Integration (SSI) is also discussed in the context of recent Homeland Security and defense applications, emphasizing a highly multi-technological approach within a broad range of clusters ("nexus") of electronics, optics, X-ray physics, γ-ray physics, and other disciplines.
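Concretely, in the medical analogy the PPV follows from Bayes' theorem; with sensitivity Se, specificity Sp, and prior (prevalence) π,

\[
\mathrm{PPV} = P(\text{target present} \mid \text{alarm})
= \frac{Se\,\pi}{Se\,\pi + (1 - Sp)(1 - \pi)} ,
\]

which makes explicit that a binary sensor's operational value depends on the prior threat rate in the CONOPS, not on sensitivity and specificity alone.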
Revenue Prediction of a Local Event Using the Mathematical Model of Hit Phenomena
NASA Astrophysics Data System (ADS)
Ishii, A.; Matsumoto, T.; Miki, S.
We propose a theoretical approach to investigate human–human interaction in society, using a many-body theory that incorporates such interaction. We treat advertisement as an external force, and include the word-of-mouth (WOM) effect as a two-body interaction between humans and the rumor effect as a three-body interaction among humans. The parameters defining the strength of the human interactions are assumed to be constant. The calculated results explained well two local events in Japan, the Mizuki-Shigeru Road in Sakaiminato and the sculpture festival at Tottori.
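In published formulations of this model of hit phenomena (our rendering here is schematic, stated as our understanding of its general structure), the intention I_i(t) of person i obeys

\[
\frac{dI_i(t)}{dt} = A(t) + \sum_{j} d_{ij}\, I_j(t) + \sum_{j}\sum_{k} h_{ijk}\, I_j(t)\, I_k(t),
\]

where A(t) is the advertisement term (external force), the d_{ij} term captures the two-body WOM interaction, and the h_{ijk} term the three-body rumor interaction; the constants d_{ij} and h_{ijk} are the interaction strengths assumed constant above.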
A non-asymptotic homogenization theory for periodic electromagnetic structures
Tsukerman, Igor; Markel, Vadim A.
2014-01-01
Homogenization of electromagnetic periodic composites is treated as a two-scale problem and solved by approximating the fields on both scales with eigenmodes that satisfy Maxwell's equations and boundary conditions as accurately as possible. Built into this homogenization methodology is an error indicator whose value characterizes the accuracy of homogenization. The proposed theory allows one to define not only bulk, but also position-dependent material parameters (e.g. in proximity to a physical boundary) and to quantify the trade-off between the accuracy of homogenization and its range of applicability to various illumination conditions. PMID:25104912
A model of Neptune according to the Savic-Kasanin theory
NASA Astrophysics Data System (ADS)
Celebonovic, V.
1983-10-01
The structure and the distributions of temperature, pressure and density in the interior of Neptune are calculated using the pressure-ionization model of Savic and Kasanin (1961-1965). The model input data comprise only the mass, radius and moment of inertia; the results are presented in a graph and a table. A four-zone structure is defined, and the parameter values and profiles are found to be in good agreement with those of more complex models. Differences can be attributed to the crudeness of the present model but also to possible errors in the assumptions required by other models.
Escherichia coli promoter sequences predict in vitro RNA polymerase selectivity.
Mulligan, M E; Hawley, D K; Entriken, R; McClure, W R
1984-01-11
We describe a simple algorithm for computing a homology score for Escherichia coli promoters based on DNA sequence alone. The homology score was related to 31 values, measured in vitro, of RNA polymerase selectivity, which we define as the product K_B k_2, the apparent second-order rate constant for open complex formation. We found that promoter strength could be predicted to within a factor of ±4.1 in K_B k_2 over a range of 10^4 in the same parameter. The quantitative evaluation was linked to an automated (Apple II) procedure for searching for and evaluating possible promoters in DNA sequence files.
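A minimal sketch in the spirit of such a sequence-based score, using the canonical sigma-70 consensus hexamers; the uniform position weighting, spacing window, and spacing penalty are illustrative assumptions, not the authors' published scheme:

# Canonical E. coli sigma-70 consensus hexamers
MINUS35 = "TTGACA"
MINUS10 = "TATAAT"

def hexamer_score(site, consensus):
    """Fraction of positions matching the consensus hexamer."""
    return sum(a == b for a, b in zip(site, consensus)) / len(consensus)

def homology_score(seq, spacing_opt=17, spacing_penalty=0.05):
    """Best combined -35/-10 match over allowed spacings (15-21 bp)."""
    best = 0.0
    for i in range(len(seq) - 6):
        for spacing in range(15, 22):
            j = i + 6 + spacing
            if j + 6 > len(seq):
                continue
            s = (hexamer_score(seq[i:i+6], MINUS35)
                 + hexamer_score(seq[j:j+6], MINUS10)
                 - spacing_penalty * abs(spacing - spacing_opt))
            best = max(best, s)
    return best

print(homology_score("AATTGACAGGGTAACTGTCAGCACGTATAATGCGC"))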