Prediction and typicality in multiverse cosmology
NASA Astrophysics Data System (ADS)
Azhar, Feraz
2014-02-01
In the absence of a fundamental theory that precisely predicts values for observable parameters, anthropic reasoning attempts to constrain probability distributions over those parameters in order to facilitate the extraction of testable predictions. The utility of this approach has been vigorously debated of late, particularly in light of theories that claim we live in a multiverse, where parameters may take differing values in regions lying outside our observable horizon. Within this cosmological framework, we investigate the efficacy of top-down anthropic reasoning based on the weak anthropic principle. We argue contrary to recent claims that it is not clear one can either dispense with notions of typicality altogether or presume typicality, in comparing resulting probability distributions with observations. We show in a concrete, top-down setting related to dark matter, that assumptions about typicality can dramatically affect predictions, thereby providing a guide to how errors in reasoning regarding typicality translate to errors in the assessment of predictive power. We conjecture that this dependence on typicality is an integral feature of anthropic reasoning in broader cosmological contexts, and argue in favour of the explicit inclusion of measures of typicality in schemes invoking anthropic reasoning, with a view to extracting predictions from multiverse scenarios.
Laboratory R-value vs. in-situ NDT methods.
DOT National Transportation Integrated Search
2006-05-01
The New Mexico Department of Transportation (NMDOT) uses the Resistance R-Value as a quantifying parameter in subgrade and base course design. The parameter represents soil strength and stiffness and ranges from 1 to 80, 80 being typical of the highe...
Yonai, Shunsuke; Matsufuji, Naruhiro; Akahane, Keiichi
2018-04-23
The aim of this work was to estimate typical dose equivalents to out-of-field organs during carbon-ion radiotherapy (CIRT) with a passive beam for prostate cancer treatment. Additionally, sensitivity analyses of organ doses for various beam parameters and phantom sizes were performed. Because the CIRT out-of-field dose depends on the beam parameters, the typical values of those parameters were determined from statistical data on the target properties of patients who received CIRT at the Heavy-Ion Medical Accelerator in Chiba (HIMAC). Using these typical beam-parameter values, out-of-field organ dose equivalents during CIRT for typical prostate treatment were estimated by Monte Carlo simulations using the Particle and Heavy-Ion Transport Code System (PHITS) and the ICRP reference phantom. The results showed that the dose decreased with distance from the target, ranging from 116 mSv in the testes to 7 mSv in the brain. The organ dose equivalents per treatment dose were lower than those either in 6-MV intensity-modulated radiotherapy or in brachytherapy with an Ir-192 source for organs within 40 cm of the target. Sensitivity analyses established that the differences from typical values were within ∼30% for all organs, except the sigmoid colon. The typical out-of-field organ dose equivalents during passive-beam CIRT were shown. The low sensitivity of the dose equivalent in organs farther than 20 cm from the target indicated that individual dose assessments required for retrospective epidemiological studies may be limited to organs around the target in cases of passive-beam CIRT for prostate cancer. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
SYNCHROTRON ORIGIN OF THE TYPICAL GRB BAND FUNCTION—A CASE STUDY OF GRB 130606B
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Bin-Bin; Briggs, Michael S.; Uhm, Z. Lucas
2016-01-10
We perform a time-resolved spectral analysis of GRB 130606B within the framework of a fast-cooling synchrotron radiation model with magnetic field strength in the emission region decaying with time, as proposed by Uhm and Zhang. The data from all time intervals can be successfully fit by the model. The same data can be equally well fit by the empirical Band function with typical parameter values. Our results, which involve only minimal physical assumptions, offer one natural solution to the origin of the observed GRB spectra and imply that, at least some, if not all, Band-like GRB spectra with typical Bandmore » parameter values can indeed be explained by synchrotron radiation.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuñez-Cumplido, E., E-mail: ejnc-mccg@hotmail.com; Hernandez-Armas, J.; Perez-Calatayud, J.
2015-08-15
Purpose: In clinical practice, specific air kerma strength (S{sub K}) value is used in treatment planning system (TPS) permanent brachytherapy implant calculations with {sup 125}I and {sup 103}Pd sources; in fact, commercial TPS provide only one S{sub K} input value for all implanted sources and the certified shipment average is typically used. However, the value for S{sub K} is dispersed: this dispersion is not only due to the manufacturing process and variation between different source batches but also due to the classification of sources into different classes according to their S{sub K} values. The purpose of this work is tomore » examine the impact of S{sub K} dispersion on typical implant parameters that are used to evaluate the dose volume histogram (DVH) for both planning target volume (PTV) and organs at risk (OARs). Methods: The authors have developed a new algorithm to compute dose distributions with different S{sub K} values for each source. Three different prostate volumes (20, 30, and 40 cm{sup 3}) were considered and two typical commercial sources of different radionuclides were used. Using a conventional TPS, clinically accepted calculations were made for {sup 125}I sources; for the palladium, typical implants were simulated. To assess the many different possible S{sub K} values for each source belonging to a class, the authors assigned an S{sub K} value to each source in a randomized process 1000 times for each source and volume. All the dose distributions generated for each set of simulations were assessed through the DVH distributions comparing with dose distributions obtained using a uniform S{sub K} value for all the implanted sources. The authors analyzed several dose coverage (V{sub 100} and D{sub 90}) and overdosage parameters for prostate and PTV and also the limiting and overdosage parameters for OARs, urethra and rectum. Results: The parameters analyzed followed a Gaussian distribution for the entire set of computed dosimetries. PTV and prostate V{sub 100} and D{sub 90} variations ranged between 0.2% and 1.78% for both sources. Variations for the overdosage parameters V{sub 150} and V{sub 200} compared to dose coverage parameters were observed and, in general, variations were larger for parameters related to {sup 125}I sources than {sup 103}Pd sources. For OAR dosimetry, variations with respect to the reference D{sub 0.1cm{sup 3}} were observed for rectum values, ranging from 2% to 3%, compared with urethra values, which ranged from 1% to 2%. Conclusions: Dose coverage for prostate and PTV was practically unaffected by S{sub K} dispersion, as was the maximum dose deposited in the urethra due to the implant technique geometry. However, the authors observed larger variations for the PTV V{sub 150}, rectum V{sub 100}, and rectum D{sub 0.1cm{sup 3}} values. The variations in rectum parameters were caused by the specific location of sources with S{sub K} value that differed from the average in the vicinity. Finally, on comparing the two sources, variations were larger for {sup 125}I than for {sup 103}Pd. This is because for {sup 103}Pd, a greater number of sources were used to obtain a valid dose distribution than for {sup 125}I, resulting in a lower variation for each S{sub K} value for each source (because the variations become averaged out statistically speaking)« less
Modification of a rainfall-runoff model for distributed modeling in a GIS and its validation
NASA Astrophysics Data System (ADS)
Nyabeze, W. R.
A rainfall-runoff model, which can be inter-faced with a Geographical Information System (GIS) to integrate definition, measurement, calculating parameter values for spatial features, presents considerable advantages. The modification of the GWBasic Wits Rainfall-Runoff Erosion Model (GWBRafler) to enable parameter value estimation in a GIS (GISRafler) is presented in this paper. Algorithms are applied to estimate parameter values reducing the number of input parameters and the effort to populate them. The use of a GIS makes the relationship between parameter estimates and cover characteristics more evident. This paper has been produced as part of research to generalize the GWBRafler on a spatially distributed basis. Modular data structures are assumed and parameter values are weighted relative to the module area and centroid properties. Modifications to the GWBRafler enable better estimation of low flows, which are typical in drought conditions.
NASA Astrophysics Data System (ADS)
Kalnacs, J.; Bendere, R.; Murasovs, A.; Arina, D.; Antipovs, A.; Kalnacs, A.; Sprince, L.
2018-02-01
The article analyses the variations in carbon dioxide emission factor depending on parameters characterising biomass and RDF (refuse-derived fuel). The influence of moisture, ash content, heat of combustion, carbon and nitrogen content on the amount of emission factors has been reviewed, by determining their average values. The options for the improvement of the fuel to result in reduced emissions of carbon dioxide and nitrogen oxide have been analysed. Systematic measurements of biomass parameters have been performed, by determining their average values, seasonal limits of variations in these parameters and their mutual relations. Typical average values of RDF parameters and limits of variations have been determined.
Meier, Kimberly; Sum, Brian; Giaschi, Deborah
2016-10-01
Global motion sensitivity in typically developing children depends on the spatial (Δx) and temporal (Δt) displacement parameters of the motion stimulus. Specifically, sensitivity for small Δx values matures at a later age, suggesting it may be the most vulnerable to damage by amblyopia. To explore this possibility, we compared motion coherence thresholds of children with amblyopia (7-14years old) to age-matched controls. Three Δx values were used with two Δt values, yielding six conditions covering a range of speeds (0.3-30deg/s). We predicted children with amblyopia would show normal coherence thresholds for the same parameters on which 5-year-olds previously demonstrated mature performance, and elevated coherence thresholds for parameters on which 5-year-olds demonstrated immaturities. Consistent with this, we found that children with amblyopia showed deficits with amblyopic eye viewing compared to controls for small and medium Δx values, regardless of Δt value. The fellow eye showed similar results at the smaller Δt. These results confirm that global motion perception in children with amblyopia is particularly deficient at the finer spatial scales that typically mature later in development. An additional implication is that carefully designed stimuli that are adequately sensitive must be used to assess global motion function in developmental disorders. Stimulus parameters for which performance matures early in life may not reveal global motion perception deficits. Copyright © 2016 Elsevier Ltd. All rights reserved.
Design values of resilient modulus of stabilized and non-stabilized base.
DOT National Transportation Integrated Search
2010-10-01
The primary objective of this research study is to determine design value ranges for typical base materials, as allowed by LADOTD specifications, through laboratory tests with respect to resilient modulus and other parameters used by pavement design ...
A simple strategy for varying the restart parameter in GMRES(m)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, A H; Jessup, E R; Kolev, T V
2007-10-02
When solving a system of linear equations with the restarted GMRES method, a fixed restart parameter is typically chosen. We present numerical experiments that demonstrate the beneficial effects of changing the value of the restart parameter in each restart cycle on the total time to solution. We propose a simple strategy for varying the restart parameter and provide some heuristic explanations for its effectiveness based on analysis of the symmetric case.
Physical characteristics and resistance parameters of typical urban cyclists.
Tengattini, Simone; Bigazzi, Alexander York
2018-03-30
This study investigates the rolling and drag resistance parameters and bicycle and cargo masses of typical urban cyclists. These factors are important for modelling of cyclist speed, power and energy expenditure, with applications including exercise performance, health and safety assessments and transportation network analysis. However, representative values for diverse urban travellers have not been established. Resistance parameters were measured utilizing a field coast-down test for 557 intercepted cyclists in Vancouver, Canada. Masses were also measured, along with other bicycle attributes such as tire pressure and size. The average (standard deviation) of coefficient of rolling resistance, effective frontal area, bicycle plus cargo mass, and bicycle-only mass were 0.0077 (0.0036), 0.559 (0.170) m 2 , 18.3 (4.1) kg, and 13.7 (3.3) kg, respectively. The range of measured values is wider and higher than suggested in existing literature, which focusses on sport cyclists. Significant correlations are identified between resistance parameters and rider and bicycle attributes, indicating higher resistance parameters for less sport-oriented cyclists. The findings of this study are important for appropriately characterising the full range of urban cyclists, including commuters and casual riders.
Disorder-induced losses in photonic crystal waveguides with line defects.
Gerace, Dario; Andreani, Lucio Claudio
2004-08-15
A numerical analysis of extrinsic diffraction losses in two-dimensional photonic crystal slabs with line defects is reported. To model disorder, a Gaussian distribution of hole radii in the triangular lattice of airholes is assumed. The extrinsic losses below the light line increase quadratically with the disorder parameter, decrease slightly with increasing core thickness, and depend weakly on the hole radius. For typical values of the disorder parameter the calculated loss values of guided modes below the light line compare favorably with available experimental results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, H.H.M.; Chen, C.H.S.
1990-04-16
An assessment of the seismic hazard that exists along the major crude oil pipeline running through the New Madrid seismic zone from southeastern Louisiana to Patoka, Illinois is examined in the report. An 1811-1812 type New Madrid earthquake with moment magnitude 8.2 is assumed to occur at three locations where large historical earthquakes have occurred. Six pipeline crossings of the major rivers in West Tennessee are chosen as the sites for hazard evaluation because of the liquefaction potential at these sites. A seismologically-based model is used to predict the bedrock accelerations. Uncertainties in three model parameters, i.e., stress parameter, cutoffmore » frequency, and strong-motion duration are included in the analysis. Each parameter is represented by three typical values. From the combination of these typical values, a total of 27 earthquake time histories can be generated for each selected site due to an 1811-1812 type New Madrid earthquake occurring at a postulated seismic source.« less
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero but in a sampling-base framework they regularly take non-zero values. There is little guidance available for these two steps in environmental modelling though. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
Dembo, M; De Penfold, J B; Ruiz, R; Casalta, H
1985-03-01
Four pigeons were trained to peck a key under different values of a temporally defined independent variable (T) and different probabilities of reinforcement (p). Parameter T is a fixed repeating time cycle and p the probability of reinforcement for the first response of each cycle T. Two dependent variables were used: mean response rate and mean postreinforcement pause. For all values of p a critical value for the independent variable T was found (T=1 sec) in which marked changes took place in response rate and postreinforcement pauses. Behavior typical of random ratio schedules was obtained at T 1 sec and behavior typical of random interval schedules at T 1 sec. Copyright © 1985. Published by Elsevier B.V.
Investigating the Metallicity–Mixing-length Relation
NASA Astrophysics Data System (ADS)
Viani, Lucas S.; Basu, Sarbani; Joel Ong J., M.; Bonaca, Ana; Chaplin, William J.
2018-05-01
Stellar models typically use the mixing-length approximation as a way to implement convection in a simplified manner. While conventionally the value of the mixing-length parameter, α, used is the solar-calibrated value, many studies have shown that other values of α are needed to properly model stars. This uncertainty in the value of the mixing-length parameter is a major source of error in stellar models and isochrones. Using asteroseismic data, we determine the value of the mixing-length parameter required to properly model a set of about 450 stars ranging in log g, {T}eff}, and [{Fe}/{{H}}]. The relationship between the value of α required and the properties of the star is then investigated. For Eddington atmosphere, non-diffusion models, we find that the value of α can be approximated by a linear model, in the form of α /{α }ȯ =5.426{--}0.101 {log}(g)-1.071 {log}({T}eff}) +0.437([{Fe}/{{H}}]). This process is repeated using a variety of model physics, as well as compared with previous studies and results from 3D convective simulations.
2013-09-30
HPMM. For these minerals, kaolinite and smectite , the corresponding shear speed estimates are 13 m/s and 0.25 m/s. The third and fourth columns of...representative value for each parameter in two clay minerals, kaolinite and smectite , which are the most common types in marine mud. These values produce...13 m/s for kaolinite and 0.25 m/s for smectite . The third column shows typical ranges of values for h, L, and χ in the two clay types. The fourth
Testable solution of the cosmological constant and coincidence problems
NASA Astrophysics Data System (ADS)
Shaw, Douglas J.; Barrow, John D.
2011-02-01
We present a new solution to the cosmological constant (CC) and coincidence problems in which the observed value of the CC, Λ, is linked to other observable properties of the Universe. This is achieved by promoting the CC from a parameter that must be specified, to a field that can take many possible values. The observed value of Λ≈(9.3Gyrs)-2 [≈10-120 in Planck units] is determined by a new constraint equation which follows from the application of a causally restricted variation principle. When applied to our visible Universe, the model makes a testable prediction for the dimensionless spatial curvature of Ωk0=-0.0056(ζb/0.5), where ζb˜1/2 is a QCD parameter. Requiring that a classical history exist, our model determines the probability of observing a given Λ. The observed CC value, which we successfully predict, is typical within our model even before the effects of anthropic selection are included. When anthropic selection effects are accounted for, we find that the observed coincidence between tΛ=Λ-1/2 and the age of the Universe, tU, is a typical occurrence in our model. In contrast to multiverse explanations of the CC problems, our solution is independent of the choice of a prior weighting of different Λ values and does not rely on anthropic selection effects. Our model includes no unnatural small parameters and does not require the introduction of new dynamical scalar fields or modifications to general relativity, and it can be tested by astronomical observations in the near future.
Meigal, Alexander Yu.; Miroshnichenko, German G.; Kuzmina, Anna P.; Rissanen, Saara M.; Georgiadis, Stefanos D.; Karjalainen, Pasi A.
2015-01-01
We compared a set of surface EMG (sEMG) parameters in several groups of schizophrenia (SZ, n = 74) patients and healthy controls (n = 11) and coupled them with the clinical data. sEMG records were quantified with spectral, mutual information (MI) based and recurrence quantification analysis (RQA) parameters, and with approximate and sample entropies (ApEn and SampEn). Psychotic deterioration was estimated with Positive and Negative Syndrome Scale (PANSS) and with the positive subscale of PANSS. Neuroleptic-induced parkinsonism (NIP) motor symptoms were estimated with Simpson-Angus Scale (SAS). Dyskinesia was measured with Abnormal Involuntary Movement Scale (AIMS). We found that there was no difference in values of sEMG parameters between healthy controls and drug-naïve SZ patients. The most specific group was formed of SZ patients who were administered both typical and atypical antipsychotics (AP). Their sEMG parameters were significantly different from those of SZ patients taking either typical or atypical AP or taking no AP. This may represent a kind of synergistic effect of these two classes of AP. For the clinical data we found that PANSS, SAS, and AIMS were not correlated to any of the sEMG parameters. Conclusion: with nonlinear parameters of sEMG it is possible to reveal NIP in SZ patients, and it may help to discriminate between different clinical groups of SZ patients. Combined typical and atypical AP therapy has stronger effect on sEMG than a therapy with AP of only one class. PMID:26217236
Meigal, Alexander Yu; Miroshnichenko, German G; Kuzmina, Anna P; Rissanen, Saara M; Georgiadis, Stefanos D; Karjalainen, Pasi A
2015-01-01
We compared a set of surface EMG (sEMG) parameters in several groups of schizophrenia (SZ, n = 74) patients and healthy controls (n = 11) and coupled them with the clinical data. sEMG records were quantified with spectral, mutual information (MI) based and recurrence quantification analysis (RQA) parameters, and with approximate and sample entropies (ApEn and SampEn). Psychotic deterioration was estimated with Positive and Negative Syndrome Scale (PANSS) and with the positive subscale of PANSS. Neuroleptic-induced parkinsonism (NIP) motor symptoms were estimated with Simpson-Angus Scale (SAS). Dyskinesia was measured with Abnormal Involuntary Movement Scale (AIMS). We found that there was no difference in values of sEMG parameters between healthy controls and drug-naïve SZ patients. The most specific group was formed of SZ patients who were administered both typical and atypical antipsychotics (AP). Their sEMG parameters were significantly different from those of SZ patients taking either typical or atypical AP or taking no AP. This may represent a kind of synergistic effect of these two classes of AP. For the clinical data we found that PANSS, SAS, and AIMS were not correlated to any of the sEMG parameters. with nonlinear parameters of sEMG it is possible to reveal NIP in SZ patients, and it may help to discriminate between different clinical groups of SZ patients. Combined typical and atypical AP therapy has stronger effect on sEMG than a therapy with AP of only one class.
New photoionization models of intergalactic clouds
NASA Technical Reports Server (NTRS)
Donahue, Megan; Shull, J. M.
1991-01-01
New photoionization models of optically thin low-density intergalactic gas at constant pressure, photoionized by QSOs, are presented. All ion stages of H, He, C, N, O, Si, and Fe, plus H2 are modeled, and the column density ratios of clouds at specified values of the ionization parameter of n sub gamma/n sub H and cloud metallicity are predicted. If Ly-alpha clouds are much cooler than the previously assumed value, 30,000 K, the ionization parameter must be very low, even with the cooling contribution of a trace component of molecules. If the clouds cool below 6000 K, their final equilibrium must be below 3000 K, owing to the lack of a stable phase between 6000 and 3000 K. If it is assumed that the clouds are being irradiated by an EUV power-law continuum typical of WSOs, with J0 = 10 exp -21 ergs/s sq cm Hz, typical cloud thicknesses along the line of sight that are much smaller than would be expected from shocks, thermal instabilities, or gravitational collapse are derived.
Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT
NASA Astrophysics Data System (ADS)
Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang
2015-03-01
In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., half-value layer of the x-ray energy spectrum, effective beam width, and anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose as a result of using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, the effects of which on organ dose per CTDIvol were analyzed. Our study showed that when errors in half value layer were within ± 0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had negligible effect (< 2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was longer by 4 cm than the true value, dose errors of up to 160% were found. The results answer the important question: to what level of accuracy each input parameter needs to be determined in order to obtain accurate organ dose results.
Systems identification using a modified Newton-Raphson method: A FORTRAN program
NASA Technical Reports Server (NTRS)
Taylor, L. W., Jr.; Iliff, K. W.
1972-01-01
A FORTRAN program is offered which computes a maximum likelihood estimate of the parameters of any linear, constant coefficient, state space model. For the case considered, the maximum likelihood estimate can be identical to that which minimizes simultaneously the weighted mean square difference between the computed and measured response of a system and the weighted square of the difference between the estimated and a priori parameter values. A modified Newton-Raphson or quasilinearization method is used to perform the minimization which typically requires several iterations. A starting technique is used which insures convergence for any initial values of the unknown parameters. The program and its operation are described in sufficient detail to enable the user to apply the program to his particular problem with a minimum of difficulty.
Testable solution of the cosmological constant and coincidence problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaw, Douglas J.; Barrow, John D.
2011-02-15
We present a new solution to the cosmological constant (CC) and coincidence problems in which the observed value of the CC, {Lambda}, is linked to other observable properties of the Universe. This is achieved by promoting the CC from a parameter that must be specified, to a field that can take many possible values. The observed value of {Lambda}{approx_equal}(9.3 Gyrs){sup -2}[{approx_equal}10{sup -120} in Planck units] is determined by a new constraint equation which follows from the application of a causally restricted variation principle. When applied to our visible Universe, the model makes a testable prediction for the dimensionless spatial curvaturemore » of {Omega}{sub k0}=-0.0056({zeta}{sub b}/0.5), where {zeta}{sub b}{approx}1/2 is a QCD parameter. Requiring that a classical history exist, our model determines the probability of observing a given {Lambda}. The observed CC value, which we successfully predict, is typical within our model even before the effects of anthropic selection are included. When anthropic selection effects are accounted for, we find that the observed coincidence between t{sub {Lambda}={Lambda}}{sup -1/2} and the age of the Universe, t{sub U}, is a typical occurrence in our model. In contrast to multiverse explanations of the CC problems, our solution is independent of the choice of a prior weighting of different {Lambda} values and does not rely on anthropic selection effects. Our model includes no unnatural small parameters and does not require the introduction of new dynamical scalar fields or modifications to general relativity, and it can be tested by astronomical observations in the near future.« less
Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model
Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.
2013-01-01
One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. PMID:22777874
Effects of lint cleaning on lint trash particle size distribution
USDA-ARS?s Scientific Manuscript database
Cotton quality trash measurements used today typically yield a single value for trash parameters for a lint sample (i.e. High Volume Instrument – percent area; Advanced Fiber Information System – total count, trash size, dust count, trash count, and visible foreign matter). A Cotton Trash Identifica...
Biochemical and physiological consequences of the Apollo flight diet.
NASA Technical Reports Server (NTRS)
Hander, E. W.; Leach, C. S.; Fischer, C. L.; Rummel, J.; Rambaut, P.; Johnson, P. C.
1971-01-01
Six male subjects subsisting on a typical Apollo flight diet for five consecutive days were evaluated for changes in biochemical and physiological status. Laboratory examinations failed to demonstrate any significant changes of the kind previously attributed to weightlessness, such as in serum electrolytes, endocrine values, body fluid, or hematologic parameters.
Parameterization of deformed nuclei for Glauber modeling in relativistic heavy ion collisions
Sorensen, P.; Tang, A. H.; Videbaek, F.; ...
2015-08-04
In this study, the density distributions of large nuclei are typically modeled with a Woods–Saxon distribution characterized by a radius R 0 and skin depth a. Deformation parameters β are then introduced to describe non-spherical nuclei using an expansion in spherical harmonics R 0(1+β 2Y 2 0+β 4Y 4 0). But when a nucleus is non-spherical, the R 0 and a inferred from electron scattering experiments that integrate over all nuclear orientations cannot be used directly as the parameters in the Woods–Saxon distribution. In addition, the β 2 values typically derived from the reduced electric quadrupole transition probability B(E2)↑ aremore » not directly related to the β 2 values used in the spherical harmonic expansion. B(E2)↑ is more accurately related to the intrinsic quadrupole moment Q 0 than to β 2. One can however calculate Q 0 for a given β 2 and then derive B(E2)↑ from Q 0. In this paper we calculate and tabulate the R 0, a , and β 2 values that when used in a Woods–Saxon distribution, will give results consistent with electron scattering data. We then present calculations of the second and third harmonic participant eccentricity (ε 2 and ε 3) with the new and old parameters. We demonstrate that ε 3 is particularly sensitive to a and argue that using the incorrect value of a has important implications for the extraction of viscosity to entropy ratio (η/s) from the QGP created in Heavy Ion collisions.« less
Li, Zhaofu; Liu, Hongyu; Luo, Chuan; Li, Yan; Li, Hengpeng; Pan, Jianjun; Jiang, Xiaosan; Zhou, Quansuo; Xiong, Zhengqin
2015-05-01
The Hydrological Simulation Program-Fortran (HSPF), which is a hydrological and water-quality computer model that was developed by the United States Environmental Protection Agency, was employed to simulate runoff and nutrient export from a typical small watershed in a hilly eastern monsoon region of China. First, a parameter sensitivity analysis was performed to assess how changes in the model parameters affect runoff and nutrient export. Next, the model was calibrated and validated using measured runoff and nutrient concentration data. The Nash-Sutcliffe efficiency (E NS ) values of the yearly runoff were 0.87 and 0.69 for the calibration and validation periods, respectively. For storms runoff events, the E NS values were 0.93 for the calibration period and 0.47 for the validation period. Antecedent precipitation and soil moisture conditions can affect the simulation accuracy of storm event flow. The E NS values for the total nitrogen (TN) export were 0.58 for the calibration period and 0.51 for the validation period. In addition, the correlation coefficients between the observed and simulated TN concentrations were 0.84 for the calibration period and 0.74 for the validation period. For phosphorus export, the E NS values were 0.89 for the calibration period and 0.88 for the validation period. In addition, the correlation coefficients between the observed and simulated orthophosphate concentrations were 0.96 and 0.94 for the calibration and validation periods, respectively. The nutrient simulation results are generally satisfactory even though the parameter-lumped HSPF model cannot represent the effects of the spatial pattern of land cover on nutrient export. The model parameters obtained in this study could serve as reference values for applying the model to similar regions. In addition, HSPF can properly describe the characteristics of water quantity and quality processes in this area. After adjustment, calibration, and validation of the parameters, the HSPF model is suitable for hydrological and water-quality simulations in watershed planning and management and for designing best management practices.
Alternative method of quantum state tomography toward a typical target via a weak-value measurement
NASA Astrophysics Data System (ADS)
Chen, Xi; Dai, Hong-Yi; Yang, Le; Zhang, Ming
2018-03-01
There is usually a limitation of weak interaction on the application of weak-value measurement. This limitation dominates the performance of the quantum state tomography toward a typical target in the finite and high-dimensional complex-valued superposition of its basis states, especially when the compressive sensing technique is also employed. Here we propose an alternative method of quantum state tomography, presented as a general model, toward such typical target via weak-value measurement to overcome such limitation. In this model the pointer for the weak-value measurement is a qubit, and the target-pointer coupling interaction is no longer needed within the weak interaction limitation, meanwhile this interaction under the compressive sensing can be described with the Taylor series of the unitary evolution operator. The postselection state at the target is the equal superposition of all basis states, and the pointer readouts are gathered under multiple Pauli operator measurements. The reconstructed quantum state is generated from an optimization algorithm of total variation augmented Lagrangian alternating direction algorithm. Furthermore, we demonstrate an example of this general model for the quantum state tomography toward the planar laser-energy distribution and discuss the relations among some parameters at both our general model and the original first-order approximate model for this tomography.
Junge, Randall E; Dutton, Christopher J; Knightly, Felicia; Williams, Cathy V; Rasambainarivo, Fidisoa T; Louis, Edward E
2008-12-01
Health and nutritional assessments of wildlife are important management tools and can provide a means to evaluate ecosystem health. Such examinations were performed on 37 white-fronted brown lemurs (Eulemur fulvus albifrons) from four sites in Madagascar. Comparison of health parameters between sites revealed statistically significant differences in body weight, body temperature, respiratory rate, hematology parameters (white cell count, hematocrit, segmented neutrophil count, and lymphocyte count), serum chemistry parameters (aspartate aminotransferase, alanine aminotransferase, serum alkaline phosphatase, total protein, albumin, phosphorus, calcium, sodium, chloride, and creatinine phosphokinase), and nutrition parameters (copper, zinc, ferritin, retinol, tocopherol, and 25-hydroxycholecalciferol). Two of 10 lemurs tested were positive for toxoplasmosis; none of 10 were positive for Cryptosporidium or Giardia. Enteric bacteria and endo- and ectoparasites were typical. Statistically different values in hematology and chemistry values probably do not reflect clinically significant differences, whereas nutrition parameter differences are likely related to season, soil, and forage availability.
NASA Astrophysics Data System (ADS)
Wells, J. R.; Kim, J. B.
2011-12-01
Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resource and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not do a full exploration of the parameter solution space. The solution space for DGVM parameter values are thought to be complex and non-linear; and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from published literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, "trial-and-error" methods. BIOMAP is a new DGVM created by fusing MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from published literature, and where those were not available, using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. Simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration. We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts in NEP, C storage, and ET and runoff; and thereby identify a highly important source of DGVM uncertainty
Data collection handbook to support modeling the impacts of radioactive material in soil
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, C.; Cheng, J.J.; Jones, L.G.
1993-04-01
A pathway analysis computer code called RESRAD has been developed for implementing US Department of Energy Residual Radioactive Material Guidelines. Hydrogeological, meteorological, geochemical, geometrical (size, area, depth), and material-related (soil, concrete) parameters are used in the RESRAD code. This handbook discusses parameter definitions, typical ranges, variations, measurement methodologies, and input screen locations. Although this handbook was developed primarily to support the application of RESRAD, the discussions and values are valid for other model applications.
Maljovec, D.; Liu, S.; Wang, B.; ...
2015-07-14
Here, dynamic probabilistic risk assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP and MELCOR) with simulation controller codes (e.g., RAVEN and ADAPT). Whereas system simulator codes model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic and operating procedures) and stochastic (e.g., component failures and parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by sampling values of a set of parameters and simulating the system behavior for that specific set of parameter values. For complex systems, a major challenge in using DPRA methodologies is to analyze the large number of scenarios generated,more » where clustering techniques are typically employed to better organize and interpret the data. In this paper, we focus on the analysis of two nuclear simulation datasets that are part of the risk-informed safety margin characterization (RISMC) boiling water reactor (BWR) station blackout (SBO) case study. We provide the domain experts a software tool that encodes traditional and topological clustering techniques within an interactive analysis and visualization environment, for understanding the structures of such high-dimensional nuclear simulation datasets. We demonstrate through our case study that both types of clustering techniques complement each other for enhanced structural understanding of the data.« less
Contact angles of wetting and water stability of soil structure
NASA Astrophysics Data System (ADS)
Kholodov, V. A.; Yaroslavtseva, N. V.; Yashin, M. A.; Frid, A. S.; Lazarev, V. I.; Tyugai, Z. N.; Milanovskiy, E. Yu.
2015-06-01
From the soddy-podzolic soils and typical chernozems of different texture and land use, dry 3-1 mm aggregates were isolated and sieved in water. As a result, water-stable aggregates and water-unstable particles composing dry 3-1 mm aggregates were obtained. These preparations were ground, and contact angles of wetting were determined by the static sessile drop method. The angles varied from 11° to 85°. In most cases, the values of the angles for the water-stable aggregates significantly exceeded those for the water-unstable components. In terms of carbon content in structural units, there was no correlation between these parameters. When analyzing the soil varieties separately, the significant positive correlation between the carbon content and contact angle of aggregates was revealed only for the loamy-clayey typical chernozem. Based on the multivariate analysis of variance, the value of contact wetting angle was shown to be determined by the structural units belonging to water-stable or water-unstable components of macroaggregates and by the land use type. In addition, along with these parameters, the texture has an indirect effect.
NASA Technical Reports Server (NTRS)
Srivatsangam, S.; Reiter, E. R.
1973-01-01
Extratropical eddy distributions in four months typical of the four seasons are treated in terms of temporal mean and temporal r.m.s. values of the geostrophic relative vorticity. The geographical distributions of these parameters at the 300 mb level show that the arithmetic mean fields are highly biased representatives of the extratropical eddy distributions. The zonal arithmetic means of these parameters are also presented. These show that the zonal-and-time mean relative vorticity is but a small fraction of the zonal mean of the temporal r.m.s. relative vorticity, K. The reasons for considering the r.m.s. values as the temporal normal values of vorticity in the extratropics are given in considerable detail. The parameter K is shown to be of considerable importance in locating the extratropical frontal jet streams (EFJ) in time-and-zonal average distributions. The study leads to an understanding of the seasonal migrations of the EFJ which have not been explored until now.
CONSTRAINTS ON THE SYNCHROTRON EMISSION MECHANISM IN GAMMA-RAY BURSTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beniamini, Paz; Piran, Tsvi, E-mail: paz.beniamini@mail.huji.ac.il, E-mail: tsvi.piran@mail.huji.ac.il
2013-05-20
We reexamine the general synchrotron model for gamma-ray bursts' (GRBs') prompt emission and determine the regime in the parameter phase space in which it is viable. We characterize a typical GRB pulse in terms of its peak energy, peak flux, and duration and use the latest Fermi observations to constrain the high-energy part of the spectrum. We solve for the intrinsic parameters at the emission region and find the possible parameter phase space for synchrotron emission. Our approach is general and it does not depend on a specific energy dissipation mechanism. Reasonable synchrotron solutions are found with energy ratios ofmore » 10{sup -4} < {epsilon}{sub B}/{epsilon}{sub e} < 10, bulk Lorentz factor values of 300 < {Gamma} < 3000, typical electrons' Lorentz factor values of 3 Multiplication-Sign 10{sup 3} < {gamma}{sub e} < 10{sup 5}, and emission radii of the order 10{sup 15} cm < R < 10{sup 17} cm. Most remarkable among those are the rather large values of the emission radius and the electron's Lorentz factor. We find that soft (with peak energy less than 100 keV) but luminous (isotropic luminosity of 1.5 Multiplication-Sign 10{sup 53}) pulses are inefficient. This may explain the lack of strong soft bursts. In cases when most of the energy is carried out by the kinetic energy of the flow, such as in the internal shocks, the synchrotron solution requires that only a small fraction of the electrons are accelerated to relativistic velocities by the shocks. We show that future observations of very high energy photons from GRBs by CTA could possibly determine all parameters of the synchrotron model or rule it out altogether.« less
Supernovae as probes of cosmic parameters: estimating the bias from under-dense lines of sight
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busti, V.C.; Clarkson, C.; Holanda, R.F.L., E-mail: vinicius.busti@uct.ac.za, E-mail: holanda@uepb.edu.br, E-mail: chris.clarkson@uct.ac.za
2013-11-01
Correctly interpreting observations of sources such as type Ia supernovae (SNe Ia) require knowledge of the power spectrum of matter on AU scales — which is very hard to model accurately. Because under-dense regions account for much of the volume of the universe, light from a typical source probes a mean density significantly below the cosmic mean. The relative sparsity of sources implies that there could be a significant bias when inferring distances of SNe Ia, and consequently a bias in cosmological parameter estimation. While the weak lensing approximation should in principle give the correct prediction for this, linear perturbationmore » theory predicts an effectively infinite variance in the convergence for ultra-narrow beams. We attempt to quantify the effect typically under-dense lines of sight might have in parameter estimation by considering three alternative methods for estimating distances, in addition to the usual weak lensing approximation. We find in each case this not only increases the errors in the inferred density parameters, but also introduces a bias in the posterior value.« less
Electrostatics of cysteine residues in proteins: parameterization and validation of a simple model.
Salsbury, Freddie R; Poole, Leslie B; Fetrow, Jacquelyn S
2012-11-01
One of the most popular and simple models for the calculation of pK(a) s from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pK(a) s. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pK(a) s; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pK(a) s. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pK(a) values (where the calculation should reproduce the pK(a) within experimental error). Both the general behavior of cysteines in proteins and the perturbed pK(a) in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pK(a) should be shifted, and validation of force field parameters for cysteine residues. Copyright © 2012 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, D; MacDougall, R
2016-06-15
Purpose: Accurate values for Kerma-Area-Product (KAP) are needed for patient dosimetry and quality control for exams utilizing radiographic and/or fluoroscopic imaging. The KAP measured using a typical direct KAP meter built with parallel-plate transmission ionization chamber is not precise and depends on the energy spectrum of diagnostic x-rays. This study compared the accuracy and reproducibility of KAP derived from system parameters with values measured with a direct KAP meter. Methods: IEC tolerance for displayed KAP is specified up to ± 35% above 2.5 Gy-cm{sup 2} and manufacturer’s specifications are typically ± 25%. KAP values from the direct KAP meter driftsmore » with time leading to replacement or re-calibration. More precise and consistent KAP is achievable utilizing a database of known radiation output for various system parameters. The integrated KAP meter was removed from a radiography system. A total of 48 measurements of air kerma were acquired at x-ray tube potential from 40 to 150 kVp with 10 kVp increment using ion chamber type external dosimeter at free-in-air geometry for four different types of filter combinations following the manufacturer’s service procedure. These data were used to create updated correction factors that determine air kerma computationally for given system parameters. Results of calculated KAP were evaluated against results using a calibrated ion chamber based dosimeter and a computed radiography imaging plate to measure x-ray field size. Results: The accuracy of calculated KAP from the system parameters was better within 4% deviation in all diagnostic x-ray tube potentials tested from 50 to 140 kVp. In contrast, deviations of up to 25% were measured from KAP displayed from the direct KAP meter. Conclusion: The “calculated KAP” approach provides the nominal advantage of improved accuracy and precision of displayed KAP as well as reduced cost of calibrating or replacing integrated KAP meters.« less
Estimating the uncertainty in thermochemical calculations for oxygen-hydrogen combustors
NASA Astrophysics Data System (ADS)
Sims, Joseph David
The thermochemistry program CEA2 was combined with the statistical thermodynamics program PAC99 in a Monte Carlo simulation to determine the uncertainty in several CEA2 output variables due to uncertainty in thermodynamic reference values for the reactant and combustion species. In all, six typical performance parameters were examined, along with the required intermediate calculations (five gas properties and eight stoichiometric coefficients), for three hydrogen-oxygen combustors: a main combustor, an oxidizer preburner and a fuel preburner. The three combustors were analyzed in two different modes: design mode, where, for the first time, the uncertainty in thermodynamic reference values---taken from the literature---was considered (inputs to CEA2 were specified and so had no uncertainty); and data reduction mode, where inputs to CEA2 did have uncertainty. The inputs to CEA2 were contrived experimental measurements that were intended to represent the typical combustor testing facility. In design mode, uncertainties in the performance parameters were on the order of 0.1% for the main combustor, on the order of 0.05% for the oxidizer preburner and on the order of 0.01% for the fuel preburner. Thermodynamic reference values for H2O were the dominant sources of uncertainty, as was the assigned enthalpy for liquid oxygen. In data reduction mode, uncertainties in performance parameters increased significantly as a result of the uncertainties in experimental measurements compared to uncertainties in thermodynamic reference values. Main combustor and fuel preburner theoretical performance values had uncertainties of about 0.5%, while the oxidizer preburner had nearly 2%. Associated experimentally-determined performance values for all three combustors were 3% to 4%. The dominant sources of uncertainty in this mode were the propellant flowrates. These results only apply to hydrogen-oxygen combustors and should not be generalized to every propellant combination. Species for a hydrogen-oxygen system are relatively simple, thereby resulting in low thermodynamic reference value uncertainties. Hydrocarbon combustors, solid rocket motors and hybrid rocket motors have combustion gases containing complex molecules that will likely have thermodynamic reference values with large uncertainties. Thus, every chemical system should be analyzed in a similar manner as that shown in this work.
Convergence in parameters and predictions using computational experimental design.
Hagen, David R; White, Jacob K; Tidor, Bruce
2013-08-06
Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.
Lança, L; Silva, A; Alves, E; Serranheira, F; Correia, M
2008-01-01
Typical distribution of exposure parameters in plain radiography is unknown in Portugal. This study aims to identify exposure parameters that are being used in plain radiography in the Lisbon area and to compare the collected data with European references [Commission of European Communities (CEC) guidelines]. The results show that in four examinations (skull, chest, lumbar spine and pelvis), there is a strong tendency of using exposure times above the European recommendation. The X-ray tube potential values (in kV) are below the recommended values from CEC guidelines. This study shows that at a local level (Lisbon region), radiographic practice does not comply with CEC guidelines concerning exposure techniques. Further national/local studies are recommended with the objective to improve exposure optimisation and technical procedures in plain radiography. This study also suggests the need to establish national/local diagnostic reference levels and to proceed to effective measurements for exposure optimisation.
Effect of Biological and Mass Transfer Parameter Uncertainty on N₂O Emission Estimates from WRRFs.
Song, Kang; Harper, Willie F; Takeuchi, Yuki; Hosomi, Masaaki; Terada, Akihiko
2017-07-01
This research used the detailed activated sludge model (ASM) to investigate the effect of parameter uncertainty on nitrous oxide (N2O) emissions from biological wastewater treatment systems. Monte Carlo simulations accounted for uncertainty in the values of the microbial growth parameters and in the volumetric mass transfer coefficient for dissolved oxygen (kLaDO), and the results show that the detailed ASM predicted N2O emission of less than 4% (typically 1%) of the total influent
NASA Astrophysics Data System (ADS)
Glover, Paul W. J.
2016-07-01
When scientists apply Archie's first law they often include an extra parameter a, which was introduced about 10 years after the equation's first publication by Winsauer et al. (1952), and which is sometimes called the "tortuosity" or "lithology" parameter. This parameter is not, however, theoretically justified. Paradoxically, the Winsauer et al. (1952) form of Archie's law often performs better than the original, more theoretically correct version. The difference in the cementation exponent calculated from these two forms of Archie's law is important, and can lead to a misestimation of reserves by at least 20 % for typical reservoir parameter values. We have examined the apparent paradox, and conclude that while the theoretical form of the law is correct, the data that we have been analysing with Archie's law have been in error. There are at least three types of systematic error that are present in most measurements: (i) a porosity error, (ii) a pore fluid salinity error, and (iii) a temperature error. Each of these systematic errors is sufficient to ensure that a non-unity value of the parameter a is required in order to fit the electrical data well. Fortunately, the inclusion of this parameter in the fit has compensated for the presence of the systematic errors in the electrical and porosity data, leading to a value of cementation exponent that is correct. The exceptions are those cementation exponents that have been calculated for individual core plugs. We make a number of recommendations for reducing the systematic errors that contribute to the problem and suggest that the value of the parameter a may now be used as an indication of data quality.
Approximate, computationally efficient online learning in Bayesian spiking neurons.
Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André
2014-03-01
Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.
Study on Adaptive Parameter Determination of Cluster Analysis in Urban Management Cases
NASA Astrophysics Data System (ADS)
Fu, J. Y.; Jing, C. F.; Du, M. Y.; Fu, Y. L.; Dai, P. P.
2017-09-01
The fine management for cities is the important way to realize the smart city. The data mining which uses spatial clustering analysis for urban management cases can be used in the evaluation of urban public facilities deployment, and support the policy decisions, and also provides technical support for the fine management of the city. Aiming at the problem that DBSCAN algorithm which is based on the density-clustering can not realize parameter adaptive determination, this paper proposed the optimizing method of parameter adaptive determination based on the spatial analysis. Firstly, making analysis of the function Ripley's K for the data set to realize adaptive determination of global parameter MinPts, which means setting the maximum aggregation scale as the range of data clustering. Calculating every point object's highest frequency K value in the range of Eps which uses K-D tree and setting it as the value of clustering density to realize the adaptive determination of global parameter MinPts. Then, the R language was used to optimize the above process to accomplish the precise clustering of typical urban management cases. The experimental results based on the typical case of urban management in XiCheng district of Beijing shows that: The new DBSCAN clustering algorithm this paper presents takes full account of the data's spatial and statistical characteristic which has obvious clustering feature, and has a better applicability and high quality. The results of the study are not only helpful for the formulation of urban management policies and the allocation of urban management supervisors in XiCheng District of Beijing, but also to other cities and related fields.
Choosing a Transformation in Analyses of Insect Counts from Contagious Distributions with Low Means
W.D. Pepper; S.J. Zarnoch; G.L. DeBarr; P. de Groot; C.D. Tangren
1997-01-01
Guidelines based on computer simulation are suggested for choosing a transformation of insect counts from negative binomial distributions with low mean counts and high levels of contagion. Typical values and ranges of negative binomial model parameters were determined by fitting the model to data from 19 entomological field studies. Random sampling of negative binomial...
MLBCD: a machine learning tool for big clinical data.
Luo, Gang
2015-01-01
Predictive modeling is fundamental for extracting value from large clinical data sets, or "big clinical data," advancing clinical research, and improving healthcare. Machine learning is a powerful approach to predictive modeling. Two factors make machine learning challenging for healthcare researchers. First, before training a machine learning model, the values of one or more model parameters called hyper-parameters must typically be specified. Due to their inexperience with machine learning, it is hard for healthcare researchers to choose an appropriate algorithm and hyper-parameter values. Second, many clinical data are stored in a special format. These data must be iteratively transformed into the relational table format before conducting predictive modeling. This transformation is time-consuming and requires computing expertise. This paper presents our vision for and design of MLBCD (Machine Learning for Big Clinical Data), a new software system aiming to address these challenges and facilitate building machine learning predictive models using big clinical data. The paper describes MLBCD's design in detail. By making machine learning accessible to healthcare researchers, MLBCD will open the use of big clinical data and increase the ability to foster biomedical discovery and improve care.
Fuzzy Performance between Surface Fitting and Energy Distribution in Turbulence Runner
Liang, Zhongwei; Liu, Xiaochu; Ye, Bangyan; Brauwer, Richard Kars
2012-01-01
Because the application of surface fitting algorithms exerts a considerable fuzzy influence on the mathematical features of kinetic energy distribution, their relation mechanism in different external conditional parameters must be quantitatively analyzed. Through determining the kinetic energy value of each selected representative position coordinate point by calculating kinetic energy parameters, several typical algorithms of complicated surface fitting are applied for constructing microkinetic energy distribution surface models in the objective turbulence runner with those obtained kinetic energy values. On the base of calculating the newly proposed mathematical features, we construct fuzzy evaluation data sequence and present a new three-dimensional fuzzy quantitative evaluation method; then the value change tendencies of kinetic energy distribution surface features can be clearly quantified, and the fuzzy performance mechanism discipline between the performance results of surface fitting algorithms, the spatial features of turbulence kinetic energy distribution surface, and their respective environmental parameter conditions can be quantitatively analyzed in detail, which results in the acquirement of final conclusions concerning the inherent turbulence kinetic energy distribution performance mechanism and its mathematical relation. A further turbulence energy quantitative study can be ensured. PMID:23213287
Implications from the Upper Limit of Radio Afterglow Emission of FRB 131104/Swift J0644.5-5111
NASA Astrophysics Data System (ADS)
Gao, He; Zhang, Bing
2017-02-01
A γ-ray transient, Swift J0644.5-5111, has been claimed to be associated with FRB 131104. However, a long-term radio imaging follow-up observation only placed an upper limit on the radio afterglow flux of Swift J0644.5-5111. Applying the external shock model, we perform a detailed constraint on the afterglow parameters for the FRB 131104/Swift J0644.5-5111 system. We find that for the commonly used microphysics shock parameters (e.g., {ɛ }e=0.1, {ɛ }B=0.01, and p = 2.3), if the fast radio burst (FRB) is indeed cosmological as inferred from its measured dispersion measure (DM), the ambient medium number density should be ≤slant {10}-3 {{cm}}-3, which is the typical value for a compact binary merger environment but disfavors a massive star origin. Assuming a typical ISM density, one would require that the redshift of the FRB be much smaller than the value inferred from DM (z\\ll 0.1), implying a non-cosmological origin of DM. The constraints are much looser if one adopts smaller {ɛ }B and {ɛ }e values, as observed in some gamma-ray burst afterglows. The FRB 131104/Swift J0644.5-5111 association remains plausible. We critically discuss possible progenitor models for the system.
Jackson, Neal
2015-01-01
I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. There are two broad categories of measurements. The first uses individual astrophysical objects which have some property that allows their intrinsic luminosity or size to be determined, or allows the determination of their distance by geometric means. The second category comprises the use of all-sky cosmic microwave background, or correlations between large samples of galaxies, to determine information about the geometry of the Universe and hence the Hubble constant, typically in a combination with other cosmological parameters. Many, but not all, object-based measurements give H 0 values of around 72-74 km s -1 Mpc -1 , with typical errors of 2-3 km s -1 Mpc -1 . This is in mild discrepancy with CMB-based measurements, in particular those from the Planck satellite, which give values of 67-68 km s -1 Mpc -1 and typical errors of 1-2 km s -1 Mpc -1 . The size of the remaining systematics indicate that accuracy rather than precision is the remaining problem in a good determination of the Hubble constant. Whether a discrepancy exists, and whether new physics is needed to resolve it, depends on details of the systematics of the object-based methods, and also on the assumptions about other cosmological parameters and which datasets are combined in the case of the all-sky methods.
Structure of thermal pair clouds around gamma-ray-emitting black holes
NASA Technical Reports Server (NTRS)
Liang, Edison P.
1991-01-01
Using certain simplifying assumptions, the general structure of a quasi-spherical thermal pair-balanced cloud surrounding an accreting black hole is derived from first principles. Pair-dominated hot solutions exist only for a restricted range of the viscosity parameter. These results are applied as examples to the 1979 HEAO 3 gamma-ray data of Cygnus X-1 and the Galactic center. Values are obtained for the viscosity parameter lying in the range of about 0.1-0.01. Since the lack of synchrotron soft photons requires the magnetic field to be typically less than 1 percent of the equipartition value, a magnetic field cannot be the main contributor to the viscous stress of the inner accretion flow, at least during the high gamma-ray states.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-02-01
This report examines the potential for increasing the rate of production of natural gas from the East Cameron Block 271 Field in the Gulf of Mexico Outer Continental Shelf. Proved reserves are estimated using all available reservoir data, including well logs and pressure tests, and cost parameters typical in the area. Alternative schedules for future production are devised, and net present values calculated from which the maximum production rate that also maximizes net present value is determined.
Screening Models of Aquifer Heterogeneity Using the Flow Dimension
NASA Astrophysics Data System (ADS)
Walker, D. D.; Cello, P. A.; Roberts, R. M.; Valocchi, A. J.
2007-12-01
Despite advances in test interpretation and modeling, typical groundwater modeling studies only indirectly use the parameters and information inferred from hydraulic tests. In particular, the Generalized Radial Flow approach to test interpretation infers the flow dimension, a parameter describing the geometry of the flow field during a hydraulic test. Noninteger values of the flow dimension often are inferred for tests in highly heterogeneous aquifers, yet subsequent modeling studies typically ignore the flow dimension. Monte Carlo analyses of detailed numerical models of aquifer tests examine the flow dimension for several stochastic models of heterogeneous transmissivity, T(x). These include multivariate lognormal, fractional Brownian motion, a site percolation network, and discrete linear features with lengths distributed as power-law. The behavior of the simulated flow dimensions are compared to the flow dimensions observed for multiple aquifer tests in a fractured dolomite aquifer in the Great Lakes region of North America. The combination of multiple hydraulic tests, observed fracture patterns, and the Monte Carlo results are used to screen models of heterogeneity and their parameters for subsequent groundwater flow modeling.
Weak-value amplification and optimal parameter estimation in the presence of correlated noise
NASA Astrophysics Data System (ADS)
Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.
2017-11-01
We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.
A Hierarchical Bayesian Model for Calibrating Estimates of Species Divergence Times
Heath, Tracy A.
2012-01-01
In Bayesian divergence time estimation methods, incorporating calibrating information from the fossil record is commonly done by assigning prior densities to ancestral nodes in the tree. Calibration prior densities are typically parametric distributions offset by minimum age estimates provided by the fossil record. Specification of the parameters of calibration densities requires the user to quantify his or her prior knowledge of the age of the ancestral node relative to the age of its calibrating fossil. The values of these parameters can, potentially, result in biased estimates of node ages if they lead to overly informative prior distributions. Accordingly, determining parameter values that lead to adequate prior densities is not straightforward. In this study, I present a hierarchical Bayesian model for calibrating divergence time analyses with multiple fossil age constraints. This approach applies a Dirichlet process prior as a hyperprior on the parameters of calibration prior densities. Specifically, this model assumes that the rate parameters of exponential prior distributions on calibrated nodes are distributed according to a Dirichlet process, whereby the rate parameters are clustered into distinct parameter categories. Both simulated and biological data are analyzed to evaluate the performance of the Dirichlet process hyperprior. Compared with fixed exponential prior densities, the hierarchical Bayesian approach results in more accurate and precise estimates of internal node ages. When this hyperprior is applied using Markov chain Monte Carlo methods, the ages of calibrated nodes are sampled from mixtures of exponential distributions and uncertainty in the values of calibration density parameters is taken into account. PMID:22334343
Understanding identifiability as a crucial step in uncertainty assessment
NASA Astrophysics Data System (ADS)
Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.
2016-12-01
The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.
Intrinsic physical conditions and structure of relativistic jets in active galactic nuclei
NASA Astrophysics Data System (ADS)
Nokhrina, E. E.; Beskin, V. S.; Kovalev, Y. Y.; Zheltoukhov, A. A.
2015-03-01
The analysis of the frequency dependence of the observed shift of the cores of relativistic jets in active galactic nuclei (AGNs) allows us to evaluate the number density of the outflowing plasma ne and, hence, the multiplicity parameter λ = ne/nGJ, where nGJ is the Goldreich-Julian number density. We have obtained the median value for λmed = 3 × 1013 and the median value for the Michel magnetization parameter σM, med = 8 from an analysis of 97 sources. Since the magnetization parameter can be interpreted as the maximum possible Lorentz factor Γ of the bulk motion which can be obtained for relativistic magnetohydrodynamic (MHD) flow, this estimate is in agreement with the observed superluminal motion of bright features in AGN jets. Moreover, knowing these key parameters, one can determine the transverse structure of the flow. We show that the poloidal magnetic field and particle number density are much larger in the centre of the jet than near the jet boundary. The MHD model can also explain the typical observed level of jet acceleration. Finally, casual connectivity of strongly collimated jets is discussed.
NASA Astrophysics Data System (ADS)
Gravestijn, R. M.; Drake, J. R.; Hedqvist, A.; Rachlew, E.
2004-01-01
A loop voltage is required to sustain the reversed-field pinch (RFP) equilibrium. The configuration is characterized by redistribution of magnetic helicity but with the condition that the total helicity is maintained constant. The magnetic field shell penetration time, tgrs, has a critical role in the stability and performance of the RFP. Confinement in the EXTRAP device has been studied with two values of tgrs, first (EXTRAP-T2) with tgrs of the order of the typical relaxation cycle timescale and then (EXTRAP-T2R) with tgrs much longer than the relaxation cycle timescale, but still much shorter than the pulse length. Plasma parameters show significant improvements in confinement in EXTRAP-T2R. The typical loop voltage required to sustain comparable electron poloidal beta values is a factor of 3 lower in the EXTRAP-T2R device. The improvement is attributed to reduced magnetic turbulence.
NASA Astrophysics Data System (ADS)
Lim, Kyoung Jae; Park, Youn Shik; Kim, Jonggun; Shin, Yong-Chul; Kim, Nam Won; Kim, Seong Joon; Jeon, Ji-Hong; Engel, Bernard A.
2010-07-01
Many hydrologic and water quality computer models have been developed and applied to assess hydrologic and water quality impacts of land use changes. These models are typically calibrated and validated prior to their application. The Long-Term Hydrologic Impact Assessment (L-THIA) model was applied to the Little Eagle Creek (LEC) watershed and compared with the filtered direct runoff using BFLOW and the Eckhardt digital filter (with a default BFI max value of 0.80 and filter parameter value of 0.98), both available in the Web GIS-based Hydrograph Analysis Tool, called WHAT. The R2 value and the Nash-Sutcliffe coefficient values were 0.68 and 0.64 with BFLOW, and 0.66 and 0.63 with the Eckhardt digital filter. Although these results indicate that the L-THIA model estimates direct runoff reasonably well, the filtered direct runoff values using BFLOW and Eckhardt digital filter with the default BFI max and filter parameter values do not reflect hydrological and hydrogeological situations in the LEC watershed. Thus, a BFI max GA-Analyzer module (BFI max Genetic Algorithm-Analyzer module) was developed and integrated into the WHAT system for determination of the optimum BFI max parameter and filter parameter of the Eckhardt digital filter. With the automated recession curve analysis method and BFI max GA-Analyzer module of the WHAT system, the optimum BFI max value of 0.491 and filter parameter value of 0.987 were determined for the LEC watershed. The comparison of L-THIA estimates with filtered direct runoff using an optimized BFI max and filter parameter resulted in an R2 value of 0.66 and the Nash-Sutcliffe coefficient value of 0.63. However, L-THIA estimates calibrated with the optimized BFI max and filter parameter increased by 33% and estimated NPS pollutant loadings increased by more than 20%. This indicates L-THIA model direct runoff estimates can be incorrect by 33% and NPS pollutant loading estimation by more than 20%, if the accuracy of the baseflow separation method is not validated for the study watershed prior to model comparison. This study shows the importance of baseflow separation in hydrologic and water quality modeling using the L-THIA model.
Discovery of a suspected giant radio galaxy with the KAT-7 array
NASA Astrophysics Data System (ADS)
Colafrancesco, S.; Mhlahlo, N.; Jarrett, T.; Oozeer, N.; Marchegiani, P.
2016-02-01
We detect a new suspected giant radio galaxy (GRG) discovered by KAT-7. The GRG core is identified with the Wide-field Infrared Survey Explorer source J013313.50-130330.5, an extragalactic source based on its infrared colours and consistent with a misaligned active galactic nuclei-type spectrum at z ≈ 0.3. The multi-ν spectral energy distribution (SED) of the object associated with the GRG core shows a synchrotron peak at ν ≈ 1014 Hz consistent with the SED of a radio galaxy blazar-like core. The angular size of the lobes are ˜4 arcmin for the NW lobe and ˜1.2 arcmin for the SE lobe, corresponding to projected linear distances of ˜1078 kpc and ˜324 kpc, respectively. The best-fitting parameters for the SED of the GRG core and the value of jet boosting parameter δ = 2, indicate that the GRG jet has maximum inclination θ ≈ 30 deg with respect to the line of sight, a value obtained for δ = Γ, while the minimum value of θ is not constrained due to the degeneracy existing with the value of Lorentz factor Γ. Given the photometric redshift z ≈ 0.3, this GRG shows a core luminosity of P1.4 GHz ≈ 5.52 × 1024 W Hz-1, and a luminosity P1.4 GHz ≈ 1.29 × 1025 W Hz-1 for the NW lobe and P1.4 GHz ≈ 0.46 × 1025 W Hz-1 for the SE lobe, consistent with the typical GRG luminosities. The radio lobes show a fractional linear polarization ≈9 per cent consistent with typical values found in other GRG lobes.
Chimera patterns in two-dimensional networks of coupled neurons.
Schmidt, Alexander; Kasimatis, Theodoros; Hizanidis, Johanne; Provata, Astero; Hövel, Philipp
2017-03-01
We discuss synchronization patterns in networks of FitzHugh-Nagumo and leaky integrate-and-fire oscillators coupled in a two-dimensional toroidal geometry. A common feature between the two models is the presence of fast and slow dynamics, a typical characteristic of neurons. Earlier studies have demonstrated that both models when coupled nonlocally in one-dimensional ring networks produce chimera states for a large range of parameter values. In this study, we give evidence of a plethora of two-dimensional chimera patterns of various shapes, including spots, rings, stripes, and grids, observed in both models, as well as additional patterns found mainly in the FitzHugh-Nagumo system. Both systems exhibit multistability: For the same parameter values, different initial conditions give rise to different dynamical states. Transitions occur between various patterns when the parameters (coupling range, coupling strength, refractory period, and coupling phase) are varied. Many patterns observed in the two models follow similar rules. For example, the diameter of the rings grows linearly with the coupling radius.
A Novel Degradation Identification Method for Wind Turbine Pitch System
NASA Astrophysics Data System (ADS)
Guo, Hui-Dong
2018-04-01
It’s difficult for traditional threshold value method to identify degradation of operating equipment accurately. An novel degradation evaluation method suitable for wind turbine condition maintenance strategy implementation was proposed in this paper. Based on the analysis of typical variable-speed pitch-to-feather control principle and monitoring parameters for pitch system, a multi input multi output (MIMO) regression model was applied to pitch system, where wind speed, power generation regarding as input parameters, wheel rotation speed, pitch angle and motor driving currency for three blades as output parameters. Then, the difference between the on-line measurement and the calculated value from the MIMO regression model applying least square support vector machines (LSSVM) method was defined as the Observed Vector of the system. The Gaussian mixture model (GMM) was applied to fitting the distribution of the multi dimension Observed Vectors. Applying the model established, the Degradation Index was calculated using the SCADA data of a wind turbine damaged its pitch bearing retainer and rolling body, which illustrated the feasibility of the provided method.
Chimera patterns in two-dimensional networks of coupled neurons
NASA Astrophysics Data System (ADS)
Schmidt, Alexander; Kasimatis, Theodoros; Hizanidis, Johanne; Provata, Astero; Hövel, Philipp
2017-03-01
We discuss synchronization patterns in networks of FitzHugh-Nagumo and leaky integrate-and-fire oscillators coupled in a two-dimensional toroidal geometry. A common feature between the two models is the presence of fast and slow dynamics, a typical characteristic of neurons. Earlier studies have demonstrated that both models when coupled nonlocally in one-dimensional ring networks produce chimera states for a large range of parameter values. In this study, we give evidence of a plethora of two-dimensional chimera patterns of various shapes, including spots, rings, stripes, and grids, observed in both models, as well as additional patterns found mainly in the FitzHugh-Nagumo system. Both systems exhibit multistability: For the same parameter values, different initial conditions give rise to different dynamical states. Transitions occur between various patterns when the parameters (coupling range, coupling strength, refractory period, and coupling phase) are varied. Many patterns observed in the two models follow similar rules. For example, the diameter of the rings grows linearly with the coupling radius.
NASA Astrophysics Data System (ADS)
Miksovsky, J.; Raidl, A.
Time delays phase space reconstruction represents one of useful tools of nonlinear time series analysis, enabling number of applications. Its utilization requires the value of time delay to be known, as well as the value of embedding dimension. There are sev- eral methods how to estimate both these parameters. Typically, time delay is computed first, followed by embedding dimension. Our presented approach is slightly different - we reconstructed phase space for various combinations of mentioned parameters and used it for prediction by means of the nearest neighbours in the phase space. Then some measure of prediction's success was computed (correlation or RMSE, e.g.). The position of its global maximum (minimum) should indicate the suitable combination of time delay and embedding dimension. Several meteorological (particularly clima- tological) time series were used for the computations. We have also created a MS- Windows based program in order to implement this approach - its basic features will be presented as well.
NASA Astrophysics Data System (ADS)
Malyshev, A. V.; Petrova, A. B.; Sokolovskiy, A. N.; Surzhikov, A. P.
2018-06-01
The method for evaluating the integral defects level and chemical homogeneity of ferrite ceramics based on temperature dependence analysis of initial permeability is suggested. A phenomenological expression for the description of such dependence was suggested and an interpretation of its main parameters was given. It was shown, that the main criterion of the integral defects level of ferrite ceramics is relation of two parameters correlating with elastic stress value in a material. An indicator of structural perfection can be a maximum value of initial permeability close to Curie point as well. The temperature dependences of initial permeability have analyzed for samples sintered in laboratory conditions and for the ferrite industrial product. The proposed method allows controlling integral defects level of the soft ferrite products and has high sensitivity compare to typical X-ray methods.
Optimizing Vowel Formant Measurements in Four Acoustic Analysis Systems for Diverse Speaker Groups
Derdemezis, Ekaterini; Kent, Ray D.; Fourakis, Marios; Reinicke, Emily L.; Bolt, Daniel M.
2016-01-01
Purpose This study systematically assessed the effects of select linear predictive coding (LPC) analysis parameter manipulations on vowel formant measurements for diverse speaker groups using 4 trademarked Speech Acoustic Analysis Software Packages (SAASPs): CSL, Praat, TF32, and WaveSurfer. Method Productions of 4 words containing the corner vowels were recorded from 4 speaker groups with typical development (male and female adults and male and female children) and 4 speaker groups with Down syndrome (male and female adults and male and female children). Formant frequencies were determined from manual measurements using a consensus analysis procedure to establish formant reference values, and from the 4 SAASPs (using both the default analysis parameters and with adjustments or manipulations to select parameters). Smaller differences between values obtained from the SAASPs and the consensus analysis implied more optimal analysis parameter settings. Results Manipulations of default analysis parameters in CSL, Praat, and TF32 yielded more accurate formant measurements, though the benefit was not uniform across speaker groups and formants. In WaveSurfer, manipulations did not improve formant measurements. Conclusions The effects of analysis parameter manipulations on accuracy of formant-frequency measurements varied by SAASP, speaker group, and formant. The information from this study helps to guide clinical and research applications of SAASPs. PMID:26501214
A dual theory of price and value in a meso-scale economic model with stochastic profit rate
NASA Astrophysics Data System (ADS)
Greenblatt, R. E.
2014-12-01
The problem of commodity price determination in a market-based, capitalist economy has a long and contentious history. Neoclassical microeconomic theories are based typically on marginal utility assumptions, while classical macroeconomic theories tend to be value-based. In the current work, I study a simplified meso-scale model of a commodity capitalist economy. The production/exchange model is represented by a network whose nodes are firms, workers, capitalists, and markets, and whose directed edges represent physical or monetary flows. A pair of multivariate linear equations with stochastic input parameters represent physical (supply/demand) and monetary (income/expense) balance. The input parameters yield a non-degenerate profit rate distribution across firms. Labor time and price are found to be eigenvector solutions to the respective balance equations. A simple relation is derived relating the expected value of commodity price to commodity labor content. Results of Monte Carlo simulations are consistent with the stochastic price/labor content relation.
Computation of the anharmonic orbits in two piecewise monotonic maps with a single discontinuity
NASA Astrophysics Data System (ADS)
Li, Yurong; Du, Zhengdong
2017-02-01
In this paper, the bifurcation values for two typical piecewise monotonic maps with a single discontinuity are computed. The variation of the parameter of those maps leads to a sequence of border-collision and period-doubling bifurcations, generating a sequence of anharmonic orbits on the boundary of chaos. The border-collision and period-doubling bifurcation values are computed by the word-lifting technique and the Maple fsolve function or the Newton-Raphson method, respectively. The scaling factors which measure the convergent rates of the bifurcation values and the width of the stable periodic windows, respectively, are investigated. We found that these scaling factors depend on the parameters of the maps, implying that they are not universal. Moreover, if one side of the maps is linear, our numerical results suggest that those quantities converge increasingly. In particular, for the linear-quadratic case, they converge to one of the Feigenbaum constants δ _F= 4.66920160\\cdots.
Rapid Spectral Variability of the Symbiotic Star CH Cyg During One Night
NASA Astrophysics Data System (ADS)
Mikayilov, Kh. M.; Rustamov, B. N.; Alakbarov, I. A.; Rustamova, A. B.
2017-06-01
During one night (15.07.2015), within 6 hours 14 echelle spectrograms of this star were obtained. It was revealed that the profile of Ha and Hβ lines have two-component emission structure with a central absorption, parameters which vary from spectrum to spectrum during the night. The intensity of blue emission component (V) have been changed strongly during the night: the value of ratio of intensities of violet and red components (V/R) of line Hα decreased from 0:93 to 0:49 in the beginning and then increased to a value of 0.97. The synchronous variations of values of V/R for the Hα and Hβ lines have been revealed. The parameters of blue emission components of Hα and of line Hel λ5876 Å are correlated. We propose that revealed by us the rapid spectral changes in the spectrum of the star CH Cyg could be connected with a flickering in the optical brightness of the star that is typical for the active phase of this system.
Balancing income and cost in red deer management.
Skonhoft, Anders; Veiberg, Vebjørn; Gauteplass, Asle; Olaussen, Jon Olaf; Meisingset, Erling L; Mysterud, Atle
2013-01-30
This paper presents a bioeconomic analysis of a red deer population within a Norwegian institutional context. This population is managed by a well-defined manager, typically consisting of many landowners operating in a cooperative manner, with the goal of maximizing the present-value hunting related income while taking browsing and grazing damages into account. The red deer population is structured in five categories of animals (calves, female and male yearlings, adult females and adult males). It is shown that differences in the per-animal meat values and survival rates ('biological discounted' values) are instrumental in determining the optimal harvest composition. Fertility plays no direct role. It is argued that this is a general result working in stage-structured models with harvest values. In the numerical illustration it is shown that the optimal harvest pattern stays quite stable under various parameter changes. It is revealed which parameters and harvest restrictions that is most important. We also show that the current harvest pattern involves too much yearling harvest compared with the economically efficient level. Copyright © 2012 Elsevier Ltd. All rights reserved.
Fieselmann, Andreas; Dennerlein, Frank; Deuerling-Zheng, Yu; Boese, Jan; Fahrig, Rebecca; Hornegger, Joachim
2011-06-21
Filtered backprojection is the basis for many CT reconstruction tasks. It assumes constant attenuation values of the object during the acquisition of the projection data. Reconstruction artifacts can arise if this assumption is violated. For example, contrast flow in perfusion imaging with C-arm CT systems, which have acquisition times of several seconds per C-arm rotation, can cause this violation. In this paper, we derived and validated a novel spatio-temporal model to describe these kinds of artifacts. The model separates the temporal dynamics due to contrast flow from the scan and reconstruction parameters. We introduced derivative-weighted point spread functions to describe the spatial spread of the artifacts. The model allows prediction of reconstruction artifacts for given temporal dynamics of the attenuation values. Furthermore, it can be used to systematically investigate the influence of different reconstruction parameters on the artifacts. We have shown that with optimized redundancy weighting function parameters the spatial spread of the artifacts around a typical arterial vessel can be reduced by about 70%. Finally, an inversion of our model could be used as the basis for novel dynamic reconstruction algorithms that further minimize these artifacts.
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
Structural Analysis of Cubane-Type Iron Clusters
Tan, Lay Ling; Holm, R. H.; Lee, Sonny C.
2013-01-01
The generalized cluster type [M4(μ3-Q)4Ln]x contains the cubane-type [M4Q4]z core unit that can approach, but typically deviates from, perfect Td symmetry. The geometric properties of this structure have been analyzed with reference to Td symmetry by a new protocol. Using coordinates of M and Q atoms, expressions have been derived for interatomic separations, bond angles, and volumes of tetrahedral core units (M4, Q4) and the total [M4Q4] core (as a tetracapped M4 tetrahedron). Values for structural parameters have been calculated from observed average values for a given cluster type. Comparison of calculated and observed values measures the extent of deviation of a given parameter from that required in an exact tetrahedral structure. The procedure has been applied to the structures of over 130 clusters containing [Fe4Q4] (Q = S2−, Se2−, Te2−, [NPR3]−, [NR]2−) units, of which synthetic and biological sulfide-bridged clusters constitute the largest subset. General structural features and trends in structural parameters are identified and summarized. An extensive database of structural properties (distances, angles, volumes) has been compiled in Supporting Information. PMID:24072952
Structural Analysis of Cubane-Type Iron Clusters.
Tan, Lay Ling; Holm, R H; Lee, Sonny C
2013-07-13
The generalized cluster type [M 4 (μ 3 -Q) 4 L n ] x contains the cubane-type [M 4 Q 4 ] z core unit that can approach, but typically deviates from, perfect T d symmetry. The geometric properties of this structure have been analyzed with reference to T d symmetry by a new protocol. Using coordinates of M and Q atoms, expressions have been derived for interatomic separations, bond angles, and volumes of tetrahedral core units (M 4 , Q 4 ) and the total [M 4 Q 4 ] core (as a tetracapped M 4 tetrahedron). Values for structural parameters have been calculated from observed average values for a given cluster type. Comparison of calculated and observed values measures the extent of deviation of a given parameter from that required in an exact tetrahedral structure. The procedure has been applied to the structures of over 130 clusters containing [Fe 4 Q 4 ] (Q = S 2- , Se 2- , Te 2- , [NPR 3 ] - , [NR] 2- ) units, of which synthetic and biological sulfide-bridged clusters constitute the largest subset. General structural features and trends in structural parameters are identified and summarized. An extensive database of structural properties (distances, angles, volumes) has been compiled in Supporting Information.
Optimal Wonderful Life Utility Functions in Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Tumer, Kagan; Swanson, Keith (Technical Monitor)
2000-01-01
The mathematics of Collective Intelligence (COINs) is concerned with the design of multi-agent systems so as to optimize an overall global utility function when those systems lack centralized communication and control. Typically in COINs each agent runs a distinct Reinforcement Learning (RL) algorithm, so that much of the design problem reduces to how best to initialize/update each agent's private utility function, as far as the ensuing value of the global utility is concerned. Traditional team game solutions to this problem assign to each agent the global utility as its private utility function. In previous work we used the COIN framework to derive the alternative Wonderful Life Utility (WLU), and experimentally established that having the agents use it induces global utility performance up to orders of magnitude superior to that induced by use of the team game utility. The WLU has a free parameter (the clamping parameter) which we simply set to zero in that previous work. Here we derive the optimal value of the clamping parameter, and demonstrate experimentally that using that optimal value can result in significantly improved performance over that of clamping to zero, over and above the improvement beyond traditional approaches.
Compression for an effective management of telemetry data
NASA Technical Reports Server (NTRS)
Arcangeli, J.-P.; Crochemore, M.; Hourcastagnou, J.-N.; Pin, J.-E.
1993-01-01
A Technological DataBase (T.D.B.) records all the values taken by the physical on-board parameters of a satellite since launch time. The amount of temporal data is very large (about 15 Gbytes for the satellite TDF1) and an efficient system must allow users to have a fast access to any value. This paper presents a new solution for T.D.B. management. The main feature of our new approach is the use of lossless data compression methods. Several parametrizable data compression algorithms based on substitution, relative difference and run-length encoding are available. Each of them is dedicated to a specific type of variation of the parameters' values. For each parameter, an analysis of stability is performed at decommutation time, and then the best method is chosen and run. A prototype intended to process different sorts of satellites has been developed. Its performances are well beyond the requirements and prove that data compression is both time and space efficient. For instance, the amount of data for TDF1 has been reduced to 1.05 Gbytes (compression ratio is 1/13) and access time for a typical query has been reduced from 975 seconds to 14 seconds.
Strongly enhanced 1/f - noise level in {kappa}-(BEDT-TTF){sub 2}X salts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandenburg, J.; Muller, J.; Wirth, S.
2010-01-01
Fluctuation spectroscopy has been used as an investigative tool to understand the scattering mechanisms of carriers and their low-frequency dynamics in quasi-two-dimensional organic conductors ?-(BEDT-TTF)2X. We report on the very high noise level in these systems as determined from Hooge's empirical law to quantify 1/f-type noise in solids. The value of the Hooge parameter ?H, i.e. the normalized noise level, of 105-107 is several orders of magnitude higher than values of ?Hnot, vert, similar10-2-10-3 typically found in homogeneous metals and semiconductors.
Energy levels distribution in supersaturated silicon with titanium for photovoltaic applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pérez, E., E-mail: eduper@ele.uva.es; Castán, H.; García, H.
2015-01-12
In the attempt to form an intermediate band in the bandgap of silicon substrates to give it the capability to absorb infrared radiation, we studied the deep levels in supersaturated silicon with titanium. The technique used to characterize the energy levels was the thermal admittance spectroscopy. Our experimental results showed that in samples with titanium concentration just under Mott limit there was a relationship among the activation energy value and the capture cross section value. This relationship obeys to the well known Meyer-Neldel rule, which typically appears in processes involving multiple excitations, like carrier capture/emission in deep levels, and itmore » is generally observed in disordered systems. The obtained characteristic Meyer-Neldel parameters were Tmn = 176 K and kTmn = 15 meV. The energy value could be associated to the typical energy of the phonons in the substrate. The almost perfect adjust of all experimental data to the same straight line provides further evidence of the validity of the Meyer Neldel rule, and may contribute to obtain a deeper insight on the ultimate meaning of this phenomenon.« less
Water absorption characteristics and structural properties of rice for sake brewing.
Mizuma, Tomochika; Kiyokawa, Yoshifumi; Wakai, Yoshinori
2008-09-01
This study investigated the water absorption curve characteristics and structural properties of rice used for sake brewing. The parameter values in the water absorption rate equation were calculated using experimental data. Differences between sample parameters for rice used for sake brewing and typical rice were confirmed. The water absorption curve for rice suitable for sake brewing showed a quantitatively sharper turn in the S-shaped water absorption curve than that of typical rice. Structural characteristics, including specific volume, grain density, and powdered density of polished rice, were measured by a liquid substitution method using a Gay-Lussac pycnometer. In addition, we calculated internal porosity from whole grain and powdered grain densities. These results showed that a decrease in internal porosity resulted from invasion of water into the rice grain, and that a decrease in the grain density affected expansion during the water absorption process. A characteristic S-shape water absorption curve for rice suitable for sake brewing was related to the existence of an invisible Shinpaku-like structure.
Single neuron modeling and data assimilation in BNST neurons
NASA Astrophysics Data System (ADS)
Farsian, Reza
Neurons, although tiny in size, are vastly complicated systems, which are responsible for the most basic yet essential functions of any nervous system. Even the most simple models of single neurons are usually high dimensional, nonlinear, and contain many parameters and states which are unobservable in a typical neurophysiological experiment. One of the most fundamental problems in experimental neurophysiology is the estimation of these parameters and states, since knowing their values is essential in identification, model construction, and forward prediction of biological neurons. Common methods of parameter and state estimation do not perform well for neural models due to their high dimensionality and nonlinearity. In this dissertation, two alternative approaches for parameters and state estimation of biological neurons have been demonstrated: dynamical parameter estimation (DPE) and a Markov Chain Monte Carlo (MCMC) method. The first method uses elements of chaos control and synchronization theory for parameter and state estimation. MCMC is a statistical approach which uses a path integral formulation to evaluate a mean and an error bound for these unobserved parameters and states. These methods have been applied to biological system of neurons in Bed Nucleus of Stria Termialis neurons (BNST) of rats. State and parameters of neurons in both systems were estimated, and their value were used for recreating a realistic model and predicting the behavior of the neurons successfully. The knowledge of biological parameters can ultimately provide a better understanding of the internal dynamics of a neuron in order to build robust models of neuron networks.
Effect of buoyancy on fuel containment in an open-cycle gas-core nuclear rocket engine.
NASA Technical Reports Server (NTRS)
Putre, H. A.
1971-01-01
Analysis aimed at determining the scaling laws for the buoyancy effect on fuel containment in an open-cycle gas-core nuclear rocket engine, so conducted that experimental conditions can be related to engine conditions. The fuel volume fraction in a short coaxial flow cavity is calculated with a programmed numerical solution of the steady Navier-Stokes equations for isothermal, variable density fluid mixing. A dimensionless parameter B, called the Buoyancy number, was found to correlate the fuel volume fraction for large accelerations and various density ratios. This parameter has the value B = 0 for zero acceleration, and B = 350 for typical engine conditions.
Automatic approach to deriving fuzzy slope positions
NASA Astrophysics Data System (ADS)
Zhu, Liang-Jun; Zhu, A.-Xing; Qin, Cheng-Zhi; Liu, Jun-Zhi
2018-03-01
Fuzzy characterization of slope positions is important for geographic modeling. Most of the existing fuzzy classification-based methods for fuzzy characterization require extensive user intervention in data preparation and parameter setting, which is tedious and time-consuming. This paper presents an automatic approach to overcoming these limitations in the prototype-based inference method for deriving fuzzy membership value (or similarity) to slope positions. The key contribution is a procedure for finding the typical locations and setting the fuzzy inference parameters for each slope position type. Instead of being determined totally by users in the prototype-based inference method, in the proposed approach the typical locations and fuzzy inference parameters for each slope position type are automatically determined by a rule set based on prior domain knowledge and the frequency distributions of topographic attributes. Furthermore, the preparation of topographic attributes (e.g., slope gradient, curvature, and relative position index) is automated, so the proposed automatic approach has only one necessary input, i.e., the gridded digital elevation model of the study area. All compute-intensive algorithms in the proposed approach were speeded up by parallel computing. Two study cases were provided to demonstrate that this approach can properly, conveniently and quickly derive the fuzzy slope positions.
Boehm, Udo; Steingroever, Helen; Wagenmakers, Eric-Jan
2018-06-01
An important tool in the advancement of cognitive science are quantitative models that represent different cognitive variables in terms of model parameters. To evaluate such models, their parameters are typically tested for relationships with behavioral and physiological variables that are thought to reflect specific cognitive processes. However, many models do not come equipped with the statistical framework needed to relate model parameters to covariates. Instead, researchers often revert to classifying participants into groups depending on their values on the covariates, and subsequently comparing the estimated model parameters between these groups. Here we develop a comprehensive solution to the covariate problem in the form of a Bayesian regression framework. Our framework can be easily added to existing cognitive models and allows researchers to quantify the evidential support for relationships between covariates and model parameters using Bayes factors. Moreover, we present a simulation study that demonstrates the superiority of the Bayesian regression framework to the conventional classification-based approach.
Measuring milk fat content by random laser emission
NASA Astrophysics Data System (ADS)
Abegão, Luis M. G.; Pagani, Alessandra A. C.; Zílio, Sérgio C.; Alencar, Márcio A. R. C.; Rodrigues, José J.
2016-10-01
The luminescence spectra of milk containing rhodamine 6G are shown to exhibit typical signatures of random lasing when excited with 532 nm laser pulses. Experiments carried out on whole and skim forms of two commercial brands of UHT milk, with fat volume concentrations ranging from 0 to 4%, presented lasing threshold values dependent on the fat concentration, suggesting that a random laser technique can be developed to monitor such important parameter.
Measuring milk fat content by random laser emission.
Abegão, Luis M G; Pagani, Alessandra A C; Zílio, Sérgio C; Alencar, Márcio A R C; Rodrigues, José J
2016-10-12
The luminescence spectra of milk containing rhodamine 6G are shown to exhibit typical signatures of random lasing when excited with 532 nm laser pulses. Experiments carried out on whole and skim forms of two commercial brands of UHT milk, with fat volume concentrations ranging from 0 to 4%, presented lasing threshold values dependent on the fat concentration, suggesting that a random laser technique can be developed to monitor such important parameter.
Jambrošić, Kristian; Horvat, Marko; Domitrović, Hrvoje
2013-07-01
Urban soundscapes at five locations in the city of Zadar were perceptually assessed by on-site surveys and objectively evaluated based on monaural and binaural recordings. All locations were chosen so that they would display auditory and visual diversity as much as possible. The unique sound installation known as the Sea Organ was included as an atypical music-like environment. Typical objective parameters were calculated from the recordings related to the amount of acoustic energy, spectral properties of sound, the amount of fluctuations, and tonal properties. The subjective assessment was done on-site using a common survey for evaluating the properties of sound and visual environment. The results revealed the importance of introducing the context into soundscape research because objective parameters did not show significant correlation with responses obtained from interviewees. Excessive values of certain objective parameters could indicate that a sound environment will be perceived as unpleasant or annoying, but its overall perception depends on how well it agrees with people's expectations. This was clearly seen for the case of Sea Organ for which the highest values of objective parameters were obtained, but, at the same time, it was evaluated as the most positive sound environment in every aspect.
McCullagh, Laura; Schmitz, Susanne; Barry, Michael; Walsh, Cathal
2017-11-01
In Ireland, all new drugs for which reimbursement by the healthcare payer is sought undergo a health technology assessment by the National Centre for Pharmacoeconomics. The National Centre for Pharmacoeconomics estimate expected value of perfect information but not partial expected value of perfect information (owing to computational expense associated with typical methodologies). The objective of this study was to examine the feasibility and utility of estimating partial expected value of perfect information via a computationally efficient, non-parametric regression approach. This was a retrospective analysis of evaluations on drugs for cancer that had been submitted to the National Centre for Pharmacoeconomics (January 2010 to December 2014 inclusive). Drugs were excluded if cost effective at the submitted price. Drugs were excluded if concerns existed regarding the validity of the applicants' submission or if cost-effectiveness model functionality did not allow required modifications to be made. For each included drug (n = 14), value of information was estimated at the final reimbursement price, at a threshold equivalent to the incremental cost-effectiveness ratio at that price. The expected value of perfect information was estimated from probabilistic analysis. Partial expected value of perfect information was estimated via a non-parametric approach. Input parameters with a population value at least €1 million were identified as potential targets for research. All partial estimates were determined within minutes. Thirty parameters (across nine models) each had a value of at least €1 million. These were categorised. Collectively, survival analysis parameters were valued at €19.32 million, health state utility parameters at €15.81 million and parameters associated with the cost of treating adverse effects at €6.64 million. Those associated with drug acquisition costs and with the cost of care were valued at €6.51 million and €5.71 million, respectively. This research demonstrates that the estimation of partial expected value of perfect information via this computationally inexpensive approach could be considered feasible as part of the health technology assessment process for reimbursement purposes within the Irish healthcare system. It might be a useful tool in prioritising future research to decrease decision uncertainty.
Volumetric flow rate in simulations of microfluidic devices+
NASA Astrophysics Data System (ADS)
Kovalčíková, KristÍna; Slavík, Martin; Bachratá, Katarína; Bachratý, Hynek; Bohiniková, Alžbeta
2018-06-01
In this work, we examine the volumetric flow rate of microfluidic devices. The volumetric flow rate is a parameter which is necessary to correctly set up a simulation of a real device and to check the conformity of a simulation and a laboratory experiments [1]. Instead of defining the volumetric rate at the beginning as a simulation parameter, a parameter of external force is set. The proposed hypothesis is that for a fixed set of other parameters (topology, viscosity of the liquid, …) the volumetric flow rate is linearly dependent on external force in typical ranges of fluid velocity used in our simulations. To confirm this linearity hypothesis and to find numerical limits of this approach, we test several values of the external force parameter. The tests are designed for three different topologies of simulation box and for various haematocrits. The topologies of the microfluidic devices are inspired by existing laboratory experiments [3 - 6]. The linear relationship between the external force and the volumetric flow rate is verified in orders of magnitudes similar to the values obtained from laboratory experiments. Supported by the Slovak Research and Development Agency under the contract No. APVV-15-0751 and by the Ministry of Education, Science, Research and Sport of the Slovak Republic under the contract No. VEGA 1/0643/17.
Determination of Phobos' rotational parameters by an inertial frame bundle block adjustment
NASA Astrophysics Data System (ADS)
Burmeister, Steffi; Willner, Konrad; Schmidt, Valentina; Oberst, Jürgen
2018-01-01
A functional model for a bundle block adjustment in the inertial reference frame was developed, implemented and tested. This approach enables the determination of rotation parameters of planetary bodies on the basis of photogrammetric observations. Tests with a self-consistent synthetic data set showed that the implementation converges reliably toward the expected values of the introduced unknown parameters of the adjustment, e.g., spin pole orientation, and that it can cope with typical observational errors in the data. We applied the model to a data set of Phobos using images from the Mars Express and the Viking mission. With Phobos being in a locked rotation, we computed a forced libration amplitude of 1.14^circ ± 0.03^circ together with a control point network of 685 points.
A vibration model for centrifugal contactors
NASA Astrophysics Data System (ADS)
Leonard, R. A.; Wasserman, M. O.; Wygmans, D. G.
1992-11-01
Using the transfer matrix method, we created the Excel worksheet 'Beam' for analyzing vibrations in centrifugal contactors. With this worksheet, a user can calculate the first natural frequency of the motor/rotor system for a centrifugal contactor. We determined a typical value for the bearing stiffness (k(sub B)) of a motor after measuring the k(sub B) value for three different motors. The k(sub B) value is an important parameter in this model, but it is not normally available for motors. The assumptions that we made in creating the Beam worksheet were verified by comparing the calculated results with those from a VAX computer program, BEAM IV. The Beam worksheet was applied to several contactor designs for which we have experimental data and found to work well.
Analysis and Sizing for Transient Thermal Heating of Insulated Aerospace Vehicle Structures
NASA Technical Reports Server (NTRS)
Blosser, Max L.
2012-01-01
An analytical solution was derived for the transient response of an insulated structure subjected to a simplified heat pulse. The solution is solely a function of two nondimensional parameters. Simpler functions of these two parameters were developed to approximate the maximum structural temperature over a wide range of parameter values. Techniques were developed to choose constant, effective thermal properties to represent the relevant temperature and pressure-dependent properties for the insulator and structure. A technique was also developed to map a time-varying surface temperature history to an equivalent square heat pulse. Equations were also developed for the minimum mass required to maintain the inner, unheated surface below a specified temperature. In the course of the derivation, two figures of merit were identified. Required insulation masses calculated using the approximate equation were shown to typically agree with finite element results within 10%-20% over the relevant range of parameters studied.
An Analytical Solution for Transient Thermal Response of an Insulated Structure
NASA Technical Reports Server (NTRS)
Blosser, Max L.
2012-01-01
An analytical solution was derived for the transient response of an insulated aerospace vehicle structure subjected to a simplified heat pulse. This simplified problem approximates the thermal response of a thermal protection system of an atmospheric entry vehicle. The exact analytical solution is solely a function of two non-dimensional parameters. A simpler function of these two parameters was developed to approximate the maximum structural temperature over a wide range of parameter values. Techniques were developed to choose constant, effective properties to represent the relevant temperature and pressure-dependent properties for the insulator and structure. A technique was also developed to map a time-varying surface temperature history to an equivalent square heat pulse. Using these techniques, the maximum structural temperature rise was calculated using the analytical solutions and shown to typically agree with finite element simulations within 10 to 20 percent over the relevant range of parameters studied.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Permanyer, A.; Valles, D.; Dorronsorro, C.
1988-08-01
The Armancies Formation is an Eocene carbonate slope succession in the Catalonian South Pyrenean basin. It ranges from 500 to 700 m in thickness. The first 200 m are made of a thin-bedded facies of wackestones alternating with dark pelagic fauna of miliolid, ostracods, bryozoans, and planktonic foraminifers and show significant bioturbation. They also show a low organic content (< 0.5% TOC). The lime-mudstone beds show a massive structure or planar millimeter laminations. They may contain sparse pelagic fossils of planktonic foraminifers, ostracods, and dinoflagellates; they do not show any bioturbation, and have high TOC values, which can reach individualmore » scores of about 14%. They qualify, therefore, as a typical oil shale. Rock-Eval Pyrolysis analysis affords a mean S{sub 2} value of 25 mg HC/g. Mean S{sub 1} value is around 1.0 mg HC/g. As is typical of an initial oil window, T{sub max} maturity parameter ranges from 432 to 440{degree}C (mean = 434{degree}C). This degree of evolution is in accordance with the very low value of carbonyl and carboxyl groups, as determined by IR spectrometry and NMR on Fischer assay extract. The proton NMR shows an aromatic/aliphatic hydrocarbon ratio of 1:4, as expected in earlier stages of catagenesis. N-alkane gas chromatography profiles show n-C{sub 15} to n-C{sub 19} prevalence and that neither even nor odd carbon numbers prevail. This distribution perfectly matches that of typical sediments of marine origin and also agrees with obtained hydrogen index values (mean HI = 500 mg HC/g TOC). Sedimentological and geochemical results indicate an autochthonous marine organic matter and the potential of these slope shales is good oil-prone source beds.« less
Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan
2016-04-01
Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values on restricting the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated. These sensitivities were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. It should therefore be comparable to calibrate Noah-MP either against latent heat observations or against river runoff data. Latent heat and total runoff are sensitive to both, plant and soil parameters. Calibrating only a parameter sub-set of only soil parameters, for example, thus limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
New Kohn-Sham density functional based on microscopic nuclear and neutron matter equations of state
NASA Astrophysics Data System (ADS)
Baldo, M.; Robledo, L. M.; Schuck, P.; Viñas, X.
2013-06-01
A new version of the Barcelona-Catania-Paris energy functional is applied to a study of nuclear masses and other properties. The functional is largely based on calculated ab initio nuclear and neutron matter equations of state. Compared to typical Skyrme functionals having 10-12 parameters apart from spin-orbit and pairing terms, the new functional has only 2 or 3 adjusted parameters, fine tuning the nuclear matter binding energy and fixing the surface energy of finite nuclei. An energy rms value of 1.58 MeV is obtained from a fit of these three parameters to the 579 measured masses reported in the Audi and Wapstra [Nucl. Phys. ANUPABL0375-947410.1016/j.nuclphysa.2003.11.003 729, 337 (2003)] compilation. This rms value compares favorably with the one obtained using other successful mean field theories, which range from 1.5 to 3.0 MeV for optimized Skyrme functionals and 0.7 to 3.0 for the Gogny functionals. The other properties that have been calculated and compared to experiment are nuclear radii, the giant monopole resonance, and spontaneous fission lifetimes.
The analysis of distribution of meteorological over China in astronomical site selection
NASA Astrophysics Data System (ADS)
Zhang, Cai-yun; Weng, Ning-quan
2014-02-01
The distribution of parameters such as sunshine hours, precipitation, and visibility were obtained by analyzing the meteorological data in 906 stations of China during 1981~2012. And the month and annual variations of the parameters in some typical stations were discussed. The results show that: (1) the distribution of clear days is similar to that of sunshine hours, the values of which decrease from north to south and from west to east. The distributions of cloud, precipitation and vapor pressure are opposite. (2) The northwest areas in China have the characteristic such as low precipitation and vapor pressure, small cloud clever, and good visibility, which are the general conditions of astronomical site selection. (3) The parameters have obvious month variation. There are large precipitation, long sunshine hours and strong radiation in the mid months of one year, which are opposite in beginning and ending of one year. (4) In the selected stations, the value of vapor pressure decreases year by year, and the optical depth is similar or invariable. All the above results provided for astronomical site selection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mantha, Sriteja; Yethiraj, Arun
2016-02-24
The properties of water under confinement are of practical and fundamental interest. Here in this work we study the properties of water in the self-assembled lyotropic phases of gemini surfactants with a focus on testing the standard analysis of quasi-elastic neutron scattering (QENS) experiments. In QENS experiments the dynamic structure factor is measured and fit to models to extract the translational diffusion constant, D T , and rotational relaxation time, τ R. We test this procedure by using simulation results for the dynamic structure factor, extracting the dynamic parameters from the fit as is typically done in experiments, and comparingmore » the values to those directly measured in the simulations. We find that the decoupling approximation, where the intermediate scattering function is assumed to be a product of translational and rotational contributions, is quite accurate. The jump-diffusion and isotropic rotation models, however, are not accurate when the degree of confinement is high. In particular, the exponential approximations for the intermediate scattering function fail for highly confined water and the values of D T and τ R can differ from the measured value by as much as a factor of two. Other models have more fit parameters, however, and with the range of energies and wave-vectors accessible to QENS, the typical analysis appears to be the best choice. In the most confined lamellar phase, the dynamics are sufficiently slow that QENS does not access a large enough time scale and neutron spin echo measurements would be a valuable technique in addition to QENS.« less
Spectral constraints on models of gas in clusters of galaxies
NASA Technical Reports Server (NTRS)
Henriksen, M. J.; Mushotzky, R.
1985-01-01
The HEAO 1A2 spectra of clusters of galaxies are used to determine the temperature profile which characterizes the X-ray emitting gas. Strong evidence of nonisothermality is found for the Coma, A85, and A1795 clusters. Properties of the cluster potential which binds the gas are calculated for a range of model parameters. The typical binding mass, if the gas is adiabatic, is 2-4E14 solar masses and is quite centrally concentrated. In addition, the Fe abundance in Coma is .26 + or - .06 solar, less than the typical value (.5) found for rich clusters. The results for the gas in Coma may imply a physical description of the cluster which is quite different from what was previously believed.
Lomnitz, Jason G.; Savageau, Michael A.
2016-01-01
Mathematical models of biochemical systems provide a means to elucidate the link between the genotype, environment, and phenotype. A subclass of mathematical models, known as mechanistic models, quantitatively describe the complex non-linear mechanisms that capture the intricate interactions between biochemical components. However, the study of mechanistic models is challenging because most are analytically intractable and involve large numbers of system parameters. Conventional methods to analyze them rely on local analyses about a nominal parameter set and they do not reveal the vast majority of potential phenotypes possible for a given system design. We have recently developed a new modeling approach that does not require estimated values for the parameters initially and inverts the typical steps of the conventional modeling strategy. Instead, this approach relies on architectural features of the model to identify the phenotypic repertoire and then predict values for the parameters that yield specific instances of the system that realize desired phenotypic characteristics. Here, we present a collection of software tools, the Design Space Toolbox V2 based on the System Design Space method, that automates (1) enumeration of the repertoire of model phenotypes, (2) prediction of values for the parameters for any model phenotype, and (3) analysis of model phenotypes through analytical and numerical methods. The result is an enabling technology that facilitates this radically new, phenotype-centric, modeling approach. We illustrate the power of these new tools by applying them to a synthetic gene circuit that can exhibit multi-stability. We then predict values for the system parameters such that the design exhibits 2, 3, and 4 stable steady states. In one example, inspection of the basins of attraction reveals that the circuit can count between three stable states by transient stimulation through one of two input channels: a positive channel that increases the count, and a negative channel that decreases the count. This example shows the power of these new automated methods to rapidly identify behaviors of interest and efficiently predict parameter values for their realization. These tools may be applied to understand complex natural circuitry and to aid in the rational design of synthetic circuits. PMID:27462346
NASA Astrophysics Data System (ADS)
Lim, Hongki; Dewaraja, Yuni K.; Fessler, Jeffrey A.
2018-02-01
Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is natural physically, but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low probability positron production and high random coincidence fraction. This paper investigates a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. Unlike the previously reported NEG-ML approach that modifies the Poisson log-likelihood to allow negative values, the new formulation retains the classical Poisson statistical model. To relax the non-negativity constraint embedded in the standard methods for PET reconstruction, we used an alternating direction method of multipliers (ADMM). Because choice of ADMM parameters can greatly influence convergence rate, we applied an automatic parameter selection method to improve the convergence speed. We investigated the methods using lung to liver slices of XCAT phantom. We simulated low true coincidence count-rates with high random fractions corresponding to the typical values from patient imaging in Y-90 microsphere radioembolization. We compared our new methods with standard reconstruction algorithms and NEG-ML and a regularized version thereof. Both our new method and NEG-ML allow more accurate quantification in all volumes of interest while yielding lower noise than the standard method. The performance of NEG-ML can degrade when its user-defined parameter is tuned poorly, while the proposed algorithm is robust to any count level without requiring parameter tuning.
Sensitivity of Beam Parameters to a Station C Solenoid Scan on Axis II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schulze, Martin E.
Magnet scans are a standard technique for determining beam parameters in accelerators. Beam parameters are inferred from spot size measurements using a model of the beam optics. The sensitivity of the measured beam spot size to the beam parameters is investigated for typical DARHT Axis II beam energies and currents. In a typical S4 solenoid scan, the downstream transport is tuned to achieve a round beam at Station C with an envelope radius of about 1.5 cm with a very small divergence with S4 off. The typical beam energy and current are 16.0 MeV and 1.625 kA. Figures 1-3 showmore » the sensitivity of the bean size at Station C to the emittance, initial radius and initial angle respectively. To better understand the relative sensitivity of the beam size to the emittance, initial radius and initial angle, linear regressions were performed for each parameter as a function of the S4 setting. The results are shown in Figure 4. The measured slope was scaled to have a maximum value of 1 in order to present the relative sensitivities in a single plot. Figure 4 clearly shows the beam size at the minimum of the S4 scan is most sensitive to emittance and relatively insensitive to initial radius and angle as expected. The beam emittance is also very sensitive to the beam size of the converging beam and becomes insensitive to the beam size of the diverging beam. Measurements of the beam size of the diverging beam provide the greatest sensitivity to the initial beam radius and to a lesser extent the initial beam angle. The converging beam size is initially very sensitive to the emittance and initial angle at low S4 currents. As the S4 current is increased the sensitivity to the emittance remains strong while the sensitivity to the initial angle diminishes.« less
Optical spectral singularities as threshold resonances
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mostafazadeh, Ali
2011-04-15
Spectral singularities are among generic mathematical features of complex scattering potentials. Physically they correspond to scattering states that behave like zero-width resonances. For a simple optical system, we show that a spectral singularity appears whenever the gain coefficient coincides with its threshold value and other parameters of the system are selected properly. We explore a concrete realization of spectral singularities for a typical semiconductor gain medium and propose a method of constructing a tunable laser that operates at threshold gain.
ATM observations - X-ray results. [solar coronal structure from Skylab experiments
NASA Technical Reports Server (NTRS)
Vaiana, G. S.; Zombeck, M.; Krieger, A. S.; Timothy, A. F.
1976-01-01
Preliminary results of the solar X-ray observations from Skylab are reviewed which indicate a highly structured nature for the corona, with closed magnetic-loop structures over a wide range of size scales. A description of the S-054 experiments is provided, and values are given for the parameters - including size, density, and temperature - describing a variety of typical coronal features. The structure and evolution of active regions, coronal holes, and bright points are discussed.
2007-09-20
phases. The power law parameter values were found to be in close agreement with the constants for nuclear explosions in Nevada and chemical explosions in...caused by the difference of lithostatic pressures between top and bottom of a vertical cylindrical explosive source, typical for borehole chemical ...NORSAR recorded several decoupled chemical explosions in large chambers of underground mines in Sweden (Stevens et al., 2003), however a reference
An Overview of Acoustic Detection Analysis.
1983-01-01
PL(r) and NL are typically determined from publica - tions that give geographic and seasonal values for these parameters. The left-hand side of the...In Seguridad hence) 14 p., Oct 1980, AD A092 733 wiorthwstern University, Evanston, IL PP 297 - Classified PP 3o9 Bowes, Marianne, Brchling, Frank P...Siultaneous PP 36 Determination of Income and Employment in United States-- U’NeIll, Thomas, Řool ltv riels for the Na.y," I oc., Mexico Border
Ellis, John; Evans, Jason L.; Mustafayev, Azar; ...
2016-10-28
Here, we revisit minimal supersymmetric SU(5) grand unification (GUT) models in which the soft supersymmetry-breaking parameters of the minimal supersymmetric Standard Model (MSSM) are universal at some input scale, M in, above the supersymmetric gauge-coupling unification scale, M GUT. As in the constrained MSSM (CMSSM), we assume that the scalar masses and gaugino masses have common values, m 0 and m 1/2, respectively, at M in, as do the trilinear soft supersymmetry-breaking parameters A 0. Going beyond previous studies of such a super-GUT CMSSM scenario, we explore the constraints imposed by the lower limit on the proton lifetime and themore » LHC measurement of the Higgs mass, m h. We find regions of m 0, m 1/2 A 0 and the parameters of the SU(5) superpotential that are compatible with these and other phenomenological constraints such as the density of cold dark matter, which we assume to be provided by the lightest neutralino. Typically, these allowed regions appear for m 0 and m 1/2 in the multi-TeV region, for suitable values of the unknown SU(5) GUT-scale phases and superpotential couplings, and with the ratio of supersymmetric Higgs vacuum expectation values tan β≲6.« less
Hickey, James P.
1996-01-01
This chapter provides a listing of the increasing variety of organic moieties and heteroatom group for which Linear Solvation Energy Relationship (LSER) values are available, and the LSER variable estimation rules. The listings include values for typical nitrogen-, sulfur- and phosphorus-containing moieties, and general organosilicon and organotin groups. The contributions by an ion pair situation to the LSER values are also offered in Table 1, allowing estimation of parameters for salts and zwitterions. The guidelines permit quick estimation of values for the four primary LSER variables Vi/100, π*, Βm, and αm by summing the contribtuions from its components. The use of guidelines and Table 1 significantly simplifies computation of values for the LSER variables for most possible organic comppounds in the environment, including the larger compounds of environmental and biological interest.
Adaptive Local Realignment of Protein Sequences.
DeBlasio, Dan; Kececioglu, John
2018-06-11
While mutation rates can vary markedly over the residues of a protein, multiple sequence alignment tools typically use the same values for their scoring-function parameters across a protein's entire length. We present a new approach, called adaptive local realignment, that in contrast automatically adapts to the diversity of mutation rates along protein sequences. This builds upon a recent technique known as parameter advising, which finds global parameter settings for an aligner, to now adaptively find local settings. Our approach in essence identifies local regions with low estimated accuracy, constructs a set of candidate realignments using a carefully-chosen collection of parameter settings, and replaces the region if a realignment has higher estimated accuracy. This new method of local parameter advising, when combined with prior methods for global advising, boosts alignment accuracy as much as 26% over the best default setting on hard-to-align protein benchmarks, and by 6.4% over global advising alone. Adaptive local realignment has been implemented within the Opal aligner using the Facet accuracy estimator.
Terahertz generation via laser coupling to anharmonic carbon nanotube array
NASA Astrophysics Data System (ADS)
Sharma, Soni; Vijay, A.
2018-02-01
A scheme of terahertz radiation generation employing a matrix of anharmonic carbon nanotubes (CNTs) embedded in silica is proposed. The matrix is irradiated by two collinear laser beams that induce large excursions on CNT electrons and exert a nonlinear force at the beat frequency ω = ω1-ω2. The force derives a nonlinear current producing THz radiation. The THz field is resonantly enhanced at the plasmon resource, ω = ω p ( 1 + β ) / √{ 2 } , where ωp is the plasma frequency and β is a characteristic parameter. Collisions are a limiting factor, suppressing the plasmon resonance. For typical values of plasma parameters, we obtain power conversion efficiency of the order of 10-6.
Mellors, Jane; Waycott, Michelle; Marsh, Helene
2005-01-01
This survey provides baseline information on sediment characteristics, porewater, adsorbed and plant tissue nutrients from intertidal coastal seagrass meadows in the central region of the Great Barrier Reef World Heritage Area. Data collected from 11 locations, representative of intertidal coastal seagrass beds across the region, indicated that the chemical environment was typical of other tropical intertidal areas. Results using two different extraction methods highlight the need for caution when choosing an adsorbed phosphate extraction technique, as sediment type affects the analytical outcome. Comparison with published values indicates that the range of nutrient parameters measured is equivalent to those measured across tropical systems globally. However, the nutrient values in seagrass leaves and their molar ratios for Halophila ovalis and Halodule uninervis were much higher than the values from the literature from this and other regions, obtained using the same techniques, suggesting that these species act as nutrient sponges, in contrast with Zostera capricorni. The limited historical data from this region suggest that the nitrogen and phosphorus content of seagrass leaves has increased since the 1970s concomitant with changing land use practice.
Radiative PQ breaking and the Higgs boson mass
NASA Astrophysics Data System (ADS)
D'Eramo, Francesco; Hall, Lawrence J.; Pappadopulo, Duccio
2015-06-01
The small and negative value of the Standard Model Higgs quartic coupling at high scales can be understood in terms of anthropic selection on a landscape where large and negative values are favored: most universes have a very short-lived electroweak vacuum and typical observers are in universes close to the corresponding metastability boundary. We provide a simple example of such a landscape with a Peccei-Quinn symmetry breaking scale generated through dimensional transmutation and supersymmetry softly broken at an intermediate scale. Large and negative contributions to the Higgs quartic are typically generated on integrating out the saxion field. Cancellations among these contributions are forced by the anthropic requirement of a sufficiently long-lived electroweak vacuum, determining the multiverse distribution for the Higgs quartic in a similar way to that of the cosmological constant. This leads to a statistical prediction of the Higgs boson mass that, for a wide range of parameters, yields the observed value within the 1σ statistical uncertainty of ˜ 5 GeV originating from the multiverse distribution. The strong CP problem is solved and single-component axion dark matter is predicted, with an abundance that can be understood from environmental selection. A more general setting for the Higgs mass prediction is discussed.
Rao, Harsha L; Yadav, Ravi K; Begum, Viquar U; Addepalli, Uday K; Senthil, Sirisha; Choudhari, Nikhil S; Garudadri, Chandra S
2015-03-01
To evaluate the effect of typical scan score (TSS), when within the acceptable limits, on the diagnostic performance of retinal nerve fibre layer (RNFL) parameters with the enhanced corneal compensation (ECC) protocol of scanning laser polarimetry (SLP) in glaucoma. In a cross-sectional study, 203 eyes of 160 glaucoma patients and 140 eyes of 104 control subjects underwent RNFL imaging with the ECC protocol of SLP. TSS was used to quantify atypical birefringence pattern (ABP) images. Influence of TSS on the diagnostic ability of SLP parameters was evaluated by receiver operating characteristic (ROC) regression models after adjusting for the effect of disease severity [based on mean deviation (MD)] on standard automated perimetry). Diagnostic abilities of all RNFL parameters of SLP increased when the TSS values were higher. This effect was statistically significant for TSNIT (coefficient: 0.08, p<0.001) and inferior average parameters (coefficient: 0.06, p=0.002) but not for nerve fibre indicator (NFI, coefficient: 0.03, p=0.21). In early glaucoma (MD of -5 dB), predicted area under ROC curve (AUC) for TSNIT average parameter improved from 0.642 at a TSS of 90 to 0.845 at a TSS of 100. In advanced glaucoma (MD of -15 dB), AUC for TSNIT average improved from 0.832 at a TSS of 90 to 0.947 at 100. Diagnostic performances of TSNIT and inferior average RNFL parameters with ECC protocol of SLP were significantly influenced by TSS even when the TSS values were within the acceptable limits. Diagnostic ability of NFI was unaffected by TSS values. © 2014 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
Cloud Inhomogeneity from MODIS
NASA Technical Reports Server (NTRS)
Oreopoulos, Lazaros; Cahalan, Robert F.
2004-01-01
Two full months (July 2003 and January 2004) of MODIS Atmosphere Level-3 data from the Terra and Aqua satellites are analyzed in order to characterize the horizontal variability of cloud optical thickness and water path at global scales. Various options to derive cloud variability parameters are discussed. The climatology of cloud inhomogeneity is built by first calculating daily parameter values at spatial scales of l degree x 1 degree, and then at zonal and global scales, followed by averaging over monthly time scales. Geographical, diurnal, and seasonal changes of inhomogeneity parameters are examined separately for the two cloud phases, and separately over land and ocean. We find that cloud inhomogeneity is weaker in summer than in winter, weaker over land than ocean for liquid clouds, weaker for local morning than local afternoon, about the same for liquid and ice clouds on a global scale, but with wider probability distribution functions (PDFs) and larger latitudinal variations for ice, and relatively insensitive to whether water path or optical thickness products are used. Typical mean values at hemispheric and global scales of the inhomogeneity parameter nu (roughly the mean over the standard deviation of water path or optical thickness), range from approximately 2.5 to 3, while for the inhomogeneity parameter chi (the ratio of the logarithmic to linear mean) from approximately 0.7 to 0.8. Values of chi for zonal averages can occasionally fall below 0.6 and for individual gridpoints below 0.5. Our results demonstrate that MODIS is capable of revealing significant fluctuations in cloud horizontal inhomogenity and stress the need to model their global radiative effect in future studies.
Reference-free error estimation for multiple measurement methods.
Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga
2018-01-01
We present a computational framework to select the most accurate and precise method of measurement of a certain quantity, when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in true values of measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte-Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in a good agreement with the corresponding least squares regression estimates against a reference.
Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.
Shireman, Emilie; Steinley, Douglas; Brusco, Michael J
2017-02-01
Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
Bubble Entropy: An Entropy Almost Free of Parameters.
Manis, George; Aktaruzzaman, Md; Sassi, Roberto
2017-11-01
Objective : A critical point in any definition of entropy is the selection of the parameters employed to obtain an estimate in practice. We propose a new definition of entropy aiming to reduce the significance of this selection. Methods: We call the new definition Bubble Entropy . Bubble Entropy is based on permutation entropy, where the vectors in the embedding space are ranked. We use the bubble sort algorithm for the ordering procedure and count instead the number of swaps performed for each vector. Doing so, we create a more coarse-grained distribution and then compute the entropy of this distribution. Results: Experimental results with both real and synthetic HRV signals showed that bubble entropy presents remarkable stability and exhibits increased descriptive and discriminating power compared to all other definitions, including the most popular ones. Conclusion: The definition proposed is almost free of parameters. The most common ones are the scale factor r and the embedding dimension m . In our definition, the scale factor is totally eliminated and the importance of m is significantly reduced. The proposed method presents increased stability and discriminating power. Significance: After the extensive use of some entropy measures in physiological signals, typical values for their parameters have been suggested, or at least, widely used. However, the parameters are still there, application and dataset dependent, influencing the computed value and affecting the descriptive power. Reducing their significance or eliminating them alleviates the problem, decoupling the method from the data and the application, and eliminating subjective factors. Objective : A critical point in any definition of entropy is the selection of the parameters employed to obtain an estimate in practice. We propose a new definition of entropy aiming to reduce the significance of this selection. Methods: We call the new definition Bubble Entropy . Bubble Entropy is based on permutation entropy, where the vectors in the embedding space are ranked. We use the bubble sort algorithm for the ordering procedure and count instead the number of swaps performed for each vector. Doing so, we create a more coarse-grained distribution and then compute the entropy of this distribution. Results: Experimental results with both real and synthetic HRV signals showed that bubble entropy presents remarkable stability and exhibits increased descriptive and discriminating power compared to all other definitions, including the most popular ones. Conclusion: The definition proposed is almost free of parameters. The most common ones are the scale factor r and the embedding dimension m . In our definition, the scale factor is totally eliminated and the importance of m is significantly reduced. The proposed method presents increased stability and discriminating power. Significance: After the extensive use of some entropy measures in physiological signals, typical values for their parameters have been suggested, or at least, widely used. However, the parameters are still there, application and dataset dependent, influencing the computed value and affecting the descriptive power. Reducing their significance or eliminating them alleviates the problem, decoupling the method from the data and the application, and eliminating subjective factors.
Compensatory parameters of intracranial space in giant hydrocephalus.
Cieślicki, Krzysztof; Czepko, Ryszard
2009-01-01
The main goal of the present study is to examine compensatory parameters of intracranial space in giant hydrocephalus. We also assess the early and late outcome and analyse complications in shunted cases. Nine cases of giant hydrocephalus characterised by the value of Evans ratio > 0.5, ventricular index > 1.5, and the width of the third ventricle > 20 mm were considered. Using the lumbar infusion test and developed software we analysed the intracranial compensatory parameters typical for hydrocephalus. Based on the Marmarou model, the method depended on a repeated search for the best fitting curve corresponding to the progress of the test was used. Eight out of nine patients were therefore shunted. Patients were followed up for 9 months. Five out of eight shunted patients undoubtedly improved in a few days after surgery (62%). Complications (subdural hygromas/haematomas and intracerebral haematoma) developed in 5 (62%) cases in longer follow-up. A definite improvement was noted in 4 out of 8 operated cases (50%). To get the stable values of compensatory parameters, the duration of the infusion test must at least double the inflexion time of the test curve. All but one considered cases of giant hydrocephalus were characterized by lack of intracranial space reserve, significantly reduced rate of CSF secretion and by various degrees of elevated value of the resistance to outflow. Due to the significant number of complications and uncertain long-term improvement, great caution in decision making for shunting has to be taken.
Lucas Martínez, Néstor; Martínez Ortega, José-Fernán; Hernández Díaz, Vicente; Del Toro Matamoros, Raúl M
2016-05-12
The deployment of the nodes in a Wireless Sensor and Actuator Network (WSAN) is typically restricted by the sensing and acting coverage. This implies that the locations of the nodes may be, and usually are, not optimal from the point of view of the radio communication. Additionally, when the transmission power is tuned for those locations, there are other unpredictable factors that can cause connectivity failures, like interferences, signal fading due to passing objects and, of course, radio irregularities. A control-based self-adaptive system is a typical solution to improve the energy consumption while keeping good connectivity. In this paper, we explore how the communication range for each node evolves along the iterations of an energy saving self-adaptive transmission power controller when using different parameter sets in an outdoor scenario, providing a WSAN that automatically adapts to surrounding changes keeping good connectivity. The results obtained in this paper show how the parameters with the best performance keep a k-connected network, where k is in the range of the desired node degree plus or minus a specified tolerance value.
Lucas Martínez, Néstor; Martínez Ortega, José-Fernán; Hernández Díaz, Vicente; del Toro Matamoros, Raúl M.
2016-01-01
The deployment of the nodes in a Wireless Sensor and Actuator Network (WSAN) is typically restricted by the sensing and acting coverage. This implies that the locations of the nodes may be, and usually are, not optimal from the point of view of the radio communication. Additionally, when the transmission power is tuned for those locations, there are other unpredictable factors that can cause connectivity failures, like interferences, signal fading due to passing objects and, of course, radio irregularities. A control-based self-adaptive system is a typical solution to improve the energy consumption while keeping good connectivity. In this paper, we explore how the communication range for each node evolves along the iterations of an energy saving self-adaptive transmission power controller when using different parameter sets in an outdoor scenario, providing a WSAN that automatically adapts to surrounding changes keeping good connectivity. The results obtained in this paper show how the parameters with the best performance keep a k-connected network, where k is in the range of the desired node degree plus or minus a specified tolerance value. PMID:27187397
Using Active Learning for Speeding up Calibration in Simulation Models.
Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2016-07-01
Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
Using Active Learning for Speeding up Calibration in Simulation Models
Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan
2015-01-01
Background Most cancer simulation models include unobservable parameters that determine the disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality and their values are typically estimated via lengthy calibration procedure, which involves evaluating large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs, therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190
Suppressing Transients In Digital Phase-Locked Loops
NASA Technical Reports Server (NTRS)
Thomas, J. B.
1993-01-01
Loop of arbitrary order starts in steady-state lock. Method for initializing variables of digital phase-locked loop reduces or eliminates transients in phase and frequency typically occurring during acquisition of lock on signal or when changes made in values of loop-filter parameters called "loop constants". Enables direct acquisition by third-order loop without prior acquisition by second-order loop of greater bandwidth, and eliminates those perturbations in phase and frequency lock occurring when loop constants changed by arbitrarily large amounts.
A Physics-Based Heterojunction Bipolar Transistor Model for Integrated Circuit Simulation
1993-12-01
Laverghetta, Practical Microwaves, IN, Howard W. Sams & Co., 1984. [56] C. R . Selvakumar , "A New Minority Carrier Lifetime Model for Heavily Doped GaAs...transistor common-emitter output conductance (S). gm Small-signal transconductance (S). r Reflection coefficient of a transmission line. ’Y Emitter...material and geometry parameters to equivalent circuit element values. Typically, the first step in 6 C RC Re + VWc- +B B ,a W’ COE ’IIc I R E Figure 1.7
Fluidized bed regenerators for Brayton cycles
NASA Technical Reports Server (NTRS)
Nichols, L. D.
1975-01-01
A recuperator consisting of two fluidized bed regenerators with circulating solid particles is considered for use in a Brayton cycle. These fluidized beds offer the possibility of high temperature operation if ceramic particles are used. Calculations of the efficiency and size of fluidized bed regenerators for typical values of operating parameters were made and compared to a shell and tube recuperator. The calculations indicate that the fluidized beds will be more compact than the shell and tube as well as offering a high temperature operating capability.
Dimensionally regularized Tsallis' statistical mechanics and two-body Newton's gravitation
NASA Astrophysics Data System (ADS)
Zamora, J. D.; Rocca, M. C.; Plastino, A.; Ferri, G. L.
2018-05-01
Typical Tsallis' statistical mechanics' quantifiers like the partition function and the mean energy exhibit poles. We are speaking of the partition function Z and the mean energy 〈 U 〉 . The poles appear for distinctive values of Tsallis' characteristic real parameter q, at a numerable set of rational numbers of the q-line. These poles are dealt with dimensional regularization resources. The physical effects of these poles on the specific heats are studied here for the two-body classical gravitation potential.
Accurate Temperature Feedback Control for MRI-Guided, Phased Array HICU Endocavitary Therapy
NASA Astrophysics Data System (ADS)
Salomir, Rares; Rata, Mihaela; Cadis, Daniela; Lafon, Cyril; Chapelon, Jean Yves; Cotton, François; Bonmartin, Alain; Cathignol, Dominique
2007-05-01
Effective treatment of malignant tumours demands well controlled energy deposition in the region of interest. Generally, two major steps must be fulfilled: 1. pre-operative optimal planning of the thermal dosimetry and 2. per-operative active spatial-and-temporal control of the delivered thermal dose. The second issue is made possible by using fast MR thermometry data and adjusting on line the sonication parameters. This approach is addressed here in the particular case of the ultrasound therapy for endocavitary tumours (oesophagus, colon or rectum) with phased array cylindrical applicators of High Intensity Contact Ultrasound (HICU). Two specific methodological objectives have been defined for this study: 1. to implement a robust and effective temperature controller for the specific geometry of endocavitary HICU and 2. to determine the stability (ie convergence) domain of the controller with respect to possible errors affecting the empirical parameters of the underlying physical model. Experimental setup included a Philips 1.5T clinical MR scanner and a cylindrical phased array transducer (64 elements) driven by a computer-controlled multi-channel generator. Performance of the temperature controller was tested ex vivo on fresh meat samples with planar and slightly focused beams, for a temperature elevation range from 10°C to 30°C. During the steady state regime, typical error of the temperature mean value was inferior to 1%, while the typical standard deviation of the temperature was inferior to 2% (relative to the targeted temperature elevation). Further, the empirical parameters of the physical model have been deliberately set to erroneous values and the impact on the controller stability was evaluated. Excellent tolerance of the controller was demonstrated, as this one failed to performed stable feedback only in the extreme case of a strong underestimation for the ultrasound absorption parameter by a factor of 4 or more.
NASA Astrophysics Data System (ADS)
Enell, Carl-Fredrik; Kozlovsky, Alexander; Turunen, Tauno; Ulich, Thomas; Välitalo, Sirkku; Scotto, Carlo; Pezzopane, Michael
2016-03-01
This paper presents a comparison between standard ionospheric parameters manually and automatically scaled from ionograms recorded at the high-latitude Sodankylä Geophysical Observatory (SGO, ionosonde SO166, 64.1° geomagnetic latitude), located in the vicinity of the auroral oval. The study is based on 2610 ionograms recorded during the period June-December 2013. The automatic scaling was made by means of the Autoscala software. A few typical examples are shown to outline the method, and statistics are presented regarding the differences between manually and automatically scaled values of F2, F1, E and sporadic E (Es) layer parameters. We draw the conclusions that: 1. The F2 parameters scaled by Autoscala, foF2 and M(3000)F2, are reliable. 2. F1 is identified by Autoscala in significantly fewer cases (about 50 %) than in the manual routine, but if identified the values of foF1 are reliable. 3. Autoscala frequently (30 % of the cases) detects an E layer when the manual scaling process does not. When identified by both methods, the Autoscala E-layer parameters are close to those manually scaled, foE agreeing to within 0.4 MHz. 4. Es and parameters of Es identified by Autoscala are in many cases different from those of the manual scaling. Scaling of Es at auroral latitudes is often a difficult task.
Wieser, Stefan; Axmann, Markus; Schütz, Gerhard J.
2008-01-01
We propose here an approach for the analysis of single-molecule trajectories which is based on a comprehensive comparison of an experimental data set with multiple Monte Carlo simulations of the diffusion process. It allows quantitative data analysis, particularly whenever analytical treatment of a model is infeasible. Simulations are performed on a discrete parameter space and compared with the experimental results by a nonparametric statistical test. The method provides a matrix of p-values that assess the probability for having observed the experimental data at each setting of the model parameters. We show the testing approach for three typical situations observed in the cellular plasma membrane: i), free Brownian motion of the tracer, ii), hop diffusion of the tracer in a periodic meshwork of squares, and iii), transient binding of the tracer to slowly diffusing structures. By plotting the p-value as a function of the model parameters, one can easily identify the most consistent parameter settings but also recover mutual dependencies and ambiguities which are difficult to determine by standard fitting routines. Finally, we used the test to reanalyze previous data obtained on the diffusion of the glycosylphosphatidylinositol-protein CD59 in the plasma membrane of the human T24 cell line. PMID:18805933
Ali, Roushown; Yashima, Masatomo
2003-05-01
Lattice parameters and the structural phase transition of La(0.68)(Ti(0.95),Al(0.05))O(3) have been investigated in situ in the temperature range 301-689 K by the synchrotron radiation powder diffraction (SR-PD) technique. High-angular-resolution SR-PD is confirmed to be a powerful technique for determining precise lattice parameters around a phase-transition temperature. The title compound exhibits a reversible phase transition between orthorhombic and tetragonal phases at 622.3 +/- 0.6 K. The following results were obtained: (i) the lattice parameters increased continuously with temperature, while the b/a ratio decreased continuously with temperature and became unity at the orthorhombic-tetragonal transition point; (ii) no hysteresis was observed between the lattice-parameter values measured on heating and on cooling. Results (i) and (ii) indicate that the orthorhombic-tetragonal phase transition is continuous and reversible. The b/a ratio is found to exhibit a more continuous temperature evolution than does the order parameter for a typical second-order phase transition based on Landau theory.
Burgette, Lane F; Reiter, Jerome P
2013-06-01
Multinomial outcomes with many levels can be challenging to model. Information typically accrues slowly with increasing sample size, yet the parameter space expands rapidly with additional covariates. Shrinking all regression parameters towards zero, as often done in models of continuous or binary response variables, is unsatisfactory, since setting parameters equal to zero in multinomial models does not necessarily imply "no effect." We propose an approach to modeling multinomial outcomes with many levels based on a Bayesian multinomial probit (MNP) model and a multiple shrinkage prior distribution for the regression parameters. The prior distribution encourages the MNP regression parameters to shrink toward a number of learned locations, thereby substantially reducing the dimension of the parameter space. Using simulated data, we compare the predictive performance of this model against two other recently-proposed methods for big multinomial models. The results suggest that the fully Bayesian, multiple shrinkage approach can outperform these other methods. We apply the multiple shrinkage MNP to simulating replacement values for areal identifiers, e.g., census tract indicators, in order to protect data confidentiality in public use datasets.
NASA Astrophysics Data System (ADS)
Piecuch, C. G.; Huybers, P. J.; Tingley, M.
2016-12-01
Sea level observations from coastal tide gauges are some of the longest instrumental records of the ocean. However, these data can be noisy, biased, and gappy, featuring missing values, and reflecting land motion and local effects. Coping with these issues in a formal manner is a challenging task. Some studies use Bayesian approaches to estimate sea level from tide gauge records, making inference probabilistically. Such methods are typically empirically Bayesian in nature: model parameters are treated as known and assigned point values. But, in reality, parameters are not perfectly known. Empirical Bayes methods thus neglect a potentially important source of uncertainty, and so may overestimate the precision (i.e., underestimate the uncertainty) of sea level estimates. We consider whether empirical Bayes methods underestimate uncertainty in sea level from tide gauge data, comparing to a full Bayes method that treats parameters as unknowns to be solved for along with the sea level field. We develop a hierarchical algorithm that we apply to tide gauge data on the North American northeast coast over 1893-2015. The algorithm is run in full Bayes mode, solving for the sea level process and parameters, and in empirical mode, solving only for the process using fixed parameter values. Error bars on sea level from the empirical method are smaller than from the full Bayes method, and the relative discrepancies increase with time; the 95% credible interval on sea level values from the empirical Bayes method in 1910 and 2010 is 23% and 56% narrower, respectively, than from the full Bayes approach. To evaluate the representativeness of the credible intervals, empirical Bayes and full Bayes methods are applied to corrupted data of a known surrogate field. Using rank histograms to evaluate the solutions, we find that the full Bayes method produces generally reliable error bars, whereas the empirical Bayes method gives too-narrow error bars, such that the 90% credible interval only encompasses 70% of true process values. Results demonstrate that parameter uncertainty is an important source of process uncertainty, and advocate for the fully Bayesian treatment of tide gauge records in ocean circulation and climate studies.
The influence of environmental factors on the deposition velocity of thoron progeny.
Li, H; Zhang, L; Guo, Q
2012-11-01
Passive measuring devices are comprehensively employed in thoron progeny surveys, while the deposition velocity of thoron progeny is the most critical parameter, which varies in different environments. In this study, to analyse the influence of environmental factors on thoron progeny deposition velocity, an improved model was proposed on the basis of Lai's aerosol deposition model and the Jacobi's model, and a series of measurements were carried out to verify the model. According to the calculations, deposition velocity decreases with increasing aerosol diameter and also aerosol concentration, while increases with increasing ventilation rate. In typical indoor environments, a typical value of 1.26 × 10(-5)m s(-1) is recommended, with a range between 7.6 × 10(-7) and 3.2 × 10(-4) m s(-1).
Time‐dependent renewal‐model probabilities when date of last earthquake is unknown
Field, Edward H.; Jordan, Thomas H.
2015-01-01
We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models.We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
Ventricular beat classifier using fractal number clustering.
Bakardjian, H
1992-09-01
A two-stage ventricular beat 'associative' classification procedure is described. The first stage separates typical beats from extrasystoles on the basis of area and polarity rules. At the second stage, the extrasystoles are classified in self-organised cluster formations of adjacent shape parameter values. This approach avoids the use of threshold values for discrimination between ectopic beats of different shapes, which could be critical in borderline cases. A pattern shape feature conventionally called a 'fractal number', in combination with a polarity attribute, was found to be a good criterion for waveform evaluation. An additional advantage of this pattern classification method is its good computational efficiency, which affords the opportunity to implement it in real-time systems.
Australian aerosol backscatter survey
NASA Technical Reports Server (NTRS)
Gras, John L.; Jones, William D.
1989-01-01
This paper describes measurements of the atmospheric backscatter coefficient in and around Australia during May and June 1986. One set of backscatter measurements was made with a CO2 lidar operating at 10.6 microns; the other set was obtained from calculations using measured aerosol parameters. Despite the two quite different data collection techniques, there is quite good agreement between the two methods. Backscatter values range from near 1 x 10 to the -8th/m per sr near the surface to 4 - 5 x 10 to the -11th/m per sr in the free troposphere at 5-7-km altitude. The values in the free troposphere are somewhat lower than those typically measured at the same height in the Northern Hemisphere.
Dynamic characteristics of organic bulk-heterojunction solar cells
NASA Astrophysics Data System (ADS)
Babenko, S. D.; Balakai, A. A.; Moskvin, Yu. L.; Simbirtseva, G. V.; Troshin, P. A.
2010-12-01
Transient characteristics of organic bulk-heterojunction solar cells have been studied using pulsed laser probing. An analysis of the photoresponse waveforms of a typical solar cell measured by varying load resistance within broad range at different values of the bias voltage provided detailed information on the photocell parameters that characterize electron-transport properties of active layers. It is established that the charge carrier mobility is sufficient to ensure high values of the fill factor (˜0.6) in the obtained photocells. On approaching the no-load voltage, the differential capacitance of the photocell exhibits a sixfold increase as compared to the geometric capacitance. A possible mechanism of recombination losses in the active medium is proposed.
Upper Limit of the Viscosity Parameter in Accretion Flows around a Black Hole with Shock Waves
NASA Astrophysics Data System (ADS)
Nagarkoti, Shreeram; Chakrabarti, Sandip K.
2016-01-01
Black hole accretion is necessarily transonic; thus, flows must become supersonic and, therefore, sub-Keplerian before they enter into the black hole. The viscous timescale is much longer than the infall timescale close to a black hole. Hence, the angular momentum remains almost constant and the centrifugal force ˜ {l}2/{r}3 becomes increasingly dominant over the gravitational force ˜ 1/{r}2. The slowed down matter piles creating an accretion shock. The flow between shock and inner sonic point is puffed up and behaves like a boundary layer. This so-called Comptonizing cloud/corona produces hard X-rays and jets/outflows and, therefore, is an important component of black hole astrophysics. In this paper, we study steady state viscous, axisymmetric, transonic accretion flows around a Schwarzschild black hole. We adopt a viscosity parameter α and compute the highest possible value of α (namely, {α }{cr}) for each pair of two inner boundary parameters (namely, specific angular momentum carried to horizon, lin and specific energy at inner sonic point, E({x}{in})) which is still capable of producing a standing or oscillating shock. We find that while such possibilities exist for α as high as {α }{cr}=0.3 in very small regions of the flow parameter space, typical {α }{cr} appears to be about ˜0.05-0.1. Coincidentally, this also happens to be the typical viscosity parameter achieved by simulations of magnetorotational instabilities in accretion flows. We therefore believe that all realistic accretion flows are likely to have centrifugal pressure supported shocks unless the viscosity parameter everywhere is higher than {α }{cr}.
NASA Astrophysics Data System (ADS)
Kraus, Michal; Juhásová Šenitková, Ingrid
2017-10-01
Building environmental audit and the assessment of indoor air quality (IAQ) in typical residential buildings is necessary process to ensure users’ health and well-being. The paper deals with the concentrations on indoor dust particles (PM10) in the context of hygrothermal microclimate in indoor environment. The indoor temperature, relative humidity and air movement are basic significant factors determining the PM10 concentration [μg/m3]. The experimental measurements in this contribution represent the impact of indoor physical parameters on the concentration of particulate matter mass concentration. The occurrence of dust particles is typical for the almost two-thirds of interiors of the buildings. Other parameters indoor environment, such as air change rate, volume of the room, roughness and porosity of the building material surfaces, static electricity, light ions and others, were set constant and they are not taken into account in this study. The mass concentration of PM10 is measured during summer season in apartment of residential prefabricated building. The values of global temperature [°C] and relative humidity of indoor air [%] are also monitored. The quantity of particulate mass matter is determined gravimetrically by weighing according to CSN EN 12 341 (2014). The obtained results show that the temperature difference of the internal environment does not have a significant effect on the concentration PM10. Vice versa, the difference of relative humidity exhibits a difference of the concentration of dust particles. Higher levels of indoor particulates are observed for low values of relative humidity. The decreasing of relative air humidity about 10% caused 10µg/m3 of PM10 concentration increasing. The hygienic limit value of PM10 concentration is not exceeded at any point of experimental measurement.
The potential benefits of location-specific biometeorological indexes
NASA Astrophysics Data System (ADS)
Wong, Ho Ting; Wang, Jinfeng; Yin, Qian; Chen, Si; Lai, Poh Chin
2017-09-01
It is becoming popular to use biometeorological indexes to study the effects of weather on human health. Most of the biometeorological indexes were developed decades ago and only applicable to certain locations because of different climate types. Merely using standard biometeorological indexes to replace typical weather factors in biometeorological studies of different locations may not be an ideal research direction. This research is aimed at assessing the difference of statistical power between using standard biometeorological indexes and typical weather factors on describing the effects of extreme weather conditions on daily ambulance demands in Hong Kong. Results showed that net effective temperature and apparent temperature did not perform better than typical weather factors in describing daily ambulance demands in this study. The maximum adj- R 2 improvement was only 0.08, whereas the maximum adj- R 2 deterioration was 0.07. In this study, biometeorological indexes did not perform better than typical weather factors, possibly due to the differences of built environments and lifestyles in different locations and eras. Regarding built environments, the original parameters for calculating the index values may not be applicable to Hong Kong as buildings in Hong Kong are extremely dense and most are equipped with air conditioners. Regarding lifestyles, the parameters, which were set decades ago, may be outdated and not suitable to modern lifestyles as using hand-held electrical fans on the street to help reduce heat stress are popular. Hence, it is ideal to have tailor-made updated location-specific biometeorological indexes to study the effects of weather on human health.
Multiverse understanding of cosmological coincidences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bousso, Raphael; Hall, Lawrence J.; Nomura, Yasunori
2009-09-15
There is a deep cosmological mystery: although dependent on very different underlying physics, the time scales of structure formation, of galaxy cooling (both radiatively and against the CMB), and of vacuum domination do not differ by many orders of magnitude, but are all comparable to the present age of the universe. By scanning four landscape parameters simultaneously, we show that this quadruple coincidence is resolved. We assume only that the statistical distribution of parameter values in the multiverse grows towards certain catastrophic boundaries we identify, across which there are drastic regime changes. We find order-of-magnitude predictions for the cosmological constant,more » the primordial density contrast, the temperature at matter-radiation equality, the typical galaxy mass, and the age of the universe, in terms of the fine structure constant and the electron, proton and Planck masses. Our approach permits a systematic evaluation of measure proposals; with the causal patch measure, we find no runaway of the primordial density contrast and the cosmological constant to large values.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana L. Kelly
Typical engineering systems in applications with high failure consequences such as nuclear reactor plants often employ redundancy and diversity of equipment in an effort to lower the probability of failure and therefore risk. However, it has long been recognized that dependencies exist in these redundant and diverse systems. Some dependencies, such as common sources of electrical power, are typically captured in the logic structure of the risk model. Others, usually referred to as intercomponent dependencies, are treated implicitly by introducing one or more statistical parameters into the model. Such common-cause failure models have limitations in a simulation environment. In addition,more » substantial subjectivity is associated with parameter estimation for these models. This paper describes an approach in which system performance is simulated by drawing samples from the joint distributions of dependent variables. The approach relies on the notion of a copula distribution, a notion which has been employed by the actuarial community for ten years or more, but which has seen only limited application in technological risk assessment. The paper also illustrates how equipment failure data can be used in a Bayesian framework to estimate the parameter values in the copula model. This approach avoids much of the subjectivity required to estimate parameters in traditional common-cause failure models. Simulation examples are presented for failures in time. The open-source software package R is used to perform the simulations. The open-source software package WinBUGS is used to perform the Bayesian inference via Markov chain Monte Carlo sampling.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peryshkin, A. Yu., E-mail: alexb700@yandex.ru; Makarov, P. V., E-mail: bacardi@ispms.ru; Eremin, M. O., E-mail: bacardi@ispms.ru
An evolutionary approach proposed in [1, 2] combining the achievements of traditional macroscopic theory of solid mechanics and basic ideas of nonlinear dynamics is applied in a numerical simulation of present-day tectonic plates motion and seismic process in Central Asia. Relative values of strength parameters of rigid blocks with respect to the soft zones were characterized by the δ parameter that was varied in the numerical experiments within δ = 1.1–1.8 for different groups of the zonal-block divisibility. In general, the numerical simulations of tectonic block motion and accompanying seismic process in the model geomedium indicate that the numerical solutionsmore » of the solid mechanics equations characterize its deformation as a typical behavior of a nonlinear dynamic system under conditions of self-organized criticality.« less
The Effect of Yaw Coupling in Turning Maneuvers of Large Transport Aircraft
NASA Technical Reports Server (NTRS)
McNeill, Walter E.; Innis, Robert C.
1965-01-01
A study has been made, using a piloted moving simulator, of the effects of the yaw-coupling parameters N(sub p) and N(sub delta(sub a) on the lateral-directional handling qualities of a large transport airplane at landing-approach airspeed. It is shown that the desirable combinations of these parameters tend to be more proverse when compared with values typical of current aircraft. Results of flight tests in a large variable-stability jet transport showed trends which were similar to those of the simulator data. Areas of minor disagreement, which were traced to differences in airplane geometry, indicate that pilot consciousness of side acceleration forces can be an important factor in handling qualities of future long-nosed transport aircraft.
Optical fiber designs for beam shaping
NASA Astrophysics Data System (ADS)
Farley, Kevin; Conroy, Michael; Wang, Chih-Hao; Abramczyk, Jaroslaw; Campbell, Stuart; Oulundsen, George; Tankala, Kanishka
2014-03-01
A large number of power delivery applications for optical fibers require beams with very specific output intensity profiles; in particular applications that require a focused high intensity beam typically image the near field (NF) intensity distribution at the exit surface of an optical fiber. In this work we discuss optical fiber designs that shape the output beam profile to more closely correspond to what is required in many real world industrial applications. Specifically we present results demonstrating the ability to transform Gaussian beams to shapes required for industrial applications and how that relates to system parameters such as beam product parameter (BPP) values. We report on the how different waveguide structures perform in the NF and show results on how to achieve flat-top with circular outputs.
Pupillographic assessment of sleepiness in sleep-deprived healthy subjects.
Wilhelm, B; Wilhelm, H; Lüdtke, H; Streicher, P; Adler, M
1998-05-01
Spontaneous pupillary-behavior in darkness provides information about a subject's level of sleepiness. In the present work, pupil measurements in complete darkness and quiet have been recorded continuously over 11-minute period with infrared video pupillography at 25 Hz. The data have been analyzed to yield three parameters describing pupil behavior; the power of diameter variation at frequencies below 0.8 Hz (slow changes in pupil size), the pupillary unrest index, and the average pupil size. To investigate the changes of these parameters in sleep deprivation, spontaneous pupillary behavior in darkness was recorded every 2 hours in 13 healthy subjects from 19:00 to 07:00 during forced wakefulness. On each occasion, comparative subjective sleepiness was assessed with a self-rating scale (Stanford Sleepiness Scale, SSS). The power of slow pupillary oscillations (< or = 0.8 Hz) increased significantly and so did the values of SSS, while basic pupil diameter decreased significantly. Slow pupillary oscillations and SSS did not correlate well in general but high values of pupil parameters were always associated with high values in subjective rating. Our results demonstrate a strong relationship between ongoing sleep deprivation and typical changes in the frequency profiles of spontaneous pupillary oscillations and the tendency to instability in pupil size in normals. These findings suggest that the results of pupil data analysis permit an objective measurement of sleepiness.
NASA Astrophysics Data System (ADS)
Hoppe, C. J. M.; Langer, G.; Rokitta, S. D.; Wolf-Gladrow, D. A.; Rost, B.
2012-07-01
The growing field of ocean acidification research is concerned with the investigation of organism responses to increasing pCO2 values. One important approach in this context is culture work using seawater with adjusted CO2 levels. As aqueous pCO2 is difficult to measure directly in small-scale experiments, it is generally calculated from two other measured parameters of the carbonate system (often AT, CT or pH). Unfortunately, the overall uncertainties of measured and subsequently calculated values are often unknown. Especially under high pCO2, this can become a severe problem with respect to the interpretation of physiological and ecological data. In the few datasets from ocean acidification research where all three of these parameters were measured, pCO2 values calculated from AT and CT are typically about 30% lower (i.e. ~300 μatm at a target pCO2 of 1000 μatm) than those calculated from AT and pH or CT and pH. This study presents and discusses these discrepancies as well as likely consequences for the ocean acidification community. Until this problem is solved, one has to consider that calculated parameters of the carbonate system (e.g. pCO2, calcite saturation state) may not be comparable between studies, and that this may have important implications for the interpretation of CO2 perturbation experiments.
NASA Astrophysics Data System (ADS)
Hoppe, C. J. M.; Langer, G.; Rokitta, S. D.; Wolf-Gladrow, D. A.; Rost, B.
2012-02-01
The growing field of ocean acidification research is concerned with the investigation of organisms' responses to increasing pCO2 values. One important approach in this context is culture work using seawater with adjusted CO2 levels. As aqueous pCO2 is difficult to measure directly in small scale experiments, it is generally calculated from two other measured parameters of the carbonate system (often AT, CT or pH). Unfortunately, the overall uncertainties of measured and subsequently calculated values are often unknown. Especially under high pCO2, this can become a severe problem with respect to the interpretation of physiological and ecological data. In the few datasets from ocean acidification research where all three of these parameters were measured, pCO2 values calculated from AT and CT are typically about 30 % lower (i.e. ~300 μatm at a target pCO2 of 1000 μatm) than those calculated from AT and pH or CT and pH. This study presents and discusses these discrepancies as well as likely consequences for the ocean acidification community. Until this problem is solved, one has to consider that calculated parameters of the carbonate system (e.g. pCO2, calcite saturation state) may not be comparable between studies, and that this may have important implications for the interpretation of CO2 perturbation experiments.
A comprehensive photometric study of dynamically evolved small van den Bergh-Hagen open clusters
NASA Astrophysics Data System (ADS)
Piatti, Andrés E.
2016-12-01
We present results from Johnson UBV, Kron-Cousins RI and Washington CT1T2 photometries for seven van den Bergh-Hagen (vdBH) open clusters, namely, vdBH 1, 10, 31, 72, 87, 92, and 118. The high-quality, multiband photometric data sets were used to trace the cluster stellar density radial profiles and to build colour-magnitude diagrams and colour-colour diagrams from which we estimated their structural parameters and fundamental astrophysical properties. The clusters in our sample cover a wide age range, from ˜60 Myr up to 2.8 Gyr, are of relatively small size (˜1-6 pc) and are placed at distances from the Sun which vary between 1.8 and 6.3 kpc, respectively. We also estimated lower limits for the cluster present-day masses as well as half-mass relaxation times (tr). The resulting values in combination with the structural parameter values suggest that the studied clusters are in advanced stages of their internal dynamical evolution (age/tr ˜ 20-320), possibly in the typical phase of those tidally filled with mass segregation in their core regions. Compared to open clusters in the solar neighbourhood, the seven vdBH clusters are within more massive (˜80-380 M⊙), with higher concentration parameter values (c ˜ 0.75-1.15) and dynamically evolved ones.
Quasar spectral variability from the XMM-Newton serendipitous source catalogue
NASA Astrophysics Data System (ADS)
Serafinelli, R.; Vagnetti, F.; Middei, R.
2017-04-01
Context. X-ray spectral variability analyses of active galactic nuclei (AGN) with moderate luminosities and redshifts typically show a "softer when brighter" behaviour. Such a trend has rarely been investigated for high-luminosity AGNs (Lbol ≳ 1044 erg/s), nor for a wider redshift range (e.g. 0 ≲ z ≲ 5). Aims: We present an analysis of spectral variability based on a large sample of 2700 quasars, measured at several different epochs, extracted from the fifth release of the XMM-Newton Serendipitous Source Catalogue. Methods: We quantified the spectral variability through the parameter β defined as the ratio between the change in the photon index Γ and the corresponding logarithmic flux variation, β = -ΔΓ/Δlog FX. Results: Our analysis confirms a softer when brighter behaviour for our sample, extending the previously found general trend to high luminosity and redshift. We estimate an ensemble value of the spectral variability parameter β = -0.69 ± 0.03. We do not find dependence of β on redshift, X-ray luminosity, black hole mass or Eddington ratio. A subsample of radio-loud sources shows a smaller spectral variability parameter. There is also some change with the X-ray flux, with smaller β (in absolute value) for brighter sources. We also find significant correlations for a small number of individual sources, indicating more negative values for some sources.
Interpreting the Weibull fitting parameters for diffusion-controlled release data
NASA Astrophysics Data System (ADS)
Ignacio, Maxime; Chubynsky, Mykyta V.; Slater, Gary W.
2017-11-01
We examine the diffusion-controlled release of molecules from passive delivery systems using both analytical solutions of the diffusion equation and numerically exact Lattice Monte Carlo data. For very short times, the release process follows a √{ t } power law, typical of diffusion processes, while the long-time asymptotic behavior is exponential. The crossover time between these two regimes is determined by the boundary conditions and initial loading of the system. We show that while the widely used Weibull function provides a reasonable fit (in terms of statistical error), it has two major drawbacks: (i) it does not capture the correct limits and (ii) there is no direct connection between the fitting parameters and the properties of the system. Using a physically motivated interpolating fitting function that correctly includes both time regimes, we are able to predict the values of the Weibull parameters which allows us to propose a physical interpretation.
Invariant polygons in systems with grazing-sliding.
Szalai, R; Osinga, H M
2008-06-01
The paper investigates generic three-dimensional nonsmooth systems with a periodic orbit near grazing-sliding. We assume that the periodic orbit is unstable with complex multipliers so that two dominant frequencies are present in the system. Because grazing-sliding induces a dimension loss and the instability drives every trajectory into sliding, the system has an attractor that consists of forward sliding orbits. We analyze this attractor in a suitably chosen Poincare section using a three-parameter generalized map that can be viewed as a normal form. We show that in this normal form the attractor must be contained in a finite number of lines that intersect in the vertices of a polygon. However the attractor is typically larger than the associated polygon. We classify the number of lines involved in forming the attractor as a function of the parameters. Furthermore, for fixed values of parameters we investigate the one-dimensional dynamics on the attractor.
Geothermal reservoir simulation of hot sedimentary aquifer system using FEFLOW®
NASA Astrophysics Data System (ADS)
Nur Hidayat, Hardi; Gala Permana, Maximillian
2017-12-01
The study presents the simulation of hot sedimentary aquifer for geothermal utilization. Hot sedimentary aquifer (HSA) is a conduction-dominated hydrothermal play type utilizing deep aquifer, which is heated by near normal heat flow. One of the examples of HSA is Bavarian Molasse Basin in South Germany. This system typically uses doublet wells: an injection and production well. The simulation was run for 3650 days of simulation time. The technical feasibility and performance are analysed in regards to the extracted energy from this concept. Several parameters are compared to determine the model performance. Parameters such as reservoir characteristics, temperature information and well information are defined. Several assumptions are also defined to simplify the simulation process. The main results of the simulation are heat period budget or total extracted heat energy, and heat rate budget or heat production rate. Qualitative approaches for sensitivity analysis are conducted by using five parameters in which assigned lower and higher value scenarios.
Stability of Gradient Field Corrections for Quantitative Diffusion MRI.
Rogers, Baxter P; Blaber, Justin; Welch, E Brian; Ding, Zhaohua; Anderson, Adam W; Landman, Bennett A
2017-02-11
In magnetic resonance diffusion imaging, gradient nonlinearity causes significant bias in the estimation of quantitative diffusion parameters such as diffusivity, anisotropy, and diffusion direction in areas away from the magnet isocenter. This bias can be substantially reduced if the scanner- and coil-specific gradient field nonlinearities are known. Using a set of field map calibration scans on a large (29 cm diameter) phantom combined with a solid harmonic approximation of the gradient fields, we predicted the obtained b-values and applied gradient directions throughout a typical field of view for brain imaging for a typical 32-direction diffusion imaging sequence. We measured the stability of these predictions over time. At 80 mm from scanner isocenter, predicted b-value was 1-6% different than intended due to gradient nonlinearity, and predicted gradient directions were in error by up to 1 degree. Over the course of one month the change in these quantities due to calibration-related factors such as scanner drift and variation in phantom placement was <0.5% for b-values, and <0.5 degrees for angular deviation. The proposed calibration procedure allows the estimation of gradient nonlinearity to correct b-values and gradient directions ahead of advanced diffusion image processing for high angular resolution data, and requires only a five-minute phantom scan that can be included in a weekly or monthly quality assurance protocol.
Gaussian copula as a likelihood function for environmental models
NASA Astrophysics Data System (ADS)
Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.
2017-12-01
Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function, because of their favourable analytical properties. Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors e.g. for flow data which are typically more uncertain in high flows than in periods with low flows. Problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in the "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions . 1) We show that this copula can be then used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an interesting departure from the usage of fully parametric distributions as likelihood functions - and they could help us to better capture the statistical properties of errors and make more reliable predictions.
Signs Indicating Imminent Death in Escherichia coli-Infected Broilers.
Matthijs, M G R; Nieuwenhuis, J F; Dwars, R M
2017-09-01
Broilers were observed during 9 days for clinical signs after intratracheal inoculation at 8 days of age with 10 7 E. coli 506. It was determined if these signs were predictive for imminent death. Hourly observations were made daily from a distance of 1-2 m and nightly by camera observation, with respect to the following parameters: level of attention, locomotory activity, posture and appearance, interaction, and impairment of respiration. For deviations of the normal state for these five parameters (i.e., typical clinical signs of disease), scores were defined in up to four classes. The periods of time elapsing from attaining a score for the first time to death were registered per bird for each score for each parameter. Of 114 birds, 85 did not present typical signs of illness as described, and 29 presented the following clinical history: 25 died after presenting signs of illness, 2 died without previous signs, 1 fell ill but survived, and 1 fell ill and recovered. Extended clinical examination was performed in birds presenting clinical signs; temperature, heart rate, respiratory rate, and subcutaneous capillary refill time were measured. The level of attention, and posture and appearance were affected most often in ill birds; 25% of these birds died within 5 and 4 hr, respectively; 50% died within 12 hr; and 75% died within 20 and 19 hr, respectively. Any of these typical signs of illness visible from 1-2 m indicated imminent death, with 75% of the birds dying within 20 hr. Measurements resulting from extended clinical examination proved of lesser predictive value. From these observations, a protocol for intervention to prevent animal suffering may be designed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, J
Purpose: To investigate the potential utility of in-line phase-contrast imaging (ILPCI) technique with synchrotron radiation in detecting early hepatocellular carcinoma and cavernous hemangioma of live using in vitro model system. Methods: Without contrast agents, three typical early hepatocellular carcinoma specimens and three typical cavernous hemangioma of live specimens were imaged using ILPCI. To quantitatively discriminate early hepatocellular carcinoma tissues and cavernous hemangioma tissues, the projection images texture feature based on gray level co-occurrence matrix (GLCM) were extracted. The texture parameters of energy, inertia, entropy, correlation, sum average, sum entropy, difference average, difference entropy and inverse difference moment, were obtained respectively.more » Results: In the ILPCI planar images of early hepatocellular carcinoma specimens, vessel trees were clearly visualized on the micrometer scale. Obvious distortion deformation was presented, and the vessel mostly appeared as a ‘dry stick’. Liver textures appeared not regularly. In the ILPCI planar images of cavernous hemangioma of live specimens, typical vessels had not been found compared with the early hepatocellular carcinoma planar images. The planar images of cavernous hemangioma of live specimens clearly displayed the dilated hepatic sinusoids with the diameter of less than 100 microns, but all of them were overlapped with each other. The texture parameters of energy, inertia, entropy, correlation, sum average, sum entropy, and difference average, showed a statistically significant between the two types specimens image (P<0.01), except the texture parameters of difference entropy and inverse difference moment(P>0.01). Conclusion: The results indicate that there are obvious changes in morphological levels including vessel structures and liver textures. The study proves that this imaging technique has a potential value in evaluating early hepatocellular carcinoma and cavernous hemangioma of live.« less
Kinetic analysis of single molecule FRET transitions without trajectories
NASA Astrophysics Data System (ADS)
Schrangl, Lukas; Göhring, Janett; Schütz, Gerhard J.
2018-03-01
Single molecule Förster resonance energy transfer (smFRET) is a popular tool to study biological systems that undergo topological transitions on the nanometer scale. smFRET experiments typically require recording of long smFRET trajectories and subsequent statistical analysis to extract parameters such as the states' lifetimes. Alternatively, analysis of probability distributions exploits the shapes of smFRET distributions at well chosen exposure times and hence works without the acquisition of time traces. Here, we describe a variant that utilizes statistical tests to compare experimental datasets with Monte Carlo simulations. For a given model, parameters are varied to cover the full realistic parameter space. As output, the method yields p-values which quantify the likelihood for each parameter setting to be consistent with the experimental data. The method provides suitable results even if the actual lifetimes differ by an order of magnitude. We also demonstrated the robustness of the method to inaccurately determine input parameters. As proof of concept, the new method was applied to the determination of transition rate constants for Holliday junctions.
Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries
Lu, Zhiming
2018-01-30
Sensitivity analysis is an important component of many model activities in hydrology. Numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analysis focus on the sensitivity of state variables (e.g. hydraulic head) to parameters representing medium properties such as hydraulic conductivity or prescribed values such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to some shape parameters or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, wemore » derive the equation (and its associated initial and boundary conditions) for sensitivity of hydraulic head to shape parameters using continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general or analytically in some simplified cases. Finally, the approach has been demonstrated through two examples and the results are compared favorably to those from analytical solutions or numerical finite difference methods with perturbed model domains, while numerical shortcomings of the finite difference method are avoided.« less
Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Zhiming
Sensitivity analysis is an important component of many model activities in hydrology. Numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analysis focus on the sensitivity of state variables (e.g. hydraulic head) to parameters representing medium properties such as hydraulic conductivity or prescribed values such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to some shape parameters or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, wemore » derive the equation (and its associated initial and boundary conditions) for sensitivity of hydraulic head to shape parameters using continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general or analytically in some simplified cases. Finally, the approach has been demonstrated through two examples and the results are compared favorably to those from analytical solutions or numerical finite difference methods with perturbed model domains, while numerical shortcomings of the finite difference method are avoided.« less
f1: a code to compute Appell's F1 hypergeometric function
NASA Astrophysics Data System (ADS)
Colavecchia, F. D.; Gasaneo, G.
2004-02-01
In this work we present the FORTRAN code to compute the hypergeometric function F1( α, β1, β2, γ, x, y) of Appell. The program can compute the F1 function for real values of the variables { x, y}, and complex values of the parameters { α, β1, β2, γ}. The code uses different strategies to calculate the function according to the ideas outlined in [F.D. Colavecchia et al., Comput. Phys. Comm. 138 (1) (2001) 29]. Program summaryTitle of the program: f1 Catalogue identifier: ADSJ Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSJ Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Licensing provisions: none Computers: PC compatibles, SGI Origin2∗ Operating system under which the program has been tested: Linux, IRIX Programming language used: Fortran 90 Memory required to execute with typical data: 4 kbytes No. of bits in a word: 32 No. of bytes in distributed program, including test data, etc.: 52 325 Distribution format: tar gzip file External subprograms used: Numerical Recipes hypgeo [W.H. Press et al., Numerical Recipes in Fortran 77, Cambridge Univ. Press, 1996] or chyp routine of R.C. Forrey [J. Comput. Phys. 137 (1997) 79], rkf45 [L.F. Shampine and H.H. Watts, Rep. SAND76-0585, 1976]. Keywords: Numerical methods, special functions, hypergeometric functions, Appell functions, Gauss function Nature of the physical problem: Computing the Appell F1 function is relevant in atomic collisions and elementary particle physics. It is usually the result of multidimensional integrals involving Coulomb continuum states. Method of solution: The F1 function has a convergent-series definition for | x|<1 and | y|<1, and several analytic continuations for other regions of the variable space. The code tests the values of the variables and selects one of the precedent cases. In the convergence region the program uses the series definition near the origin of coordinates, and a numerical integration of the third-order differential parametric equation for the F1 function. Also detects several special cases according to the values of the parameters. Restrictions on the complexity of the problem: The code is restricted to real values of the variables { x, y}. Also, there are some parameter domains that are not covered. These usually imply differences between integer parameters that lead to negative integer arguments of Gamma functions. Typical running time: Depends basically on the variables. The computation of Table 4 of [F.D. Colavecchia et al., Comput. Phys. Comm. 138 (1) (2001) 29] (64 functions) requires approximately 0.33 s in a Athlon 900 MHz processor.
NASA Astrophysics Data System (ADS)
Liu, Zhiguo; Yan, Guangyao; Mu, Zhitao; Li, Xudong
2018-01-01
The accelerated pitting corrosion test of 7B04 aluminum alloy specimen was carried out according to the spectrum which simulated airport environment, and the corresponding pitting corrosion damage was obtained and was defined through three parameters A and B and C which respectively denoted the corrosion pit surface length and width and corrosion pit depth. The ratio between three parameters could determine the morphology characteristics of corrosion pits. On this basis the stress concentration factor of typical corrosion pit morphology under certain load conditions was quantitatively analyzed. The research shows that the corrosion pits gradually incline to be ellipse in surface and moderate in depth, and most value of B/A and C/A lies in 1 between 4 and few maximum exceeds 4; The stress concentration factor Kf of corrosion pits is obviously affected by the its morphology, the value of Kf increases with corrosion pits depth increasement under certain corrosion pits surface geometry. Also, the value of Kf decreases with surface width increasement under certain corrosion pits depth. The research conclusion can set theory basis for corrosion fatigue life analysis of aircraft aluminum alloy structure.
Statistical characterization of discrete conservative systems: The web map
NASA Astrophysics Data System (ADS)
Ruiz, Guiomar; Tirnakli, Ugur; Borges, Ernesto P.; Tsallis, Constantino
2017-10-01
We numerically study the two-dimensional, area preserving, web map. When the map is governed by ergodic behavior, it is, as expected, correctly described by Boltzmann-Gibbs statistics, based on the additive entropic functional SB G[p (x ) ] =-k ∫d x p (x ) lnp (x ) . In contrast, possible ergodicity breakdown and transitory sticky dynamical behavior drag the map into the realm of generalized q statistics, based on the nonadditive entropic functional Sq[p (x ) ] =k 1/-∫d x [p(x ) ] q q -1 (q ∈R ;S1=SB G ). We statistically describe the system (probability distribution of the sum of successive iterates, sensitivity to the initial condition, and entropy production per unit time) for typical values of the parameter that controls the ergodicity of the map. For small (large) values of the external parameter K , we observe q -Gaussian distributions with q =1.935 ⋯ (Gaussian distributions), like for the standard map. In contrast, for intermediate values of K , we observe a different scenario, due to the fractal structure of the trajectories embedded in the chaotic sea. Long-standing non-Gaussian distributions are characterized in terms of the kurtosis and the box-counting dimension of chaotic sea.
A novel chaos-based image encryption algorithm using DNA sequence operations
NASA Astrophysics Data System (ADS)
Chai, Xiuli; Chen, Yiran; Broyde, Lucie
2017-01-01
An image encryption algorithm based on chaotic system and deoxyribonucleic acid (DNA) sequence operations is proposed in this paper. First, the plain image is encoded into a DNA matrix, and then a new wave-based permutation scheme is performed on it. The chaotic sequences produced by 2D Logistic chaotic map are employed for row circular permutation (RCP) and column circular permutation (CCP). Initial values and parameters of the chaotic system are calculated by the SHA 256 hash of the plain image and the given values. Then, a row-by-row image diffusion method at DNA level is applied. A key matrix generated from the chaotic map is used to fuse the confused DNA matrix; also the initial values and system parameters of the chaotic system are renewed by the hamming distance of the plain image. Finally, after decoding the diffused DNA matrix, we obtain the cipher image. The DNA encoding/decoding rules of the plain image and the key matrix are determined by the plain image. Experimental results and security analyses both confirm that the proposed algorithm has not only an excellent encryption result but also resists various typical attacks.
Performance evaluation of image-intensifier-TV fluoroscopy systems
NASA Astrophysics Data System (ADS)
van der Putten, Wilhelm J.; Bouley, Shawn
1995-05-01
Through use of a computer model and an aluminum low contrast phantom developed in-house, a method has been developed which is able to grade the imaging performance of fluoroscopy systems through use of a variable, K. This parameter was derived from Rose's model of image perception and is here used as a figure of merit to grade fluoroscopy systems. From Rose's model for an ideal system, a typical value of K for the perception of low contrast details should be between 3 and 7, assuming threshold vision by human observers. Thus, various fluoroscopy systems are graded with different values of K, with a lower value of K indicating better imaging performance of the system. A series of fluoroscopy systems have been graded where the best system produces a value in the low teens, while the poorest systems produce a value in the low twenties. Correlation with conventional image quality measurements is good and the method has the potential for automated assessment of image quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hatayama, Ariyoshi; Ogasawara, Masatada; Yamauchi, Michinori
1994-08-01
Plasma size and other basic performance parameters for 1000-MW(electric) power production are calculated with the blanket energy multiplication factor, the M value, as a parameter. The calculational model is base don the International Thermonuclear Experimental Reactor (ITER) physics design guidelines and includes overall plant power flow. Plasma size decreases as the M value increases. However, the improvement in the plasma compactness and other basic performance parameters, such as the total plant power efficiency, becomes saturated above the M = 5 to 7 range. THus, a value in the M = 5 to 7 range is a reasonable choice for 1000-MW(electric)more » hybrids. Typical plasma parameters for 1000-MW(electric) hybrids with a value of M = 7 are a major radius of R = 5.2 m, minor radius of a = 1.7 m, plasma current of I{sub p} = 15 MA, and toroidal field on the axis of B{sub o} = 5 T. The concept of a thermal fission blanket that uses light water as a coolant is selected as an attractive candidate for electricity-producing hybrids. An optimization study is carried out for this blanket concept. The result shows that a compact, simple structure with a uniform fuel composition for the fissile region is sufficient to obtain optimal conditions for suppressing the thermal power increase caused by fuel burnup. The maximum increase in the thermal power is +3.2%. The M value estimated from the neutronics calculations is {approximately}7.0, which is confirmed to be compatible with the plasma requirement. These studies show that it is possible to use a tokamak fusion core with design requirements similar to those of ITER for a 1000-MW(electric) power reactor that uses existing thermal reactor technology for the blanket. 30 refs., 22 figs., 4 tabs.« less
NASA Astrophysics Data System (ADS)
Virtanen, I. O. I.; Virtanen, I. I.; Pevtsov, A. A.; Yeates, A.; Mursula, K.
2017-07-01
Aims: We aim to use the surface flux transport model to simulate the long-term evolution of the photospheric magnetic field from historical observations. In this work we study the accuracy of the model and its sensitivity to uncertainties in its main parameters and the input data. Methods: We tested the model by running simulations with different values of meridional circulation and supergranular diffusion parameters, and studied how the flux distribution inside active regions and the initial magnetic field affected the simulation. We compared the results to assess how sensitive the simulation is to uncertainties in meridional circulation speed, supergranular diffusion, and input data. We also compared the simulated magnetic field with observations. Results: We find that there is generally good agreement between simulations and observations. Although the model is not capable of replicating fine details of the magnetic field, the long-term evolution of the polar field is very similar in simulations and observations. Simulations typically yield a smoother evolution of polar fields than observations, which often include artificial variations due to observational limitations. We also find that the simulated field is fairly insensitive to uncertainties in model parameters or the input data. Due to the decay term included in the model the effects of the uncertainties are somewhat minor or temporary, lasting typically one solar cycle.
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas
2016-11-01
Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model, that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX) where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
NASA Technical Reports Server (NTRS)
DeLannoy, Gabrielle J. M.; Reichle, Rolf H.; Vrugt, Jasper A.
2013-01-01
Uncertainties in L-band (1.4 GHz) radiative transfer modeling (RTM) affect the simulation of brightness temperatures (Tb) over land and the inversion of satellite-observed Tb into soil moisture retrievals. In particular, accurate estimates of the microwave soil roughness, vegetation opacity and scattering albedo for large-scale applications are difficult to obtain from field studies and often lack an uncertainty estimate. Here, a Markov Chain Monte Carlo (MCMC) simulation method is used to determine satellite-scale estimates of RTM parameters and their posterior uncertainty by minimizing the misfit between long-term averages and standard deviations of simulated and observed Tb at a range of incidence angles, at horizontal and vertical polarization, and for morning and evening overpasses. Tb simulations are generated with the Goddard Earth Observing System (GEOS-5) and confronted with Tb observations from the Soil Moisture Ocean Salinity (SMOS) mission. The MCMC algorithm suggests that the relative uncertainty of the RTM parameter estimates is typically less than 25 of the maximum a posteriori density (MAP) parameter value. Furthermore, the actual root-mean-square-differences in long-term Tb averages and standard deviations are found consistent with the respective estimated total simulation and observation error standard deviations of m3.1K and s2.4K. It is also shown that the MAP parameter values estimated through MCMC simulation are in close agreement with those obtained with Particle Swarm Optimization (PSO).
NASA Astrophysics Data System (ADS)
Espinoza, Néstor; Jordán, Andrés
2016-04-01
Very precise measurements of exoplanet transit light curves both from ground- and space-based observatories make it now possible to fit the limb-darkening coefficients in the transit-fitting procedure rather than fix them to theoretical values. This strategy has been shown to give better results, as fixing the coefficients to theoretical values can give rise to important systematic errors which directly impact the physical properties of the system derived from such light curves such as the planetary radius. However, studies of the effect of limb-darkening assumptions on the retrieved parameters have mostly focused on the widely used quadratic limb-darkening law, leaving out other proposed laws that are either simpler or better descriptions of model intensity profiles. In this work, we show that laws such as the logarithmic, square-root and three-parameter law do a better job that the quadratic and linear laws when deriving parameters from transit light curves, both in terms of bias and precision, for a wide range of situations. We therefore recommend to study which law to use on a case-by-case basis. We provide code to guide the decision of when to use each of these laws and select the optimal one in a mean-square error sense, which we note has a dependence on both stellar and transit parameters. Finally, we demonstrate that the so-called exponential law is non-physical as it typically produces negative intensities close to the limb and should therefore not be used.
A geostatistical extreme-value framework for fast simulation of natural hazard events
Stephenson, David B.
2016-01-01
We develop a statistical framework for simulating natural hazard events that combines extreme value theory and geostatistics. Robust generalized additive model forms represent generalized Pareto marginal distribution parameters while a Student’s t-process captures spatial dependence and gives a continuous-space framework for natural hazard event simulations. Efficiency of the simulation method allows many years of data (typically over 10 000) to be obtained at relatively little computational cost. This makes the model viable for forming the hazard module of a catastrophe model. We illustrate the framework by simulating maximum wind gusts for European windstorms, which are found to have realistic marginal and spatial properties, and validate well against wind gust measurements. PMID:27279768
1984-12-30
as three dimensional, when the assumption is made that all SUTRA parameters and coefficients have a constant value in the third space direction. A...finite element. The type of element employed by SUTRA for two-dimensional simulation is a quadrilateral which has a finite thickness in the third ... space dimension. This type of a quad- rilateral element and a typical two-dimensional mesh is shown in Figure 3.1. - All twelve edges of the two
Relation between the Surface Friction of Plates and their Statistical Microgeometry
1980-01-01
3-6 and -7. Calibration-- are taken for each of the Uicr~r unit exponent values and best fit li;nes by least squares fitted through each"n set of...parameter, [ = 1.de (2-43) (Clauser 1954, 1956). Data from near equilibrium flows (Coles & Hurst 1968) was plotted along with some typical non-equilibrium...too bad a fit even for the non equilibrium flows. Coles and Hurst (1968) recommended that the fit of the law of the wake to velocity profiles should be
A computer program for estimation from incomplete multinomial data
NASA Technical Reports Server (NTRS)
Credeur, K. R.
1978-01-01
Coding is given for maximum likelihood and Bayesian estimation of the vector p of multinomial cell probabilities from incomplete data. Also included is coding to calculate and approximate elements of the posterior mean and covariance matrices. The program is written in FORTRAN 4 language for the Control Data CYBER 170 series digital computer system with network operating system (NOS) 1.1. The program requires approximately 44000 octal locations of core storage. A typical case requires from 72 seconds to 92 seconds on CYBER 175 depending on the value of the prior parameter.
SCIENCE PARAMETRICS FOR MISSIONS TO SEARCH FOR EARTH-LIKE EXOPLANETS BY DIRECT IMAGING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Robert A., E-mail: rbrown@stsci.edu
2015-01-20
We use N{sub t} , the number of exoplanets observed in time t, as a science metric to study direct-search missions like Terrestrial Planet Finder. In our model, N has 27 parameters, divided into three categories: 2 astronomical, 7 instrumental, and 18 science-operational. For various ''27-vectors'' of those parameters chosen to explore parameter space, we compute design reference missions to estimate N{sub t} . Our treatment includes the recovery of completeness c after a search observation, for revisits, solar and antisolar avoidance, observational overhead, and follow-on spectroscopy. Our baseline 27-vector has aperture D = 16 m, inner working angle IWAmore » = 0.039'', mission time t = 0-5 yr, occurrence probability for Earth-like exoplanets η = 0.2, and typical values for the remaining 23 parameters. For the baseline case, a typical five-year design reference mission has an input catalog of ∼4700 stars with nonzero completeness, ∼1300 unique stars observed in ∼2600 observations, of which ∼1300 are revisits, and it produces N {sub 1} ∼ 50 exoplanets after one year and N {sub 5} ∼ 130 after five years. We explore offsets from the baseline for 10 parameters. We find that N depends strongly on IWA and only weakly on D. It also depends only weakly on zodiacal light for Z < 50 zodis, end-to-end efficiency for h > 0.2, and scattered starlight for ζ < 10{sup –10}. We find that observational overheads, completeness recovery and revisits, solar and antisolar avoidance, and follow-on spectroscopy are all important factors in estimating N.« less
Total solar eclipse effects on VLF signals: Observations and modeling
NASA Astrophysics Data System (ADS)
Clilverd, Mark A.; Rodger, Craig J.; Thomson, Neil R.; Lichtenberger, János; Steinbach, Péter; Cannon, Paul; Angling, Matthew J.
During the total solar eclipse observed in Europe on August 11, 1999, measurements were made of the amplitude and phase of four VLF transmitters in the frequency range 16-24 kHz. Five receiver sites were set up, and significant variations in phase and amplitude are reported for 17 paths, more than any previously during an eclipse. Distances from transmitter to receiver ranged from 90 to 14,510 km, although the majority were <2000 km. Typically, positive amplitude changes were observed throughout the whole eclipse period on path lengths <2000 km, while negative amplitude changes were observed on paths >10,000 km. Negative phase changes were observed on most paths, independent of path length. Although there was significant variation from path to path, the typical changes observed were ~3 dB and ~50°. The changes observed were modeled using the Long Wave Propagation Capability waveguide code. Maximum eclipse effects occurred when the Wait inverse scale height parameter β was 0.5 km-1 and the effective ionospheric height parameter H' was 79 km, compared with β=0.43km-1 and H'=71km for normal daytime conditions. The resulting changes in modeled amplitude and phase show good agreement with the majority of the observations. The modeling undertaken provides an interpretation of why previous estimates of height change during eclipses have shown such a range of values. A D region gas-chemistry model was compared with electron concentration estimates inferred from the observations made during the solar eclipse. Quiet-day H' and β parameters were used to define the initial ionospheric profile. The gas-chemistry model was then driven only by eclipse-related solar radiation levels. The calculated electron concentration values at 77 km altitude throughout the period of the solar eclipse show good agreement with the values determined from observations at all times, which suggests that a linear variation in electron production rate with solar ionizing radiation is reasonable. At times of minimum electron concentration the chemical model predicts that the D region profile would be parameterized by the same β and H' as the LWPC model values, and rocket profiles, during totality and can be considered a validation of the chemical processes defined within the model.
Open cluster Dolidze 25: Stellar parameters and the metallicity in the Galactic anticentre
NASA Astrophysics Data System (ADS)
Negueruela, I.; Simón-Díaz, S.; Lorenzo, J.; Castro, N.; Herrero, A.
2015-12-01
Context. The young open cluster Dolidze 25, in the direction of the Galactic anticentre, has been attributed a very low metallicity, with typical abundances between -0.5 and -0.7 dex below solar. Aims: We intend to derive accurate cluster parameters and accurate stellar abundances for some of its members. Methods: We have obtained a large sample of intermediate- and high-resolution spectra for stars in and around Dolidze 25. We used the fastwind code to generate stellar atmosphere models to fit the observed spectra. We derive stellar parameters for a large number of OB stars in the area, and abundances of oxygen and silicon for a number of stars with spectral types around B0. Results: We measure low abundances in stars of Dolidze 25. For the three stars with spectral types around B0, we find 0.3 dex (Si) and 0.5 dex (O) below the values typical in the solar neighbourhood. These values, even though not as low as those given previously, confirm Dolidze 25 and the surrounding H ii region Sh2-284 as the most metal-poor star-forming environment known in the Milky Way. We derive a distance 4.5 ± 0.3 kpc to the cluster (rG ≈ 12.3 kpc). The cluster cannot be older than ~3 Myr, and likely is not much younger. One star in its immediate vicinity, sharing the same distance, has Si and O abundances at most 0.15 dex below solar. Conclusions: The low abundances measured in Dolidze 25 are compatible with currently accepted values for the slope of the Galactic metallicity gradient, if we take into account that variations of at least ±0.15 dex are observed at a given radius. The area traditionally identified as Dolidze 25 is only a small part of a much larger star-forming region that comprises the whole dust shell associated with Sh2-284 and very likely several other smaller H ii regions in its vicinity. Based on observations made with the Nordic Optical Telescope, the Mercator Telescope, and the telescopes of the Isaac Newton Group.
NASA Astrophysics Data System (ADS)
Wang, Hua; Tao, Guo; Shang, Xue-Feng; Fang, Xin-Ding; Burns, Daniel R.
2013-12-01
In acoustic logging-while-drilling (ALWD) finite difference in time domain (FDTD) simulations, large drill collar occupies, most of the fluid-filled borehole and divides the borehole fluid into two thin fluid columns (radius ˜27 mm). Fine grids and large computational models are required to model the thin fluid region between the tool and the formation. As a result, small time step and more iterations are needed, which increases the cumulative numerical error. Furthermore, due to high impedance contrast between the drill collar and fluid in the borehole (the difference is >30 times), the stability and efficiency of the perfectly matched layer (PML) scheme is critical to simulate complicated wave modes accurately. In this paper, we compared four different PML implementations in a staggered grid finite difference in time domain (FDTD) in the ALWD simulation, including field-splitting PML (SPML), multiaxial PML(MPML), non-splitting PML (NPML), and complex frequency-shifted PML (CFS-PML). The comparison indicated that NPML and CFS-PML can absorb the guided wave reflection from the computational boundaries more efficiently than SPML and M-PML. For large simulation time, SPML, M-PML, and NPML are numerically unstable. However, the stability of M-PML can be improved further to some extent. Based on the analysis, we proposed that the CFS-PML method is used in FDTD to eliminate the numerical instability and to improve the efficiency of absorption in the PML layers for LWD modeling. The optimal values of CFS-PML parameters in the LWD simulation were investigated based on thousands of 3D simulations. For typical LWD cases, the best maximum value of the quadratic damping profile was obtained using one d 0. The optimal parameter space for the maximum value of the linear frequency-shifted factor ( α 0) and the scaling factor ( β 0) depended on the thickness of the PML layer. For typical formations, if the PML thickness is 10 grid points, the global error can be reduced to <1% using the optimal PML parameters, and the error will decrease as the PML thickness increases.
Reaction-Diffusion-Delay Model for EPO/TNF-α Interaction in articular cartilage lesion abatement
2012-01-01
Background Injuries to articular cartilage result in the development of lesions that form on the surface of the cartilage. Such lesions are associated with articular cartilage degeneration and osteoarthritis. The typical injury response often causes collateral damage, primarily an effect of inflammation, which results in the spread of lesions beyond the region where the initial injury occurs. Results and discussion We present a minimal mathematical model based on known mechanisms to investigate the spread and abatement of such lesions. The first case corresponds to the parameter values listed in Table 1, while the second case has parameter values as in Table 2. In particular we represent the "balancing act" between pro-inflammatory and anti-inflammatory cytokines that is hypothesized to be a principal mechanism in the expansion properties of cartilage damage during the typical injury response. We present preliminary results of in vitro studies that confirm the anti-inflammatory activities of the cytokine erythropoietin (EPO). We assume that the diffusion of cytokines determine the spatial behavior of injury response and lesion expansion so that a reaction diffusion system involving chemical species and chondrocyte cell state population densities is a natural way to represent cartilage injury response. We present computational results using the mathematical model showing that our representation is successful in capturing much of the interesting spatial behavior of injury associated lesion development and abatement in articular cartilage. Further, we discuss the use of this model to study the possibility of using EPO as a therapy for reducing the amount of inflammation induced collateral damage to cartilage during the typical injury response. Table 1 Model Parameter Values for Results in Figure 5 Table of Parameter Values Corresponding to Simulations in Figure 5 Parameter Value Units Reason D R 0.1 c m 2 day Determined from [13] D M 0.05 c m 2 day Determined from [13] D F 0.05 c m 2 day Determined from [13] D P 0.005 c m 2 day Determined from [13] δ R 0.01 1 day Approximated δ M 0.6 1 day Approximated δ F 0.6 1 day Approximated δ P 0.0087 1 day Approximated δ U 0.0001 1 day Approximated σ R 0.0001 micromolar ⋅ c m 2 day ⋅ cells Approximated σ M 0.00001 micromolar ⋅ c m 2 day ⋅ cells Approximated σ F 0.0001 micromolar ⋅ c m 2 day ⋅ cells Approximated σ P 0 micromolar ⋅ c m 2 day ⋅ cells Case with no anti-inflammatory response Λ 10 micromolar Approximated λ R 10 micromolar Approximated λ M 10 micromolar Approximated λ F 10 micromolar Approximated λ P 10 micromolar Approximated α 0 1 day Case with no anti-inflammatory response β 1 100 1 day Approximated Β 2 50 1 day Approximated γ 10 1 day Approximated ν 0.5 1 day Approximated μ S A 1 1 day Approximated μ D N 0.5 1 day Approximated τ 1 0.5 days Taken from [5] τ 2 1 days Taken from [5] Table 2 Model Parameter Values for Results in Figure 6 Table of Parameter Values Corresponding to Simulations in Figure 6 Parameter Value Units Reason D R 0.1 c m 2 day Determined from [13] D M 0.05 c m 2 day Determined from [13] D F 0.05 c m 2 day Determined from [13] DP 0.005 c m 2 day Determined from [13] δ R 0.01 1 day Approximated δ M 0.6 1 day Approximated δ F 0.6 1 day Approximated δ P 0.0087 1 day Approximated δ U 0.0001 1 day Approximated σ R 0.0001 micromolar ⋅ c m 2 day ⋅ cells Approximated σ M 0.00001 micromolar ⋅ c m 2 day ⋅ cells Approximated σ F 0.0001 micromolar ⋅ c m 2 day ⋅ cells Approximated σ P 0.001 micromolar ⋅ c m 2 day ⋅ cells Approximated Λ 10 micromolar Approximated λ R 10 micromolar Approximated λ M 10 micromolar Approximated λ F 10 micromolar Approximated λ P 10 micromolar Approximated α 10 1 day Approximated β 1 100 1 day Approximated β 2 50 1 day Approximated γ 10 1 day Approximated ν 0.5 1 day Approximated μ S A 1 1 day Approximated μ D N 0.5 1 day Approximated τ 1 0.5 days Taken from [5] τ 2 1 days Taken from [5] Conclusions The mathematical model presented herein suggests that not only are anti-inflammatory cy-tokines, such as EPO necessary to prevent chondrocytes signaled by pro-inflammatory cytokines from entering apoptosis, they may also influence how chondrocytes respond to signaling by pro-inflammatory cytokines. Reviewers This paper has been reviewed by Yang Kuang, James Faeder and Anna Marciniak-Czochra. PMID:22353555
TrackEtching - A Java based code for etched track profile calculations in SSNTDs
NASA Astrophysics Data System (ADS)
Muraleedhara Varier, K.; Sankar, V.; Gangadathan, M. P.
2017-09-01
A java code incorporating a user friendly GUI has been developed to calculate the parameters of chemically etched track profiles of ion-irradiated solid state nuclear track detectors. Huygen's construction of wavefronts based on secondary wavelets has been used to numerically calculate the etched track profile as a function of the etching time. Provision for normal incidence and oblique incidence on the detector surface has been incorporated. Results in typical cases are presented and compared with experimental data. Different expressions for the variation of track etch rate as a function of the ion energy have been utilized. The best set of values of the parameters in the expressions can be obtained by comparing with available experimental data. Critical angle for track development can also be calculated using the present code.
NASA Technical Reports Server (NTRS)
Murphree, J. S.
1980-01-01
A representative set of data from ISIS 2 covering a range of operating modes and geophysical conditions is presented. The data show the typical values and range of ionospheric and magnetospheric characteristics, as viewed from 1400 km with the ISIS 2 instruments. The definition of each data set depends partly on geophysical parameters and partly on satellite operating mode. Preceding the data set is a description of the organizational parameters and a review of the objectives and general characteristics of the data set. The data are shown as a selection from 12 different data formats. Each data has a different selection of formats, but uniformity of a given format selection is preserved throughout each data set.
Water analysis in a lab-on-a-chip system
NASA Astrophysics Data System (ADS)
Freimuth, Herbert; von Germar, Frithjof; Frese, Ines; Nahrstedt, Elzbieta; Küpper, Michael; Schenk, Rainer; Baser, Björn; Ott, Johannes; Drese, Klaus; Detemple, Peter; Doll, Theodor
2006-01-01
The development of a lab-on-chip system which allows the parallel detection of a variety of different parameters of a water sample is presented. Water analysis typically comprises the determination of around 30 physical and chemical parameters. An even larger number can arise when special contaminations of organic molecules are of interest. A demonstration system has been realised to show the feasibility and performance of an integrated device for the determination of physical quantities like electrical conductivity, light absorption and turbidity. Additionally, chemical quantities like the pH-value and the content of inorganic and organic contaminations are also determined. Two chips of credit card size contain the analytical functions and will be fabricated by injection moulding. First prototypes have been manufactured by milling or precision milling for the optical components.
Large storms: Airglow and related measurements. VLF observations, volume 4
NASA Technical Reports Server (NTRS)
1981-01-01
The data presented show the typical values and range of ionospheric and magnetospheric characteristics, as viewed from 1400 km with the ISIS 2 instruments. The definition of each data set depends partly on geophysical parameters and partly on satellite operating mode. Preceding the data set is a description of the organizational parameters and a review of the objectives and general characteristics of the data set. The data are shown as a selection from 12 different data formats. Each data set has a different selection of formats, but uniformity of a given format selection is preserved throughout each data set. Each data set consists of a selected number of passes, each comprising a format combination that is most appropriae for the particular data set. Description of ISIS 2 instruments are provided.
Atmospheric particulate analysis using angular light scattering
NASA Technical Reports Server (NTRS)
Hansen, M. Z.
1980-01-01
Using the light scattering matrix elements measured by a polar nephelometer, a procedure for estimating the characteristics of atmospheric particulates was developed. A theoretical library data set of scattering matrices derived from Mie theory was tabulated for a range of values of the size parameter and refractive index typical of atmospheric particles. Integration over the size parameter yielded the scattering matrix elements for a variety of hypothesized particulate size distributions. A least squares curve fitting technique was used to find a best fit from the library data for the experimental measurements. This was used as a first guess for a nonlinear iterative inversion of the size distributions. A real index of 1.50 and an imaginary index of -0.005 are representative of the smoothed inversion results for the near ground level atmospheric aerosol in Tucson.
Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.
2013-01-01
DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional “trial and error” approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residual difference between measured and modelled outputs by up to 67 %. For the calibration period, simulation with the default model parameter values underestimated mean daily N2O flux by 98 %. After parameter estimation, the model underestimated the mean daily fluxes by 35 %. During the validation period, the calibrated model reduced sum of weighted squared residuals by 20 % relative to the default simulation. Sensitivity analysis performed provides important insights into the model structure providing guidance for model improvement.
Probabilistic treatment of the uncertainty from the finite size of weighted Monte Carlo data
NASA Astrophysics Data System (ADS)
Glüsenkamp, Thorsten
2018-06-01
Parameter estimation in HEP experiments often involves Monte Carlo simulation to model the experimental response function. A typical application are forward-folding likelihood analyses with re-weighting, or time-consuming minimization schemes with a new simulation set for each parameter value. Problematically, the finite size of such Monte Carlo samples carries intrinsic uncertainty that can lead to a substantial bias in parameter estimation if it is neglected and the sample size is small. We introduce a probabilistic treatment of this problem by replacing the usual likelihood functions with novel generalized probability distributions that incorporate the finite statistics via suitable marginalization. These new PDFs are analytic, and can be used to replace the Poisson, multinomial, and sample-based unbinned likelihoods, which covers many use cases in high-energy physics. In the limit of infinite statistics, they reduce to the respective standard probability distributions. In the general case of arbitrary Monte Carlo weights, the expressions involve the fourth Lauricella function FD, for which we find a new finite-sum representation in a certain parameter setting. The result also represents an exact form for Carlson's Dirichlet average Rn with n > 0, and thereby an efficient way to calculate the probability generating function of the Dirichlet-multinomial distribution, the extended divided difference of a monomial, or arbitrary moments of univariate B-splines. We demonstrate the bias reduction of our approach with a typical toy Monte Carlo problem, estimating the normalization of a peak in a falling energy spectrum, and compare the results with previously published methods from the literature.
The analysis of soil cores polluted with certain metals using the Box-Cox transformation.
Meloun, Milan; Sánka, Milan; Nemec, Pavel; Krítková, Sona; Kupka, Karel
2005-09-01
To define the soil properties for a given area or country including the level of pollution, soil survey and inventory programs are essential tools. Soil data transformations enable the expression of the original data on a new scale, more suitable for data analysis. In the computer-aided interactive analysis of large data files of soil characteristics containing outliers, the diagnostic plots of the exploratory data analysis (EDA) often find that the sample distribution is systematically skewed or reject sample homogeneity. Under such circumstances the original data should be transformed. The Box-Cox transformation improves sample symmetry and stabilizes spread. The logarithmic plot of a profile likelihood function enables the optimum transformation parameter to be found. Here, a proposed procedure for data transformation in univariate data analysis is illustrated on a determination of cadmium content in the plough zone of agricultural soils. A typical soil pollution survey concerns the determination of the elements Be (16 544 values available), Cd (40 317 values), Co (22 176 values), Cr (40 318 values), Hg (32 344 values), Ni (34 989 values), Pb (40 344 values), V (20 373 values) and Zn (36 123 values) in large samples.
NASA Technical Reports Server (NTRS)
Norikane, L.; Freeman, A.; Way, J.; Okonek, S.; Casey, R.
1992-01-01
Recent updates to a geographical information system (GIS) called VICAR (Video Image Communication and Retrieval)/IBIS are described. The system is designed to handle data from many different formats (vector, raster, tabular) and many different sources (models, radar images, ground truth surveys, optical images). All the data are referenced to a single georeference plane, and average or typical values for parameters defined within a polygonal region are stored in a tabular file, called an info file. The info file format allows tracking of data in time, maintenance of links between component data sets and the georeference image, conversion of pixel values to `actual' values (e.g., radar cross-section, luminance, temperature), graph plotting, data manipulation, generation of training vectors for classification algorithms, and comparison between actual measurements and model predictions (with ground truth data as input).
Epstein, Scott A; Riipinen, Ilona; Donahue, Neil M
2010-01-15
To model the temperature-induced partitioning of semivolatile organics in laboratory experiments or atmospheric models, one must know the appropriate heats of vaporization. Current treatments typically assume a constant value of the heat of vaporization or else use specific values from a small set of surrogate compounds. With published experimental vapor-pressure data from over 800 organic compounds, we have developed a semiempirical correlation between the saturation concentration (C*, microg m(-3)) and the heat of vaporization (deltaH(VAP), kJ mol(-1)) for organics in the volatility basis set. Near room temperature, deltaH(VAP) = -11 log(10)C(300)(*) + 129. Knowledge of the relationship between C* and deltaH(VAP) constrains a free parameter in thermodenuder data analysis. A thermodenuder model using our deltaH(VAP) values agrees well with thermal behavior observed in laboratory experiments.
NASA Astrophysics Data System (ADS)
Schweitzer, Susanne; Nemitz, Wolfgang; Sommer, Christian; Hartmann, Paul; Fulmek, Paul; Nicolics, Johann; Pachler, Peter; Hoschopf, Hans; Schrank, Franz; Langer, Gregor; Wenzl, Franz P.
2014-09-01
For a systematic approach to improve the white light quality of phosphor converted light-emitting diodes (LEDs) for general lighting applications it is imperative to get the individual sources of error for color temperature reproducibility under control. In this regard, it is imperative to understand how compositional, optical and materials properties of the color conversion element (CCE), which typically consists of phosphor particles embedded in a transparent matrix material, affect the constancy of a desired color temperature of a white LED source. In this contribution we use an LED assembly consisting of an LED die mounted on a printed circuit board (PCB) by chip-on-board technology and a CCE with a glob-top configuration as a model system and discuss the impact of potential sources for color temperature deviation among individual devices. Parameters that are investigated include imprecisions in the amount of materials deposition, deviations from the target value for the phosphor concentration in the matrix material, deviations from the target value for the particle sizes of the phosphor material, deviations from the target values for the refractive indexes of phosphor and matrix material as well as deviations from the reflectivity of the substrate surface. From these studies, some general conclusions can be drawn which of these parameters have the largest impact on color deviation and have to be controlled most precisely in a fabrication process in regard of color temperature reproducibility among individual white LED sources.
Mechanics of the taper integrated screwed-in (TIS) abutments used in dental implants.
Bozkaya, Dinçer; Müftü, Sinan
2005-01-01
The tapered implant-abutment interface is becoming more popular due to the mechanical reliability of retention it provides. Consequently, understanding the mechanical properties of the tapered interface with or without a screw at the bottom has been the subject of a considerable amount of studies involving experiments and finite element (FE) analysis. This paper focuses on the tapered implant-abutment interface with a screw integrated at the bottom of the abutment. The tightening and loosening torques are the main factors in determining the reliability and the stability of the attachment. Analytical formulas are developed to predict tightening and loosening torque values by combining the equations related to the tapered interface with screw mechanics equations. This enables the identification of the effects of the parameters such as friction, geometric properties of the screw, the taper angle, and the elastic properties of the materials on the mechanics of the system. In particular, a relation between the tightening torque and the screw pretension is identified. It was shown that the loosening torque is smaller than the tightening torque for typical values of the parameters. Most of the tightening load is carried by the tapered section of the abutment, and in certain combinations of the parameters the pretension in the screw may become zero. The calculations performed to determine the loosening torque as a percentage of tightening torque resulted in the range 85-137%, depending on the values of taper angle and the friction coefficient.
Kernel machines for epilepsy diagnosis via EEG signal classification: a comparative study.
Lima, Clodoaldo A M; Coelho, André L V
2011-10-01
We carry out a systematic assessment on a suite of kernel-based learning machines while coping with the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of the criteria of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy (i.e., cross-validation error) values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations whereby one can visually inspect their levels of sensitiveness to the type of feature and to the kernel function/parameter value. Overall, the results evidence that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value as well as the choice of the feature extractor are critical decisions to be taken, albeit the choice of the wavelet family seems not to be so relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile has emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). Copyright © 2011 Elsevier B.V. All rights reserved.
Influence of season and type of restaurants on sashimi microbiota.
Miguéis, S; Moura, A T; Saraiva, C; Esteves, A
2016-10-01
In recent years, an increase in the consumption of Japanese food in European countries has been verified, including in Portugal. These specialities made with raw fish, typical Japanese meals, have been prepared in typical and on non-typical restaurants, and represent a challenge to risk analysis on HACCP plans. The aim of this study was to evaluate the influence of the type of restaurant, season and type of fish used on sashimi microbiota. Sashimi samples (n = 114) were directly collected from 23 sushi restaurants and were classified as Winter and Summer Samples. They were also categorized according to the type of restaurant where they were obtained: as typical or non-typical. The samples were processed using international standards procedures. A middling seasonality influence was observed in microbiota using mesophilic aerobic bacteria, psychrotrophic microorganisms, Lactic acid bacteria, Pseudomonas spp., H 2 S positive bacteria, mould and Bacillus cereus counts parameters. During the Summer Season, samples classified as unacceptable or potentially Hazardous were observed. Non-typical restaurants had the most cases of Unacceptable/potentially hazardous samples 83.33%. These unacceptable results were obtained as a result of high values of pathogenic bacteria like Listeria monocytogenes and Staphylococcus aureus No significant differences were observed on microbiota counts from different fish species. The need to implement more accurate food safety systems was quite evident, especially in the warmer season, as well as in restaurants where other kinds of food, apart from Japanese meals, was prepared. © Crown copyright 2016.
KEWPIE2: A cascade code for the study of dynamical decay of excited nuclei
NASA Astrophysics Data System (ADS)
Lü, Hongliang; Marchix, Anthony; Abe, Yasuhisa; Boilley, David
2016-03-01
KEWPIE-a cascade code devoted to investigating the dynamical decay of excited nuclei, specially designed for treating very low probability events related to the synthesis of super-heavy nuclei formed in fusion-evaporation reactions-has been improved and rewritten in C++ programming language to become KEWPIE2. The current version of the code comprises various nuclear models concerning the light-particle emission, fission process and statistical properties of excited nuclei. General features of the code, such as the numerical scheme and the main physical ingredients, are described in detail. Some typical calculations having been performed in the present paper clearly show that theoretical predictions are generally in accordance with experimental data. Furthermore, since the values of some input parameters cannot be determined neither theoretically nor experimentally, a sensibility analysis is presented. To this end, we systematically investigate the effects of using different parameter values and reaction models on the final results. As expected, in the case of heavy nuclei, the fission process has the most crucial role to play in theoretical predictions. This work would be essential for numerical modeling of fusion-evaporation reactions.
Rheology and fluid mechanics of a hyper-concentrated biomass suspension
NASA Astrophysics Data System (ADS)
Botto, Lorenzo; Xu, Xiao
2013-11-01
The production of bioethanol from biomass material originating from energy crops requires mixing of highly concentrated suspensions, which are composed of millimetre-sized lignocellulosic fibers. In these applications, the solid concentration is typically extremely high. Owing to the large particle porosity, for a solid mass concentration slightly larger than 10%, the dispersed solid phase can fill the available space almost completely. To extract input parameters for simulations, we have carried out rheological measurements of a lignocellulosic suspension of Miscanthus, a fast-growing plant, for particle concentrations close to maximum random packing. We find that in this regime the rheometric curves exhibit features similar to those observed in model ``gravitational suspensions,'' including viscoplastic behaviour, strong shear-banding, non-continuum effects, and a marked influence of the particle weight. In the talk, these aspects will be examined in some detail, and differences between Miscanthus and corn stover, currently the most industrially relevant biomass substrate, briefly discussed. We will also comment on values of the Reynolds and Oldroyd numbers found in biofuel applications, and the flow patterns expected for these parameter values.
NASA Astrophysics Data System (ADS)
Ugon, B.; Nandong, J.; Zang, Z.
2017-06-01
The presence of unstable dead-time systems in process plants often leads to a daunting challenge in the design of standard PID controllers, which are not only intended to provide close-loop stability but also to give good performance-robustness overall. In this paper, we conduct stability analysis on a double-loop control scheme based on the Routh-Hurwitz stability criteria. We propose to use this unstable double-loop control scheme which employs two P/PID controllers to control first-order or second-order unstable dead-time processes typically found in process industries. Based on the Routh-Hurwitz stability necessary and sufficient criteria, we establish several stability regions which enclose within them the P/PID parameter values that guarantee close-loop stability of the double-loop control scheme. A systematic tuning rule is developed for the purpose of obtaining the optimal P/PID parameter values within the established regions. The effectiveness of the proposed tuning rule is demonstrated using several numerical examples and the result are compared with some well-established tuning methods reported in the literature.
Inflation in the mixed Higgs-R2 model
NASA Astrophysics Data System (ADS)
He, Minxi; Starobinsky, Alexei A.; Yokoyama, Jun'ichi
2018-05-01
We analyze a two-field inflationary model consisting of the Ricci scalar squared (R2) term and the standard Higgs field non-minimally coupled to gravity in addition to the Einstein R term. Detailed analysis of the power spectrum of this model with mass hierarchy is presented, and we find that one can describe this model as an effective single-field model in the slow-roll regime with a modified sound speed. The scalar spectral index predicted by this model coincides with those given by the R2 inflation and the Higgs inflation implying that there is a close relation between this model and the R2 inflation already in the original (Jordan) frame. For a typical value of the self-coupling of the standard Higgs field at the high energy scale of inflation, the role of the Higgs field in parameter space involved is to modify the scalaron mass, so that the original mass parameter in the R2 inflation can deviate from its standard value when non-minimal coupling between the Ricci scalar and the Higgs field is large enough.
ILIAD Testing; and a Kalman Filter for 3-D Pose Estimation
NASA Technical Reports Server (NTRS)
Richardson, A. O.
1996-01-01
This report presents the results of a two-part project. The first part presents results of performance assessment tests on an Internet Library Information Assembly Data Base (ILIAD). It was found that ILLAD performed best when queries were short (one-to-three keywords), and were made up of rare, unambiguous words. In such cases as many as 64% of the typically 25 returned documents were found to be relevant. It was also found that a query format that was not so rigid with respect to spelling errors and punctuation marks would be more user-friendly. The second part of the report shows the design of a Kalman Filter for estimating motion parameters of a three dimensional object from sequences of noisy data derived from two-dimensional pictures. Given six measured deviation values represendng X, Y, Z, pitch, yaw, and roll, twelve parameters were estimated comprising the six deviations and their time rate of change. Values for the state transiton matrix, the observation matrix, the system noise covariance matrix, and the observation noise covariance matrix were determined. A simple way of initilizing the error covariance matrix was pointed out.
Evaluation of a Commercial Tractor Safety Monitoring System Using a Reverse Engineering Procedure.
Casazza, Camilla; Martelli, Roberta; Rondelli, Valda
2016-10-17
There is a high rate of work-related deaths in agriculture. In Italy, despite the obligato-ry installation of ROPS, fatal accidents involving tractors represent more than 40% of work-related deaths in agriculture. As death is often due to an overturn that the driver is incapable of predicting, driver assistance devices that can signal critical stability conditions have been studied and marketed to prevent accidents. These devices measure the working parameters of the tractor through sensors and elaborate the values using an algorithm that, taking into account the geometric characteristics of the tractor, pro-vides a risk index based on models elaborated on a theoretical basis. This research aimed to verify one of these stability indexes in the field, using a commercial driver as-sistance device to monitor five tractors on the University of Bologna experimental farm. The setup of the device involved determining the coordinates of the center of gravity of the tractor and the implement mounted on the tractor. The analysis of the stability in-dex, limited to events with a significant risk level, revealed a clear separation into two groups: events with high values of roll or pitch and low speeds, typical of a tractor when working, and events with low values of roll and pitch and high steering angle and forward speed, typical of travel on the road. The equation for calculating the critical speed when turning provided a significant contribution only for events that were typi-cal of travel rather than field work, suggesting a diversified calculation approach ac-cording to the work phase. Copyright© by the American Society of Agricultural Engineers.
Disorder Problem In Diluted Magnetic Semiconductors
NASA Astrophysics Data System (ADS)
Nelson, Ryky; Ekuma, Chinedu; Terletska, Hanna; Sudhindra, Vidhyadhiraja; Moreno, Juana; Jarrell, Mark
2015-03-01
Motivated by experimental studies addressing the role of impurity disorder in diluted magnetic semiconductors (DMS), we investigate the effects of disorder using a simple tight-binding Hamiltonian with random impurity potential and spin-fermion exchange which is self-consistently solved using the typical medium theory. Adopting the typical density of states (TDoS) as the order parameter, we find that the TDoS vanishes below a critical concentration of the impurity, which indicates an Anderson localization transition in the system. Our results qualitatively explain why at concentrations lower than a critical value DMS are insulating and paramagnetic, while at larger concentrations are ferromagnetic. We also compare several simple models to explore the interplay between ferromagnetic order and disorder induced insulating behavior, and the role of the spin-orbit interaction on this competition. We apply our findings to (Ga,Mn)As and (Ga,Mn)N to compare and contrast their phase diagrams.
Lava flow topographic measurements for radar data interpretation
NASA Technical Reports Server (NTRS)
Campbell, Bruce A.; Garvin, James B.
1993-01-01
Topographic profiles at 25- and 5-cm horizontal resolution for three sites along a lava flow on Kilauea Volcano are presented, and these data are used to illustrate techniques for surface roughness analysis. Height and slope distributions and the height autocorrelation function are evaluated as a function of varying lowpass filter wavelength for the 25-cm data. Rms slopes are found to increase rapidly with decreasing topographic scale and are typically much higher than those found by modeling of Magellan altimeter data for Venus. A more robust description of the surface roughness appears to be the ratio of rms height to surface height correlation length. For all three sites this parameter falls within the range of values typically found from model fits to Magellan altimeter waveforms. The 5-cm profile data are used to estimate the effect of small-scale roughness on quasi-specular scattering.
NASA Astrophysics Data System (ADS)
Zhao, Wenyu; Zhang, Haiyi; Ji, Yuefeng; Xu, Daxiong
2004-05-01
Based on the proposed polarization mode dispersion (PMD) compensation simulation model and statistical analysis method (Monte-Carlo), the critical parameters initialization of two typical optical domain PMD compensators, which include optical PMD method with fixed compensation differential group delay (DGD) and that with variable compensation DGD, are detailedly investigated by numerical method. In the simulation, the line PMD values are chosen as 3ps, 4ps and 5ps and run samples are set to 1000 in order to achieve statistical evaluation for PMD compensated systems, respectively. The simulation results show that for the PMD value pre-known systems, the value of the fixed DGD compensator should be set to 1.5~1.6 times of line PMD value in order to reach the optimum performance, but for the second kind of PMD compensator, the DGD range of lower limit should be 1.5~1.6 times of line PMD provided that of upper limit is set to 3 times of line PMD, if no effective ways are chosen to resolve the problem of local minimum in optimum process. Another conclusion can be drawn from the simulation is that, although the second PMD compensator holds higher PMD compensation performance, it will spend more feedback loops to look up the optimum DGD value in the real PMD compensation realization, and this will bring more requirements on adjustable DGD device, not only wider adjustable range, but rapid adjusting speed for real time PMD equalization.
Frictional behaviour and evolution of rough faults in limestone
NASA Astrophysics Data System (ADS)
Harbord, C. W. A.; Nielsen, S. B.; De Paola, N.; Holdsworth, R.
2017-12-01
Fault roughness is an important parameter which influences the frictional behaviour of seismically active faults, in particular the nucleation stage of earthquakes. Here we investigate frictional sliding and stability of roughened micritic limestone surfaces from the seismogenic layer in Northern-Central Apennines of Italy. Samples are roughened using #60, #220 and #400 grit and deformed in a direct shear configuration at conditions typical of the shallow upper crust (15-60 MPa normal stress). We perform velocity steps between 0.01-1 μm s-1 to obtain rate-and-state friction parameters a, b and L. At low normal stress conditions (30 MPa) and at displacements of <3-4mm there is a clear 2 state evolution of friction with two state parameters, b1 and b2, and accompanying critical slip distances L1 and L2 for all roughnesses. In some cases, on smooth faults (#400 grit), the short term evolution leads to silent slow instability which is modulated by the second state evolution. With increasing slip displacement (>2-4 mm) friction can be modelled with a single state parameter, b, as the short frictional evolution disappears. The longer term state evolution, b2, gives negative values of b, reminiscent of plastic creep experiments at high temperature, reaching a steady state at 3-4 mm displacement. Microstructural observations reveal shiny surfaces decorated by nanometric gouge particles with variable porosity. When normal stress is increased, rough faults (#60 grit) revert to a single state evolution with positive values of b, whilst smoother faults (#220 & #400 grit) retain a two state evolution with negative b2 values. These observations suggest that on carbonate hosted faults sliding may be controlled by plastic processes which can lead to slow stick-slip instability, which may be supressed by frictional wear and accompanying gouge build-up.
NASA Technical Reports Server (NTRS)
Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)
2002-01-01
We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W(sub N) spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
Sensitivity of estimated muscle force in forward simulation of normal walking
Xiao, Ming; Higginson, Jill
2009-01-01
Generic muscle parameters are often used in muscle-driven simulations of human movement estimate individual muscle forces and function. The results may not be valid since muscle properties vary from subject to subject. This study investigated the effect of using generic parameters in a muscle-driven forward simulation on muscle force estimation. We generated a normal walking simulation in OpenSim and examined the sensitivity of individual muscle to perturbations in muscle parameters, including the number of muscles, maximum isometric force, optimal fiber length and tendon slack length. We found that when changing the number muscles included in the model, only magnitude of the estimated muscle forces was affected. Our results also suggest it is especially important to use accurate values of tendon slack length and optimal fiber length for ankle plantarflexors and knee extensors. Changes in force production one muscle were typically compensated for by changes in force production by muscles in the same functional muscle group, or the antagonistic muscle group. Conclusions regarding muscle function based on simulations with generic musculoskeletal parameters should be interpreted with caution. PMID:20498485
Pierrillas, Philippe B; Tod, Michel; Amiel, Magali; Chenel, Marylore; Henin, Emilie
2016-09-01
The purpose of this study was to explore the impact of censoring due to animal sacrifice on parameter estimates and tumor volume calculated from two diameters in larger tumors during tumor growth experiments in preclinical studies. The type of measurement error that can be expected was also investigated. Different scenarios were challenged using the stochastic simulation and estimation process. One thousand datasets were simulated under the design of a typical tumor growth study in xenografted mice, and then, eight approaches were used for parameter estimation with the simulated datasets. The distribution of estimates and simulation-based diagnostics were computed for comparison. The different approaches were robust regarding the choice of residual error and gave equivalent results. However, by not considering missing data induced by sacrificing the animal, parameter estimates were biased and led to false inferences in terms of compound potency; the threshold concentration for tumor eradication when ignoring censoring was 581 ng.ml(-1), but the true value was 240 ng.ml(-1).
Purified reconstituted lac carrier protein from Escherichia coli is fully functional.
Viitanen, P; Garcia, M L; Kaback, H R
1984-03-01
Proteoliposomes reconstituted with lac carrier protein purified from the plasma membrane of Escherichia coli catalyze each of the translocation reactions typical of the beta-galactoside transport system (i.e., active transport, counterflow, facilitated influx and efflux) with turnover numbers and apparent Km values comparable to those observed in right-side-out membrane vesicles. Furthermore, detailed kinetic studies show that the reconstituted system exhibits properties analogous to those observed in membrane vesicles. Imposition of a membrane potential (delta psi, interior negative) causes a marked decrease in apparent Km (by a factor of 7 to 10) with a smaller increase in Vmax (approximately equal to 3-fold). At submaximal values of delta psi, the reconstituted carrier exhibits biphasic kinetics, with one component manifesting the kinetic parameters of active transport and the other exhibiting the characteristics of facilitated diffusion. Finally, at low lactose concentrations, the initial velocity of influx varies linearly with the square of the proton electro-chemical gradient. The results provide quantitative support for the contention that a single polypeptide species, the product of the lac y gene, is responsible for each of the transport reactions typical of the beta-galactoside transport system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pestehe, S. J., E-mail: sjpest@tabrizu.ac.ir; Mohammadnejad, M.; Research Institute for Applied Physics and Astronomy, University of Tabriz, Tabriz
A theoretical model is developed to study the signals from a typical dynamic Faraday cup, and using this model the output signals from this structure are obtained. A detailed discussion on the signal structure, using different experimental conditions, is also given. It is argued that there is a possibility of determining the total charge of the generated ion pulse, the maximum velocity of the ions, ion velocity distribution, and the number of ion species for mixed working gases, under certain conditions. In addition, the number of different ionization stages, the number of different pinches in one shot, and the numbermore » of different existing acceleration mechanisms can also be determined provided that the mentioned conditions being satisfied. An experiment is carried out on the Filippov type 90 kJ Sahand plasma focus using Ar as the working gas at the pressure of 0.25 Torr. The data from a typical shot are fitted to a signal from the model and the total charge of the related energetic ion pulse is deduced using the values of the obtained fit parameters. Good agreement between the obtained amount of the total charge and the values obtained during other experiments on the same plasma focus device is observed.« less
Rms-flux relation and fast optical variability simulations of the nova-like system MV Lyr
NASA Astrophysics Data System (ADS)
Dobrotka, A.; Mineshige, S.; Ness, J.-U.
2015-03-01
The stochastic variability (flickering) of the nova-like system (subclass of cataclysmic variable) MV Lyr yields a complicated power density spectrum with four break frequencies. Scaringi et al. analysed high-cadence Kepler data of MV Lyr, taken almost continuously over 600 d, giving the unique opportunity to study multicomponent Power Density Spectra (PDS) over a wide frequency range. We modelled this variability with our statistical model based on disc angular momentum transport via discrete turbulent bodies with an exponential distribution of the dimension scale. Two different models were used, a full disc (developed from the white dwarf to the outer radius of ˜1010 cm) and a radially thin disc (a ring at a distance of ˜1010 cm from the white dwarf) that imitates an outer disc rim. We succeed in explaining the two lowest observed break frequencies assuming typical values for a disc radius of 0.5 and 0.9 times the primary Roche lobe and an α parameter of 0.1-0.4. The highest observed break frequency was also modelled, but with a rather small accretion disc with a radius of 0.3 times the primary Roche lobe and a high α value of 0.9 consistent with previous findings by Scaringi. Furthermore, the simulated light curves exhibit the typical linear rms-flux proportionality linear relation and the typical log-normal flux distribution. As the turbulent process is generating fluctuations in mass accretion that propagate through the disc, this confirms the general knowledge that the typical rms-flux relation is mainly generated by these fluctuations. In general, a higher rms is generated by a larger amount of superposed flares which is compatible with a higher mass accretion rate expressed by a larger flux.
Model Parameter Variability for Enhanced Anaerobic Bioremediation of DNAPL Source Zones
NASA Astrophysics Data System (ADS)
Mao, X.; Gerhard, J. I.; Barry, D. A.
2005-12-01
The objective of the Source Area Bioremediation (SABRE) project, an international collaboration of twelve companies, two government agencies and three research institutions, is to evaluate the performance of enhanced anaerobic bioremediation for the treatment of chlorinated ethene source areas containing dense, non-aqueous phase liquids (DNAPL). This 4-year, 5.7 million dollars research effort focuses on a pilot-scale demonstration of enhanced bioremediation at a trichloroethene (TCE) DNAPL field site in the United Kingdom, and includes a significant program of laboratory and modelling studies. Prior to field implementation, a large-scale, multi-laboratory microcosm study was performed to determine the optimal system properties to support dehalogenation of TCE in site soil and groundwater. This statistically-based suite of experiments measured the influence of key variables (electron donor, nutrient addition, bioaugmentation, TCE concentration and sulphate concentration) in promoting the reductive dechlorination of TCE to ethene. As well, a comprehensive biogeochemical numerical model was developed for simulating the anaerobic dehalogenation of chlorinated ethenes. An appropriate (reduced) version of this model was combined with a parameter estimation method based on fitting of the experimental results. Each of over 150 individual microcosm calibrations involved matching predicted and observed time-varying concentrations of all chlorinated compounds. This study focuses on an analysis of this suite of fitted model parameter values. This includes determining the statistical correlation between parameters typically employed in standard Michaelis-Menten type rate descriptions (e.g., maximum dechlorination rates, half-saturation constants) and the key experimental variables. The analysis provides insight into the degree to which aqueous phase TCE and cis-DCE inhibit dechlorination of less-chlorinated compounds. Overall, this work provides a database of the numerical modelling parameters typically employed for simulating TCE dechlorination relevant for a range of system conditions (e.g, bioaugmented, high TCE concentrations, etc.). The significance of the obtained variability of parameters is illustrated with one-dimensional simulations of enhanced anaerobic bioremediation of residual TCE DNAPL.
Meltzer, H Y; Matsubara, S; Lee, J C
1989-10-01
The pKi values of 13 reference typical and 7 reference atypical antipsychotic drugs (APDs) for rat striatal dopamine D-1 and D-2 receptor binding sites and cortical serotonin (5-HT2) receptor binding sites were determined. The atypical antipsychotics had significantly lower pKi values for the D-2 but not 5-HT2 binding sites. There was a trend for a lower pKi value for the D-1 binding site for the atypical APD. The 5-HT2 and D-1 pKi values were correlated for the typical APD whereas the 5-HT2 and D-2 pKi values were correlated for the atypical APD. A stepwise discriminant function analysis to determine the independent contribution of each pKi value for a given binding site to the classification as a typical or atypical APD entered the D-2 pKi value first, followed by the 5-HT2 pKi value. The D-1 pKi value was not entered. A discriminant function analysis correctly classified 19 of 20 of these compounds plus 14 of 17 additional test compounds as typical or atypical APD for an overall correct classification rate of 89.2%. The major contributors to the discriminant function were the D-2 and 5-HT2 pKi values. A cluster analysis based only on the 5-HT2/D2 ratio grouped 15 of 17 atypical + one typical APD in one cluster and 19 of 20 typical + two atypical APDs in a second cluster, for an overall correct classification rate of 91.9%. When the stepwise discriminant function was repeated for all 37 compounds, only the D-2 and 5-HT2 pKi values were entered into the discriminant function.(ABSTRACT TRUNCATED AT 250 WORDS)
Telesca, Luciano; Lovallo, Michele; Ramirez-Rojas, Alejandro; Flores-Marquez, Leticia
2014-01-01
By using the method of the visibility graph (VG) the synthetic seismicity generated by a simple stick-slip system with asperities is analysed. The stick-slip system mimics the interaction between tectonic plates, whose asperities are given by sandpapers of different granularity degrees. The VG properties of the seismic sequences have been put in relationship with the typical seismological parameter, the b-value of the Gutenberg-Richter law. Between the b-value of the synthetic seismicity and the slope of the least square line fitting the k-M plot (relationship between the magnitude M of each synthetic event and its connectivity degree k) a close linear relationship is found, also verified by real seismicity.
Characterization of SWIR cameras by MRC measurements
NASA Astrophysics Data System (ADS)
Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.
2014-05-01
Cameras for the SWIR wavelength range are becoming more and more important because of the better observation range for day-light operation under adverse weather conditions (haze, fog, rain). In order to choose the best suitable SWIR camera or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast MRC concept is favorable as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device the achievable observation range can be calculated for every combination of target size, illumination level or weather conditions. MRC measurements in the SWIR wavelength band can be performed widely along the guidelines of the MRC measurements of a visual camera. Typically measurements are performed with a set of resolution targets (e.g. USAF 1951 target) manufactured with different contrast values from 50% down to less than 1%. For a given illumination level the achievable spatial resolution is then measured for each target. The resulting curve is showing the minimum contrast that is necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras at first the irradiation parameters have to be given in radiometric instead of photometric units which are limited in their use to the visible range. In order to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. At second, a radiation source is necessary with appropriate emission in the SWIR range (e.g. incandescent lamp) and the irradiance has to be measured in W/m2 instead of Lux = Lumen/m2. At third, the contrast values of the targets have to be calibrated newly for the SWIR range because they typically differ from the values determined for the visual range. Measured MRC values of three cameras are compared to the specified performance data of the devices and the results of a multi-band in-house designed Vis-SWIR camera system are discussed.
Scene-based nonuniformity correction technique for infrared focal-plane arrays.
Liu, Yong-Jin; Zhu, Hong; Zhao, Yi-Gong
2009-04-20
A scene-based nonuniformity correction algorithm is presented to compensate for the gain and bias nonuniformity in infrared focal-plane array sensors, which can be separated into three parts. First, an interframe-prediction method is used to estimate the true scene, since nonuniformity correction is a typical blind-estimation problem and both scene values and detector parameters are unavailable. Second, the estimated scene, along with its corresponding observed data obtained by detectors, is employed to update the gain and the bias by means of a line-fitting technique. Finally, with these nonuniformity parameters, the compensated output of each detector is obtained by computing a very simple formula. The advantages of the proposed algorithm lie in its low computational complexity and storage requirements and ability to capture temporal drifts in the nonuniformity parameters. The performance of every module is demonstrated with simulated and real infrared image sequences. Experimental results indicate that the proposed algorithm exhibits a superior correction effect.
Ortega, Cristina; Solo-Gabriele, Helena M.; Abdelzaher, Amir; Wright, Mary; Deng, Yang; Stark, Lillian M.
2009-01-01
The objective of this study was to evaluate whether indicator microbes and physical-chemical parameters were correlated with pathogens within a tidally influenced estuary. Measurements included the analysis of physical-chemical parameters (pH, salinity, temperature, and turbidity), measurements of bacterial indicators (enterococci, fecal coliform, E. coli, and total coliform), viral indicators (somatic and MS2 coliphage), viral pathogens (enterovirus by culture), and protozoan pathogens (Cryptosporidium and Giardia). All pathogen results were negative with the exception of one sample which tested positive for culturable reovirus (8.5 MPN/100 L).. Notable physical-chemical parameters for this sample included low salinity (<1 ppt) and high water temperature (31 °C). Indicator bacteria and indicator virus levels for this sample were within average values typically measured within the study site and were low in comparison with levels observed in other freshwater environments. Overall results suggest that high levels of bacterial and viral indicators were associated with low salinity sites. PMID:19464704
Jets in a strongly coupled anisotropic plasma
NASA Astrophysics Data System (ADS)
Fadafan, Kazem Bitaghsir; Morad, Razieh
2018-01-01
In this paper, we study the dynamics of the light quark jet moving through the static, strongly coupled N=4, anisotropic plasma with and without charge. The light quark is presented by a 2-parameters point-like initial condition falling string in the context of the AdS/CFT. We calculate the stopping distance of the light quark in the anisotropic medium and compare it with its isotropic value. We study the dependency of the stopping distance to the both string initial conditions and background parameters such as anisotropy parameter or chemical potential. Although the typical behavior of the string in the anisotropic medium is similar to the one in the isotropic AdS-Sch background, the string falls faster to the horizon depending on the direction of moving. Particularly, the enhancement of quenching is larger in the beam direction. We find that the suppression of stopping distance is more prominent when the anisotropic plasma have the same temperature as the isotropic plasma.
A DERATING METHOD FOR THERAPEUTIC APPLICATIONS OF HIGH INTENSITY FOCUSED ULTRASOUND
Bessonova, O.V.; Khokhlova, V.A.; Canney, M.S.; Bailey, M.R.; Crum, L.A.
2010-01-01
Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. In this work, a new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates, that for the high gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue. PMID:20582159
A derating method for therapeutic applications of high intensity focused ultrasound
NASA Astrophysics Data System (ADS)
Bessonova, O. V.; Khokhlova, V. A.; Canney, M. S.; Bailey, M. R.; Crum, L. A.
2010-05-01
Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. A new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal wave-forms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates, that for the high gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue.
A DERATING METHOD FOR THERAPEUTIC APPLICATIONS OF HIGH INTENSITY FOCUSED ULTRASOUND.
Bessonova, O V; Khokhlova, V A; Canney, M S; Bailey, M R; Crum, L A
2010-01-01
Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. In this work, a new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates, that for the high gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue.
Applications of computer algebra to distributed parameter systems
NASA Technical Reports Server (NTRS)
Storch, Joel A.
1993-01-01
In the analysis of vibrations of continuous elastic systems, one often encounters complicated transcendental equations with roots directly related to the system's natural frequencies. Typically, these equations contain system parameters whose values must be specified before a numerical solution can be obtained. The present paper presents a method whereby the fundamental frequency can be obtained in analytical form to any desired degree of accuracy. The method is based upon truncation of rapidly converging series involving inverse powers of the system natural frequencies. A straightforward method to developing these series and summing them in closed form is presented. It is demonstrated how Computer Algebra can be exploited to perform the intricate analytical procedures which otherwise would render the technique difficult to apply in practice. We illustrate the method by developing two analytical approximations to the fundamental frequency of a vibrating cantilever carrying a rigid tip body. The results are compared to the numerical solution of the exact (transcendental) frequency equation over a range of system parameters.
NASA Technical Reports Server (NTRS)
Hansman, R. J., Jr.
1982-01-01
The feasibility of computerized simulation of the physics of advanced microwave anti-icing systems, which preheat impinging supercooled water droplets prior to impact, was investigated. Theoretical and experimental work performed to create a physically realistic simulation is described. The behavior of the absorption cross section for melting ice particles was measured by a resonant cavity technique and found to agree with theoretical predictions. Values of the dielectric parameters of supercooled water were measured by a similar technique at lambda = 2.82 cm down to -17 C. The hydrodynamic behavior of accelerated water droplets was studied photograhically in a wind tunnel. Droplets were found to initially deform as oblate spheroids and to eventually become unstable and break up in Bessel function modes for large values of acceleration or droplet size. This confirms the theory as to the maximum stable droplet size in the atmosphere. A computer code which predicts droplet trajectories in an arbitrary flow field was written and confirmed experimentally. The results were consolidated into a simulation to study the heating by electromagnetic fields of droplets impinging onto an object such as an airfoil. It was determined that there is sufficient time to heat droplets prior to impact for typical parameter values. Design curves for such a system are presented.
NASA Astrophysics Data System (ADS)
Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef
2016-12-01
Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally extensive forward modeling codes and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code, 2) provide novel visualization methods to aid users in getting a better understanding in the sensitivity 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for spatial uncertainty typical in Earth Science applications and 4) provide an illustration on a real field case where the above mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameters. Second, we show how this conditional effect can be used to choose nominal values or ranges to fix insensitive parameters aiming to minimally affect uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity on spatial variability in the models. A large oil/gas reservoir case is used as illustration of these ideas.
Johansen, M P; Barnett, C L; Beresford, N A; Brown, J E; Černe, M; Howard, B J; Kamboj, S; Keum, D-K; Smodiš, B; Twining, J R; Vandenhove, H; Vives i Batlle, J; Wood, M D; Yu, C
2012-06-15
Radiological doses to terrestrial wildlife were examined in this model inter-comparison study that emphasised factors causing variability in dose estimation. The study participants used varying modelling approaches and information sources to estimate dose rates and tissue concentrations for a range of biota types exposed to soil contamination at a shallow radionuclide waste burial site in Australia. Results indicated that the dominant factor causing variation in dose rate estimates (up to three orders of magnitude on mean total dose rates) was the soil-to-organism transfer of radionuclides that included variation in transfer parameter values as well as transfer calculation methods. Additional variation was associated with other modelling factors including: how participants conceptualised and modelled the exposure configurations (two orders of magnitude); which progeny to include with the parent radionuclide (typically less than one order of magnitude); and dose calculation parameters, including radiation weighting factors and dose conversion coefficients (typically less than one order of magnitude). Probabilistic approaches to model parameterisation were used to encompass and describe variable model parameters and outcomes. The study confirms the need for continued evaluation of the underlying mechanisms governing soil-to-organism transfer of radionuclides to improve estimation of dose rates to terrestrial wildlife. The exposure pathways and configurations available in most current codes are limited when considering instances where organisms access subsurface contamination through rooting, burrowing, or using different localised waste areas as part of their habitual routines. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
Damping of short gravity-capillary waves due to oil derivatives film on the water surface
NASA Astrophysics Data System (ADS)
Sergievskaya, Irina; Ermakov, Stanislav; Lazareva, Tatyana
2016-10-01
In this paper new results of laboratory studies of damping of gravity-capillary waves on the water surface covered by kerosene are presented and compared with our previous analysis of characteristics of crude oil and diesel fuel films. Investigations of kerosene films were carried out in a wide range values of film thicknesses (from some hundreds millimetres to a few millimetres) and in a wide range of surface wave frequencies (from 10 to 27 Hz). The selected frequency range corresponds to the operating wavelengths of microwave, X- to Ka-band radars typically used for the ocean remote sensing. The studied range of film thickness covers typical thicknesses of routine spills in the ocean. It is obtained that characteristics of waves, measured in the presence of oil derivatives films differ from those for crude oil films, in particular, because the volume viscosity of oil derivatives and crude oil is strongly different. To retrieve parameters of kerosene films from the experimental data the surface wave damping was analyzed theoretically in the frame of a model of two-layer fluid. The films are assumed to be soluble, so the elasticity on the upper and lower boundaries is considered as a function of wave frequency. Physical parameters of oil derivative films were estimated when tuning the film parameters to fit theory and experiment. Comparison between wave damping due to crude oil, kerosene and diesel fuel films have shown some capabilities of distinguishing of oil films from remote sensing of short surface waves.
Anisotropic Mesoscale Eddy Transport in Ocean General Circulation Models
NASA Astrophysics Data System (ADS)
Reckinger, S. J.; Fox-Kemper, B.; Bachman, S.; Bryan, F.; Dennis, J.; Danabasoglu, G.
2014-12-01
Modern climate models are limited to coarse-resolution representations of large-scale ocean circulation that rely on parameterizations for mesoscale eddies. The effects of eddies are typically introduced by relating subgrid eddy fluxes to the resolved gradients of buoyancy or other tracers, where the proportionality is, in general, governed by an eddy transport tensor. The symmetric part of the tensor, which represents the diffusive effects of mesoscale eddies, is universally treated isotropically in general circulation models. Thus, only a single parameter, namely the eddy diffusivity, is used at each spatial and temporal location to impart the influence of mesoscale eddies on the resolved flow. However, the diffusive processes that the parameterization approximates, such as shear dispersion, potential vorticity barriers, oceanic turbulence, and instabilities, typically have strongly anisotropic characteristics. Generalizing the eddy diffusivity tensor for anisotropy extends the number of parameters to three: a major diffusivity, a minor diffusivity, and the principal axis of alignment. The Community Earth System Model (CESM) with the anisotropic eddy parameterization is used to test various choices for the newly introduced parameters, which are motivated by observations and the eddy transport tensor diagnosed from high resolution simulations. Simply setting the ratio of major to minor diffusivities to a value of five globally, while aligning the major axis along the flow direction, improves biogeochemical tracer ventilation and reduces global temperature and salinity biases. These effects can be improved even further by parameterizing the anisotropic transport mechanisms in the ocean.
Dehghani, Seyed Mohsen; Taghavi, Seyed Alireza; Javaherizadeh, Hazhir; Nasri, Maryam
2016-01-01
- Gastroesophageal reflux disease is the most common esophageal disorder in pediatrics. - The aim of this study was to compare reflux parameters of typical and atypical symptoms of gastroesophageal reflux disease using 24-hour esophageal pH monitoring and multichannel intraluminal impedance in pediatric population. - In this prospective study, 43 patients aged less than 18 year with suspected gastroesophageal reflux disease were enrolled. The patients were divided into two groups based on the main presenting symptoms (typical versus atypical). Twenty four-hour pH monitoring and multichannel intraluminal impedance were performed in all the patients for comparing these two group regarding association of symptoms and reflux. Number of refluxes, pH related reflux, total reflux time, reflux more than 5 minutes, longest time of the reflux, lowest pH at reflux, reflux index were recorded and compared. Data comparison was done using SPSS. - The mean age of the patients was 5.7±3.4 years and 65.1% were male. Out of 43 patients 24 cases had typical symptoms and 19 had atypical symptoms. The mean reflux events detected by multichannel intraluminal impedance was more than mean reflux events detected by pH monitoring (308.4±115.8 vs 69.7±66.6) with P value of 0.037, which is statistically significant. The mean symptom index and symptom association probability were 35.01% ± 20.78% and 86.42% ± 25.79%, respectively in multichannel intraluminal impedance versus 12.73% ± 12.48% and 45.16% ± 42.29% in pH monitoring (P value <0.001). Number of acid reflux was 46.26±47.16 and 30.9±22.09 for atypical and typical symptoms respectively. The mean symptom index was 18.12% ± 13.101% and 8.30% ± 10.301% in atypical and typical symptoms respectively (P=0.034). Bolus clearance was longer in atypical symptoms compared typical symptoms(P<0.05). - Symptom index was significantly higher in atypical symptoms compared to typical symptoms. Higher number of acid reflux was found in children with atypical symptoms of reflux. Longer duration of bolus clearance was found in group with atypical symptoms of reflux.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hosking, Jonathan R. M.; Natarajan, Ramesh
The computer creates a utility demand forecast model for weather parameters by receiving a plurality of utility parameter values, wherein each received utility parameter value corresponds to a weather parameter value. Determining that a range of weather parameter values lacks a sufficient amount of corresponding received utility parameter values. Determining one or more utility parameter values that corresponds to the range of weather parameter values. Creating a model which correlates the received and the determined utility parameter values with the corresponding weather parameters values.
Quantifying crustal thickness over time in magmatic arcs
NASA Astrophysics Data System (ADS)
Profeta, Lucia; Ducea, Mihai N.; Chapman, James B.; Paterson, Scott R.; Gonzales, Susana Marisol Henriquez; Kirsch, Moritz; Petrescu, Lucian; Decelles, Peter G.
2015-12-01
We present global and regional correlations between whole-rock values of Sr/Y and La/Yb and crustal thickness for intermediate rocks from modern subduction-related magmatic arcs formed around the Pacific. These correlations bolster earlier ideas that various geochemical parameters can be used to track changes of crustal thickness through time in ancient subduction systems. Inferred crustal thicknesses using our proposed empirical fits are consistent with independent geologic constraints for the Cenozoic evolution of the central Andes, as well as various Mesozoic magmatic arc segments currently exposed in the Coast Mountains, British Columbia, and the Sierra Nevada and Mojave-Transverse Range regions of California. We propose that these geochemical parameters can be used, when averaged over the typical lifetimes and spatial footprints of composite volcanoes and their intrusive equivalents to infer crustal thickness changes over time in ancient orogens.
Quantifying crustal thickness over time in magmatic arcs
Profeta, Lucia; Ducea, Mihai N.; Chapman, James B.; Paterson, Scott R.; Gonzales, Susana Marisol Henriquez; Kirsch, Moritz; Petrescu, Lucian; DeCelles, Peter G.
2015-01-01
We present global and regional correlations between whole-rock values of Sr/Y and La/Yb and crustal thickness for intermediate rocks from modern subduction-related magmatic arcs formed around the Pacific. These correlations bolster earlier ideas that various geochemical parameters can be used to track changes of crustal thickness through time in ancient subduction systems. Inferred crustal thicknesses using our proposed empirical fits are consistent with independent geologic constraints for the Cenozoic evolution of the central Andes, as well as various Mesozoic magmatic arc segments currently exposed in the Coast Mountains, British Columbia, and the Sierra Nevada and Mojave-Transverse Range regions of California. We propose that these geochemical parameters can be used, when averaged over the typical lifetimes and spatial footprints of composite volcanoes and their intrusive equivalents to infer crustal thickness changes over time in ancient orogens. PMID:26633804
Two Universal Equations of State for Solids
NASA Astrophysics Data System (ADS)
Sun, Jiu-Xun; Wu, Qiang; Guo, Yang; Cai, Ling-Cang
2010-01-01
In this paper, two equations of state (EOSs) (Sun Jiu-Xun-Morse with parameters n = 3 and 4, designated by SMS3 and SMS4) with two parameters are proposed to satisfy four merits proposed previously and give improved results for the cohesive energy. By applying ten typical EOSs to fit experimental compression data of 50 materials, it is shown that the SMS4 EOS gives the best results; the Baonza and Morse EOSs give the second best results; the SMS3 and modified generalized Lennard-Jones (mGLJ) EOSs give the third best results. However, the Baonza and mGLJ EOSs cannot give physically reasonable values of cohesive energy and P-V curves in the expansion region; the SMS3 and SMS4 EOS give fairly good results, and have some advantages over the Baonza and mGLJ EOSs in practical applications.
Effect of correlated observation error on parameters, predictions, and uncertainty
Tiedeman, Claire; Green, Christopher T.
2013-01-01
Correlations among observation errors are typically omitted when calculating observation weights for model calibration by inverse methods. We explore the effects of omitting these correlations on estimates of parameters, predictions, and uncertainties. First, we develop a new analytical expression for the difference in parameter variance estimated with and without error correlations for a simple one-parameter two-observation inverse model. Results indicate that omitting error correlations from both the weight matrix and the variance calculation can either increase or decrease the parameter variance, depending on the values of error correlation (ρ) and the ratio of dimensionless scaled sensitivities (rdss). For small ρ, the difference in variance is always small, but for large ρ, the difference varies widely depending on the sign and magnitude of rdss. Next, we consider a groundwater reactive transport model of denitrification with four parameters and correlated geochemical observation errors that are computed by an error-propagation approach that is new for hydrogeologic studies. We compare parameter estimates, predictions, and uncertainties obtained with and without the error correlations. Omitting the correlations modestly to substantially changes parameter estimates, and causes both increases and decreases of parameter variances, consistent with the analytical expression. Differences in predictions for the models calibrated with and without error correlations can be greater than parameter differences when both are considered relative to their respective confidence intervals. These results indicate that including observation error correlations in weighting for nonlinear regression can have important effects on parameter estimates, predictions, and their respective uncertainties.
Line-driven winds revisited in the context of Be stars: Ω-slow solutions with high k values
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silaj, J.; Jones, C. E.; Curé, M.
2014-11-01
The standard, or fast, solutions of m-CAK line-driven wind theory cannot account for slowly outflowing disks like the ones that surround Be stars. It has been previously shown that there exists another family of solutions—the Ω-slow solutions—that is characterized by much slower terminal velocities and higher mass-loss rates. We have solved the one-dimensional m-CAK hydrodynamical equation of rotating radiation-driven winds for this latter solution, starting from standard values of the line force parameters (α, k, and δ), and then systematically varying the values of α and k. Terminal velocities and mass-loss rates that are in good agreement with those foundmore » in Be stars are obtained from the solutions with lower α and higher k values. Furthermore, the equatorial densities of such solutions are comparable to those that are typically assumed in ad hoc models. For very high values of k, we find that the wind solutions exhibit a new kind of behavior.« less
NASA Technical Reports Server (NTRS)
Troy, B. E., Jr.; Maier, E. J.
1975-01-01
The effects of the grid transparency and finite collector size on the values of thermal ion density and temperature determined by the standard RPA (retarding potential analyzer) analysis method are investigated. The current-voltage curves calculated for varying RPA parameters and a given ion mass, temperature, and density are analyzed by the standard RPA method. It is found that only small errors in temperature and density are introduced for an RPA with typical dimensions, and that even when the density error is substantial for nontypical dimensions, the temperature error remains minimum.
Applications of DC-Self Bias in CCP Deposition Systems
NASA Astrophysics Data System (ADS)
Keil, D. L.; Augustyniak, E.; Sakiyama, Y.
2013-09-01
In many commercial CCP plasma process systems the DC-self bias is available as a reported process parameter. Since commercial systems typically limit the number of onboard diagnostics, there is great incentive to understand how DC-self bias can be expected to respond to various system perturbations. This work reviews and examines DC self bias changes in response to tool aging, chamber film accumulation and wafer processing. The diagnostic value of the DC self bias response to transient and various steady state current draw schemes are examined. Theoretical models and measured experimental results are compared and contrasted.
Potential accuracy of methods of laser Doppler anemometry in the single-particle scattering mode
NASA Astrophysics Data System (ADS)
Sobolev, V. S.; Kashcheeva, G. A.
2017-05-01
Potential accuracy of methods of laser Doppler anemometry is determined for the singleparticle scattering mode where the only disturbing factor is shot noise generated by the optical signal itself. The problem is solved by means of computer simulations with the maximum likelihood method. The initial parameters of simulations are chosen to be the number of real or virtual interference fringes in the measurement volume of the anemometer, the signal discretization frequency, and some typical values of the signal/shot noise ratio. The parameters to be estimated are the Doppler frequency as the basic parameter carrying information about the process velocity, the signal amplitude containing information about the size and concentration of scattering particles, and the instant when the particles arrive at the center of the measurement volume of the anemometer, which is needed for reconstruction of the examined flow velocity as a function of time. The estimates obtained in this study show that shot noise produces a minor effect (0.004-0.04%) on the frequency determination accuracy in the entire range of chosen values of the initial parameters. For the signal amplitude and the instant when the particles arrive at the center of the measurement volume of the anemometer, the errors induced by shot noise are in the interval of 0.2-3.5%; if the number of interference fringes is sufficiently large (more than 20), the errors do not exceed 0.2% regardless of the shot noise level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2011-01-01
The need for a defendable and systematic uncertainty and sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This report summarized the results of the initial investigations performed with SUSA,more » utilizing a typical High Temperature Reactor benchmark (the IAEA CRP-5 PBMR 400MW Exercise 2) and the PEBBED-THERMIX suite of codes. The following steps were performed as part of the uncertainty and sensitivity analysis: 1. Eight PEBBED-THERMIX model input parameters were selected for inclusion in the uncertainty study: the total reactor power, inlet gas temperature, decay heat, and the specific heat capability and thermal conductivity of the fuel, pebble bed and reflector graphite. 2. The input parameters variations and probability density functions were specified, and a total of 800 PEBBED-THERMIX model calculations were performed, divided into 4 sets of 100 and 2 sets of 200 Steady State and Depressurized Loss of Forced Cooling (DLOFC) transient calculations each. 3. The steady state and DLOFC maximum fuel temperature, as well as the daily pebble fuel load rate data, were supplied to SUSA as model output parameters of interest. The 6 data sets were statistically analyzed to determine the 5% and 95% percentile values for each of the 3 output parameters with a 95% confidence level, and typical statistical indictors were also generated (e.g. Kendall, Pearson and Spearman coefficients). 4. A SUSA sensitivity study was performed to obtain correlation data between the input and output parameters, and to identify the primary contributors to the output data uncertainties. It was found that the uncertainties in the decay heat, pebble bed and reflector thermal conductivities were responsible for the bulk of the propagated uncertainty in the DLOFC maximum fuel temperature. It was also determined that the two standard deviation (2s) uncertainty on the maximum fuel temperature was between ±58oC (3.6%) and ±76oC (4.7%) on a mean value of 1604 oC. These values mostly depended on the selection of the distributions types, and not on the number of model calculations above the required Wilks criteria (a (95%,95%) statement would usually require 93 model runs).« less
Energy Factor Analysis for Gas Heat Pump Water Heaters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gluesenkamp, Kyle R
2016-01-01
Gas heat pump water heaters (HPWHs) can improve water heating efficiency with zero GWP and zero ODP working fluids. The energy factor (EF) of a gas HPWH is sensitive to several factors. In this work, expressions are derived for EF of gas HPWHs, as a function of heat pump cycle COP, tank heat losses, burner efficiency, electrical draw, and effectiveness of supplemental heat exchangers. The expressions are used to investigate the sensitivity of EF to each parameter. EF is evaluated on a site energy basis (as used by the US DOE for rating water heater EF), and a primary energy-basismore » energy factor (PEF) is also defined and included. Typical ranges of values for the six parameters are given. For gas HPWHs, using typical ranges for component performance, EF will be 59 80% of the heat pump cycle thermal COP (for example, a COP of 1.60 may result in an EF of 0.94 1.28). Most of the reduction in COP is due to burner efficiency and tank heat losses. Gas-fired HPWHs are theoretically be capable of an EF of up to 1.7 (PEF of 1.6); while an EF of 1.1 1.3 (PEF of 1.0 1.1) is expected from an early market entry.« less
Quantum effects on compressional Alfven waves in compensated semiconductors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amin, M. R.
2015-03-15
Amplitude modulation of a compressional Alfven wave in compensated electron-hole semiconductor plasmas is considered in the quantum magnetohydrodynamic regime in this paper. The important ingredients of this study are the inclusion of the particle degeneracy pressure, exchange-correlation potential, and the quantum diffraction effects via the Bohm potential in the momentum balance equations of the charge carriers. A modified nonlinear Schrödinger equation is derived for the evolution of the slowly varying amplitude of the compressional Alfven wave by employing the standard reductive perturbation technique. Typical values of the parameters for GaAs, GaSb, and GaN semiconductors are considered in analyzing the linearmore » and nonlinear dispersions of the compressional Alfven wave. Detailed analysis of the modulation instability in the long-wavelength regime is presented. For typical parameter ranges of the semiconductor plasmas and at the long-wavelength regime, it is found that the wave is modulationally unstable above a certain critical wavenumber. Effects of the exchange-correlation potential and the Bohm potential in the wave dynamics are also studied. It is found that the effect of the Bohm potential may be neglected in comparison with the effect of the exchange-correlation potential in the linear and nonlinear dispersions of the compressional Alfven wave.« less
Mathematical Modeling of RNA-Based Architectures for Closed Loop Control of Gene Expression.
Agrawal, Deepak K; Tang, Xun; Westbrook, Alexandra; Marshall, Ryan; Maxwell, Colin S; Lucks, Julius; Noireaux, Vincent; Beisel, Chase L; Dunlop, Mary J; Franco, Elisa
2018-05-08
Feedback allows biological systems to control gene expression precisely and reliably, even in the presence of uncertainty, by sensing and processing environmental changes. Taking inspiration from natural architectures, synthetic biologists have engineered feedback loops to tune the dynamics and improve the robustness and predictability of gene expression. However, experimental implementations of biomolecular control systems are still far from satisfying performance specifications typically achieved by electrical or mechanical control systems. To address this gap, we present mathematical models of biomolecular controllers that enable reference tracking, disturbance rejection, and tuning of the temporal response of gene expression. These controllers employ RNA transcriptional regulators to achieve closed loop control where feedback is introduced via molecular sequestration. Sensitivity analysis of the models allows us to identify which parameters influence the transient and steady state response of a target gene expression process, as well as which biologically plausible parameter values enable perfect reference tracking. We quantify performance using typical control theory metrics to characterize response properties and provide clear selection guidelines for practical applications. Our results indicate that RNA regulators are well-suited for building robust and precise feedback controllers for gene expression. Additionally, our approach illustrates several quantitative methods useful for assessing the performance of biomolecular feedback control systems.
Constraining Lunar Cold Spot Properties Using Eclipse and Twilight Temperature Behavior
NASA Astrophysics Data System (ADS)
Powell, T. M.; Greenhagen, B. T.; Hayne, P. O.; Bandfield, J. L.
2016-12-01
Thermal mapping of the nighttime lunar surface by the Diviner instrument on the Lunar Reconnaissance Orbiter (LRO) has revealed anomalous "cold spot" regions surrounding young impact craters. These regions typically show 5-10K lower nighttime temperatures than background regolith. Previous modeling has shown that cold spot regions can be explained by a "fluffing-up" of the top centimeters of regolith, resulting in a layer of lower-density, highly-insulating material (Bandfield et al., 2014). The thickness of this layer is characterized by the H-parameter, which describes the rate of density increase with depth (Vasavada et al., 2012). Contrary to expectations, new Diviner and ground-based telescopic data have revealed that these cold spot regions remain warmer than typical lunar regolith during eclipses and for a short twilight period at the beginning of lunar night (Hayne et al., 2015). These events act on much shorter timescales than the full diurnal day-night cycle, and the surface temperature response is sensitive to the properties of the top few millimeters of regolith. Thermal modeling in this study shows that this behavior can be explained by a profile with higher surface density and higher H-parameter relative to typical regolith. This results in a relative increase in thermal inertia in the top few millimeters of regolith, but decreased thermal inertia at centimeter depth scales. Best-fit surface density and H-parameter values are consistent with the temperature behavior observed during diurnal night as well as early twilight and eclipse scenarios. We interpret this behavior to indicate the presence of small rocks at the surface deposited by granular flow mixing during cold spot formation. This study also shows that eclipse and twilight data can be used as an important constraint in determining the thermophysical properties of lunar regolith. References: Bandfield, et al. (2014), Icarus, 231, 221-231. Hayne, et al. (2015), In Lunar and Planetary Science Conference (Vol. 46, p. 1997). Vasavada, et al. (2012), J. Geophys. Res., 117(E12).
Parameter estimation in Probabilistic Seismic Hazard Analysis: current problems and some solutions
NASA Astrophysics Data System (ADS)
Vermeulen, Petrus
2017-04-01
A typical Probabilistic Seismic Hazard Analysis (PSHA) comprises identification of seismic source zones, determination of hazard parameters for these zones, selection of an appropriate ground motion prediction equation (GMPE), and integration over probabilities according the Cornell-McGuire procedure. Determination of hazard parameters often does not receive the attention it deserves, and, therefore, problems therein are often overlooked. Here, many of these problems are identified, and some of them addressed. The parameters that need to be identified are those associated with the frequency-magnitude law, those associated with earthquake recurrence law in time, and the parameters controlling the GMPE. This study is concerned with the frequency-magnitude law and temporal distribution of earthquakes, and not with GMPEs. TheGutenberg-Richter frequency-magnitude law is usually adopted for the frequency-magnitude law, and a Poisson process for earthquake recurrence in time. Accordingly, the parameters that need to be determined are the slope parameter of the Gutenberg-Richter frequency-magnitude law, i.e. the b-value, the maximum value at which the Gutenberg-Richter law applies mmax, and the mean recurrence frequency,λ, of earthquakes. If, instead of the Cornell-McGuire, the "Parametric-Historic procedure" is used, these parameters do not have to be known before the PSHA computations, they are estimated directly during the PSHA computation. The resulting relation for the frequency of ground motion vibration parameters has an analogous functional form to the frequency-magnitude law, which is described by parameters γ (analogous to the b¬-value of the Gutenberg-Richter law) and the maximum possible ground motion amax (analogous to mmax). Originally, the approach was possible to apply only to the simple GMPE, however, recently a method was extended to incorporate more complex forms of GMPE's. With regards to the parameter mmax, there are numerous methods of estimation, none of which is accepted as the standard one. There is also much controversy surrounding this parameter. In practice, when estimating the above mentioned parameters from seismic catalogue, the magnitude, mmin, from which a seismic catalogue is complete becomes important.Thus, the parameter mmin is also considered as a parameter to be estimated in practice. Several methods are discussed in the literature, and no specific method is preferred. Methods usually aim at identifying the point where a frequency-magnitude plot starts to deviate from linearity due to data loss. Parameter estimation is clearly a rich field which deserves much attention and, possibly standardization, of methods. These methods should be the sound and efficient, and a query into which methods are to be used - and for that matter which ones are not to be used - is in order.
Carr, Brian I.; Giannini, Edoardo G.; Farinati, Fabio; Ciccarese, Francesca; Rapaccini, Gian Ludovico; Marco, Maria Di; Benvegnù, Luisa; Zoli, Marco; Borzio, Franco; Caturelli, Eugenio; Chiaramonte, Maria; Trevisani, Franco
2014-01-01
Background Previous work has shown that 2 general processes contribute to hepatocellular cancer (HCC) prognosis. They are: a. liver damage, monitored by indices such as blood bilirubin, prothrombin time and AST; as well as b. tumor biology, monitored by indices such as tumor size, tumor number, presence of PVT and blood AFP levels. These 2 processes may affect one another, with prognostically significant interactions between multiple tumor and host parameters. These interactions form a context that provide personalization of the prognostic meaning of these factors for every patient. Thus, a given level of bilirubin or tumor diameter might have a different significance in different personal contexts. We previously applied Network Phenotyping Strategy (NPS) to characterize interactions between liver function indices of Asian HCC patients and recognized two clinical phenotypes, S and L, differing in tumor size and tumor nodule numbers. Aims To validate the applicability of the NPS-based HCC S/L classification on an independent European HCC cohort, for which survival information was additionally available. Methods Four sets of peripheral blood parameters, including AFP-platelets, derived from routine blood parameter levels and tumor indices from the ITA.LI.CA database, were analyzed using NPS, a graph-theory based approach, which compares personal patterns of complete relationships between clinical data values to reference patterns with significant association to disease outcomes. Results Without reference to the actual tumor sizes, patients were classified by NPS into 2 subgroups with S and L phenotypes. These two phenotypes were recognized using solely the HCC screening test results, consisting of eight common blood parameters, paired by their significant correlations, including an AFP-Platelets relationship. These trends were combined with patient age, gender and self-reported alcoholism into NPS personal patient profiles. We subsequently validated (using actual scan data) that patients in L phenotype group had 1.5x larger mean tumor masses relative to S, p=6×10−16. Importantly, with the new data, liver test pattern-identified S-phenotype patients had typically 1.7 × longer survival compared to L-phenotype. NPS integrated the liver, tumor and basic demographic factors. Cirrhosis associated thrombocytopenia was typical for smaller S-tumors. In L-tumor phenotype, typical platelet levels increased with the tumor mass. Hepatic inflammation and tumor factors contributed to more aggressive L tumors, with parenchymal destruction and shorter survival. Summary NPS provides integrative interpretation for HCC behavior, identifying two tumor and survival phenotypes by clinical parameter patterns. The NPS classifier is provided as an Excel tool. The NPS system shows the importance of considering each tumor marker and parameter in the total context of all the other parameters of an individual patient. PMID:25023357
Reliable sagittal plane kinematic gait assessments are feasible using low-cost webcam technology.
Saner, Robert J; Washabaugh, Edward P; Krishnan, Chandramouli
2017-07-01
Three-dimensional (3-D) motion capture systems are commonly used for gait analysis because they provide reliable and accurate measurements. However, the downside of this approach is that it is expensive and requires technical expertise; thus making it less feasible in the clinic. To address this limitation, we recently developed and validated (using a high-precision walking robot) a low-cost, two-dimensional (2-D) real-time motion tracking approach using a simple webcam and LabVIEW Vision Assistant. The purpose of this study was to establish the repeatability and minimal detectable change values of hip and knee sagittal plane gait kinematics recorded using this system. Twenty-one healthy subjects underwent two kinematic assessments while walking on a treadmill at a range of gait velocities. Intraclass correlation coefficients (ICC) and minimal detectable change (MDC) values were calculated for commonly used hip and knee kinematic parameters to demonstrate the reliability of the system. Additionally, Bland-Altman plots were generated to examine the agreement between the measurements recorded on two different days. The system demonstrated good to excellent reliability (ICC>0.75) for all the gait parameters tested on this study. The MDC values were typically low (<5°) for most of the parameters. The Bland-Altman plots indicated that there was no systematic error or bias in kinematic measurements and showed good agreement between measurements obtained on two different days. These results indicate that kinematic gait assessments using webcam technology can be reliably used for clinical and research purposes. Copyright © 2017 Elsevier B.V. All rights reserved.
New Insights into the Estimation of Extreme Geomagnetic Storm Occurrences
NASA Astrophysics Data System (ADS)
Ruffenach, Alexis; Winter, Hugo; Lavraud, Benoit; Bernardara, Pietro
2017-04-01
Space weather events such as intense geomagnetic storms are major disturbances of the near-Earth environment that may lead to serious impacts on our modern society. As such, it is of great importance to estimate their probability, and in particular that of extreme events. One approach largely used in statistical sciences for extreme events probability estimates is Extreme Value Analysis (EVA). Using this rigorous statistical framework, estimations of the occurrence of extreme geomagnetic storms are performed here based on the most relevant global parameters related to geomagnetic storms, such as ground parameters (e.g. geomagnetic Dst and aa indexes), and space parameters related to the characteristics of Coronal Mass Ejections (CME) (velocity, southward magnetic field component, electric field). Using our fitted model, we estimate the annual probability of a Carrington-type event (Dst = -850nT) to be on the order of 10-3, with a lower limit of the uncertainties on the return period of ˜500 years. Our estimate is significantly higher than that of most past studies, which typically had a return period of a few 100 years at maximum. Thus precautions are required when extrapolating intense values. Currently, the complexity of the processes and the length of available data inevitably leads to significant uncertainties in return period estimates for the occurrence of extreme geomagnetic storms. However, our application of extreme value models for extrapolating into the tail of the distribution provides a mathematically justified framework for the estimation of extreme return periods, thereby enabling the determination of more accurate estimates and reduced associated uncertainties.
Anatomical background noise power spectrum in differential phase contrast breast images
NASA Astrophysics Data System (ADS)
Garrett, John; Ge, Yongshuai; Li, Ke; Chen, Guang-Hong
2015-03-01
In x-ray breast imaging, the anatomical noise background of the breast has a significant impact on the detection of lesions and other features of interest. This anatomical noise is typically characterized by a parameter, β, which describes a power law dependence of anatomical noise on spatial frequency (the shape of the anatomical noise power spectrum). Large values of β have been shown to reduce human detection performance, and in conventional mammography typical values of β are around 3.2. Recently, x-ray differential phase contrast (DPC) and the associated dark field imaging methods have received considerable attention as possible supplements to absorption imaging for breast cancer diagnosis. However, the impact of these additional contrast mechanisms on lesion detection is not yet well understood. In order to better understand the utility of these new methods, we measured the β indices for absorption, DPC, and dark field images in 15 cadaver breast specimens using a benchtop DPC imaging system. We found that the measured β value for absorption was consistent with the literature for mammographic acquisitions (β = 3.61±0.49), but that both DPC and dark field images had much lower values of β (β = 2.54±0.75 for DPC and β = 1.44±0.49 for dark field). In addition, visual inspection showed greatly reduced anatomical background in both DPC and dark field images. These promising results suggest that DPC and dark field imaging may help provide improved lesion detection in breast imaging, particularly for those patients with dense breasts, in whom anatomical noise is a major limiting factor in identifying malignancies.
Fandel, Christina L.; Lippmann, Thomas C.; Foster, Diane L.; Brothers, Laura L.
2017-01-01
Current observations and sediment characteristics acquired within and along the rim of two pockmarks in Belfast Bay, Maine, were used to characterize periods of sediment transport and to investigate conditions favorable to the settling of suspended sediment. Hourly averaged Shields parameters determined from horizontal current velocity profiles within the center of each pockmark never exceed the critical value (approximated with the theoretical model of Dade et al. 1992). However, Shields parameters estimated at the pockmark rims periodically exceed the critical value, consistent with conditions that support the onset of sediment transport and suspension. Below the rim in the near-center of each pockmark, depth-averaged vertical velocities were less than zero (downward) 60% and 55% of the time in the northern and southern pockmarks, and were often comparable to depth-averaged horizontal velocities. Along the rim, depth-averaged vertical velocities over the lower 8 m of the water column were primarily downward but much less than depth-averaged horizontal velocities indicating that suspended sediment may be moved to distant locations. Maximum grain sizes capable of remaining in suspension under terminal settling flow conditions (ranging 10–170 μm) were typically much greater than the observed median grain diameter (about 7 μm) at the bed. During upwelling flow within the pockmarks, and in the absence of flocculation, suspended sediment would not settle. The greater frequency of predicted periods of sediment transport along the rim of the southern pockmark is consistent with pockmark morphology in Belfast Bay, which transitions from more spherical to more elongated toward the south, suggesting near-bed sediment transport may contribute to post-formation pockmark evolution during typical conditions in Belfast Bay.
[Acoustic conditions in open plan office - Application of technical measures in a typical room].
Mikulski, Witold
2018-03-09
Noise in open plan offices should not exceed acceptable levels for the hearing protection. Its major negative effects on employees are nuisance and impediment in execution of work. Specific technical solutions should be introduced to provide proper acoustic conditions for work performance. Acoustic evaluation of a typical open plan office was presented in the article published in "Medycyna Pracy" 5/2016. None of the rooms meets all the criteria, therefore, in this article one of the rooms was chosen to apply different technical solutions to check the possibility of reaching proper acoustic conditions. Acoustic effectiveness of those solutions was verified by means of digital simulation. The model was checked by comparing the results of measurements and calculations before using simulation. The analyzis revealed that open plan offices supplemented with signals for masking speech signals can meet all the required criteria. It is relatively easy to reach proper reverberation time (i.e., sound absorption). It is more difficult to reach proper values of evaluation parameters determined from A-weighted sound pressure level (SPLA) of speech. The most difficult is to provide proper values of evaluation parameters determined from speech transmission index (STI). Finally, it is necessary (besides acoustic treatment) to use devices for speech masking. The study proved that it is technically possible to reach proper acoustic condition. Main causes of employees complaints in open plan office are inadequate acoustic work conditions. Therefore, it is necessary to apply specific technical solutions - not only sound absorbing suspended ceiling and high acoustic barriers, but also devices for speech masking. Med Pr 2018;69(2):153-165. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
Wu, Liejun; Chen, Maoxue; Chen, Yongli; Li, Qing X.
2013-01-01
Gas holdup time (tM) is a basic parameter in isothermal gas chromatography (GC). Determination and evaluation of tM and retention behaviors of n-alkanes under isothermal GC conditions have been extensively studied since the 1950s, but still remains unresolved. The difference equation (DE) model [J. Chromatogr. A 1260:215–223] reveals retention behaviors of n-alkanes excluding tM, while the quadratic equation (QE) model [J. Chromatogr. A 1260:224–231] including tM is suitable for applications. In the present study, tM values were calculated with the QE model, which is referred to as tMT, evaluated and compared with other three typical nonlinear models. The QE model gives an accurate estimation of tM in isothermal GC. The tMT values are highly accurate, stable, and easy to calculate and use. There is only one tMT value at each GC condition. The proper classification of tM values can clarify their disagreement and facilitate GC retention data standardization for which tMT values are promising reference tM values. PMID:23726077
How Knowledge Organisations Work: The Case of Software Firms
ERIC Educational Resources Information Center
Gottschalk, Petter
2007-01-01
Knowledge workers in software firms solve client problems in sequential and cyclical work processes. Sequential and cyclical work takes place in the value configuration of a value shop. While typical examples of value chains are manufacturing industries such as paper and car production, typical examples of value shops are law firms and medical…
Ramdani, Sofiane; Bonnet, Vincent; Tallon, Guillaume; Lagarde, Julien; Bernard, Pierre Louis; Blain, Hubert
2016-08-01
Entropy measures are often used to quantify the regularity of postural sway time series. Recent methodological developments provided both multivariate and multiscale approaches allowing the extraction of complexity features from physiological signals; see "Dynamical complexity of human responses: A multivariate data-adaptive framework," in Bulletin of Polish Academy of Science and Technology, vol. 60, p. 433, 2012. The resulting entropy measures are good candidates for the analysis of bivariate postural sway signals exhibiting nonstationarity and multiscale properties. These methods are dependant on several input parameters such as embedding parameters. Using two data sets collected from institutionalized frail older adults, we numerically investigate the behavior of a recent multivariate and multiscale entropy estimator; see "Multivariate multiscale entropy: A tool for complexity analysis of multichannel data," Physics Review E, vol. 84, p. 061918, 2011. We propose criteria for the selection of the input parameters. Using these optimal parameters, we statistically compare the multivariate and multiscale entropy values of postural sway data of non-faller subjects to those of fallers. These two groups are discriminated by the resulting measures over multiple time scales. We also demonstrate that the typical parameter settings proposed in the literature lead to entropy measures that do not distinguish the two groups. This last result confirms the importance of the selection of appropriate input parameters.
NASA Technical Reports Server (NTRS)
Hong, S. H.; Wilhelm, H. E.
1978-01-01
An electrical discharge between two ring electrodes embedded in the mantle of a cylindrical chamber is considered, in which the plasma in the anode and cathode regions rotates in opposite directions under the influence of an external axial magnetic field. The associated boundary-value problem for the coupled partial differential equations describing the azimuthal velocity and radial current-density fields is solved in closed form. The velocity, current density, induced magnetic induction, and electric fields are presented for typical Hartmann numbers, magnetic Reynolds numbers, and geometry parameters. The discharge is shown to produce anodic and cathodic plasma sections rotating at speeds of the order 1,000,000 cm/sec for conventional magnetic field intensities. Possible application of the magnetoactive discharge as a plasma centrifuge for isotope separation is discussed.
NASA Astrophysics Data System (ADS)
Merka, J.; Dolan, C. F.
2015-12-01
Finding and retrieving space physics data is often a complicated taskeven for publicly available data sets: Thousands of relativelysmall and many large data sets are stored in various formats and, inthe better case, accompanied by at least some documentation. VirtualHeliospheric and Magnetospheric Observatories (VHO and VMO) help researches by creating a single point of uniformdiscovery, access, and use of heliospheric (VHO) and magnetospheric(VMO) data.The VMO and VHO functionality relies on metadata expressed using theSPASE data model. This data model is developed by the SPASE WorkingGroup which is currently the only international group supporting globaldata management for Solar and Space Physics. The two Virtual Observatories(VxOs) have initiated and lead a development of a SPASE-related standardnamed SPASE Query Language for provided a standard way of submittingqueries and receiving results.The VMO and VHO use SPASE and SPASEQL for searches based on various criteria such as, for example, spatial location, time of observation, measurement type, parameter values, etc. The parameter values are represented by their statisticalestimators calculated typically over 10-minute intervals: mean, median, standard deviation, minimum, and maximum. The use of statistical estimatorsenables science driven data queries that simplify and shorten the effort tofind where and/or how often the sought phenomenon is observed, as we will present.
Revisiting linear plasma waves for finite value of the plasma parameter
NASA Astrophysics Data System (ADS)
Grismayer, Thomas; Fahlen, Jay; Decyk, Viktor; Mori, Warren
2010-11-01
We investigate through theory and PIC simulations the Landau-damping of plasma waves with finite plasma parameter. We concentrate on the linear regime, γφB, where the waves are typically small and below the thermal noise. We simulate these condition using 1,2,3D electrostatic PIC codes (BEPS), noting that modern computers now allow us to simulate cases where (nλD^3 = [1e2;1e6]). We study these waves by using a subtraction technique in which two simulations are carried out. In the first, a small wave is initialized or driven, in the second no wave is excited. The results are subtracted to provide a clean signal that can be studied. As nλD^3 is decreased, the number of resonant electrons can be small for linear waves. We show how the damping changes as a result of having few resonant particles. We also find that for small nλD^3 fluctuations can cause the electrons to undergo collisions that eventually destroy the initial wave. A quantity of interest is the the life time of a particular mode which depends on the plasma parameter and the wave number. The life time is estimated and then compared with the numerical results. A surprising result is that even for large values of nλD^3 some non-Vlasov discreteness effects appear to be important.
NASA Astrophysics Data System (ADS)
Daniell, R. E.; Strickland, D. J.; Decker, D. T.; Jasperse, J. R.; Carlson, H. C., Jr.
1985-04-01
The possible use of satellite ultraviolet measurements to deduce the ionospheric electron density profile (EDP) on a global basis is discussed. During 1984 comparisons were continued between the hybrid daytime ionospheric model and the experimental observations. These comparison studies indicate that: (1) the essential features of the EDP and certain UV emissions can be modelled; (2) the models are sufficiently sensitive to input parameters to yield poor agreement with observations when typical input values are used; (3) reasonable adjustments of the parameters can produce excellent agreement between theory and data for either EDP or airglow but not both; and (4) the qualitative understanding of the relationship between two input parameters (solar flux and neutral densities) and the model EDP and airglow features has been verified. The development of a hybrid dynamic model for the nighttime midlatitude ionosphere has been initiated. This model is similar to the daytime hybrid model, but uses the sunset EDP as an initial value and calculates the EDP as a function of time through the night. In addition, a semiempirical model has been developed, based on the assumption that the nighttime EDP is always well described by a modified Chapman function. This model has great simplicity and allows the EDP to be inferred in a straightforward manner from optical observations. Comparisons with data are difficult, however, because of the low intensity of the nightglow.
Yao, Zhongping; Xia, Qixing; Ju, Pengfei; Wang, Jiankang; Su, Peibo; Li, Dongqi; Jiang, Zhaohua
2016-01-01
Thermal control ceramic coatings on Mg–Li alloys have been successfully prepared in silicate electrolyte system by plasma electrolytic oxidation (PEO) method. The PEO coatings are mainly composed of crystallized Mg2SiO4 and MgO, which have typical porous structure with some bulges on the surface; OES analysis shows that the plasma temperature, which is influenced by the technique parameters, determines the formation of the coatings with different crystalline phases and morphologies, combined with “quick cooling effect” by the electrolyte; and the electron concentration is constant, which is related to the electric spark breakdown, determined by the nature of the coating and the interface of coating/electrolyte. Technique parameters influence the coating thickness, roughness and surface morphology, but do not change the coating composition in the specific PEO regime, and therefore the absorptance (αS) and emissivity (ε) of the coatings can be adjusted by the technique parameters through changing thickness and roughness in a certain degree. The coating prepared at 10 A/dm2, 50 Hz, 30 min and 14 g/L Na2SiO3 has the minimum value of αS (0.35) and the maximum value of ε (0.82), with the balance temperature of 320 K. PMID:27383569
NASA Astrophysics Data System (ADS)
Wolbang, Daniel; Biernat, Helfried; Schwingenschuh, Konrad; Eichelberger, Hans; Prattes, Gustav; Besser, Bruno; Boudjada, Mohammed Y.; Rozhnoi, Alexander; Solovieva, Maria; Biagi, Pier Francesco; Friedrich, Martin
2013-04-01
We present a comparative study of seismic and non-seismic sub-ionospheric VLF anomalies. Our method is based on parameter variations of the sub-ionospheric VLF waveguide formed by the surface and the lower ionosphere. The used radio links working in the frequency range between 10 and 50 kHz, the receivers are part of the European and Russian networks. Various authors investigated the lithopsheric-atmospheric-ionospheric coupling and predicted the lowering of the ionosphere over earthquake preparation zones [1]. The received nighttime signal of a sub-ionospheric waveguide depends strongly on the height of the ionospheric E-layer, typically 80 to 85 km. This height is characterized by a typical gradient of the electron density near the atmospheric-ionospheric boundary [2]. In the last years it has been turned out that one of the major issues of sub-ionospheric seismo-electromagnetic VLF studies are the non-seismic influences on the links, which have to be carefully characterized. Among others this could be traveling ionospheric disturbances, geomagnetic storms as well as electron precipitation. Our emphasis is on the analysis of daily, monthly and annual variations of the VLF amplitude. To improve the statistics we investigate the behavior and typical variations of the VLF amplitude and phase over a period of more than 2 years. One important parameter considered is the rate how often the fluctuations are falling below a significant level derived from a mean value. The temporal variations and the amplitudes of these depressions are studied for several years for sub-ionospheric VLF radio links with the receivers in Graz and Kamchatka. In order to study the difference between seismic and non-seismic turbulences in the lower ionosphere a power spectrum analysis of the received signal is performed too. We are especially interested in variations T>6 min which are typical for atmospheric gravity waves causing the lithospheric-atmospheric-ionospheric coupling [3]. All measured and derived VLF parameters are compared with VLF observations several weeks before an earthquake (e.g. L'Aquila, Italy, April 6, 2009) and with co- and post-seismic phenomena. It is shown that this comparative study will improve the one parameter seismo-electromagnetic VLF methods. References: [1] A. Molchanov, M. Hayakawa: Seismo-Electromagnetics and related Phenomena: History and latest results, Terrapub, 2008. [2] S. Pulinets, K. Boyarchuk: Ionospheric Precursors of Earthquakes, Springer, 2004 [3] A. Rozhnoi et al.: Observation evidences of atmospheric Gravity Waves induced by seismic activity from analysis of subionospheric LF signal spectra, National Hazards and Earth System Sciences, 7, 625-628, 2007.
Wildfire risk assessment in a typical Mediterranean wildland-urban interface of Greece.
Mitsopoulos, Ioannis; Mallinis, Giorgos; Arianoutsou, Margarita
2015-04-01
The purpose of this study was to assess spatial wildfire risk in a typical Mediterranean wildland-urban interface (WUI) in Greece and the potential effect of three different burning condition scenarios on the following four major wildfire risk components: burn probability, conditional flame length, fire size, and source-sink ratio. We applied the Minimum Travel Time fire simulation algorithm using the FlamMap and ArcFuels tools to characterize the potential response of the wildfire risk to a range of different burning scenarios. We created site-specific fuel models of the study area by measuring the field fuel parameters in representative natural fuel complexes, and we determined the spatial extent of the different fuel types and residential structures in the study area using photointerpretation procedures of large scale natural color orthophotographs. The results included simulated spatially explicit fire risk components along with wildfire risk exposure analysis and the expected net value change. Statistical significance differences in simulation outputs between the scenarios were obtained using Tukey's significance test. The results of this study provide valuable information for decision support systems for short-term predictions of wildfire risk potential and inform wildland fire management of typical WUI areas in Greece.
Wildfire Risk Assessment in a Typical Mediterranean Wildland-Urban Interface of Greece
NASA Astrophysics Data System (ADS)
Mitsopoulos, Ioannis; Mallinis, Giorgos; Arianoutsou, Margarita
2015-04-01
The purpose of this study was to assess spatial wildfire risk in a typical Mediterranean wildland-urban interface (WUI) in Greece and the potential effect of three different burning condition scenarios on the following four major wildfire risk components: burn probability, conditional flame length, fire size, and source-sink ratio. We applied the Minimum Travel Time fire simulation algorithm using the FlamMap and ArcFuels tools to characterize the potential response of the wildfire risk to a range of different burning scenarios. We created site-specific fuel models of the study area by measuring the field fuel parameters in representative natural fuel complexes, and we determined the spatial extent of the different fuel types and residential structures in the study area using photointerpretation procedures of large scale natural color orthophotographs. The results included simulated spatially explicit fire risk components along with wildfire risk exposure analysis and the expected net value change. Statistical significance differences in simulation outputs between the scenarios were obtained using Tukey's significance test. The results of this study provide valuable information for decision support systems for short-term predictions of wildfire risk potential and inform wildland fire management of typical WUI areas in Greece.
Pessiglione, Mathias
2017-01-01
A standard view in neuroeconomics is that to make a choice, an agent first assigns subjective values to available options, and then compares them to select the best. In choice tasks, these cardinal values are typically inferred from the preference expressed by subjects between options presented in pairs. Alternatively, cardinal values can be directly elicited by asking subjects to place a cursor on an analog scale (rating task) or to exert a force on a power grip (effort task). These tasks can vary in many respects: they can notably be more or less costly and consequential. Here, we compared the value functions elicited by choice, rating and effort tasks on options composed of two monetary amounts: one for the subject (gain) and one for a charity (donation). Bayesian model selection showed that despite important differences between the three tasks, they all elicited a same value function, with similar weighting of gain and donation, but variable concavity. Moreover, value functions elicited by the different tasks could predict choices with equivalent accuracy. Our finding therefore suggests that comparable value functions can account for various motivated behaviors, beyond economic choice. Nevertheless, we report slight differences in the computational efficiency of parameter estimation that may guide the design of future studies. PMID:29161252
A mechanism for value-sensitive decision-making.
Pais, Darren; Hogan, Patrick M; Schlegel, Thomas; Franks, Nigel R; Leonard, Naomi E; Marshall, James A R
2013-01-01
We present a dynamical systems analysis of a decision-making mechanism inspired by collective choice in house-hunting honeybee swarms, revealing the crucial role of cross-inhibitory 'stop-signalling' in improving the decision-making capabilities. We show that strength of cross-inhibition is a decision-parameter influencing how decisions depend both on the difference in value and on the mean value of the alternatives; this is in contrast to many previous mechanistic models of decision-making, which are typically sensitive to decision accuracy rather than the value of the option chosen. The strength of cross-inhibition determines when deadlock over similarly valued alternatives is maintained or broken, as a function of the mean value; thus, changes in cross-inhibition strength allow adaptive time-dependent decision-making strategies. Cross-inhibition also tunes the minimum difference between alternatives required for reliable discrimination, in a manner similar to Weber's law of just-noticeable difference. Finally, cross-inhibition tunes the speed-accuracy trade-off realised when differences in the values of the alternatives are sufficiently large to matter. We propose that the model, and the significant role of the values of the alternatives, may describe other decision-making systems, including intracellular regulatory circuits, and simple neural circuits, and may provide guidance in the design of decision-making algorithms for artificial systems, particularly those functioning without centralised control.
NASA Astrophysics Data System (ADS)
Biondi, Gabriele; Mauro, Stefano; Pastorelli, Stefano; Sorli, Massimo
2018-05-01
One of the key functionalities required by an Active Debris Removal mission is the assessment of the target kinematics and inertial properties. Passive sensors, such as stereo cameras, are often included in the onboard instrumentation of a chaser spacecraft for capturing sequential photographs and for tracking features of the target surface. A plenty of methods, based on Kalman filtering, are available for the estimation of the target's state from feature positions; however, to guarantee the filter convergence, they typically require continuity of measurements and the capability of tracking a fixed set of pre-defined features of the object. These requirements clash with the actual tracking conditions: failures in feature detection often occur and the assumption of having some a-priori knowledge about the shape of the target could be restrictive in certain cases. The aim of the presented work is to propose a fault-tolerant alternative method for estimating the angular velocity and the relative magnitudes of the principal moments of inertia of the target. Raw data regarding the positions of the tracked features are processed to evaluate corrupted values of a 3-dimentional parameter which entirely describes the finite screw motion of the debris and which primarily is invariant on the particular set of considered features of the object. Missing values of the parameter are completely restored exploiting the typical periodicity of the rotational motion of an uncontrolled satellite: compressed sensing techniques, typically adopted for recovering images or for prognostic applications, are herein used in a completely original fashion for retrieving a kinematic signal that appears sparse in the frequency domain. Due to its invariance about the features, no assumptions are needed about the target's shape and continuity of the tracking. The obtained signal is useful for the indirect evaluation of an attitude signal that feeds an unscented Kalman filter for the estimation of the global rotational state of the target. The results of the computer simulations showed a good robustness of the method and its potential applicability for general motion conditions of the target.
Standardization of pitch-range settings in voice acoustic analysis.
Vogel, Adam P; Maruff, Paul; Snyder, Peter J; Mundt, James C
2009-05-01
Voice acoustic analysis is typically a labor-intensive, time-consuming process that requires the application of idiosyncratic parameters tailored to individual aspects of the speech signal. Such processes limit the efficiency and utility of voice analysis in clinical practice as well as in applied research and development. In the present study, we analyzed 1,120 voice files, using standard techniques (case-by-case hand analysis), taking roughly 10 work weeks of personnel time to complete. The results were compared with the analytic output of several automated analysis scripts that made use of preset pitch-range parameters. After pitch windows were selected to appropriately account for sex differences, the automated analysis scripts reduced processing time of the 1,120 speech samples to less than 2.5 h and produced results comparable to those obtained with hand analysis. However, caution should be exercised when applying the suggested preset values to pathological voice populations.
CARES/Life Software for Designing More Reliable Ceramic Parts
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Powers, Lynn M.; Baker, Eric H.
1997-01-01
Products made from advanced ceramics show great promise for revolutionizing aerospace and terrestrial propulsion, and power generation. However, ceramic components are difficult to design because brittle materials in general have widely varying strength values. The CAPES/Life software eases this task by providing a tool to optimize the design and manufacture of brittle material components using probabilistic reliability analysis techniques. Probabilistic component design involves predicting the probability of failure for a thermomechanically loaded component from specimen rupture data. Typically, these experiments are performed using many simple geometry flexural or tensile test specimens. A static, dynamic, or cyclic load is applied to each specimen until fracture. Statistical strength and SCG (fatigue) parameters are then determined from these data. Using these parameters and the results obtained from a finite element analysis, the time-dependent reliability for a complex component geometry and loading is then predicted. Appropriate design changes are made until an acceptable probability of failure has been reached.
NASA Technical Reports Server (NTRS)
Tolson, Robert H.; Lugo, Rafael A.; Baird, Darren T.; Cianciolo, Alicia D.; Bougher, Stephen W.; Zurek, Richard M.
2017-01-01
The Mars Atmosphere and Volatile EvolutioN (MAVEN) spacecraft is a NASA orbiter designed to explore the Mars upper atmosphere, typically from 140 to 160 km altitude. In addition to the nominal science mission, MAVEN has performed several Deep Dip campaigns in which the orbit's closest point of approach, also called periapsis, was lowered to an altitude range of 115 to 135 km. MAVEN accelerometer data were used during mission operations to estimate atmospheric parameters such as density, scale height, along-track gradients, and wave structures. Density and scale height estimates were compared against those obtained from the Mars Global Reference Atmospheric Model and used to aid the MAVEN navigation team in planning maneuvers to raise and lower periapsis during Deep Dip operations. This paper describes the processes used to reconstruct atmosphere parameters from accelerometers data and presents the results of their comparison to model and navigation-derived values.
EMITTING ELECTRONS AND SOURCE ACTIVITY IN MARKARIAN 501
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mankuzhiyil, Nijil; Ansoldi, Stefano; Persic, Massimo
2012-07-10
We study the variation of the broadband spectral energy distribution (SED) of the BL Lac object Mrk 501 as a function of source activity, from quiescent to flaring. Through {chi}{sup 2}-minimization we model eight simultaneous SED data sets with a one-zone synchrotron self-Compton (SSC) model, and examine how model parameters vary with source activity. The emerging variability pattern of Mrk 501 is complex, with the Compton component arising from {gamma}-e scatterings that sometimes are (mostly) Thomson and sometimes (mostly) extreme Klein-Nishina. This can be seen from the variation of the Compton to synchrotron peak distance according to source state. Themore » underlying electron spectra are faint/soft in quiescent states and bright/hard in flaring states. A comparison with Mrk 421 suggests that the typical values of the SSC parameters are different in the two sources: however, in both jets the energy density is particle-dominated in all states.« less
NASA Astrophysics Data System (ADS)
Gillette, V. H.; Patiño, N. E.; Granada, J. R.; Mayer, R. E.
1989-08-01
Using a synthetic incoherent scattering function which describes the interaction of neutrons with molecular gases we provide analytical expressions for zero- and first-order scattering kernels, σ0( E0 → E), σ1( E0 → E), and total cross section σ0( E0). Based on these quantities, we have performed calculations of thermalization parameters and transport coefficients for H 2O, D 2O, C 6H 6 and (CH 2) n at room temperature. Comparison of such values with available experimental data and other calculations is satisfactory. We also generated nuclear data libraries for H 2O with 47 thermal groups at 300 K and performed some benchmark calculations ( 235U, 239Pu, PWR cell and typical APWR cell); the resulting reactivities are compared with experimental data and ENDF/B-IV calculations.
NASA Astrophysics Data System (ADS)
Dudaryonok, A. S.; Lavrentieva, N. N.; Buldyreva, J.
2018-06-01
(J, K)-line broadening and shift coefficients with their temperature-dependence characteristics are computed for the perpendicular (ΔK = ±1) ν6 band of the 12CH3D-N2 system. The computations are based on a semi-empirical approach which consists in the use of analytical Anderson-type expressions multiplied by a few-parameter correction factor to account for various deviations from Anderson's theory approximations. A mathematically convenient form of the correction factor is chosen on the basis of experimental rotational dependencies of line widths, and its parameters are fitted on some experimental line widths at 296 K. To get the unknown CH3D polarizability in the excited vibrational state v6 for line-shift calculations, a parametric vibration-state-dependent expression is suggested, with two parameters adjusted on some room-temperature experimental values of line shifts. Having been validated by comparison with available in the literature experimental values for various sub-branches of the band, this approach is used to generate massive data of line-shape parameters for extended ranges of rotational quantum numbers (J up to 70 and K up to 20) typically requested for spectroscopic databases. To obtain the temperature-dependence characteristics of line widths and line shifts, computations are done for various temperatures in the range 200-400 K recommended for HITRAN and least-squares fit procedures are applied. For the case of line widths strong sub-branch dependence with increasing K is observed in the R- and P-branches; for the line shifts such dependence is stated for the Q-branch.
Constraints on CDM cosmology from galaxy power spectrum, CMB and SNIa evolution
NASA Astrophysics Data System (ADS)
Ferramacho, L. D.; Blanchard, A.; Zolnierowski, Y.
2009-05-01
Aims: We examine the constraints that can be obtained on standard cold dark matter models from the most currently used data set: CMB anisotropies, type Ia supernovae and the SDSS luminous red galaxies. We also examine how these constraints are widened when the equation of state parameter w and the curvature parameter Ωk are left as free parameters. Finally, we investigate the impact on these constraints of a possible form of evolution in SNIa intrinsic luminosity. Methods: We obtained our results from MCMC analysis using the full likelihood of each data set. Results: For the ΛCDM model, our “vanilla” model, cosmological parameters are tightly constrained and consistent with current estimates from various methods. When the dark energy parameter w is free we find that the constraints remain mostly unchanged, i.e. changes are smaller than the 1 sigma uncertainties. Similarly, relaxing the assumption of a flat universe leads to nearly identical constraints on the dark energy density parameter of the universe Ω_Λ , baryon density of the universe Ω_b, the optical depth τ, the index of the power spectrum of primordial fluctuations n_S, with most one sigma uncertainties better than 5%. More significant changes appear on other parameters: while preferred values are almost unchanged, uncertainties for the physical dark matter density Ω_ch^2, Hubble constant H0 and σ8 are typically twice as large. The constraint on the age of the Universe, which is very accurate for the vanilla model, is the most degraded. We found that different methodological approaches on large scale structure estimates lead to appreciable differences in preferred values and uncertainty widths. We found that possible evolution in SNIa intrinsic luminosity does not alter these constraints by much, except for w, for which the uncertainty is twice as large. At the same time, this possible evolution is severely constrained. Conclusions: We conclude that systematic uncertainties for some estimated quantities are similar or larger than statistical ones.
Ismail, Ahmad Muhaimin; Mohamad, Mohd Saberi; Abdul Majid, Hairudin; Abas, Khairul Hamimah; Deris, Safaai; Zaki, Nazar; Mohd Hashim, Siti Zaiton; Ibrahim, Zuwairie; Remli, Muhammad Akmal
2017-12-01
Mathematical modelling is fundamental to understand the dynamic behavior and regulation of the biochemical metabolisms and pathways that are found in biological systems. Pathways are used to describe complex processes that involve many parameters. It is important to have an accurate and complete set of parameters that describe the characteristics of a given model. However, measuring these parameters is typically difficult and even impossible in some cases. Furthermore, the experimental data are often incomplete and also suffer from experimental noise. These shortcomings make it challenging to identify the best-fit parameters that can represent the actual biological processes involved in biological systems. Computational approaches are required to estimate these parameters. The estimation is converted into multimodal optimization problems that require a global optimization algorithm that can avoid local solutions. These local solutions can lead to a bad fit when calibrating with a model. Although the model itself can potentially match a set of experimental data, a high-performance estimation algorithm is required to improve the quality of the solutions. This paper describes an improved hybrid of particle swarm optimization and the gravitational search algorithm (IPSOGSA) to improve the efficiency of a global optimum (the best set of kinetic parameter values) search. The findings suggest that the proposed algorithm is capable of narrowing down the search space by exploiting the feasible solution areas. Hence, the proposed algorithm is able to achieve a near-optimal set of parameters at a fast convergence speed. The proposed algorithm was tested and evaluated based on two aspartate pathways that were obtained from the BioModels Database. The results show that the proposed algorithm outperformed other standard optimization algorithms in terms of accuracy and near-optimal kinetic parameter estimation. Nevertheless, the proposed algorithm is only expected to work well in small scale systems. In addition, the results of this study can be used to estimate kinetic parameter values in the stage of model selection for different experimental conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
Chwirot, B W; Chwirot, S; Sypniewska, N; Michniewicz, Z; Redzinski, J; Kurzawski, G; Ruka, W
2001-12-01
Multicenter study of the diagnostic parameters was conducted by three groups in Poland to determine if in situ fluorescence detection of human cutaneous melanoma based on digital imaging of spectrally resolved autofluorescence can be used as a tool for a preliminary selection of patients at increased risk of the disease. Fluorescence examinations were performed for 7228 pigmented lesions in 4079 subjects. Histopathologic examinations showed 56 cases of melanoma. A sensitivity of fluorescence detection of melanoma was 82.7% in agreement with 82.5% found in earlier work. Using as a reference only the results of histopathologic examinations obtained for 568 cases we found a specificity of 59.9% and a positive predictive value of 17.5% (melanomas versus all pigmented lesions) or 24% (melanomas versus common and dysplastic naevi). The specificity and positive predictive value found in this work are significantly lower than reported earlier but still comparable with those reported for typical screening programs. In conclusion, the fluorescence method of in situ detection of melanoma can be used in screening large populations of patients for a selection of patients who should be examined by specialists.
Kityk, A V
2014-07-15
A long-range-corrected time-dependent density functional theory (LC-TDDFT) in combination with polarizable continuum model (PCM) have been applied to study charge transfer (CT) optical absorption and fluorescence emission energies basing on parameterized LC-BLYP xc-potential. The molecule of 4-(9-acridyl)julolidine selected for this study represents typical CT donor-acceptor dye with strongly solvent dependent optical absorption and fluorescence emission spectra. The result of calculations are compared with experimental spectra reported in the literature to derive an optimal value of the model screening parameter ω. The first absorption band appears to be quite well predictable within DFT/TDDFT/PCM with the screening parameter ω to be solvent independent (ω ≈ 0.245 Bohr(-1)) whereas the fluorescence emission exhibits a strong dependence on the range separation with ω-value varying on a rising solvent polarity from about 0.225 to 0.151 Bohr(-1). Dipolar properties of the initial state participating in the electronic transition have crucial impact on the effective screening. Copyright © 2014 Elsevier B.V. All rights reserved.
Characterizing Isozymes of Chlorite Dismutase for Water Treatment
Mobilia, Kellen C.; Hutchison, Justin M.; Zilles, Julie L.
2017-01-01
This work investigated the potential for biocatalytic degradation of micropollutants, focusing on chlorine oxyanions as model contaminants, by mining biology to identify promising biocatalysts. Existing isozymes of chlorite dismutase (Cld) were characterized with respect to parameters relevant to this high volume, low-value product application: kinetic parameters, resistance to catalytic inactivation, and stability. Maximum reaction velocities (Vmax) were typically on the order of 104 μmol min-1 (μmol heme)-1. Substrate affinity (Km) values were on the order of 100 μM, except for the Cld from Candidatus Nitrospira defluvii (NdCld), which showed a significantly lower affinity for chlorite. NdCld also had the highest susceptibility to catalytic inactivation. In contrast, the Cld from Ideonella dechloratans was least susceptible to catalytic inactivation, with a maximum turnover number of approximately 150,000, more than sevenfold higher than other tested isozymes. Under non-reactive conditions, Cld was quite stable, retaining over 50% of activity after 30 days, and most samples retained activity even after 90–100 days. Overall, Cld from I. dechloratans was the most promising candidate for environmental applications, having high affinity and activity, a relatively low propensity for catalytic inactivation, and excellent stability. PMID:29312158
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fast, Ivan; Bosbach, Dirk; Aksyutina, Yuliya
A requisite for the official approval of the safe final disposal of SNF is a comprehensive specification and declaration of the nuclear inventory in SNF by the waste supplier. In the verification process both the values of the radionuclide (RN) activities and their uncertainties are required. Burn-up (BU) calculations based on typical and generic reactor operational parameters do not encompass any possible uncertainties observed in real reactor operations. At the same time, the details of the irradiation history are often not well known, which complicates the assessment of declared RN inventories. Here, we have compiled a set of burnup calculationsmore » accounting for the operational history of 339 published or anonymized real PWR fuel assemblies (FA). These histories were used as a basis for a 'SRP analysis', to provide information about the range of the values of the associated secondary reactor parameters (SRP's). Hence, we can calculate the realistic variation or spectrum of RN inventories. SCALE 6.1 has been employed for the burn-up calculations. The results have been validated using experimental data from the online database - SFCOMPO-1 and -2. (authors)« less
Orientational order in smectic liquid-crystalline phases of amphiphilic diols
NASA Astrophysics Data System (ADS)
Giesselmann, Frank; Germer, Roland; Saipa, Alexander
2005-07-01
The thermotropic smectic phases of amphiphilic 2-(trans-4-n-alkylcyclohexyl)-propane-1,3-diols were investigated by means of small- and wide-angle x-ray scattering and values of the smectic (bi-)layer spacing, the orientational order parameters ⟨P2⟩ and ⟨P4⟩, the orientational distribution function as well as the intralayer correlation length were extracted from the scattering profiles. The results for the octyl homolog indicate that these smectic phases combine a very high degree of smectic one-dimensional-translational order with remarkably low orientational order, the order parameter of which (⟨P2⟩≈0.56) is far below those values typically found in nonamphiphilic smectics. This combination, quite exceptional in thermotropic smectics, most likely originates from the intermolecular hydrogen bonding between the terminal diol groups which seems to be the specific driving force in the formation of the thermotropic smectic structure in these amphiphiles and leads to a type of microphase segregation. Even in the absence of a solvent, the liquid-crystalline ordering of the amphiphilic mesogens comes close to the structure of the so-called neat soaps, found in lyotropic liquid crystals.
King, Randy L; Liu, Yunbo; Maruvada, Subha; Herman, Bruce A; Wear, Keith A; Harris, Gerald R
2011-07-01
A tissue-mimicking material (TMM) for the acoustic and thermal characterization of high-intensity focused ultrasound (HIFU) devices has been developed. The material is a high-temperature hydrogel matrix (gellan gum) combined with different sizes of aluminum oxide particles and other chemicals. The ultrasonic properties (attenuation coefficient, speed of sound, acoustical impedance, and the thermal conductivity and diffusivity) were characterized as a function of temperature from 20 to 70°C. The backscatter coefficient and nonlinearity parameter B/A were measured at room temperature. Importantly, the attenuation coefficient has essentially linear frequency dependence, as is the case for most mammalian tissues at 37°C. The mean value is 0.64f(0.95) dB·cm(-1) at 20°C, based on measurements from 2 to 8 MHz. Most of the other relevant physical parameters are also close to the reported values, although backscatter signals are low compared with typical human soft tissues. Repeatable and consistent temperature elevations of 40°C were produced under 20-s HIFU exposures in the TMM. This TMM is appropriate for developing standardized dosimetry techniques, validating numerical models, and determining the safety and efficacy of HIFU devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Audren, Benjamin; Bellini, Emilio; Cuesta, Antonio J.
The existence of a cosmic neutrino background can be probed indirectly by CMB experiments, not only by measuring the background density of radiation in the universe, but also by searching for the typical signatures of the fluctuations of free-streaming species in the temperature and polarisation power spectrum. Previous studies have already proposed a rather generic parametrisation of these fluctuations, that could help to discriminate between the signature of ordinary free-streaming neutrinos, or of more exotic dark radiation models. Current data are compatible with standard values of these parameters, which seems to bring further evidence for the existence of a cosmicmore » neutrino background. In this work, we investigate the robustness of this conclusion under various assumptions. We generalise the definition of an effective sound speed and viscosity speed to the case of massive neutrinos or other dark radiation components experiencing a non-relativistic transition. We show that current bounds on these effective parameters do not vary significantly when considering an arbitrary value of the particle mass, or extended cosmological models with a free effective neutrino number, dynamical dark energy or a running of the primordial spectrum tilt. We conclude that it is possible to make a robust statement about the detection of the cosmic neutrino background by CMB experiments.« less
Bayesian model for fate and transport of polychlorinated biphenyl in upper Hudson River
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steinberg, L.J.; Reckhow, K.H.; Wolpert, R.L.
1996-05-01
Modelers of contaminant fate and transport in surface waters typically rely on literature values when selecting parameter values for mechanistic models. While the expert judgment with which these selections are made is valuable, the information contained in contaminant concentration measurements should not be ignored. In this full-scale Bayesian analysis of polychlorinated biphenyl (PCB) contamination in the upper Hudson River, these two sources of information are combined using Bayes` theorem. A simulation model for the fate and transport of the PCBs in the upper Hudson River forms the basis of the likelihood function while the prior density is developed from literaturemore » values. The method provides estimates for the anaerobic biodegradation half-life, aerobic biodegradation plus volatilization half-life, contaminated sediment depth, and resuspension velocity of 4,400 d, 3.2 d, 0.32 m, and 0.02 m/yr, respectively. These are significantly different than values obtained with more traditional methods, and are shown to produce better predictions than those methods when used in a cross-validation study.« less
An interactive Bayesian geostatistical inverse protocol for hydraulic tomography
Fienen, Michael N.; Clemo, Tom; Kitanidis, Peter K.
2008-01-01
Hydraulic tomography is a powerful technique for characterizing heterogeneous hydrogeologic parameters. An explicit trade-off between characterization based on measurement misfit and subjective characterization using prior information is presented. We apply a Bayesian geostatistical inverse approach that is well suited to accommodate a flexible model with the level of complexity driven by the data and explicitly considering uncertainty. Prior information is incorporated through the selection of a parameter covariance model characterizing continuity and providing stability. Often, discontinuities in the parameter field, typically caused by geologic contacts between contrasting lithologic units, necessitate subdivision into zones across which there is no correlation among hydraulic parameters. We propose an interactive protocol in which zonation candidates are implied from the data and are evaluated using cross validation and expert knowledge. Uncertainty introduced by limited knowledge of dynamic regional conditions is mitigated by using drawdown rather than native head values. An adjoint state formulation of MODFLOW-2000 is used to calculate sensitivities which are used both for the solution to the inverse problem and to guide protocol decisions. The protocol is tested using synthetic two-dimensional steady state examples in which the wells are located at the edge of the region of interest.
Muon g - 2 in the aligned two Higgs doublet model
Han, Tao; Kang, Sin Kyu; Sayre, Joshua
2016-02-16
In this paper, we study the Two-Higgs-Doublet Model with the aligned Yukawa sector (A2HDM) in light of the observed excess measured in the muon anomalous magnetic moment. We take into account the existing theoretical and experimental constraints with up-to-date values and demonstrate that a phenomenologically interesting region of parameter space exists. With a detailed parameter scan, we show a much larger region of viable parameter space in this model beyond the limiting case Type X 2HDM as obtained before. It features the existence of light scalar states with masses 3 GeV ≲ m H ≲ 50 GeV, or 10 GeVmore » ≲ m A ≲ 130 GeV, with enhanced couplings to tau leptons. The charged Higgs boson is typically heavier, with 200 GeV ≲ m H+ ≲ 630 GeV. The surviving parameter space is forced into the CP-conserving limit by EDM constraints. Some Standard Model observables may be significantly modified, including a possible new decay mode of the SMlike Higgs boson to four taus. Lastly, we comment on future measurements and direct searches for those effects at the LHC as tests of the model.« less
Consistent parameter fixing in the quark-meson model with vacuum fluctuations
NASA Astrophysics Data System (ADS)
Carignano, Stefano; Buballa, Michael; Elkamhawy, Wael
2016-08-01
We revisit the renormalization prescription for the quark-meson model in an extended mean-field approximation, where vacuum quark fluctuations are included. At a given cutoff scale the model parameters are fixed by fitting vacuum quantities, typically including the sigma-meson mass mσ and the pion decay constant fπ. In most publications the latter is identified with the expectation value of the sigma field, while for mσ the curvature mass is taken. When quark loops are included, this prescription is however inconsistent, and the correct identification involves the renormalized pion decay constant and the sigma pole mass. In the present article we investigate the influence of the parameter-fixing scheme on the phase structure of the model at finite temperature and chemical potential. Despite large differences between the model parameters in the two schemes, we find that in homogeneous matter the effect on the phase diagram is relatively small. For inhomogeneous phases, on the other hand, the choice of the proper renormalization prescription is crucial. In particular, we show that if renormalization effects on the pion decay constant are not considered, the model does not even present a well-defined renormalized limit when the cutoff is sent to infinity.
NASA Astrophysics Data System (ADS)
Kvale, Karin F.; Meissner, Katrin J.
2017-10-01
Treatment of the underwater light field in ocean biogeochemical models has been attracting increasing interest, with some models moving towards more complex parameterisations. We conduct a simple sensitivity study of a typical, highly simplified parameterisation. In our study, we vary the phytoplankton light attenuation parameter over a range constrained by data during both pre-industrial equilibrated and future climate scenario RCP8.5. In equilibrium, lower light attenuation parameters (weaker self-shading) shift net primary production (NPP) towards the high latitudes, while higher values of light attenuation (stronger shelf-shading) shift NPP towards the low latitudes. Climate forcing magnifies this relationship through changes in the distribution of nutrients both within and between ocean regions. Where and how NPP responds to climate forcing can determine the magnitude and sign of global NPP trends in this high CO2 future scenario. Ocean oxygen is particularly sensitive to parameter choice. Under higher CO2 concentrations, two simulations establish a strong biogeochemical feedback between the Southern Ocean and low-latitude Pacific that highlights the potential for regional teleconnection. Our simulations serve as a reminder that shifts in fundamental properties (e.g. light attenuation by phytoplankton) over deep time have the potential to alter global biogeochemistry.
On justification of efficient Energy-Force parameters of Hydraulic-excavator main mechanisms
NASA Astrophysics Data System (ADS)
Komissarov, Anatoliy; Lagunova, Yuliya; Shestakov, Viktor; Lukashuk, Olga
2018-03-01
The article formulates requirements for energy-efficient designs of the operational equipment of a hydraulic excavator (its boom, stick and bucket) and defines, for a mechanism of that equipment, a new term “performance characteristic”. The drives of main rotation mechanisms of the equipment are realized by hydraulic actuators (hydraulic cylinders) and transmission (leverage) mechanisms, with the actuators (the cylinders themselves, their pistons and piston rods) also acting as links of the leverage. Those drives are characterized by the complexity of translating mechanical-energy parameters of the actuators into energy parameters of the driven links (a boom, a stick and a bucket). Relations between those parameters depend as much on the types of mechanical characteristics of the hydraulic actuators as on the types of structural schematics of the transmission mechanisms. To assess how energy-force parameters of the driven links change when a typical operation is performed, it was proposed to calculate performance characteristics of the main mechanisms as represented by a set of values of transfer functions, i.e. by functional dependences between driven links and driving links (actuators). Another term “ideal performance characteristic” of a mechanism was introduced. Based on operation-emulating models for the main mechanisms of hydraulic excavators, analytical expressions were derived to calculate kinematic and force transfer functions of the main mechanisms.
Fine-structure constant constraints on dark energy. II. Extending the parameter space
NASA Astrophysics Data System (ADS)
Martins, C. J. A. P.; Pinho, A. M. M.; Carreira, P.; Gusart, A.; López, J.; Rocha, C. I. S. A.
2016-01-01
Astrophysical tests of the stability of fundamental couplings, such as the fine-structure constant α , are a powerful probe of new physics. Recently these measurements, combined with local atomic clock tests and Type Ia supernova and Hubble parameter data, were used to constrain the simplest class of dynamical dark energy models where the same degree of freedom is assumed to provide both the dark energy and (through a dimensionless coupling, ζ , to the electromagnetic sector) the α variation. One caveat of these analyses was that it was based on fiducial models where the dark energy equation of state was described by a single parameter (effectively its present day value, w0). Here we relax this assumption and study broader dark energy model classes, including the Chevallier-Polarski-Linder and early dark energy parametrizations. Even in these extended cases we find that the current data constrains the coupling ζ at the 1 0-6 level and w0 to a few percent (marginalizing over other parameters), thus confirming the robustness of earlier analyses. On the other hand, the additional parameters are typically not well constrained. We also highlight the implications of our results for constraints on violations of the weak equivalence principle and improvements to be expected from forthcoming measurements with high-resolution ultrastable spectrographs.
Assessing Interval Estimation Methods for Hill Model ...
The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model paramet
Astashkin, Andrei V; Neese, Frank; Raitsimring, Arnold M; Cooney, J Jon A; Bultman, Eric; Enemark, John H
2005-11-30
Ka band ESEEM spectroscopy was used to determine the hyperfine (hfi) and nuclear quadrupole (nqi) interaction parameters for the oxo-17O ligand in [Mo 17O(SPh)4]-, a spectroscopic model of the oxo-Mo(V) centers of enzymes. The isotropic hfi constant of 6.5 MHz found for the oxo-17O is much smaller than the values of approximately 20-40 MHz typical for the 17O nucleus of an equatorial OH(2) ligand in molybdenum enzymes. The 17O nqi parameter (e2qQ/h = 1.45 MHz, eta approximately = 0) is the first to be obtained for an oxo group in a metal complex. The parameters of the oxo-17O ligand, as well as other magnetic resonance parameters of [Mo 17O(SPh)4]- predicted by quasi-relativistic DFT calculations, were in good agreement with those obtained in experiment. From the electronic structure of the complex revealed by DFT, it follows that the SOMO is almost entirely molybdenum d(xy) and sulfur p, while the spin density on the oxo-17O is negative, determined by spin polarization mechanisms. The results of this work will enable direct experimental identification of the oxo ligand in a variety of chemical and biological systems.
Lessons Learned from Six Decades of Radio Polarimetry
NASA Astrophysics Data System (ADS)
Wiesemeyer, Helmut; Güsten, R.; Kreysa, E.; Menten, K. M.; Morris, D.; Paubert, G.; Pillai, T.; Sievers, A.; Thum, C.
2018-01-01
The characterization of polarized emission from continuum radiation and spectral lines across large-scale galactic and extragalactic fields is a typical application of single-dish telescopes, from radio to far-infrared wavelengths. Despite its high analytical value, in many cases polarimetry was added to the design specifications of telescopes and their frontends only in advanced development stages. While in some situations the instrumental contamination of the Stokes parameters can be corrected, this becomes increasingly difficult for extended fields. This contribution summarizes the current situation at mm/submm telescopes. Strategies for post-observing polarization calibration are presented as well as methods to optimize the components in the beam path.
Simple radiative transfer model for relationships between canopy biomass and reflectance
NASA Technical Reports Server (NTRS)
Park, J. K.; Deering, D. W.
1982-01-01
A modified Kubelka-Munk model has been utilized to derive useful equations for the analysis of apparent canopy reflectance. Based on the solution to the model simple working equations were formulated by employing reflectance characteristic parameters. The relationships derived show the asymptotic nature of reflectance data that is typically observed in remote sensing studies of plant biomass. They also establish the range of expected apparent canopy reflectance values for specific plant canopy types. The usefulness of the simplified equations was demonstrated by the exceptionally close fit of the theoretical curves to two separately acquired data sets for alfalfa and shortgrass prairie canopies.
NASA Technical Reports Server (NTRS)
Mcgoogan, J. T.; Leitao, C. D.; Wells, W. T.
1975-01-01
The SKYLAB S-193 altimeter altitude results are presented in a concise format for further use and analysis by the scientific community. The altimeter mission and instrumentation is described along with the altimeter processing techniques and values of parameters used for processing. The determination of reference orbits is discussed, and the tracking systems utilized are tabulated. Techniques for determining satellite pointing are presented and a tabulation of pointing for each data mission included. The geographical location, the ocean bottom topography, the altimeter-determined ocean surface topography, and the altimeter automatic gain control history is presented. Some typical applications of this data are suggested.
Fundamental studies in X-ray astrophysics
NASA Technical Reports Server (NTRS)
Lamb, D. Q.; Lightman, A. P.
1982-01-01
An analytical model calculation of the ionization structure of matter accreting onto a degenerate dwarf was carried out. Self-consistent values of the various parameters are used. The possibility of nuclear burning of the accreting matter is included. We find the blackbody radiation emitted from the stellar surface keeps hydrogen and helium ionized out to distances much larger than a typical binary separation. Except for low mass stars or high accretion rates, the assumption of complete ionization of the elements heavier than helium is a good first approximation. For low mass stars or high accretion rates the validity of assuming complete ionization depends sensitivity on the distribution of matter in the binary system.
Sensitivity analysis of periodic errors in heterodyne interferometry
NASA Astrophysics Data System (ADS)
Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony
2011-03-01
Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
SF-FDTD analysis of a predictive physical model for parallel aligned liquid crystal devices
NASA Astrophysics Data System (ADS)
Márquez, Andrés.; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Alvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto
2017-08-01
Recently we demonstrated a novel and simplified model enabling to calculate the voltage dependent retardance provided by parallel aligned liquid crystal devices (PA-LCoS) for a very wide range of incidence angles and any wavelength in the visible. To our knowledge it represents the most simplified approach still showing predictive capability. Deeper insight into the physics behind the simplified model is necessary to understand if the parameters in the model are physically meaningful. Since the PA-LCoS is a black-box where we do not have information about the physical parameters of the device, we cannot perform this kind of analysis using the experimental retardance measurements. In this work we develop realistic simulations for the non-linear tilt of the liquid crystal director across the thickness of the liquid crystal layer in the PA devices. We consider these profiles to have a sine-like shape, which is a good approximation for typical ranges of applied voltage in commercial PA-LCoS microdisplays. For these simulations we develop a rigorous method based on the split-field finite difference time domain (SF-FDTD) technique which provides realistic retardance values. These values are used as the experimental measurements to which the simplified model is fitted. From this analysis we learn that the simplified model is very robust, providing unambiguous solutions when fitting its parameters. We also learn that two of the parameters in the model are physically meaningful, proving a useful reverse-engineering approach, with predictive capability, to probe into internal characteristics of the PA-LCoS device.
Cost analysis of composite fan blade manufacturing processes
NASA Technical Reports Server (NTRS)
Stelson, T. S.; Barth, C. F.
1980-01-01
The relative manufacturing costs were estimated for large high technology fan blades prepared by advanced composite fabrication methods using seven candidate materials/process systems. These systems were identified as laminated resin matrix composite, filament wound resin matrix composite, superhybrid solid laminate, superhybrid spar/shell, metal matrix composite, metal matrix composite with a spar and shell, and hollow titanium. The costs were calculated utilizing analytical process models and all cost data are presented as normalized relative values where 100 was the cost of a conventionally forged solid titanium fan blade whose geometry corresponded to a size typical of 42 blades per disc. Four costs were calculated for each of the seven candidate systems to relate the variation of cost on blade size. Geometries typical of blade designs at 24, 30, 36 and 42 blades per disc were used. The impact of individual process yield factors on costs was also assessed as well as effects of process parameters, raw materials, labor rates and consumable items.
NASA Astrophysics Data System (ADS)
Chan, GuoXuan; Wang, Xin
2018-04-01
We consider two typical approximations that are used in the microscopic calculations of double-quantum dot spin qubits, namely, the Heitler-London (HL) and the Hund-Mulliken (HM) approximations, which use linear combinations of Fock-Darwin states to approximate the two-electron states under the double-well confinement potential. We compared these results to a case in which the solution to a one-dimensional Schr¨odinger equation was exactly known and found that typical microscopic calculations based on Fock-Darwin states substantially underestimate the value of the exchange interaction, which is the key parameter that controls the quantum dot spin qubits. This underestimation originates from the lack of tunneling of Fock-Darwin states, which is accurate only in the case with a single potential well. Our results suggest that the accuracies of the current two-dimensional molecular- orbit-theoretical calculations based on Fock-Darwin states should be revisited since underestimation could only deteriorate in dimensions that are higher than one.
The use of Meteonorm weather generator for climate change studies
NASA Astrophysics Data System (ADS)
Remund, J.; Müller, S. C.; Schilter, C.; Rihm, B.
2010-09-01
The global climatological database Meteonorm (www.meteonorm.com) is widely used as meteorological input for simulation of solar applications and buildings. It's a combination of a climate database, a spatial interpolation tool and a stochastic weather generator. Like this typical years with hourly or minute time resolution can be calculated for any site. The input of Meteonorm for global radiation is the Global Energy Balance Archive (GEBA, http://proto-geba.ethz.ch). All other meteorological parameters are taken from databases of WMO and NCDC (periods 1961-90 and 1996-2005). The stochastic generation of global radiation is based on a Markov chain model for daily values and an autoregressive model for hourly and minute values (Aguiar and Collares-Pereira, 1988 and 1992). The generation of temperature is based on global radiation and measured distribution of daily temperature values of approx. 5000 sites. Meteonorm generates also additional parameters like precipitation, wind speed or radiation parameters like diffuse and direct normal irradiance. Meteonorm can also be used for climate change studies. Instead of climate values, the results of IPCC AR4 results are used as input. From all 18 public models an average has been made at a resolution of 1°. The anomalies of the parameters temperature, precipitation and global radiation and the three scenarios B1, A1B and A2 have been included. With the combination of Meteonorm's current database 1961-90, the interpolation algorithms and the stochastic generation typical years can be calculated for any site, for different scenarios and for any period between 2010 and 2200. From the analysis of variations of year to year and month to month variations of temperature, precipitation and global radiation of the past ten years as well of climate model forecasts (from project prudence, http://prudence.dmi.dk) a simple autoregressive model has been formed which is used to generate realistic monthly time series of future periods. Meteonorm can therefore be used as a relatively simple method to enhance the spatial and temporal resolution instead of using complicated and time consuming downscaling methods based on regional climate models. The combination of Meteonorm, gridded historical (based on work of Luterbach et al.) and IPCC results has been used for studies of vegetation simulation between 1660 and 2600 (publication of first version based on IS92a scenario and limited time period 1950 - 2100: http://www.pbl.nl/images/H5_Part2_van%20CCE_opmaak%28def%29_tcm61-46625.pdf). It's also applicable for other adaptation studies for e.g. road surfaces or building simulation. In Meteonorm 6.1 one scenario (IS92a) and one climate model has been included (Hadley CM3). In the new Meteonorm 7 (coming spring 2011) the model averages of the three above mentioned scenarios of the IPCC AR4 will be included.
Collective behaviour in vertebrates: a sensory perspective
Collignon, Bertrand; Fernández-Juricic, Esteban
2016-01-01
Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases, these models make biologically unrealistic assumptions in terms of the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models, we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and constrain unrealistically their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models in terms of choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616
Evolutionary algorithm for vehicle driving cycle generation.
Perhinschi, Mario G; Marlowe, Christopher; Tamayo, Sergio; Tu, Jun; Wayne, W Scott
2011-09-01
Modeling transit bus emissions and fuel economy requires a large amount of experimental data over wide ranges of operational conditions. Chassis dynamometer tests are typically performed using representative driving cycles defined based on vehicle instantaneous speed as sequences of "microtrips", which are intervals between consecutive vehicle stops. Overall significant parameters of the driving cycle, such as average speed, stops per mile, kinetic intensity, and others, are used as independent variables in the modeling process. Performing tests at all the necessary combinations of parameters is expensive and time consuming. In this paper, a methodology is proposed for building driving cycles at prescribed independent variable values using experimental data through the concatenation of "microtrips" isolated from a limited number of standard chassis dynamometer test cycles. The selection of the adequate "microtrips" is achieved through a customized evolutionary algorithm. The genetic representation uses microtrip definitions as genes. Specific mutation, crossover, and karyotype alteration operators have been defined. The Roulette-Wheel selection technique with elitist strategy drives the optimization process, which consists of minimizing the errors to desired overall cycle parameters. This utility is part of the Integrated Bus Information System developed at West Virginia University.
Bflinks: Reliable Bugfix Links via Bidirectional References and Tuned Heuristics
2014-01-01
Background. Data from software version archives and defect databases can be used for defect insertion circumstance analysis and defect prediction. The first step in such analyses is identifying defect-correcting changes in the version archive (bugfix commits) and enriching them with additional metadata by establishing bugfix links to corresponding entries in the defect database. Candidate bugfix commits are typically identified via heuristic string matching on the commit message. Research Questions. Which filters could be used to obtain a set of bugfix links? How to tune their parameters? What accuracy is achieved? Method. We analyze a modular set of seven independent filters, including new ones that make use of reverse links, and evaluate visual heuristics for setting cutoff parameters. For a commercial repository, a product expert manually verifies over 2500 links to validate the results with unprecedented accuracy. Results. The heuristics pick a very good parameter value for five filters and a reasonably good one for the sixth. The combined filtering, called bflinks, provides 93% precision and only 7% results loss. Conclusion. Bflinks can provide high-quality results and adapts to repositories with different properties. PMID:27433506
Crop Damage by Primates: Quantifying the Key Parameters of Crop-Raiding Events
Wallace, Graham E.; Hill, Catherine M.
2012-01-01
Human-wildlife conflict often arises from crop-raiding, and insights regarding which aspects of raiding events determine crop loss are essential when developing and evaluating deterrents. However, because accounts of crop-raiding behaviour are frequently indirect, these parameters are rarely quantified or explicitly linked to crop damage. Using systematic observations of the behaviour of non-human primates on farms in western Uganda, this research identifies number of individuals raiding and duration of raid as the primary parameters determining crop loss. Secondary factors include distance travelled onto farm, age composition of the raiding group, and whether raids are in series. Regression models accounted for greater proportions of variation in crop loss when increasingly crop and species specific. Parameter values varied across primate species, probably reflecting differences in raiding tactics or perceptions of risk, and thereby providing indices of how comfortable primates are on-farm. Median raiding-group sizes were markedly smaller than the typical sizes of social groups. The research suggests that key parameters of raiding events can be used to measure the behavioural impacts of deterrents to raiding. Furthermore, farmers will benefit most from methods that discourage raiding by multiple individuals, reduce the size of raiding groups, or decrease the amount of time primates are on-farm. This study demonstrates the importance of directly relating crop loss to the parameters of raiding events, using systematic observations of the behaviour of multiple primate species. PMID:23056378
2014-01-01
Background The levels of 19 elements (As, Be, Ca, Cd, Co, Cr, Cu, Fe, K, Mg, Mn, Na, Ni, Pb, Se, Tl, U, V, Zn) from sixteen different Argentine production sites of unifloral [eucalyptus (Eucaliptus rostrata), chilca (Baccharis salicifolia), Algarrobo (Prosopis sp.), mistol (Ziziphus mistol) and citric] and multifloral honeys were measured with the aim to test the quality of the selected samples. Typical quality parameters of honeys were also determined (pH, sugar content, moisture). Mineral elements were determined by using inductively coupled plasma mass spectrometer (ICP-MS DRC). We also evaluated the suitability of honey as a possible biomonitor of environmental pollution. Thus, the sites were classified through cluster analysis (CA) and then pattern recognition methods such as Principal Component Analysis (PCA) and discriminant analysis (DA) were applied. Results Mean values for quality parameters were: pH, 4.12 and 3.81; sugar 82.1 and 82.0 °brix; moisture, 16.90 and 17.00% for unifloral and multifloral honeys respectively. The water content showed good maturity. Likewise, the other parameters confirmed the good quality of the honeys analysed. Potassium was quantitatively the most abundant metal, accounting for 92,5% of the total metal contents with an average concentration of 832.0 and 816.2 μg g-1 for unifloral and multifloral honeys respectively. Sodium was the second most abundant major metal in honeys with a mean value of 32.16 and 33.19 μg g-1 for unifloral and multifloral honeys respectively. Mg, Ca, Fe, Mn, Zn and Cu were present at low-intermediate concentrations. For the other 11 trace elements determined in this study (As, Be, Cd, Co, Cr, Ni, Pb, Se, Tl, U and V), the mean concentrations were very low or below of the LODs. The sites were classified through CA by using elements’ and physicochemical parameters data, then DA on the PCA factors was applied. Dendrograms identified three main groups. PCA explained 52.03% of the total variability with the first two factors. Conclusions In general, there are no evidences of pollution for the analysed honeys. The analytical results obtained for the Argentine honeys indicate the products’ high quality. In fact, most of the toxic elements were below LODs. The chemometric analysis combining CA, DA and PCA showed their aptness as useful tools for honey’s classification. Eventually, this study confirms that the use of honey as biomonitor of environmental contamination is not reliable for sites with low levels of contamination. PMID:25057287
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeler, C; The University of Texas Graduate School of Biomedical Sciences at Houston, Houston, TX; Mirkovic, D
2016-06-15
Purpose: We identified patients treated for ependymoma with passive scattering proton therapy who subsequently developed treatment-related imaging changes on MRI. We sought to determine if there is any spatial correlation between imaged response, dose, and LET. Methods: A group of 14 patients treated for ependymoma were identified as having post-treatment MR imaging changes observable as T2-FLAIR hyperintensity with or without enhancement on T1 post-contrast sequences. MR images were registered with treatment planning CT images and regions of treatment-related change contoured by a practicing radiation oncologist. The contoured regions were identified as response with voxels represented as 1 while voxels withinmore » the brain outside of the response region were represented as 0. An in-house Monte Carlo system was used to recalculate treatment plans to obtain dose and LET information. Voxels were binned according to LET values in 0.3 keV µm{sup −1} bins. Dose and corresponding response value (0 or 1) for each voxel for a given LET bin were then plotted and fit with the Lyman-Kutcher-Burman dose response model to determine TD{sub 50} and m parameters for each LET value. Response parameters from all patients were then collated, and linear fits of the data were performed. Results: The response parameters TD50 and m both show trends with LET. Outliers were observed due to low numbers of response voxels in some cases. TD{sub 50} values decreased with LET while m increased with LET. The former result would indicate that for higher LET values, the dose is more effective, which is consistent with relative biological effectiveness (RBE) models for proton therapy. Conclusion: A novel method of voxel-level analysis of image biomarker-based adverse patient treatment response in proton therapy according to dose and LET has been presented. Fitted TD{sub 50} values show a decreasing trend with LET supporting the typical models of proton RBE. Funding provided by NIH Program Project Grant 2U19CA021239-35.« less
Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models
NASA Astrophysics Data System (ADS)
Field, Scott E.; Galley, Chad R.; Hesthaven, Jan S.; Kaye, Jason; Tiglio, Manuel
2014-07-01
We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform's value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL+mcfit) online operations, where cfit denotes the fitting function operation count and, typically, m ≪L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 105M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost. For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in generating new waveforms with a surrogate. As waveform generation is one of the dominant costs in parameter estimation algorithms and parameter space exploration, surrogate models offer a new and practical way to dramatically accelerate such studies without impacting accuracy. Surrogates built in this paper, as well as others, are available from GWSurrogate, a publicly available python package.
Evaluating Force-Field London Dispersion Coefficients Using the Exchange-Hole Dipole Moment Model.
Mohebifar, Mohamad; Johnson, Erin R; Rowley, Christopher N
2017-12-12
London dispersion interactions play an integral role in materials science and biophysics. Force fields for atomistic molecular simulations typically represent dispersion interactions by the 12-6 Lennard-Jones potential using empirically determined parameters. These parameters are generally underdetermined, and there is no straightforward way to test if they are physically realistic. Alternatively, the exchange-hole dipole moment (XDM) model from density-functional theory predicts atomic and molecular London dispersion coefficients from first principles, providing an innovative strategy to validate the dispersion terms of molecular-mechanical force fields. In this work, the XDM model was used to obtain the London dispersion coefficients of 88 organic molecules relevant to biochemistry and pharmaceutical chemistry and the values compared with those derived from the Lennard-Jones parameters of the CGenFF, GAFF, OPLS, and Drude polarizable force fields. The molecular dispersion coefficients for the CGenFF, GAFF, and OPLS models are systematically higher than the XDM-calculated values by a factor of roughly 1.5, likely due to neglect of higher order dispersion terms and premature truncation of the dispersion-energy summation. The XDM dispersion coefficients span a large range for some molecular-mechanical atom types, suggesting an unrecognized source of error in force-field models, which assume that atoms of the same type have the same dispersion interactions. Agreement with the XDM dispersion coefficients is even poorer for the Drude polarizable force field. Popular water models were also examined, and TIP3P was found to have dispersion coefficients similar to the experimental and XDM references, although other models employ anomalously high values. Finally, XDM-derived dispersion coefficients were used to parametrize molecular-mechanical force fields for five liquids-benzene, toluene, cyclohexane, n-pentane, and n-hexane-which resulted in improved accuracy in the computed enthalpies of vaporization despite only having to evaluate a much smaller section of the parameter space.
NASA Astrophysics Data System (ADS)
Dralle, D.; Karst, N.; Thompson, S. E.
2015-12-01
Multiple competing theories suggest that power law behavior governs the observed first-order dynamics of streamflow recessions - the important process by which catchments dry-out via the stream network, altering the availability of surface water resources and in-stream habitat. Frequently modeled as: dq/dt = -aqb, recessions typically exhibit a high degree of variability, even within a single catchment, as revealed by significant shifts in the values of "a" and "b" across recession events. One potential source of this variability lies in underlying, hard-to-observe fluctuations in how catchment water storage is partitioned amongst distinct storage elements, each having different discharge behaviors. Testing this and competing hypotheses with widely available streamflow timeseries, however, has been hindered by a power law scaling artifact that obscures meaningful covariation between the recession parameters, "a" and "b". Here we briefly outline a technique that removes this artifact, revealing intriguing new patterns in the joint distribution of recession parameters. Using long-term flow data from catchments in Northern California, we explore temporal variations, and find that the "a" parameter varies strongly with catchment wetness. Then we explore how the "b" parameter changes with "a", and find that measures of its variation are maximized at intermediate "a" values. We propose an interpretation of this pattern based on statistical mechanics, meaning "b" can be viewed as an indicator of the catchment "microstate" - i.e. the partitioning of storage - and "a" as a measure of the catchment macrostate (i.e. the total storage). In statistical mechanics, entropy (i.e. microstate variance, that is the variance of "b") is maximized for intermediate values of extensive variables (i.e. wetness, "a"), as observed in the recession data. This interpretation of "a" and "b" was supported by model runs using a multiple-reservoir catchment toy model, and lends support to the hypothesis that power law streamflow recession dynamics, and their variations, have their origin in the multiple modalities of storage partitioning.
Xi, Qing; Li, Zhao-Fu; Luo, Chuan
2014-05-01
Sensitivity analysis of hydrology and water quality parameters has a great significance for integrated model's construction and application. Based on AnnAGNPS model's mechanism, terrain, hydrology and meteorology, field management, soil and other four major categories of 31 parameters were selected for the sensitivity analysis in Zhongtian river watershed which is a typical small watershed of hilly region in the Taihu Lake, and then used the perturbation method to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that: in the 11 terrain parameters, LS was sensitive to all the model results, RMN, RS and RVC were generally sensitive and less sensitive to the output of sediment but insensitive to the remaining results. For hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive for the rest results. In field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. For soil parameters, K is quite sensitive to all the results except the runoff, the four parameters of the soil's nitrogen and phosphorus ratio (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results of runoff in Zhongtian watershed show a good accuracy with the deviation less than 10% during 2005- 2010. Research results have a direct reference value on AnnAGNPS model's parameter selection and calibration adjustment. The runoff simulation results of the study area also proved that the sensitivity analysis was practicable to the parameter's adjustment and showed the adaptability to the hydrology simulation in the Taihu Lake basin's hilly region and provide reference for the model's promotion in China.
Pedrini, Paolo; Bragalanti, Natalia; Groff, Claudio
2017-01-01
Recently-developed methods that integrate multiple data sources arising from the same ecological processes have typically utilized structured data from well-defined sampling protocols (e.g., capture-recapture and telemetry). Despite this new methodological focus, the value of opportunistic data for improving inference about spatial ecological processes is unclear and, perhaps more importantly, no procedures are available to formally test whether parameter estimates are consistent across data sources and whether they are suitable for integration. Using data collected on the reintroduced brown bear population in the Italian Alps, a population of conservation importance, we combined data from three sources: traditional spatial capture-recapture data, telemetry data, and opportunistic data. We developed a fully integrated spatial capture-recapture (SCR) model that included a model-based test for data consistency to first compare model estimates using different combinations of data, and then, by acknowledging data-type differences, evaluate parameter consistency. We demonstrate that opportunistic data lend itself naturally to integration within the SCR framework and highlight the value of opportunistic data for improving inference about space use and population size. This is particularly relevant in studies of rare or elusive species, where the number of spatial encounters is usually small and where additional observations are of high value. In addition, our results highlight the importance of testing and accounting for inconsistencies in spatial information from structured and unstructured data so as to avoid the risk of spurious or averaged estimates of space use and consequently, of population size. Our work supports the use of a single modeling framework to combine spatially-referenced data while also accounting for parameter consistency. PMID:28973034
Cloud-to-ground lightning activity in Colombia: A 14-year study using lightning location system data
NASA Astrophysics Data System (ADS)
Herrera, J.; Younes, C.; Porras, L.
2018-05-01
This paper presents the analysis of 14 years of cloud-to-ground lightning activity observation in Colombia using lightning location systems (LLS) data. The first Colombian LLS operated from 1997 to 2001. After a few years, this system was upgraded and a new LLS has been operating since 2007. Data obtained from these two systems was analyzed in order to obtain lightning parameters used in designing lightning protection systems. The flash detection efficiency was estimated using average peak current maps and some theoretical results previously published. Lightning flash multiplicity was evaluated using a stroke grouping algorithm resulting in average values of about 1.0 and 1.6 for positive and negative flashes respectively and for both LLS. The time variation of this parameter changes slightly for the years considered in this study. The first stroke peak current for negative and positive flashes shows median values close to 29 kA and 17 kA respectively for both networks showing a great dependence on the flash detection efficiency. The average percentage of negative and positive flashes shows a 74.04% and 25.95% of occurrence respectively. The daily variation shows a peak between 23 and 02 h. The monthly variation of this parameter exhibits a bimodal behavior typical of the regions located near The Equator. The lightning flash density was obtained dividing the study area in 3 × 3 km cells and resulting in maximum average values of 25 and 35 flashes km- 2 year- 1 for each network respectively. A comparison of these results with global lightning activity hotspots was performed showing good correlation. Besides, the lightning flash density variation with altitude shows an inverse relation between these two variables.
Fusé, Victoria S; Priano, M Eugenia; Williams, Karen E; Gere, José I; Guzmán, Sergio A; Gratton, Roberto; Juliarena, M Paula
2016-10-01
The global methane (CH 4 ) emission of lakes is estimated at between 6 and 16 % of total natural CH 4 emissions. However, these values have a high uncertainty due to the wide variety of lakes with important differences in their morphological, biological, and physicochemical parameters and the relatively scarse data from southern mid-latitude lakes. For these reasons, we studied CH 4 fluxes and CH 4 dissolved in water in a typical shallow lake in the Pampean Wetland, Argentina, during four periods of consecutive years (April 2011-March 2015) preceded by different rainfall conditions. Other water physicochemical parameters were measured and meteorological data were reported. We identified three different states of the lake throughout the study as the result of the irregular alternation between high and low rainfall periods, with similar water temperature values but with important variations in dissolved oxygen, chemical oxygen demand, water turbidity, electric conductivity, and water level. As a consequence, marked seasonal and interannual variations occurred in CH 4 dissolved in water and CH 4 fluxes from the lake. These temporal variations were best reflected by water temperature and depth of the Secchi disk, as a water turbidity estimation, which had a significant double correlation with CH 4 dissolved in water. The mean CH 4 fluxes values were 0.22 and 4.09 mg/m 2 /h for periods with low and high water turbidity, respectively. This work suggests that water temperature and turbidity measurements could serve as indicator parameters of the state of the lake and, therefore, of its behavior as either a CH 4 source or sink.
Specification of ISS Plasma Environment Variability
NASA Technical Reports Server (NTRS)
Minow, Joseph I.; Neergaard, Linda F.; Bui, Them H.; Mikatarian, Ronald R.; Barsamian, H.; Koontz, Steven L.
2004-01-01
Quantifying spacecraft charging risks and associated hazards for the International Space Station (ISS) requires a plasma environment specification for the natural variability of ionospheric temperature (Te) and density (Ne). Empirical ionospheric specification and forecast models such as the International Reference Ionosphere (IRI) model typically only provide long term (seasonal) mean Te and Ne values for the low Earth orbit environment. This paper describes a statistical analysis of historical ionospheric low Earth orbit plasma measurements from the AE-C, AE-D, and DE-2 satellites used to derive a model of deviations of observed data values from IRI-2001 estimates of Ne, Te parameters for each data point to provide a statistical basis for modeling the deviations of the plasma environment from the IRI model output. Application of the deviation model with the IRI-2001 output yields a method for estimating extreme environments for the ISS spacecraft charging analysis.
Pregger, Thomas; Friedrich, Rainer
2009-02-01
Emission data needed as input for the operation of atmospheric models should not only be spatially and temporally resolved. Another important feature is the effective emission height which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available and simple assumptions are often used in atmospheric models. As a contribution to improve knowledge on emission heights this paper provides typical default values for the driving parameters stack height and flue gas temperature, velocity and flow rate for different industrial sources. The results were derived from an analysis of the probably most comprehensive database of real-world stack information existing in Europe based on German industrial data. A bottom-up calculation of effective emission heights applying equations used for Gaussian dispersion models shows significant differences depending on source and air pollutant and compared to approaches currently used for atmospheric transport modelling.
NASA Technical Reports Server (NTRS)
Davis, J. M.; Krieger, A. S.
1982-01-01
The properties of coronal arches located on the peripheries of active regions, observed during a sounding rocket flight on March 8, 1973, are discussed. The arches are found to overlie filament channels and their footpoints are traced on locations on the perimeters of supergranulation cells. The arches have a wide range of lengths although their widths are well approximated by the value 2.2 x 10 to the 9th cm. Comparison of the size of the chromospheric footprint with the arc width indicates that arches do not always expand as they ascend into the corona. The electron temperatures and densities of the plasma contained in the arches were measured and the pressure calculated; typical values are 2-million K, 1 x 10 to the 9th/cu cm, and 0.2 dyne/sq cm, respectively. The variation of these parameters with position along the length of the arch indicates that the arches are not in hydrostatic equilibrium.
Chroni, Christina; Kyriacou, Adamadini; Manios, Thrassyvoulos; Lasaridi, Konstantia-Ekaterini
2009-08-01
In a bid to identify suitable microbial indicators of compost stability, the process evolution during windrow composting of poultry manure (PM), green waste (GW) and biowaste was studied. Treatments were monitored with regard to abiotic factors, respiration activity (determined using the SOUR test) and functional microflora. The composting process went through typical changes in temperature, moisture content and microbial properties, despite the inherent feedstock differences. Nitrobacter and pathogen indicators varied as a monotonous function of processing time. Some microbial groups have shown a potential to serve as fingerprints of the different process stages, but still they should be examined in context with respirometric tests and abiotic parameters. Respiration activity reflected well the process stage, verifying the value of respirometric tests to access compost stability. SOUR values below 1 mg O(2)/g VS/h were achieved for the PM and the GW compost.
Piezo-optic and elasto-optic properties of monoclinic triglycine sulfate crystals.
Mytsyk, Bogdan; Demyanyshyn, Natalya; Erba, Alessandro; Shut, Viktor; Mozzharov, Sergey; Kost, Yaroslav; Mys, Oksana; Vlokh, Rostyslav
2017-12-01
For the first time, to the best of our knowledge, we have experimentally determined all of the components of the piezo-optic tensor for monoclinic crystals. This has been implemented on a specific example of triglycine sulfate crystals. Based on the results obtained, the complete elasto-optic tensor has been calculated. Acousto-optic figures of merit (AOFMs) have been estimated for the case of acousto-optic interaction occurring in the principal planes of the optical indicatrix ellipsoid and for geometries in which the highest elasto-optic coefficients are involved as effective parameters. It has been found that the highest AOFM value is equal to 6.8×10 -15 s 3 /kg for the case of isotropic acousto-optic interaction with quasi-longitudinal acoustic waves in the principal planes. This AOFM is higher than the corresponding values typical for canonic acousto-optic materials, which are transparent in the deep ultraviolet spectral range.
Mahendran, Kozhinjampara R; Lamichhane, Usha; Romero-Ruiz, Mercedes; Nussberger, Stephan; Winterhalter, Mathias
2013-01-03
The TOM protein complex facilitates the transfer of nearly all mitochondrial preproteins across outer mitochondrial membranes. Here we characterized the effect of temperature on facilitated translocation of a mitochondrial presequence peptide pF1β. Ion current fluctuations analysis through single TOM channels revealed thermodynamic and kinetic parameters of substrate binding and allowed determining the energy profile of peptide translocation. The activation energy for the on-rate and off-rate of the presequence peptide into the TOM complex was symmetric with respect to the electric field and estimated to be about 15 and 22 kT per peptide. These values are above that expected for free diffusion of ions in water (6 kT) and reflect the stronger interaction in the channel. Both values are in the range for typical enzyme kinetics and suggest one process without involving large conformational changes within the channel protein.
NASA Astrophysics Data System (ADS)
Handley, Heather K.; Turner, Simon; Afonso, Juan C.; Dosseto, Anthony; Cohen, Tim
2013-02-01
Quantifying the rates of landscape evolution in response to climate change is inhibited by the difficulty of dating the formation of continental detrital sediments. We present uranium isotope data for Cooper Creek palaeochannel sediments from the Lake Eyre Basin in semi-arid South Australia in order to attempt to determine the formation ages and hence residence times of the sediments. To calculate the amount of recoil loss of 234U, a key input parameter used in the comminution approach, we use two suggested methods (weighted geometric and surface area measurement with an incorporated fractal correction) and typical assumed input parameter values found in the literature. The calculated recoil loss factors and comminution ages are highly dependent on the method of recoil loss factor determination used and the chosen assumptions. To appraise the ramifications of the assumptions inherent in the comminution age approach and determine individual and combined comminution age uncertainties associated to each variable, Monte Carlo simulations were conducted for a synthetic sediment sample. Using a reasonable associated uncertainty for each input factor and including variations in the source rock and measured (234U/238U) ratios, the total combined uncertainty on comminution age in our simulation (for both methods of recoil loss factor estimation) can amount to ±220-280 ka. The modelling shows that small changes in assumed input values translate into large effects on absolute comminution age. To improve the accuracy of the technique and provide meaningful absolute comminution ages, much tighter constraints are required on the assumptions for input factors such as the fraction of α-recoil lost 234Th and the initial (234U/238U) ratio of the source material. In order to be able to directly compare calculated comminution ages produced by different research groups, the standardisation of pre-treatment procedures, recoil loss factor estimation and assumed input parameter values is required. We suggest a set of input parameter values for such a purpose. Additional considerations for calculating comminution ages of sediments deposited within large, semi-arid drainage basins are discussed.
Price, W D; Williams, E R
1997-11-20
Unimolecular rate constants for blackbody infrared radiative dissociation (BIRD) were calculated for the model protonated peptide (AlaGly)(n) (n = 2-32) using a variety of dissociation parameters. Combinations of dissociation threshold energies ranging from 0.8 to 1.7 eV and transition entropies corresponding to Arrhenius preexponential factors ranging from very "tight" (A(infinity) = 10(9.9) s(-1)) to "loose" (A(infinity) = 10(16.8) s(-1)) were selected to represent dissociation parameters within the experimental temperature range (300-520 K) and kinetic window (k(uni) = 0.001-0.20 s(-1)) typically used in the BIRD experiment. Arrhenius parameters were determined from the temperature dependence of these values and compared to those in the rapid energy exchange (REX) limit. In this limit, the internal energy of a population of ions is given by a Boltzmann distribution, and kinetics are the same as those in the traditional high-pressure limit. For a dissociation process to be in this limit, the rate of photon exchange between an ion and the vacuum chamber walls must be significantly greater than the dissociation rate. Kinetics rapidly approach the REX limit either as the molecular size or threshold dissociation energy increases or as the transition-state entropy or experimental temperature decreases. Under typical experimental conditions, peptide ions larger than 1.6 kDa should be in the REX limit. Smaller ions may also be in the REX limit depending on the value of the threshold dissociation energy and transition-state entropy. Either modeling or information about the dissociation mechanism must be known in order to confirm REX limit kinetics for these smaller ions. Three principal factors that lead to the size dependence of REX limit kinetics are identified. With increasing molecular size, rates of radiative absorption and emission increase, internal energy distributions become relatively narrower, and the microcanonical dissociation rate constants increase more slowly over the energy distribution of ions. Guidelines established here should make BIRD an even more reliable method to obtain information about dissociation energetics and mechanisms for intermediate size molecules.
Price, William D.
2005-01-01
Unimolecular rate constants for blackbody infrared radiative dissociation (BIRD) were calculated for the model protonated peptide (AlaGly)n (n = 2–32) using a variety of dissociation parameters. Combinations of dissociation threshold energies ranging from 0.8 to 1.7 eV and transition entropies corresponding to Arrhenius preexponential factors ranging from very “tight” (A∞ = 109.9 s−1) to “loose” (A∞ = 1016.8 s−1) were selected to represent dissociation parameters within the experimental temperature range (300–520 K) and kinetic window (kuni = 0.001–0.20 s−1) typically used in the BIRD experiment. Arrhenius parameters were determined from the temperature dependence of these values and compared to those in the rapid energy exchange (REX) limit. In this limit, the internal energy of a population of ions is given by a Boltzmann distribution, and kinetics are the same as those in the traditional high-pressure limit. For a dissociation process to be in this limit, the rate of photon exchange between an ion and the vacuum chamber walls must be significantly greater than the dissociation rate. Kinetics rapidly approach the REX limit either as the molecular size or threshold dissociation energy increases or as the transition-state entropy or experimental temperature decreases. Under typical experimental conditions, peptide ions larger than 1.6 kDa should be in the REX limit. Smaller ions may also be in the REX limit depending on the value of the threshold dissociation energy and transition-state entropy. Either modeling or information about the dissociation mechanism must be known in order to confirm REX limit kinetics for these smaller ions. Three principal factors that lead to the size dependence of REX limit kinetics are identified. With increasing molecular size, rates of radiative absorption and emission increase, internal energy distributions become relatively narrower, and the microcanonical dissociation rate constants increase more slowly over the energy distribution of ions. Guidelines established here should make BIRD an even more reliable method to obtain information about dissociation energetics and mechanisms for intermediate size molecules. PMID:16604162
Report on the study of the tax and rate treatment of renewable energy projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadley, S.W.; Hill, L.J.; Perlack, R.D.
1993-12-01
This study was conducted in response to the requirements of Section 1205 of the Energy Policy Act of 1992 (EPACT), which states: The Secretary (of Energy), in conjunction with State regulatory commissions, shall undertake a study to determine if conventional taxation and ratemaking procedures result in economic barriers to or incentives for renewable energy power plants compared to conventional power plants. The purpose of the study, therefore, is not to compare the cost-effectiveness of different types of renewable and conventional electric generating plants. Rather, it is to determine the relative impact of conventional ratemaking and taxation procedures on the selectionmore » of renewable power plants compared to conventional ones. To make this determination, we quantify the technical and financial parameters of renewable and conventional electric generating technologies, and hold them fixed throughout the study. Then, we vary taxation and ratemaking procedures to determine their effects on the financial criteria that investor-owned electric utilities (IOUs) and nonutility electricity generators (NUGs) use to make technology-adoption decisions. In the planning process of a typical utility, the opposite is usually the case. That is, utilities typically hold ratemaking and taxation procedures constant and look for the least-cost mix of resources, varying the values of engineering and financial parameters of generating plants in the process.« less
Universally Sloppy Parameter Sensitivities in Systems Biology Models
Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P
2007-01-01
Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a “sloppy” spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters. PMID:17922568
Universally sloppy parameter sensitivities in systems biology models.
Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P
2007-10-01
Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
Gryko, Anna; Głowińska-Olszewska, Barbara; Płudowska, Katarzyna; Smithson, W Henry; Owłasiuk, Anna; Żelazowska-Rutkowska, Beata; Wojtkielewicz, Katarzyna; Milewski, Robert; Chlabicz, Sławomir
2017-01-01
In the recent years, alterations in the carbohydrate metabolism, including insulin resistance, are considered as risk factors in the development of hypertension and its complications in young age. Hypertension is associated with significant cardiovascular morbidity and mortality. The onset of pathology responsible for the development of hypertension, as well as levels of biomarkers specific for early stages of atherosclerosis are poorly understood. To compare a group of children whose parents have a history of hypertension (study group) with a group of children with normotensive parents (reference group), with consideration of typical risk factors for atherosclerosis, parameters of lipid and carbohydrate metabolism, anthropometric data and new biomarkers of early cardiovascular disease (hsCRP, adiponectin, sICAM-1). The study population consists of 84 children. Of these, 40 children (mean age 13.6±2.7 years) had a parental history of hypertension, and 44 aged 13.1±3.7 yrs were children of normotensive parents. Anthropometric measurements were taken, and measurements of blood pressure, lipid profile, glucose and insulin levels were carried out. The insulin resistance index (HOMA IR) was calculated. Levels of hsCRP, soluble cell adhesion molecules (sICAM) and adiponectin were measured. There were no statistically significant differences in anthropometric parameters (body mass, SDS BMI, skin folds) between groups. Values of systolic blood pressure were statistically significantly higher in the study group (Me 108 vs. 100 mmHg, p= 0.031), as were glycaemia (Me 80 vs. 67 mg/dl p<0.001) and insulinaemia levels (Me 8.89 vs. 5.34 µIU/ml, p=0.024). Higher, statistically significant values of HOMA IR were found in the study group (children of hypertensive parents) (Me 1.68 vs. 0.80 mmol/l × mU/l, p=0.007). Lower adiponectin levels (Me 13959.45 vs. 16822 ng/ml, p=0.020) were found in children with a family history of hypertension. No significant differences were found in the levels of sICAM, hsCRP, and parameters of lipid metabolism. Family history of hypertension is correlated with higher values of systolic blood pressure and higher values of parameters for carbohydrate metabolism in children. Hypertension in parents is a risk factor for cardiovascular disease in their children. © Polish Society for Pediatric Endocrinology and Diabetology.
Online selective kernel-based temporal difference learning.
Chen, Xingguo; Gao, Yang; Wang, Ruili
2013-12-01
In this paper, an online selective kernel-based temporal difference (OSKTD) learning algorithm is proposed to deal with large scale and/or continuous reinforcement learning problems. OSKTD includes two online procedures: online sparsification and parameter updating for the selective kernel-based value function. A new sparsification method (i.e., a kernel distance-based online sparsification method) is proposed based on selective ensemble learning, which is computationally less complex compared with other sparsification methods. With the proposed sparsification method, the sparsified dictionary of samples is constructed online by checking if a sample needs to be added to the sparsified dictionary. In addition, based on local validity, a selective kernel-based value function is proposed to select the best samples from the sample dictionary for the selective kernel-based value function approximator. The parameters of the selective kernel-based value function are iteratively updated by using the temporal difference (TD) learning algorithm combined with the gradient descent technique. The complexity of the online sparsification procedure in the OSKTD algorithm is O(n). In addition, two typical experiments (Maze and Mountain Car) are used to compare with both traditional and up-to-date O(n) algorithms (GTD, GTD2, and TDC using the kernel-based value function), and the results demonstrate the effectiveness of our proposed algorithm. In the Maze problem, OSKTD converges to an optimal policy and converges faster than both traditional and up-to-date algorithms. In the Mountain Car problem, OSKTD converges, requires less computation time compared with other sparsification methods, gets a better local optima than the traditional algorithms, and converges much faster than the up-to-date algorithms. In addition, OSKTD can reach a competitive ultimate optima compared with the up-to-date algorithms.
Deriving and Constraining 3D CME Kinematic Parameters from Multi-Viewpoint Coronagraph Images
NASA Astrophysics Data System (ADS)
Thompson, B. J.; Mei, H. F.; Barnes, D.; Colaninno, R. C.; Kwon, R.; Mays, M. L.; Mierla, M.; Moestl, C.; Richardson, I. G.; Verbeke, C.
2017-12-01
Determining the 3D properties of a coronal mass ejection using multi-viewpoint coronagraph observations can be a tremendously complicated process. There are many factors that inhibit the ability to unambiguously identify the speed, direction and shape of a CME. These factors include the need to separate the "true" CME mass from shock-associated brightenings, distinguish between non-radial or deflected trajectories, and identify asymmetric CME structures. Additionally, different measurement methods can produce different results, sometimes with great variations. Part of the reason for the wide range of values that can be reported for a single CME is due to the difficulty in determining the CME's longitude since uncertainty in the angle of the CME relative to the observing image planes results in errors in the speed and topology of the CME. Often the errors quoted in an individual study are remarkably small when compared to the range of values that are reported by different authors for the same CME. For example, two authors may report speeds of 700 +- 50 km/sec and 500+-50 km/sec for the same CME. Clearly a better understanding of the accuracy of CME measurements, and an improved assessment of the limitations of the different methods, would be of benefit. We report on a survey of CME measurements, wherein we compare the values reported by different authors and catalogs. The survey will allow us to establish typical errors for the parameters that are commonly used as inputs for CME propagation models such as ENLIL and EUHFORIA. One way modelers handle inaccuracies in CME parameters is to use an ensemble of CMEs, sampled across ranges of latitude, longitude, speed and width. The CMEs simulated in order to determine the probability of a "direct hit" and, for the cases with a "hit," derive a range of possible arrival times. Our study will provide improved guidelines for generating CME ensembles that more accurately sample across the range of plausible values.
Impact of haze-fog days to radon progeny equilibrium factor and discussion of related factors.
Hou, Changsong; Shang, Bing; Zhang, Qingzhao; Cui, Hongxing; Wu, Yunyun; Deng, Jun
2015-11-01
The equilibrium factor F between radon and its short-lived progenies is an important parameter to estimate radon exposure of humans. Therefore, indoor and outdoor concentrations of radon and its short-lived radon progeny were measured in Beijing area using a continuously measuring device, in an effort to obtain information on the F value. The results showed that the mean values of F were 0.58 ± 0.13 (0.25-0.95, n = 305) and 0.52 ± 0.12 (0.31-0.91, n = 64) for indoor and outdoor, respectively. The indoor F value during haze-fog days was higher than the typical value of 0.4 recommended by the United Nations Scientific Committee on the Effects of Atomic Radiation, and it was also higher than the values of 0.47 and 0.49 reported in the literature. A positive correlation was observed between indoor F values and PM2.5 concentrations (R (2) = 0.71). Since 2013, owing to frequent heavy haze-fog events in Beijing and surrounding areas, the number of the days with severe pollution remains at a high level. Future studies on the impact of the ambient fine particulate matter on indoor radon progeny equilibrium factor F could be important.
System and method for motor parameter estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receive a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values formore » motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.« less
The SEGUE Stellar Parameter Pipeline. II. Validation with Galactic Globular and Open Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Y.S.; Beers, T.C.; Sivarani, T.
2007-10-01
The authors validate the performance and accuracy of the current SEGUE (Sloan Extension for Galactic Understanding and Exploration) Stellar Parameter Pipeline (SSPP), which determines stellar atmospheric parameters (effective temperature, surface gravity, and metallicity) by comparing derived overall metallicities and radial velocities from selected likely members of three globular clusters (M 13, M 15, and M 2) and two open clusters (NGC 2420 and M 67) to the literature values. Spectroscopic and photometric data obtained during the course of the original Sloan Digital Sky Survey (SDSS-1) and its first extension (SDSS-II/SEGUE) are used to determine stellar radial velocities and atmospheric parametermore » estimates for stars in these clusters. Based on the scatter in the metallicities derived for the members of each cluster, they quantify the typical uncertainty of the SSPP values, {sigma}([Fe/H]) = 0.13 dex for stars in the range of 4500 K {le} T{sub eff} {le} 7500 K and 2.0 {le} log g {le} 5.0, at least over the metallicity interval spanned by the clusters studied (-2.3 {le} [Fe/H] < 0). The surface gravities and effective temperatures derived by the SSPP are also compared with those estimated from the comparison of the color-magnitude diagrams with stellar evolution models; they find satisfactory agreement. At present, the SSPP underestimates [Fe/H] for near-solar-metallicity stars, represented by members of M 67 in this study, by {approx} 0.3 dex.« less
Röhrich, Manuel; Huang, Kristin; Schrimpf, Daniel; Albert, Nathalie L; Hielscher, Thomas; von Deimling, Andreas; Schüller, Ulrich; Dimitrakopoulou-Strauss, Antonia; Haberkorn, Uwe
2018-05-07
Dynamic 18 F-FET PET/CT is a powerful tool for the diagnosis of gliomas. 18 F-FET PET time-activity curves (TAC) allow differentiation between histological low-grade gliomas (LGG) and high-grade gliomas (HGG). Molecular methods such as epigenetic profiling are of rising importance for glioma grading and subclassification. Here, we analysed dynamic 18 F-FET PET data, and the histological and epigenetic features of 44 gliomas. Dynamic 18 F-FET PET was performed in 44 patients with newly diagnosed, untreated glioma: 10 WHO grade II glioma, 13 WHO grade III glioma and 21 glioblastoma (GBM). All patients underwent stereotactic biopsy or tumour resection after 18 F-FET PET imaging. As well as histological analysis of tissue samples, DNA was subjected to epigenetic analysis using the Illumina 850 K methylation array. TACs, standardized uptake values corrected for background uptake in healthy tissue (SUVmax/BG), time to peak (TTP) and kinetic modelling parameters were correlated with histological diagnoses and with epigenetic signatures. Multivariate analyses were performed to evaluate the diagnostic accuracy of 18 F-FET PET in relation to the tumour groups identified by histological and methylation-based analysis. Epigenetic profiling led to substantial tumour reclassification, with six grade II/III gliomas reclassified as GBM. Overlap of HGG-typical TACs and LGG-typical TACs was dramatically reduced when tumours were clustered on the basis of their methylation profile. SUVmax/BG values of GBM were higher than those of LGGs following both histological diagnosis and methylation-based diagnosis. The differences in TTP between GBMs and grade II/III gliomas were greater following methylation-based diagnosis than following histological diagnosis. Kinetic modeling showed that relative K1 and fractal dimension (FD) values significantly differed in histology- and methylation-based GBM and grade II/III glioma between those diagnosed histologically and those diagnosed by methylation analysis. Multivariate analysis revealed slightly greater diagnostic accuracy with methylation-based diagnosis. IDH-mutant gliomas and GBM subgroups tended to differ in their 18 F-FET PET kinetics. The status of dynamic 18 F-FET PET as a biologically and clinically relevant imaging modality is confirmed in the context of molecular glioma diagnosis.
The predictive consequences of parameterization
NASA Astrophysics Data System (ADS)
White, J.; Hughes, J. D.; Doherty, J. E.
2013-12-01
In numerical groundwater modeling, parameterization is the process of selecting the aspects of a computer model that will be allowed to vary during history matching. This selection process is dependent on professional judgment and is, therefore, inherently subjective. Ideally, a robust parameterization should be commensurate with the spatial and temporal resolution of the model and should include all uncertain aspects of the model. Limited computing resources typically require reducing the number of adjustable parameters so that only a subset of the uncertain model aspects are treated as estimable parameters; the remaining aspects are treated as fixed parameters during history matching. We use linear subspace theory to develop expressions for the predictive error incurred by fixing parameters. The predictive error is comprised of two terms. The first term arises directly from the sensitivity of a prediction to fixed parameters. The second term arises from prediction-sensitive adjustable parameters that are forced to compensate for fixed parameters during history matching. The compensation is accompanied by inappropriate adjustment of otherwise uninformed, null-space parameter components. Unwarranted adjustment of null-space components away from prior maximum likelihood values may produce bias if a prediction is sensitive to those components. The potential for subjective parameterization choices to corrupt predictions is examined using a synthetic model. Several strategies are evaluated, including use of piecewise constant zones, use of pilot points with Tikhonov regularization and use of the Karhunen-Loeve transformation. The best choice of parameterization (as defined by minimum error variance) is strongly dependent on the types of predictions to be made by the model.
Characterization of particle emission from laser printers.
Scungio, Mauro; Vitanza, Tania; Stabile, Luca; Buonanno, Giorgio; Morawska, Lidia
2017-05-15
Emission of particles from laser printers in office environments is claimed to have impact on human health due to likelihood of exposure to high particle concentrations in such indoor environments. In the present paper, particle emission characteristics of 110 laser printers from different manufacturers were analyzed, and estimations of their emission rates were made on the basis of measurements of total concentrations of particles emitted by the printers placed in a chamber, as well as particle size distributions. The emission rates in terms of number, surface area and mass were found to be within the ranges from 3.39×10 8 partmin -1 to 1.61×10 12 partmin -1 , 1.06×10 0 mm 2 min -1 to 1.46×10 3 mm 2 min -1 and 1.32×10 -1 μgmin -1 to 1.23×10 2 μgmin -1 , respectively, while the median mode value of the emitted particles was found equal to 34nm. In addition, the effect of laser printing emissions in terms of employees' exposure in offices was evaluated on the basis of the emission rates, by calculating the daily surface area doses (as sum of alveolar and tracheobronchial deposition fraction) received assuming a typical printing scenario. In such typical printing conditions, a relatively low total surface area dose (2.7mm 2 ) was estimated for office employees with respect to other indoor microenvironments including both workplaces and homes. Nonetheless, for severe exposure conditions, characterized by operating parameters falling beyond the typical values (i.e. smaller office, lower ventilation, printer located on the desk, closer to the person, higher printing frequency etc.), significantly higher doses are expected. Copyright © 2017 Elsevier B.V. All rights reserved.
Thermal motion in proteins: Large effects on the time-averaged interaction energies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goethe, Martin, E-mail: martingoethe@ub.edu; Rubi, J. Miguel; Fita, Ignacio
As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence, also interaction energies (i.e. the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. These fluctuations cause that time-averaged interaction energies do generally not coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies behave typically smoother in terms of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothingmore » effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by various tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can largely be increased by introducing environment specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.« less
Thermal motion in proteins: Large effects on the time-averaged interaction energies
NASA Astrophysics Data System (ADS)
Goethe, Martin; Fita, Ignacio; Rubi, J. Miguel
2016-03-01
As a consequence of thermal motion, inter-atomic distances in proteins fluctuate strongly around their average values, and hence, also interaction energies (i.e. the pair-potentials evaluated at the fluctuating distances) are not constant in time but exhibit pronounced fluctuations. These fluctuations cause that time-averaged interaction energies do generally not coincide with the energy values obtained by evaluating the pair-potentials at the average distances. More precisely, time-averaged interaction energies behave typically smoother in terms of the average distance than the corresponding pair-potentials. This averaging effect is referred to as the thermal smoothing effect. Here, we estimate the strength of the thermal smoothing effect on the Lennard-Jones pair-potential for globular proteins at ambient conditions using x-ray diffraction and simulation data of a representative set of proteins. For specific atom species, we find a significant smoothing effect where the time-averaged interaction energy of a single atom pair can differ by various tens of cal/mol from the Lennard-Jones potential at the average distance. Importantly, we observe a dependency of the effect on the local environment of the involved atoms. The effect is typically weaker for bulky backbone atoms in beta sheets than for side-chain atoms belonging to other secondary structure on the surface of the protein. The results of this work have important practical implications for protein software relying on free energy expressions. We show that the accuracy of free energy expressions can largely be increased by introducing environment specific Lennard-Jones parameters accounting for the fact that the typical thermal motion of protein atoms depends strongly on their local environment.
Muir, B; Rogers, D; McEwen, M
2012-07-01
When current dosimetry protocols were written, electron beam data were limited and had uncertainties that were unacceptable for reference dosimetry. Protocols for high-energy reference dosimetry are currently being updated leading to considerable interest in accurate electron beam data. To this end, Monte Carlo simulations using the EGSnrc user-code egs_chamber are performed to extract relevant data for reference beam dosimetry. Calculations of the absorbed dose to water and the absorbed dose to the gas in realistic ion chamber models are performed as a function of depth in water for cobalt-60 and high-energy electron beams between 4 and 22 MeV. These calculations are used to extract several of the parameters required for electron beam dosimetry - the beam quality specifier, R 50 , beam quality conversion factors, k Q and k R50 , the electron quality conversion factor, k' R50 , the photon-electron conversion factor, k ecal , and ion chamber perturbation factors, P Q . The method used has the advantage that many important parameters can be extracted as a function of depth instead of determination at only the reference depth as has typically been done. Results obtained here are in good agreement with measured and other calculated results. The photon-electron conversion factors obtained for a Farmer-type NE2571 and plane-parallel PTW Roos, IBA NACP-02 and Exradin A11 chambers are 0.903, 0.896, 0.894 and 0.906, respectively. These typically differ by less than 0.7% from the contentious TG-51 values but have much smaller systematic uncertainties. These results are valuable for reference dosimetry of high-energy electron beams. © 2012 American Association of Physicists in Medicine.
Micro-mechanics of hydro-mechanical coupled processes during hydraulic fracturing in sandstone
NASA Astrophysics Data System (ADS)
Caulk, R.; Tomac, I.
2017-12-01
This contribution presents micro-mechanical study of hydraulic fracture initiation and propagation in sandstone. The Discrete Element Method (DEM) Yade software is used as a tool to model fully coupled hydro-mechanical behavior of the saturated sandstone under pressures typical for deep geo-reservoirs. Heterogeneity of sandstone strength tensile and shear parameters are introduced using statistical representation of cathodoluminiscence (CL) sandstone rock images. Weibull distribution of statistical parameter values was determined as a best match of the CL scans of sandstone grains and cement between grains. Results of hydraulic fracturing stimulation from the well bore indicate significant difference between models with the bond strengths informed from CL scans and uniform homogeneous representation of sandstone parameters. Micro-mechanical insight reveals formed hydraulic fracture typical for mode I or tensile cracking in both cases. However, the shear micro-cracks are abundant in the CL informed model while they are absent in the standard model with uniform strength distribution. Most of the mode II cracks, or shear micro-cracks, are not part of the main hydraulic fracture and occur in the near-tip and near-fracture areas. The position and occurrence of the shear micro-cracks is characterized as secondary effect which dissipates the hydraulic fracturing energy. Additionally, the shear micro-crack locations qualitatively resemble acoustic emission cloud of shear cracks frequently observed in hydraulic fracturing, and sometimes interpreted as re-activation of existing fractures. Clearly, our model does not contain pre-existing cracks and has continuous nature prior to fracturing. This observation is novel and interesting and is quantified in the paper. The shear particle contact forces field reveals significant relaxation compared to the model with uniform strength distribution.
The calculations of small molecular conformation energy differences by density functional method
NASA Astrophysics Data System (ADS)
Topol, I. A.; Burt, S. K.
1993-03-01
The differences in the conformational energies for the gauche (G) and trans(T) conformers of 1,2-difluoroethane and for myo-and scyllo-conformer of inositol have been calculated by local density functional method (LDF approximation) with geometry optimization using different sets of calculation parameters. It is shown that in the contrast to Hartree—Fock methods, density functional calculations reproduce the correct sign and value of the gauche effect for 1,2-difluoroethane and energy difference for both conformers of inositol. The results of normal vibrational analysis for1,2-difluoroethane showed that harmonic frequencies calculated in LDF approximation agree with experimental data with the accuracy typical for scaled large basis set Hartree—Fock calculations.
Type-I superconductivity in YbSb2 single crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Liang L.; Lausberg, Stefan; Kim, Hyunsoo
2012-06-25
We present evidence of type-I superconductivity in YbSb2 single crystals from dc and ac magnetization, heat capacity, and resistivity measurements. The critical temperature and critical field are determined to be Tc≈ 1.3 K and Hc≈ 55 Oe. A small Ginzburg-Landau parameter κ= 0.05, together with typical magnetization isotherms of type-I superconductors, small critical field values, a strong differential paramagnetic effect signal, and a field-induced change from second- to first-order phase transition, confirms the type-I nature of the superconductivity in YbSb2. A possible second superconducting state is observed in the radio-frequency susceptibility measurements, with Tc(2)≈ 0.41 K and Hc(2)≈ 430 Oe.
Thermal conductivity of electrospun polyethylene nanofibers.
Ma, Jian; Zhang, Qian; Mayo, Anthony; Ni, Zhonghua; Yi, Hong; Chen, Yunfei; Mu, Richard; Bellan, Leon M; Li, Deyu
2015-10-28
We report on the structure-thermal transport property relation of individual polyethylene nanofibers fabricated by electrospinning with different deposition parameters. Measurement results show that the nanofiber thermal conductivity depends on the electric field used in the electrospinning process, with a general trend of higher thermal conductivity for fibers prepared with stronger electric field. Nanofibers produced at a 45 kV electrospinning voltage and a 150 mm needle-collector distance could have a thermal conductivity of up to 9.3 W m(-1) K(-1), over 20 times higher than the typical bulk value. Micro-Raman characterization suggests that the enhanced thermal conductivity is due to the highly oriented polymer chains and enhanced crystallinity in the electrospun nanofibers.
Radial Basis Function Neural Network Application to Power System Restoration Studies
Sadeghkhani, Iman; Ketabi, Abbas; Feuillet, Rene
2012-01-01
One of the most important issues in power system restoration is overvoltages caused by transformer switching. These overvoltages might damage some equipment and delay power system restoration. This paper presents a radial basis function neural network (RBFNN) to study transformer switching overvoltages. To achieve good generalization capability for developed RBFNN, equivalent parameters of the network are added to RBFNN inputs. The developed RBFNN is trained with the worst-case scenario of switching angle and remanent flux and tested for typical cases. The simulated results for a partial of 39-bus New England test system show that the proposed technique can estimate the peak values and duration of switching overvoltages with good accuracy. PMID:22792093
NASA Technical Reports Server (NTRS)
Seltzer, S. M.
1976-01-01
The problem discussed is to design a digital controller for a typical satellite. The controlled plant is considered to be a rigid body acting in a plane. The controller is assumed to be a digital computer which, when combined with the proposed control algorithm, can be represented as a sampled-data system. The objective is to present a design strategy and technique for selecting numerical values for the control gains (assuming position, integral, and derivative feedback) and the sample rate. The technique is based on the parameter plane method and requires that the system be amenable to z-transform analysis.
High-Q resonant cavities for terahertz quantum cascade lasers.
Campa, A; Consolino, L; Ravaro, M; Mazzotti, D; Vitiello, M S; Bartalini, S; De Natale, P
2015-02-09
We report on the realization and characterization of two different designs for resonant THz cavities, based on wire-grid polarizers as input/output couplers, and injected by a continuous-wave quantum cascade laser (QCL) emitting at 2.55 THz. A comparison between the measured resonators parameters and the expected theoretical values is reported. With achieved quality factor Q ≈ 2.5 × 10(5), these cavities show resonant peaks as narrow as few MHz, comparable with the typical Doppler linewidth of THz molecular transitions and slightly broader than the free-running QCL emission spectrum. The effects of the optical feedback from one cavity to the QCL are examined by using the other cavity as a frequency reference.
The spontaneous emission factor for lasers with gain induced waveguiding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newstein, M.
1984-11-01
The expression for the spontaneous emission factor for lasers with gain induced waveguiding has a factor K, called by Petermann ''the astigmatism parameter.'' This factor has been invoked to explain spectral and dynamic characteristics of this class of lasers. We contend that the widely accepted form of the K factor is based on a derivation which is not appropriate for the typical laser situation where the spontaneous emission factor is much smaller than unity. An alternative derivation is presented which leads to a different form for the K factor. The new expression predicts much smaller values under conditions where themore » previous theory gave values large compared to unity. Petermann's form for the K factor is shown to be relevant to large gain linear amplifiers where the power is amplified spontaneous emission noise. The expression for the power output has Petermann's value of K as a factor. The difference in the two situations is that in the laser oscillator the typical atom of interest couples a small portion of its incoherent spontaneous emission into the dominant mode, whereas in the amplifier only the atoms at the input end are important as sources and their output is converted to a greater degree into the dominant mode through the propagation process. In this analysis the authors use a classical model of radiating point dipoles in a continuous medium characterized by a complex permittivity. Since uncritical use of this model will lead to infinite radiation resistance they address the problem of its self-consistency.« less
Shahan, Timothy A; Craig, Andrew R
2017-08-01
Resurgence is typically defined as an increase in a previously extinguished target behavior when a more recently reinforced alternative behavior is later extinguished. Some treatments of the phenomenon have suggested that it might also extend to circumstances where either the historic or more recently reinforced behavior is reduced by other non-extinction related means (e.g., punishment, decreases in reinforcement rate, satiation, etc.). Here we present a theory of resurgence suggesting that the phenomenon results from the same basic processes governing choice. In its most general form, the theory suggests that resurgence results from changes in the allocation of target behavior driven by changes in the values of the target and alternative options across time. Specifically, resurgence occurs when there is an increase in the relative value of an historically effective target option as a result of a subsequent devaluation of a more recently effective alternative option. We develop a more specific quantitative model of how extinction of the target and alternative responses in a typical resurgence paradigm might produce such changes in relative value across time using a temporal weighting rule. The example model does a good job in accounting for the effects of reinforcement rate and related manipulations on resurgence in simple schedules where Behavioral Momentum Theory has failed. We also discuss how the general theory might be extended to other parameters of reinforcement (e.g., magnitude, quality), other means to suppress target or alternative behavior (e.g., satiation, punishment, differential reinforcement of other behavior), and other factors (e.g., non- contingent versus contingent alternative reinforcement, serial alternative reinforcement, and multiple schedules). Copyright © 2016 Elsevier B.V. All rights reserved.
Wave behaviour of sporadic E-layer variations at the latitudes 30-70N
NASA Astrophysics Data System (ADS)
Ryabchenko, E. Yu.; Sherstyukov, O. N.
A wave behaviour of sporadic E-layer variations was investigated by analysing time series of twenty European ionosonde stations (30°N--80°N, 15°W--45°E) for 1985-1988. Wavelet transform was used to explore 3-30 periodicities in variations of Es-layer relative electron density δ NEs defined here as (foEs2--foE2)/foE2. Such compound parameter allowed us to partly exclude solar ionisation factor and concentrate on meteorological nature of Es-layer synoptical oscillations. A typical synoptical atmospheric 3-30 day oscillations were discovered in foEs and also in δ NEs. Due to nonorthgonal wavelet transform used in this work, it is advisable to divide frequency domain into several optimal intervals. Five periods 4,6,10,16 and 24 day were chosen which cover 3-5, 5-7, 8-12, 13-20 and 20-30 day intervals. Low value of oscillation amplitude not greater than 1.5 is typical for most of European ionospheric stations in January-March and September-December. A higher values were observed at latitudes higher than 60°N. A wave vortex were discovered during the analysis of dynamics of δ NEs spatio-temporal variations in summer for each period interval. In May and June we observed wave penetration from north and south into the middle latitudes 45°N--55°N with amplitudes up to 5.0 for the most of considered years. In Jule and August all amplitudes reach their average values.
Moore, G W K; Semple, J L
2011-01-01
Cold injury is an acknowledged risk factor for those who venture into high altitude regions. There is, however, little quantitative information on this risk that can be used to implement mitigation strategies. Here we provide the first characterization of the risk of cold injury near the summit of Mount Everest. This is accomplished through the application of a meteorological dataset that has been demonstrated to characterize conditions in the region as inputs to new parameterizations of wind chill equivalent temperature (WCT) and facial frostbite time (FFT). Throughout the year, the typical WCT near the summit of Everest is always <-30°C, and the typical FFT is always less than 20 min. During the spring climbing season, WCTs of -50°C and FFTs of 5 min are typical; during severe storms, they approach -60°C and 1 min, respectively; values typically found during the winter. Further, we show that the summit barometric pressure is an excellent predictor of summit WCT and FFT. Our results provide the first quantitative characterization of the risk of cold injury on Mount Everest and also allow for the possibility of using barometric pressure, an easily observed parameter, in real time to characterize this risk and to implement mitigation strategies. The results also provide additional confirmation as to the extreme environment experienced by those attempting to summit Mount Everest and other high mountains.
Juckett, D A; Rosenberg, B
1992-04-21
The distributions for human disease-specific mortality exhibit two striking characteristics: survivorship curves that intersect near the longevity limit; and, the clustering of best-fitting Weibull shape parameter values into groups centered on integers. Correspondingly, we have hypothesized that the distribution intersections result from either competitive processes or population partitioning and the integral clustering in the shape parameter results from the occurrence of a small number of rare, rate-limiting events in disease progression. In this report we initiate a theoretical examination of these questions by exploring serial chain model dynamics and parameteric competing risks theory. The links in our chain models are composed of more than one bond, where the number of bonds in a link are denoted the link size and are the number of events necessary to break the link and, hence, the chain. We explored chains with all links of the same size or with segments of the chain composed of different size links (competition). Simulations showed that chain breakage dynamics depended on the weakest-link principle and followed kinetics of extreme-values which were very similar to human mortality kinetics. In particular, failure distributions for simple chains were Weibull-type extreme-value distributions with shape parameter values that were identifiable with the integral link size in the limit of infinite chain length. Furthermore, for chains composed of several segments of differing link size, the survival distributions for the various segments converged at a point in the S(t) tails indistinguishable from human data. This was also predicted by parameteric competing risks theory using Weibull underlying distributions. In both the competitive chain simulations and the parametric competing risks theory, however, the shape values for the intersecting distributions deviated from the integer values typical of human data. We conclude that rare events can be the source of integral shapes in human mortality, that convergence is a salient feature of multiple endpoints, but that pure competition may not be the best explanation for the exact type of convergence observable in human mortality. Finally, while the chain models were not motivated by any specific biological structures, interesting biological correlates to them may be useful in gerontological research.
NASA Astrophysics Data System (ADS)
Hill, M. C.; Jakeman, J.; Razavi, S.; Tolson, B.
2015-12-01
For many environmental systems model runtimes have remained very long as more capable computers have been used to add more processes and more time and space discretization. Scientists have also added more parameters and kinds of observations, and many model runs are needed to explore the models. Computational demand equals run time multiplied by number of model runs divided by parallelization opportunities. Model exploration is conducted using sensitivity analysis, optimization, and uncertainty quantification. Sensitivity analysis is used to reveal consequences of what may be very complex simulated relations, optimization is used to identify parameter values that fit the data best, or at least better, and uncertainty quantification is used to evaluate the precision of simulated results. The long execution times make such analyses a challenge. Methods for addressing this challenges include computationally frugal analysis of the demanding original model and a number of ingenious surrogate modeling methods. Both commonly use about 50-100 runs of the demanding original model. In this talk we consider the tradeoffs between (1) original model development decisions, (2) computationally frugal analysis of the original model, and (3) using many model runs of the fast surrogate model. Some questions of interest are as follows. If the added processes and discretization invested in (1) are compared with the restrictions and approximations in model analysis produced by long model execution times, is there a net benefit related of the goals of the model? Are there changes to the numerical methods that could reduce the computational demands while giving up less fidelity than is compromised by using computationally frugal methods or surrogate models for model analysis? Both the computationally frugal methods and surrogate models require that the solution of interest be a smooth function of the parameters or interest. How does the information obtained from the local methods typical of (2) and the global averaged methods typical of (3) compare for typical systems? The discussion will use examples of response of the Greenland glacier to global warming and surface and groundwater modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Charley; Kamboj, Sunita; Wang, Cheng
2015-09-01
This handbook is an update of the 1993 version of the Data Collection Handbook and the Radionuclide Transfer Factors Report to support modeling the impact of radioactive material in soil. Many new parameters have been added to the RESRAD Family of Codes, and new measurement methodologies are available. A detailed review of available parameter databases was conducted in preparation of this new handbook. This handbook is a companion document to the user manuals when using the RESRAD (onsite) and RESRAD-OFFSITE code. It can also be used for RESRAD-BUILD code because some of the building-related parameters are included in this handbook.more » The RESRAD (onsite) has been developed for implementing U.S. Department of Energy Residual Radioactive Material Guidelines. Hydrogeological, meteorological, geochemical, geometrical (size, area, depth), crops and livestock, human intake, source characteristic, and building characteristic parameters are used in the RESRAD (onsite) code. The RESRAD-OFFSITE code is an extension of the RESRAD (onsite) code and can also model the transport of radionuclides to locations outside the footprint of the primary contamination. This handbook discusses parameter definitions, typical ranges, variations, and measurement methodologies. It also provides references for sources of additional information. Although this handbook was developed primarily to support the application of RESRAD Family of Codes, the discussions and values are valid for use of other pathway analysis models and codes.« less
Demonstration of a vectorial optical field generator with adaptive close loop control.
Chen, Jian; Kong, Lingjiang; Zhan, Qiwen
2017-12-01
We experimentally demonstrate a vectorial optical field generator (VOF-Gen) with an adaptive close loop control. The close loop control capability is illustrated with the calibration of polarization modulation of the system. To calibrate the polarization ratio modulation, we generate 45° linearly polarized beam and make it propagate through a linear analyzer whose transmission axis is orthogonal to the incident beam. For the retardation calibration, circularly polarized beam is employed and a circular polarization analyzer with the opposite chirality is placed in front of the CCD as the detector. In both cases, the close loop control automatically changes the value of the corresponding calibration parameters in the pre-set ranges to generate the phase patterns applied to the spatial light modulators and records the intensity distribution of the output beam by the CCD camera. The optimized calibration parameters are determined corresponding to the minimum total intensity in each case. Several typical kinds of vectorial optical beams are created with and without the obtained calibration parameters, and the full Stokes parameter measurements are carried out to quantitatively analyze the polarization distribution of the generated beams. The comparisons among these results clearly show that the obtained calibration parameters could remarkably improve the accuracy of the polarization modulation of the VOF-Gen, especially for generating elliptically polarized beam with large ellipticity, indicating the significance of the presented close loop in enhancing the performance of the VOF-Gen.
Level density parameter behaviour at high excitation energy
NASA Astrophysics Data System (ADS)
D'Arrigo, A.; Giardina, G.; Taccone, A.
1991-06-01
We present a formalism to calculate the intrinsic (without collective effects) and effective (with collective effects) level density parameters over a wide range of excitation energy up to 180 MeV. The behaviour of aint and aeff as an energy function is shown for several typical nuclei (115Cd, 129Te, 148Pm, 173Yb, 192Ir and 248Cm). Moreover, local systematics of the parameter aeff as a function of the neutron number N, also for nuclei extremely far from the β-line, is shown for some typical nuclei (Rb, Pd, Sn, Ba and Hg) at excitation energies of 15, 80 and 150 MeV.
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Branch, Oliver; Attinger, Sabine; Thober, Stephan
2016-09-01
Land surface models incorporate a large number of process descriptions, containing a multitude of parameters. These parameters are typically read from tabulated input files. Some of these parameters might be fixed numbers in the computer code though, which hinder model agility during calibration. Here we identified 139 hard-coded parameters in the model code of the Noah land surface model with multiple process options (Noah-MP). We performed a Sobol' global sensitivity analysis of Noah-MP for a specific set of process options, which includes 42 out of the 71 standard parameters and 75 out of the 139 hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated at 12 catchments within the United States with very different hydrometeorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its applicable standard parameters (i.e., Sobol' indexes above 1%). The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for direct evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities because of their tight coupling via the water balance. A calibration of Noah-MP against either of these fluxes should therefore give comparable results. Moreover, these fluxes are sensitive to both plant and soil parameters. Calibrating, for example, only soil parameters hence limit the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
NASA Astrophysics Data System (ADS)
Wang, Haoqi; Chen, Jun; Brownjohn, James M. W.
2017-12-01
The spring-mass-damper (SMD) model with a pair of internal biomechanical forces is the simplest model for a walking pedestrian to represent his/her mechanical properties, and thus can be used in human-structure-interaction analysis in the vertical direction. However, the values of SMD stiffness and damping, though very important, are typically taken as those measured from stationary people due to lack of a parameter identification methods for a walking pedestrian. This study adopts a step-by-step system identification approach known as particle filter to simultaneously identify the stiffness, damping coefficient, and coefficients of the SMD model's biomechanical forces by ground reaction force (GRF) records. After a brief introduction of the SMD model, the proposed identification approach is explained in detail, with a focus on the theory of particle filter and its integration with the SMD model. A numerical example is first provided to verify the feasibility of the proposed approach which is then applied to several experimental GRF records. Identification results demonstrate that natural frequency and the damping ratio of a walking pedestrian are not constant but have a dependence of mean value and distribution on pacing frequency. The mean value first-order coefficient of the biomechanical force, which is expressed by the Fourier series function, also has a linear relationship with pacing frequency. Higher order coefficients do not show a clear relationship with pacing frequency but follow a logarithmic normal distribution.
Veselá, S; Kingma, B R M; Frijns, A J H
2017-03-01
Local thermal sensation modeling gained importance due to developments in personalized and locally applied heating and cooling systems in office environments. The accuracy of these models depends on skin temperature prediction by thermophysiological models, which in turn rely on accurate environmental and personal input data. Environmental parameters are measured or prescribed, but personal factors such as clothing properties and metabolic rates have to be estimated. Data for estimating the overall values of clothing properties and metabolic rates are available in several papers and standards. However, local values are more difficult to retrieve. For local clothing, this study revealed that full and consistent data sets are not available in the published literature for typical office clothing sets. Furthermore, the values for local heat production were not verified for characteristic office activities, but were adapted empirically. Further analyses showed that variations in input parameters can lead to local skin temperature differences (∆T skin,loc = 0.4-4.4°C). These differences can affect the local sensation output, where ∆T skin,loc = 1°C is approximately one step on a 9-point thermal sensation scale. In conclusion, future research should include a systematic study of local clothing properties and the development of feasible methods for measuring and validating local heat production. © 2016 The Authors. Indoor Air published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Braud, A.; Girard, S.; Doualan, J. L.; Thuau, M.; Moncorgé, R.; Tkachuk, A. M.
2000-02-01
Energy-transfer processes have been quantitatively studied in various Tm:Yb-doped fluoride crystals. A comparison between the three host crystals which have been examined (KY3F10, LiYF4, and BaY2F8) shows clearly that the efficiency of the Yb-->Tm energy transfers is larger in KY3F10 than in LiYF4 or BaY2F8. The dependence of the energy-transfer parameters upon the codopant concentrations has been experimentally measured and compared with the results calculated on the basis of migration-assisted energy-transfer models. Using these energy-transfer parameters and a rate equation model, we have performed a theoretical calculation of the laser thresholds for the 3H4-->3F4 and 3H4-->3H5 laser transitions of the Tm ion around 1.5 and 2.3 μm, respectively. Laser experiments performed at 1.5 μm in Yb:Tm:LiYF4 then led to laser threshold values in good agreement with those derived theoretically. Based on these results, optimized values for the Yb and Tm dopant concentrations for typical values of laser cavity and pump modes were finally derived to minimize the threshold pump powers for the laser transitions around 1.5 and 2.3 μm.
System identification of timber masonry walls using shaking table test
NASA Astrophysics Data System (ADS)
Roy, Timir B.; Guerreiro, Luis; Bagchi, Ashutosh
2017-04-01
Dynamic study is important in order to design, repair and rehabilitation of structures. It has played an important role in the behavior characterization of structures; such as: bridges, dams, high rise buildings etc. There had been substantial development in this area over the last few decades, especially in the field of dynamic identification techniques of structural systems. Frequency Domain Decomposition (FDD) and Time Domain Decomposition are most commonly used methods to identify modal parameters; such as: natural frequency, modal damping and mode shape. The focus of the present research is to study the dynamic characteristics of typical timber masonry walls commonly used in Portugal. For that purpose, a multi-storey structural prototype of such wall has been tested on a seismic shake table at the National Laboratory for Civil Engineering, Portugal (LNEC). Signal processing has been performed of the output response, which is collected from the shaking table experiment of the prototype using accelerometers. In the present work signal processing of the output response, based on the input response has been done in two ways: FDD and Stochastic Subspace Identification (SSI). In order to estimate the values of the modal parameters, algorithms for FDD are formulated and parametric functions for the SSI are computed. Finally, estimated values from both the methods are compared to measure the accuracy of both the techniques.
NASA Astrophysics Data System (ADS)
Šimkovic, Fedor; Liu, Xuan-Wen; Deng, Youjin; Kozik, Evgeny
2016-08-01
We obtain a complete and numerically exact in the weak-coupling limit (U →0 ) ground-state phase diagram of the repulsive fermionic Hubbard model on the square lattice for filling factors 0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barkan, A.; Hunt, T.K.
1998-07-01
Upcoming designs for AMTEC modules capable of delivering as much as 150 watts will see the introduction of higher voltages into sodium vapor at pressures spanning a wide range. In theory, with any value for two out of three parameters: voltage, pressure, and electrode geometry, a value exists for the third parameter where DC electrical breakdown can occur; due to its low ionization energy, sodium vapor may be particularly susceptible to breakdown. This destructive event is not desirable in AMTEC modules, and sets a limit on the maximum voltage that can be built-up within any single enclosed module. An experimentalmore » cell was fabricated with representative electrode configurations and a separately heated sodium reservoir to test conditions typically expected during start-up, operation, and shutdown of AMTEC cells. Breakdown voltages were investigated in both sodium vapor and, for comparison, argon gas. The dependence on electrode material and polarity was also investigated. Additional information about leakage currents and the insulating properties of {alpha}-alumina in the presence of sodium vapor was collected, revealing a reversible tendency for conductive sodium films to build up under certain conditions, electrically shorting-out previously isolated components. In conclusion, safe operating limits on voltages, temperatures, and pressures are discussed.« less
Comparative energetics of the 5 fish classes on the basis of dynamic energy budgets
NASA Astrophysics Data System (ADS)
Kooijman, Sebastiaan A. L. M.; Lika, Konstadia
2014-11-01
The eco-physiology of taxa in an evolutionary context can best be studied by a comparison of parameter values of the energy budget that accounts for the inter-relationships of all endpoints of energy allocation. To this end, the parameters of the standard Dynamic Energy Budget (DEB) model have been estimated for 64 fish species from all 5 fish classes. The values are compared with those of the whole collection of over 300 species from most large animal phyla. The goodness of fit was very high, but the data were rather incomplete, compared with the energy balance for full life cycles. Metabolic acceleration, where maximum specific assimilation and energy conductance increase with length between birth and metabolic metamorphosis, seems to be confined, among fish, to some species of ray-finned fish and seems to have evolved independently several times in this taxon. We introduce a new altriciality index, i.e. the ratio of the maturity levels at puberty and birth, and conclude that ray-finned fish are more altricial, and cartilaginous fish are more precocial than typical animals. Fish allocate more to reproduction than typical animals. Parameter estimates show that 66% of the fish species considered invest less in reproduction than the value that would maximize the reproduction rate of fully grown individuals. By comparison, 85% of all the animal species in the collection do so. Consistent with theoretical expectations, allocation to reproduction and maturity at birth increase with cubed (ultimate structural) length, and reserve capacity with length for non-ray-finned fish, with the consequence that reproduction rate decreases with length. Ray-finned fish, however, have a maturity at birth and a reserve capacity almost independent of length, and a reproduction rate that increases with cubed length. Reserve capacity tends to increase with ultimate length for non-accelerating ray-finned fish, but not for accelerating species. Reproduction rate decreases inter-specifically with length in non-ray-finned fish, as expected, but increases with cubed length in ray-finned fish. This pattern follows naturally from the patterns of size at birth and reserve capacity and can be seen as adaptation to the predation of prey of ray-finned fish on their tiny neonates. Both the von Bertalanffy growth rate and the specific allocation to reproduction in fully grown adults correlate positively with specific somatic maintenance among fish species. These observations support the recently proposed waste-to-hurry hypothesis. Determinatesness increases in the sequence: fish, amphibians, reptiles, mammals and birds.
Fractal scaling laws of black carbon aerosol and their influence on spectral radiative properties
NASA Astrophysics Data System (ADS)
Tiwari, S.; Chakrabarty, R. K.; Heinson, W.
2016-12-01
Current estimates of the direct radiative forcing for Black Carbon (BC) aerosol span over a poorly constrained range between 0.2 and 1 W.m-2. To improve this large uncertainty, tighter constraints need to be placed on BC's key wavelength-dependent optical properties, namely, the absorption (MAC) and scattering (MSC) cross sections per unit mass and hemispherical upscatter fraction (β; a dimensionless scattering directionality parameter). These parameters are very sensitive to changes in particle morphology and complex refractive index nindex. Their interplay determines the magnitude of net positive or negative radiative forcing efficiencies. The current approach among climate modelers for estimating MAC and MSC values of BC is from their optical cross-sections calculated assuming spherical particle morphology with homogeneous, constant-valued refractive index in the visible solar spectrum. The β values are typically assumed to be a constant across this spectrum. This approach, while being computationally inexpensive and convenient, ignores the inherent fractal morphology of BC and its scaling behaviors, and resulting optical properties. In this talk, I will present recent results from my laboratory on determination of the fractal scaling laws of BC aggregate packing density and its complex refractive index for size spanning across three orders of magnitude, and their effects on spectral (Visible-infrared wavelength) scaling of MAC, MSC, and β values. Our experiments synergistically combined novel BC generation techniques, aggregation models, contact-free multi-wavelength optical measurements, and electron microscopy analysis. The scale dependence of nindex on aggregate size followed power-law exponents of -1.4 and -0.5 for sub- and super-micron size aggregates, respectively. The spherical Rayleigh-optics approximation limits, used by climate models for spectral extrapolation of BC optical cross-sections and deconvolution of multi-species mixing ratios, are redefined using the concept of phase shift parameter. I will highlight the importance of size-dependent β values and its role in offsetting the strong light absorbing nature of BC. Finally, the errors introduced in forcing efficiency calculations of BC by assuming spherical homogeneous morphology will be evaluated.
Developing and Implementing the Data Mining Algorithms in RAVEN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea
The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in the dynamic probabilistic risk assessment or in uncertainty and error quantificationmore » analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics code model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameter values. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e. recognizing patterns in the data. This report focuses on development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.« less
Soil moisture data as a constraint for groundwater recharge estimation
NASA Astrophysics Data System (ADS)
Mathias, Simon A.; Sorensen, James P. R.; Butler, Adrian P.
2017-09-01
Estimating groundwater recharge rates is important for water resource management studies. Modeling approaches to forecast groundwater recharge typically require observed historic data to assist calibration. It is generally not possible to observe groundwater recharge rates directly. Therefore, in the past, much effort has been invested to record soil moisture content (SMC) data, which can be used in a water balance calculation to estimate groundwater recharge. In this context, SMC data is measured at different depths and then typically integrated with respect to depth to obtain a single set of aggregated SMC values, which are used as an estimate of the total water stored within a given soil profile. This article seeks to investigate the value of such aggregated SMC data for conditioning groundwater recharge models in this respect. A simple modeling approach is adopted, which utilizes an emulation of Richards' equation in conjunction with a soil texture pedotransfer function. The only unknown parameters are soil texture. Monte Carlo simulation is performed for four different SMC monitoring sites. The model is used to estimate both aggregated SMC and groundwater recharge. The impact of conditioning the model to the aggregated SMC data is then explored in terms of its ability to reduce the uncertainty associated with recharge estimation. Whilst uncertainty in soil texture can lead to significant uncertainty in groundwater recharge estimation, it is found that aggregated SMC is virtually insensitive to soil texture.
Kinetic freeze-out conditions for the production of resonances, hadronic molecules, and light nuclei
NASA Astrophysics Data System (ADS)
Cho, Sungtae; Song, Taesoo; Lee, Su Houng
2018-02-01
We investigate the freeze-out conditions of a particle in an expanding system of interacting particles in order to understand the productions of resonances, hadronic molecules, and light nuclei in heavy-ion collisions. Applying the kinetic freeze-out condition with explicit hydrodynamic calculations for the expanding hadronic phase to the daughter particles of K* mesons, we find that the larger suppression of the yield ratio of K*/K at the Large Hadron Collider (LHC) than at the Relativisitic Heavy Ion Collider (RHIC) compared to the expectations from the statistical hadronization model based on chemical freeze-out parameters reflects the lower kinetic freeze-out temperature at LHC than at RHIC. Furthermore, we point out that for the light nuclei or hadronic molecules that are bound, the freeze-out condition should be applied to the respective particle in the hadronic matter. It is then shown through the rate equation that when the nucleon and pion numbers are kept constant at the chemical freeze-out value during the hadronic phase, the deuteron number quickly approaches an asymptotic value that is close to the statistical model prediction at the chemical freeze-out point. We argue that the reduction seen in K* numbers is a typical result for a particle that has a large natural decay width decaying into daughter particles, while that for deuteron is typical for a stable hadronic bound state.
Hur, Jin; Cho, Jinwoo
2012-01-01
The development of a real-time monitoring tool for the estimation of water quality is essential for efficient management of river pollution in urban areas. The Gap River in Korea is a typical urban river, which is affected by the effluent of a wastewater treatment plant (WWTP) and various anthropogenic activities. In this study, fluorescence excitation-emission matrices (EEM) with parallel factor analysis (PARAFAC) and UV absorption values at 220 nm and 254 nm were applied to evaluate the estimation capabilities for biochemical oxygen demand (BOD), chemical oxygen demand (COD), and total nitrogen (TN) concentrations of the river samples. Three components were successfully identified by the PARAFAC modeling from the fluorescence EEM data, in which each fluorophore group represents microbial humic-like (C1), terrestrial humic-like organic substances (C2), and protein-like organic substances (C3), and UV absorption indices (UV(220) and UV(254)), and the score values of the three PARAFAC components were selected as the estimation parameters for the nitrogen and the organic pollution of the river samples. Among the selected indices, UV(220), C3 and C1 exhibited the highest correlation coefficients with BOD, COD, and TN concentrations, respectively. Multiple regression analysis using UV(220) and C3 demonstrated the enhancement of the prediction capability for TN.
Analysis of a long drought in Piedmont, Italy - Autumn 2001
NASA Astrophysics Data System (ADS)
Gandini, D.; Marchisio, C.; Paesano, G.; Pelosini, P.
2003-04-01
A long period of drought and cold temperatures has characterised the seasons of Autumn 2001 and Winter 2001-2002 on the regions of the southern Alpine chain. The analysis of precipitation's data, collected by the Regional Monitoring network of Piedmont Region (on the south-west side of Alps), shows that they are far below the mean values and very close to the historical minimum of the last century. The six months accumulated precipitation in Turin (Piedmont chief town), from June to December 2001, has reached the historical minimum value of 206 mm in comparison with a mean value of 540 mm. The drought has been remarkable also in the mountain areas with the lack of snowfalls and critical consequences for water reservoirs. At the same time, the number of days with daily averaged temperature below or close to 0°C in December 2001 has been the greatest value of the last 50 years, much higher than the 50 years average, for the whole Piedmont region. This study contains a detailed analysis of observed data to characterise the drought episode, associated with a climatological analysis of meteorological parameters in order to detect the typical large scale pattern of the drought periods and their persistency's features.
Nørrelykke, Simon F; Flyvbjerg, Henrik
2010-07-01
Optical tweezers and atomic force microscope (AFM) cantilevers are often calibrated by fitting their experimental power spectra of Brownian motion. We demonstrate here that if this is done with typical weighted least-squares methods, the result is a bias of relative size between -2/n and +1/n on the value of the fitted diffusion coefficient. Here, n is the number of power spectra averaged over, so typical calibrations contain 10%-20% bias. Both the sign and the size of the bias depend on the weighting scheme applied. Hence, so do length-scale calibrations based on the diffusion coefficient. The fitted value for the characteristic frequency is not affected by this bias. For the AFM then, force measurements are not affected provided an independent length-scale calibration is available. For optical tweezers there is no such luck, since the spring constant is found as the ratio of the characteristic frequency and the diffusion coefficient. We give analytical results for the weight-dependent bias for the wide class of systems whose dynamics is described by a linear (integro)differential equation with additive noise, white or colored. Examples are optical tweezers with hydrodynamic self-interaction and aliasing, calibration of Ornstein-Uhlenbeck models in finance, models for cell migration in biology, etc. Because the bias takes the form of a simple multiplicative factor on the fitted amplitude (e.g. the diffusion coefficient), it is straightforward to remove and the user will need minimal modifications to his or her favorite least-squares fitting programs. Results are demonstrated and illustrated using synthetic data, so we can compare fits with known true values. We also fit some commonly occurring power spectra once-and-for-all in the sense that we give their parameter values and associated error bars as explicit functions of experimental power-spectral values.
Operations analysis (study 2.1): Program manual and users guide for the LOVES computer code
NASA Technical Reports Server (NTRS)
Wray, S. T., Jr.
1975-01-01
Information is provided necessary to use the LOVES Computer Program in its existing state, or to modify the program to include studies not properly handled by the basic model. The Users Guide defines the basic elements assembled together to form the model for servicing satellites in orbit. As the program is a simulation, the method of attack is to disassemble the problem into a sequence of events, each occurring instantaneously and each creating one or more other events in the future. The main driving force of the simulation is the deterministic launch schedule of satellites and the subsequent failure of the various modules which make up the satellites. The LOVES Computer Program uses a random number generator to simulate the failure of module elements and therefore operates over a long span of time typically 10 to 15 years. The sequence of events is varied by making several runs in succession with different random numbers resulting in a Monte Carlo technique to determine statistical parameters of minimum value, average value, and maximum value.
Evaluation of punching shear strength of flat slabs supported on rectangular columns
NASA Astrophysics Data System (ADS)
Filatov, Valery
2018-03-01
The article presents the methodology and results of an analytical study of structural parameters influence on the value of punching force for the joint of columns and flat reinforced concrete slab. This design solution is typical for monolithic reinforced concrete girderless frames, which have a wide application in the construction of high-rise buildings. As the results of earlier studies show the punching shear strength of slabs at rectangular columns can be lower than at square columns with a similar length of the control perimeter. The influence of two structural parameters on the punching strength of the plate is investigated - the ratio of the side of the column cross-section to the effective depth of slab C/d and the ratio of the sides of the rectangular column Cmax/Cmin. According to the results of the study, graphs of reduction the control perimeter depending on the structural parameters are presented for columns square and rectangular cross-sections. Comparison of results obtained by proposed approach and MC2010 simplified method are shown, that proposed approach gives a more conservative estimate of the influence of the structural parameters. A significant influence of the considered structural parameters on punching shear strength of reinforced concrete slabs is confirmed by the results of experimental studies. The results of the study confirm the necessity of taking into account the considered structural parameters when calculating the punching shear strength of flat reinforced concrete slabs and further development of code design methods.
Williams, Loriann; Jackson, Carl P T; Choe, Noreen; Pelland, Lucie; Scott, Stephen H; Reynolds, James N
2014-01-01
Fetal alcohol spectrum disorder (FASD) is associated with a large number of cognitive and sensory-motor deficits. In particular, the accurate assessment of sensory-motor deficits in children with FASD is not always simple and relies on clinical assessment tools that may be coarse and subjective. Here we present a new approach: using robotic technology to accurately and objectively assess motor deficits of children with FASD in a center-out reaching task. A total of 152 typically developing children and 31 children with FASD, all aged between 5 and 18 were assessed using a robotic exoskeleton device coupled with a virtual reality projection system. Children made reaching movements to 8 peripheral targets in a random order. Reach trajectories were subsequently analyzed to extract 12 parameters that had been previously determined to be good descriptors of a reaching movement, and these parameters were compared for each child with FASD to a normative model derived from the performance of the typically developing population. Compared with typically developing children, the children with FASD were found to be significantly impaired on most of the parameters measured, with the greatest deficits found in initial movement direction error. Also, children with FASD tended to fail more parameters than typically developing children: 95% of typically developing children failed fewer than 3 parameters compared with 69% of children with FASD. These results were particularly pronounced for younger children. The current study has shown that robotic technology is a sensitive and powerful tool that provides increased specificity regarding the type of motor problems exhibited by children with FASD. The high frequency of motor deficits in children with FASD suggests that interventions aimed at stimulating and/or improving motor development should routinely be considered for this population. Copyright © 2013 by the Research Society on Alcoholism.
Zhang, Y-X; Liu, Y; Xue, Y; Yang, L-Y; Song, G-D; Zhao, L
2016-06-01
We explored the relationship between atmospheric concentrations of fine particulate matter and children cough variant asthma. 48 children all diagnosed with cough variant asthma were placed in the cough asthma group while 50 children suffering from typical asthma were place in typical asthma group. We also had 50 cases of chronic pneumonia (the pneumonia group) and 50 cases of healthy children (the control group). We calculated the average PM 2.5 and temperature values during spring, summer, autumn and winter and monitored serum lymphocyte ratio, CD4+/CD8+T, immunoglobulin IgE, ventilatory index and high-sensitivity C-reactive protein (hs-CRP) levels. Our results showed that PM 2.5 values in spring and winter were remarkably higher compared to other seasons. Correlated analysis demonstrated that the onset of cough asthma group was happening in spring. The onset of typical asthma group happened mostly in winter, followed by spring. We established a positive correlation between the onset of asthma of cough asthma group and PM 2.5 value (r = 0.623, p = 0.017), and there was also a positive correlation between the onset of asthma of typical asthma group and PM 2.5 value (r = 0.714, p = 0.015). Our results showed that lymphocyte ratio and IgE level in the cough asthma group and the typical asthma group were significantly higher. CD4+/CD8+T was significantly lower in the cough asthma group and the typical asthma group. The hs-CRP level in cough asthma, typical asthma and pneumonia groups were significantly higher than that of the control group. The FEV1/predicted value, FEV1/FVC and MMEF/predicted value in the cough asthma group and the typical asthma group were significantly lower than those in other groups, however when comparing between two groups respectively, the difference was not statistically significant. Our findings showed that PM2.5 was related to the onset of children cough variant asthma. PM2.5 reduced immune regulation and ventilatory function.
NASA Technical Reports Server (NTRS)
Carey, Lawrence D.; Petersen, Walter A.
2011-01-01
The estimation of rain drop size distribution (DSD) parameters from polarimetric radar observations is accomplished by first establishing a relationship between differential reflectivity (Z(sub dr)) and the central tendency of the rain DSD such as the median volume diameter (D0). Since Z(sub dr) does not provide a direct measurement of DSD central tendency, the relationship is typically derived empirically from rain drop and radar scattering models (e.g., D0 = F[Z (sub dr)] ). Past studies have explored the general sensitivity of these models to temperature, radar wavelength, the drop shape vs. size relation, and DSD variability. Much progress has been made in recent years in measuring the drop shape and DSD variability using surface-based disdrometers, such as the 2D Video disdrometer (2DVD), and documenting their impact on polarimetric radar techniques. In addition to measuring drop shape, another advantage of the 2DVD over earlier impact type disdrometers is its ability to resolve drop diameters in excess of 5 mm. Despite this improvement, the sampling limitations of a disdrometer, including the 2DVD, make it very difficult to adequately measure the maximum drop diameter (D(sub max)) present in a typical radar resolution volume. As a result, D(sub max) must still be assumed in the drop and radar models from which D0 = F[Z(sub dr)] is derived. Since scattering resonance at C-band wavelengths begins to occur in drop diameters larger than about 5 mm, modeled C-band radar parameters, particularly Z(sub dr), can be sensitive to D(sub max) assumptions. In past C-band radar studies, a variety of D(sub max) assumptions have been made, including the actual disdrometer estimate of D(sub max) during a typical sampling period (e.g., 1-3 minutes), D(sub max) = C (where C is constant at values from 5 to 8 mm), and D(sub max) = M*D0 (where the constant multiple, M, is fixed at values ranging from 2.5 to 3.5). The overall objective of this NASA Global Precipitation Measurement Mission (GPM/PMM Science Team)-funded study is to document the sensitivity of DSD measurements, including estimates of D0, from C-band Z(sub dr) and reflectivity to this range of D(sub max) assumptions. For this study, GPM Ground Validation 2DVD's were operated under the scanning domain of the UAHuntsville ARMOR C-band dual-polarimetric radar. Approximately 7500 minutes of DSD data were collected and processed to create gamma size distribution parameters using a truncated method of moments approach. After creating the gamma parameter datasets the DSD's were then used as input to a T-matrix model for computation of polarimetric radar moments at C-band. All necessary model parameterizations, such as temperature, drop shape, and drop fall mode, were fixed at typically accepted values while the D(sub max) assumption was allowed to vary in sensitivity tests. By hypothesizing a DSD model with D(sub max) (fit) from which the empirical fit to D0 = F[Z(sub dr)] was derived via non-linear least squares regression and a separate reference DSD model with D(sub max) (truth), bias and standard error in D0 retrievals were estimated in the presence of Z(sub dr) measurement error and hypothesized mismatch in D(sub max) assumptions. Although the normalized standard error for D0 = F[Z(sub dr)r] can increase slightly (as much as from 11% to 16% for all 7500 DSDs) when the D(sub max) (fit) does not match D(sub max) (truth), the primary impact of uncertainty in D(sub max) is a potential increase in normalized bias error in D0 (from 0% to as much as 10% over all 7500 DSDs, depending on the extent of the mismatch between D(sub max) (fit) and D(sub max) (truth)). For DSDs characterized by large Z(sub dr) (Z(sub dr) > 1.5 to 2.0 dB), the normalized bias error for D0 estimation at C-band is sometimes unacceptably large (> 10%), again depending on the extent of the hypothesized D(sub max) mismatch. Modeled errors in D0 retrievals from Z(sub dr) at C-band are demonstrated in detail and comparedo similar modeled retrieval errors at S-band and X-band where the sensitivity to D(sub max) is expected to be less. The impact of D(sub max) assumptions to the retrieval of other DSD parameters such as Nw, the liquid water content normalized intercept parameter, are also explored. Likely implications for DSD retrievals using C-band polarimetric radar for GPM are assessed by considering current community knowledge regarding D(sub max) and quantifying the statistical distribution of Z(sub dr) from ARMOR over a large variety of meteorological conditions. Based on these results and the prevalence of C-band polarimetric radars worldwide, a call for more emphasis on constraining our observational estimate of D(sub max) within a typical radar resolution volume is made
A study of numerical methods for hyperbolic conservation laws with stiff source terms
NASA Technical Reports Server (NTRS)
Leveque, R. J.; Yee, H. C.
1988-01-01
The proper modeling of nonequilibrium gas dynamics is required in certain regimes of hypersonic flow. For inviscid flow this gives a system of conservation laws coupled with source terms representing the chemistry. Often a wide range of time scales is present in the problem, leading to numerical difficulties as in stiff systems of ordinary differential equations. Stability can be achieved by using implicit methods, but other numerical difficulties are observed. The behavior of typical numerical methods on a simple advection equation with a parameter-dependent source term was studied. Two approaches to incorporate the source term were utilized: MacCormack type predictor-corrector methods with flux limiters, and splitting methods in which the fluid dynamics and chemistry are handled in separate steps. Various comparisons over a wide range of parameter values were made. In the stiff case where the solution contains discontinuities, incorrect numerical propagation speeds are observed with all of the methods considered. This phenomenon is studied and explained.
Manufacturing complexity analysis
NASA Technical Reports Server (NTRS)
Delionback, L. M.
1977-01-01
The analysis of the complexity of a typical system is presented. Starting with the subsystems of an example system, the step-by-step procedure for analysis of the complexity of an overall system is given. The learning curves for the various subsystems are determined as well as the concurrent numbers of relevant design parameters. Then trend curves are plotted for the learning curve slopes versus the various design-oriented parameters, e.g. number of parts versus slope of learning curve, or number of fasteners versus slope of learning curve, etc. Representative cuts are taken from each trend curve, and a figure-of-merit analysis is made for each of the subsystems. Based on these values, a characteristic curve is plotted which is indicative of the complexity of the particular subsystem. Each such characteristic curve is based on a universe of trend curve data taken from data points observed for the subsystem in question. Thus, a characteristic curve is developed for each of the subsystems in the overall system.
Fine-scale patterns of population stratification confound rare variant association tests.
O'Connor, Timothy D; Kiezun, Adam; Bamshad, Michael; Rich, Stephen S; Smith, Joshua D; Turner, Emily; Leal, Suzanne M; Akey, Joshua M
2013-01-01
Advances in next-generation sequencing technology have enabled systematic exploration of the contribution of rare variation to Mendelian and complex diseases. Although it is well known that population stratification can generate spurious associations with common alleles, its impact on rare variant association methods remains poorly understood. Here, we performed exhaustive coalescent simulations with demographic parameters calibrated from exome sequence data to evaluate the performance of nine rare variant association methods in the presence of fine-scale population structure. We find that all methods have an inflated spurious association rate for parameter values that are consistent with levels of differentiation typical of European populations. For example, at a nominal significance level of 5%, some test statistics have a spurious association rate as high as 40%. Finally, we empirically assess the impact of population stratification in a large data set of 4,298 European American exomes. Our results have important implications for the design, analysis, and interpretation of rare variant genome-wide association studies.
Competing s-wave orders from Einstein-Gauss-Bonnet gravity
NASA Astrophysics Data System (ADS)
Li, Zhi-Hong; Fu, Yun-Chang; Nie, Zhang-Yu
2018-01-01
In this paper, the holographic superconductor model with two s-wave orders from 4 + 1 dimensional Einstein-Gauss-Bonnet gravity is explored in the probe limit. At different values of the Gauss-Bonnet coefficient α, we study the influence of tuning the mass and charge parameters of the bulk scalar field on the free energy curve of condensed solution with signal s-wave order, and compare the difference of tuning the two different parameters while the changes of the critical temperature are the same. Based on the above results, it is indicated that the two free energy curves of different s-wave orders can have one or two intersection points, where two typical phase transition behaviors of the s + s coexistent phase, including the reentrant phase transition near the Chern-Simons limit α = 0.25, can be found. We also give an explanation to the nontrivial behavior of the Tc- α curves near the Chern-Simons limit, which might be heuristic to understand the origin of the reentrant behavior near the Chern-Simons limit.
Array magnetics modal analysis for the DIII-D tokamak based on localized time-series modelling
Olofsson, K. Erik J.; Hanson, Jeremy M.; Shiraki, Daisuke; ...
2014-07-14
Here, time-series analysis of magnetics data in tokamaks is typically done using block-based fast Fourier transform methods. This work presents the development and deployment of a new set of algorithms for magnetic probe array analysis. The method is based on an estimation technique known as stochastic subspace identification (SSI). Compared with the standard coherence approach or the direct singular value decomposition approach, the new technique exhibits several beneficial properties. For example, the SSI method does not require that frequencies are orthogonal with respect to the timeframe used in the analysis. Frequencies are obtained directly as parameters of localized time-series models.more » The parameters are extracted by solving small-scale eigenvalue problems. Applications include maximum-likelihood regularized eigenmode pattern estimation, detection of neoclassical tearing modes, including locked mode precursors, and automatic clustering of modes, and magnetics-pattern characterization of sawtooth pre- and postcursors, edge harmonic oscillations and fishbones.« less
NASA Technical Reports Server (NTRS)
Collis, R. T. H.
1969-01-01
Lidar is an optical radar technique employing laser energy. Variations in signal intensity as a function of range provide information on atmospheric constituents, even when these are too tenuous to be normally visible. The theoretical and technical basis of the technique is described and typical values of the atmospheric optical parameters given. The significance of these parameters to atmospheric and meteorological problems is discussed. While the basic technique can provide valuable information about clouds and other material in the atmosphere, it is not possible to determine particle size and number concentrations precisely. There are also inherent difficulties in evaluating lidar observations. Nevertheless, lidar can provide much useful information as is shown by illustrations. These include lidar observations of: cirrus cloud, showing mountain wave motions; stratification in clear air due to the thermal profile near the ground; determinations of low cloud and visibility along an air-field approach path; and finally the motion and internal structure of clouds of tracer materials (insecticide spray and explosion-caused dust) which demonstrate the use of lidar for studying transport and diffusion processes.
TAIR- TRANSONIC AIRFOIL ANALYSIS COMPUTER CODE
NASA Technical Reports Server (NTRS)
Dougherty, F. C.
1994-01-01
The Transonic Airfoil analysis computer code, TAIR, was developed to employ a fast, fully implicit algorithm to solve the conservative full-potential equation for the steady transonic flow field about an arbitrary airfoil immersed in a subsonic free stream. The full-potential formulation is considered exact under the assumptions of irrotational, isentropic, and inviscid flow. These assumptions are valid for a wide range of practical transonic flows typical of modern aircraft cruise conditions. The primary features of TAIR include: a new fully implicit iteration scheme which is typically many times faster than classical successive line overrelaxation algorithms; a new, reliable artifical density spatial differencing scheme treating the conservative form of the full-potential equation; and a numerical mapping procedure capable of generating curvilinear, body-fitted finite-difference grids about arbitrary airfoil geometries. Three aspects emphasized during the development of the TAIR code were reliability, simplicity, and speed. The reliability of TAIR comes from two sources: the new algorithm employed and the implementation of effective convergence monitoring logic. TAIR achieves ease of use by employing a "default mode" that greatly simplifies code operation, especially by inexperienced users, and many useful options including: several airfoil-geometry input options, flexible user controls over program output, and a multiple solution capability. The speed of the TAIR code is attributed to the new algorithm and the manner in which it has been implemented. Input to the TAIR program consists of airfoil coordinates, aerodynamic and flow-field convergence parameters, and geometric and grid convergence parameters. The airfoil coordinates for many airfoil shapes can be generated in TAIR from just a few input parameters. Most of the other input parameters have default values which allow the user to run an analysis in the default mode by specifing only a few input parameters. Output from TAIR may include aerodynamic coefficients, the airfoil surface solution, convergence histories, and printer plots of Mach number and density contour maps. The TAIR program is written in FORTRAN IV for batch execution and has been implemented on a CDC 7600 computer with a central memory requirement of approximately 155K (octal) of 60 bit words. The TAIR program was developed in 1981.
Cope, Davis; Blakeslee, Barbara; McCourt, Mark E
2013-05-01
The difference-of-Gaussians (DOG) filter is a widely used model for the receptive field of neurons in the retina and lateral geniculate nucleus (LGN) and is a potential model in general for responses modulated by an excitatory center with an inhibitory surrounding region. A DOG filter is defined by three standard parameters: the center and surround sigmas (which define the variance of the radially symmetric Gaussians) and the balance (which defines the linear combination of the two Gaussians). These parameters are not directly observable and are typically determined by nonlinear parameter estimation methods applied to the frequency response function. DOG filters show both low-pass (optimal response at zero frequency) and bandpass (optimal response at a nonzero frequency) behavior. This paper reformulates the DOG filter in terms of a directly observable parameter, the zero-crossing radius, and two new (but not directly observable) parameters. In the two-dimensional parameter space, the exact region corresponding to bandpass behavior is determined. A detailed description of the frequency response characteristics of the DOG filter is obtained. It is also found that the directly observable optimal frequency and optimal gain (the ratio of the response at optimal frequency to the response at zero frequency) provide an alternate coordinate system for the bandpass region. Altogether, the DOG filter and its three standard implicit parameters can be determined by three directly observable values. The two-dimensional bandpass region is a potential tool for the analysis of populations of DOG filters (for example, populations of neurons in the retina or LGN), because the clustering of points in this parameter space may indicate an underlying organizational principle. This paper concentrates on circular Gaussians, but the results generalize to multidimensional radially symmetric Gaussians and are given as an appendix.
Family of columns isospectral to gravity-loaded columns with tip force: A discrete approach
NASA Astrophysics Data System (ADS)
Ramachandran, Nirmal; Ganguli, Ranjan
2018-06-01
A discrete model is introduced to analyze transverse vibration of straight, clamped-free (CF) columns of variable cross-sectional geometry under the influence of gravity and a constant axial force at the tip. The discrete model is used to determine critical combinations of loading parameters - a gravity parameter and a tip force parameter - that cause onset of dynamic instability in the CF column. A methodology, based on matrix-factorization, is described to transform the discrete model into a family of models corresponding to weightless and unloaded clamped-free (WUCF) columns, each with a transverse vibration spectrum isospectral to the original model. Characteristics of models in this isospectral family are dependent on three transformation parameters. A procedure is discussed to convert the isospectral discrete model description into geometric description of realistic columns i.e. from the discrete model, we construct isospectral WUCF columns with rectangular cross-sections varying in width and depth. As part of numerical studies to demonstrate efficacy of techniques presented, frequency parameters of a uniform column and three types of tapered CF columns under different combinations of loading parameters are obtained from the discrete model. Critical combinations of these parameters for a typical tapered column are derived. These results match with published results. Example CF columns, under arbitrarily-chosen combinations of loading parameters are considered and for each combination, isospectral WUCF columns are constructed. Role of transformation parameters in determining characteristics of isospectral columns is discussed and optimum values are deduced. Natural frequencies of these WUCF columns computed using Finite Element Method (FEM) match well with those of the given gravity-loaded CF column with tip force, hence confirming isospectrality.
Adaptive Value Normalization in the Prefrontal Cortex Is Reduced by Memory Load.
Holper, L; Van Brussel, L D; Schmidt, L; Schulthess, S; Burke, C J; Louie, K; Seifritz, E; Tobler, P N
2017-01-01
Adaptation facilitates neural representation of a wide range of diverse inputs, including reward values. Adaptive value coding typically relies on contextual information either obtained from the environment or retrieved from and maintained in memory. However, it is unknown whether having to retrieve and maintain context information modulates the brain's capacity for value adaptation. To address this issue, we measured hemodynamic responses of the prefrontal cortex (PFC) in two studies on risky decision-making. In each trial, healthy human subjects chose between a risky and a safe alternative; half of the participants had to remember the risky alternatives, whereas for the other half they were presented visually. The value of safe alternatives varied across trials. PFC responses adapted to contextual risk information, with steeper coding of safe alternative value in lower-risk contexts. Importantly, this adaptation depended on working memory load, such that response functions relating PFC activity to safe values were steeper with presented versus remembered risk. An independent second study replicated the findings of the first study and showed that similar slope reductions also arose when memory maintenance demands were increased with a secondary working memory task. Formal model comparison showed that a divisive normalization model fitted effects of both risk context and working memory demands on PFC activity better than alternative models of value adaptation, and revealed that reduced suppression of background activity was the critical parameter impairing normalization with increased memory maintenance demand. Our findings suggest that mnemonic processes can constrain normalization of neural value representations.
Reduction-Triggered Self-Assembly of Nanoscale Molybdenum Oxide Molecular Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, Panchao; Wu, Bin; Li, Tao
A 2.9 nm molybdenum oxide cluster {Mo 132} (Formula: [Mo VI 72Mo V 60O 372(CH 3COO) 30(H 2O) 72] 42-) can be obtained by reducing ammonium molybdate with hydrazine sulfate in weakly acidic CH 3COOH/CH 3COO- buffers. This reaction has been monitored by time-resolved UV-Vis, 1H-NMR, small angle X-ray/neutron scattering, and X-ray absorption near edge structure spectroscopy. The growth of {Mo 132} cluster shows a typical sigmoid curve, suggesting a multi-step assembly mechanism for this reaction. The reaction starts with a lag phase period when partial MoVI centers of molybdate precursors are reduced to form {MoV2(acetate)} structures under the coordinationmore » effect of the acetate groups. Once the concentration of {Mo V 2(acetate)} reaches a critical value, it triggers the assembly of Mo V and Mo VI species into {Mo 132} clusters. Parameters such as the type and amount of reducing agent, the pH, the type of cation, and the type of organic ligand in the reaction buffer, have been studied for the roles they play in the formation of the target clusters.Understanding the formation mechanism of giant molecular clusters is essential for rational design and synthesis of cluster-based nanomaterials with required morphologies and functionalities. Here, typical synthetic reactions of a 2.9 nm spherical molybdenum oxide cluster, {Mo 132} (formula: [Mo VI 72Mo V 60O 372(CH 3COO) 30(H 2O) 72] 42), with systematically varied reaction parameters have been fully explored to determine the morphologies and concentration of products, reduction of metal centers, and chemical environments of the organic ligands. The growth of these clusters shows a typical sigmoid curve, suggesting a general multistep self-assembly mechanism for the formation of giant molecular clusters. The reaction starts with a lag phase period when partial MoVI centers of molybdate precursors are reduced to form {Mo V 2(acetate)} structures under the coordination effect of the acetate groups. Once the concentration of {MoV2(acetate)} reaches a critical value, it triggers the co-assembly of Mo V and Mo VI species into the giant clusters.« less
Reduction-Triggered Self-Assembly of Nanoscale Molybdenum Oxide Molecular Clusters
Yin, Panchao; Wu, Bin; Li, Tao; ...
2016-07-26
A 2.9 nm molybdenum oxide cluster {Mo 132} (Formula: [Mo VI 72Mo V 60O 372(CH 3COO) 30(H 2O) 72] 42-) can be obtained by reducing ammonium molybdate with hydrazine sulfate in weakly acidic CH 3COOH/CH 3COO- buffers. This reaction has been monitored by time-resolved UV-Vis, 1H-NMR, small angle X-ray/neutron scattering, and X-ray absorption near edge structure spectroscopy. The growth of {Mo 132} cluster shows a typical sigmoid curve, suggesting a multi-step assembly mechanism for this reaction. The reaction starts with a lag phase period when partial MoVI centers of molybdate precursors are reduced to form {MoV2(acetate)} structures under the coordinationmore » effect of the acetate groups. Once the concentration of {Mo V 2(acetate)} reaches a critical value, it triggers the assembly of Mo V and Mo VI species into {Mo 132} clusters. Parameters such as the type and amount of reducing agent, the pH, the type of cation, and the type of organic ligand in the reaction buffer, have been studied for the roles they play in the formation of the target clusters.Understanding the formation mechanism of giant molecular clusters is essential for rational design and synthesis of cluster-based nanomaterials with required morphologies and functionalities. Here, typical synthetic reactions of a 2.9 nm spherical molybdenum oxide cluster, {Mo 132} (formula: [Mo VI 72Mo V 60O 372(CH 3COO) 30(H 2O) 72] 42), with systematically varied reaction parameters have been fully explored to determine the morphologies and concentration of products, reduction of metal centers, and chemical environments of the organic ligands. The growth of these clusters shows a typical sigmoid curve, suggesting a general multistep self-assembly mechanism for the formation of giant molecular clusters. The reaction starts with a lag phase period when partial MoVI centers of molybdate precursors are reduced to form {Mo V 2(acetate)} structures under the coordination effect of the acetate groups. Once the concentration of {MoV2(acetate)} reaches a critical value, it triggers the co-assembly of Mo V and Mo VI species into the giant clusters.« less
NASA Astrophysics Data System (ADS)
Veneziano, D.; Langousis, A.; Lepore, C.
2009-12-01
The annual maximum of the average rainfall intensity in a period of duration d, Iyear(d), is typically assumed to have generalized extreme value (GEV) distribution. The shape parameter k of that distribution is especially difficult to estimate from either at-site or regional data, making it important to constraint k using theoretical arguments. In the context of multifractal representations of rainfall, we observe that standard theoretical estimates of k from extreme value (EV) and extreme excess (EE) theories do not apply, while estimates from large deviation (LD) theory hold only for very small d. We then propose a new theoretical estimator based on fitting GEV models to the numerically calculated distribution of Iyear(d). A standard result from EV and EE theories is that k depends on the tail behavior of the average rainfall in d, I(d). This result holds if Iyear(d) is the maximum of a sufficiently large number n of variables, all distributed like I(d); therefore its applicability hinges on whether n = 1yr/d is large enough and the tail of I(d) is sufficiently well known. One typically assumes that at least for small d the former condition is met, but poor knowledge of the upper tail of I(d) remains an obstacle for all d. In fact, in the case of multifractal rainfall, also the first condition is not met because, irrespective of d, 1yr/d is too small (Veneziano et al., 2009, WRR, in press). Applying large deviation (LD) theory to this multifractal case, we find that, as d → 0, Iyear(d) approaches a GEV distribution whose shape parameter kLD depends on a region of the distribution of I(d) well below the upper tail, is always positive (in the EV2 range), is much larger than the value predicted by EV and EE theories, and can be readily found from the scaling properties of I(d). The scaling properties of rainfall can be inferred also from short records, but the limitation remains that the result holds under d → 0 not for finite d. Therefore, for different reasons, none of the above asymptotic theories applies to Iyear(d). In practice, one is interested in the distribution of Iyear(d) over a finite range of averaging durations d and return periods T. Using multifractal representations of rainfall, we have numerically calculated the distribution of Iyear(d) and found that, although not GEV, the distribution can be accurately approximated by a GEV model. The best-fitting parameter k depends on d, but is insensitive to the scaling properties of rainfall and the range of return periods T used for fitting. We have obtained a default expression for k(d) and compared it with estimates from historical rainfall records. The theoretical function tracks well the empirical dependence on d, although it generally overestimates the empirical k values, possibly due to deviations of rainfall from perfect scaling. This issue is under investigation.
Finding Top-kappa Unexplained Activities in Video
2012-03-09
parameters that define an UAP instance affect the running time by varying the values of each parameter while keeping the others fixed to a default...value. Runtime of Top-k TUA. Table 1 reports the values we considered for each parameter along with the corresponding default value. Parameter Values...Default value k 1, 2, 5, All All τ 0.4, 0.6, 0.8 0.6 L 160, 200, 240, 280 200 # worlds 7 E+04, 4 E+05, 2 E+07 2 E+07 TABLE 1: Parameter values used in
Modelling of Cosmic Molecular Masers: Introduction to a Computation Cookbook
NASA Astrophysics Data System (ADS)
Sobolev, Andrej M.; Gray, Malcolm D.
2012-07-01
Numerical modeling of molecular masers is necessary in order to understand their nature and diagnostic capabilities. Model construction requires elaboration of a basic description which allows computation, that is a definition of the parameter space and basic physical relations. Usually, this requires additional thorough studies that can consist of the following stages/parts: relevant molecular spectroscopy and collisional rate coefficients; conditions in and around the masing region (that part of space where population inversion is realized); geometry and size of the masing region (including the question of whether maser spots are discrete clumps or line-of-sight correlations in a much bigger region) and propagation of maser radiation. Output of the maser computer modeling can have the following forms: exploration of parameter space (where do inversions appear in particular maser transitions and their combinations, which parameter values describe a `typical' source, and so on); modeling of individual sources (line flux ratios, spectra, images and their variability); analysis of the pumping mechanism; predictions (new maser transitions, correlations in variability of different maser transitions, and the like). Described schemes (constituents and hierarchy) of the model input and output are based mainly on the experience of the authors and make no claim to be dogmatic.
Halford, Alexa J.; Fraser, Brian J; Morley, Steven Karl; ...
2016-06-08
As electromagnetic ion cyclotron (EMIC) waves may play an important role in radiation belt dynamics, there has been a push to better include them into global simulations. How to best include EMIC wave effects is still an open question. Recently many studies have attempted to parameterize EMIC waves and their characteristics by geomagnetic indices. However, this does not fully take into account important physics related to the phase of a geomagnetic storm. In this paper we first consider how EMIC wave occurrence varies with the phase of a geomagnetic storm and the SYM-H, AE, and Kp indices. Here we showmore » that the storm phase plays an important role in the occurrence probability of EMIC waves. The occurrence rates for a given value of a geomagnetic index change based on the geomagnetic condition. Then in this study we also describe the typical plasma and wave parameters observed in L and magnetic local time for quiet, storm, and storm phase. These results are given in a tabular format in the supporting information so that more accurate statistics of EMIC wave parameters can be incorporated into modeling efforts.« less
High-Level Performance Modeling of SAR Systems
NASA Technical Reports Server (NTRS)
Chen, Curtis
2006-01-01
SAUSAGE (Still Another Utility for SAR Analysis that s General and Extensible) is a computer program for modeling (see figure) the performance of synthetic- aperture radar (SAR) or interferometric synthetic-aperture radar (InSAR or IFSAR) systems. The user is assumed to be familiar with the basic principles of SAR imaging and interferometry. Given design parameters (e.g., altitude, power, and bandwidth) that characterize a radar system, the software predicts various performance metrics (e.g., signal-to-noise ratio and resolution). SAUSAGE is intended to be a general software tool for quick, high-level evaluation of radar designs; it is not meant to capture all the subtleties, nuances, and particulars of specific systems. SAUSAGE was written to facilitate the exploration of engineering tradeoffs within the multidimensional space of design parameters. Typically, this space is examined through an iterative process of adjusting the values of the design parameters and examining the effects of the adjustments on the overall performance of the system at each iteration. The software is designed to be modular and extensible to enable consideration of a variety of operating modes and antenna beam patterns, including, for example, strip-map and spotlight SAR acquisitions, polarimetry, burst modes, and squinted geometries.
Extraction of Modal Parameters from Spacecraft Flight Data
NASA Technical Reports Server (NTRS)
James, George H.; Cao, Timothy T.; Fogt, Vincent A.; Wilson, Robert L.; Bartkowicz, Theodore J.
2010-01-01
The modeled response of spacecraft systems must be validated using flight data as ground tests cannot adequately represent the flight. Tools from the field of operational modal analysis would typically be brought to bear on such structures. However, spacecraft systems have several complicated issues: 1. High amplitudes of loads; 2. Compressive loads on the vehicle in flight; 3. Lack of generous time-synchronized flight data; 4. Changing properties during the flight; and 5. Major vehicle changes due to staging. A particularly vexing parameter to extract is modal damping. Damping estimation has become a more critical issue as new mass-driven vehicle designs seek to use the highest damping value possible. The paper will focus on recent efforts to utilize spacecraft flight data to extract system parameters, with a special interest on modal damping. This work utilizes the analysis of correlation functions derived from a sliding window technique applied to the time record. Four different case studies are reported in the sequence that drove the authors understanding. The insights derived from these four exercises are preliminary conclusions for the general state-of-the-art, but may be of specific utility to similar problems approached with similar tools.
NASA Astrophysics Data System (ADS)
Jie, M.; Zhang, J.; Guo, B. B.
2017-12-01
As a typical distributed hydrological model, the SWAT model also has a challenge in calibrating parameters and analysis their uncertainty. This paper chooses the Chaohe River Basin China as the study area, through the establishment of the SWAT model, loading the DEM data of the Chaohe river basin, the watershed is automatically divided into several sub-basins. Analyzing the land use, soil and slope which are on the basis of the sub-basins and calculating the hydrological response unit (HRU) of the study area, after running SWAT model, the runoff simulation values in the watershed are obtained. On this basis, using weather data, known daily runoff of three hydrological stations, combined with the SWAT-CUP automatic program and the manual adjustment method are used to analyze the multi-site calibration of the model parameters. Furthermore, the GLUE algorithm is used to analyze the parameters uncertainty of the SWAT model. Through the sensitivity analysis, calibration and uncertainty study of SWAT, the results indicate that the parameterization of the hydrological characteristics of the Chaohe river is successful and feasible which can be used to simulate the Chaohe river basin.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halford, Alexa J.; Fraser, Brian J; Morley, Steven Karl
As electromagnetic ion cyclotron (EMIC) waves may play an important role in radiation belt dynamics, there has been a push to better include them into global simulations. How to best include EMIC wave effects is still an open question. Recently many studies have attempted to parameterize EMIC waves and their characteristics by geomagnetic indices. However, this does not fully take into account important physics related to the phase of a geomagnetic storm. In this paper we first consider how EMIC wave occurrence varies with the phase of a geomagnetic storm and the SYM-H, AE, and Kp indices. Here we showmore » that the storm phase plays an important role in the occurrence probability of EMIC waves. The occurrence rates for a given value of a geomagnetic index change based on the geomagnetic condition. Then in this study we also describe the typical plasma and wave parameters observed in L and magnetic local time for quiet, storm, and storm phase. These results are given in a tabular format in the supporting information so that more accurate statistics of EMIC wave parameters can be incorporated into modeling efforts.« less
Semeniuk, Janusz; Kaczmarski, Maciej
2007-10-14
To assess values of 24-h esophageal pH-monitoring parameters with dual-channel probe (distal and proximal channel) in children suspected of gastroesophageal reflux disease (GERD). 264 children suspected of gastroesophageal reflux (GER) were enrolled in a study (mean age c=20.78+/-17.23 mo). The outcomes of this study, immunoallerrgological tests and positive result of oral food challenge test with a potentially noxious nutrient, enabled to qualify children into particular study groups. 32 (12.1%) infants (group 1) had physiological GER diagnosed. Pathological acid GER was confirmed in 138 (52.3%) children. Primary GER was diagnosed in 76 (28.8%) children (group 2) and GER secondary to allergy to cow milk protein and/or other food (CMA/FA) in 62 (23.5%) children (group 3). 32 (12.1%) of them had CMA/FA (group 4-reference group), and in remaining 62 (23.5%) children neither GER nor CMA/FA was confirmed (group 5). Mean values of pH monitoring parameters measured in distal and proximal channel were analyzed in individual groups. This analysis showed statistically significant differentiation of mean values in the case of: number of episodes of acid GER, episodes of acid GER lasting >5 min, duration of the longest episode of acid GER in both channels, acid GER index total and supine in proximal channel. Statistically significant differences of mean values among examined groups, especially between group 2 and 3 in the case of total acid GER index (only distal channel) were confirmed. 24-h esophageal pH monitoring confirmed pathological acid GER in 52.3% of children with typical and atypical symptoms of GERD. The similar pH-monitoring values obtained in group 2 and 3 confirm the necessity of implementation of differential diagnosis for primary vs secondary cause of GER.
Kwak, Dai Soon; Tao, Quang Bang; Todo, Mitsugu; Jeon, Insu
2012-05-01
Knee joint implants developed by western companies have been imported to Korea and used for Korean patients. However, many clinical problems occur in knee joints of Korean patients after total knee joint replacement owing to the geometric mismatch between the western implants and Korean knee joint structures. To solve these problems, a method to determine the representative dimension parameter values of Korean knee joints is introduced to aid in the design of knee joint implants appropriate for Korean patients. Measurements of the dimension parameters of 88 male Korean knee joint subjects were carried out. The distribution of the subjects versus each measured parameter value was investigated. The measured dimension parameter values of each parameter were grouped by suitable intervals called the "size group," and average values of the size groups were calculated. The knee joint subjects were grouped as the "patient group" based on "size group numbers" of each parameter. From the iterative calculations to decrease the errors between the average dimension parameter values of each "patient group" and the dimension parameter values of the subjects, the average dimension parameter values that give less than the error criterion were determined to be the representative dimension parameter values for designing knee joint implants for Korean patients.
Number of independent parameters in the potentiometric titration of humic substances.
Lenoir, Thomas; Manceau, Alain
2010-03-16
With the advent of high-precision automatic titrators operating in pH stat mode, measuring the mass balance of protons in solid-solution mixtures against the pH of natural and synthetic polyelectrolytes is now routine. However, titration curves of complex molecules typically lack obvious inflection points, which complicates their analysis despite the high-precision measurements. The calculation of site densities and median proton affinity constants (pK) from such data can lead to considerable covariance between fit parameters. Knowing the number of independent parameters that can be freely varied during the least-squares minimization of a model fit to titration data is necessary to improve the model's applicability. This number was calculated for natural organic matter by applying principal component analysis (PCA) to a reference data set of 47 independent titration curves from fulvic and humic acids measured at I = 0.1 M. The complete data set was reconstructed statistically from pH 3.5 to 9.8 with only six parameters, compared to seven or eight generally adjusted with common semi-empirical speciation models for organic matter, and explains correlations that occur with the higher number of parameters. Existing proton-binding models are not necessarily overparametrized, but instead titration data lack the sensitivity needed to quantify the full set of binding properties of humic materials. Model-independent conditional pK values can be obtained directly from the derivative of titration data, and this approach is the most conservative. The apparent proton-binding constants of the 23 fulvic acids (FA) and 24 humic acids (HA) derived from a high-quality polynomial parametrization of the data set are pK(H,COOH)(FA) = 4.18 +/- 0.21, pK(H,Ph-OH)(FA) = 9.29 +/- 0.33, pK(H,COOH)(HA) = 4.49 +/- 0.18, and pK(H,Ph-OH)(HA) = 9.29 +/- 0.38. Their values at other ionic strengths are more reliably calculated with the empirical Davies equation than any existing model fit.
Carvajal, Guido; Roser, David J; Sisson, Scott A; Keegan, Alexandra; Khan, Stuart J
2015-11-15
Risk management for wastewater treatment and reuse have led to growing interest in understanding and optimising pathogen reduction during biological treatment processes. However, modelling pathogen reduction is often limited by poor characterization of the relationships between variables and incomplete knowledge of removal mechanisms. The aim of this paper was to assess the applicability of Bayesian belief network models to represent associations between pathogen reduction, and operating conditions and monitoring parameters and predict AS performance. Naïve Bayes and semi-naïve Bayes networks were constructed from an activated sludge dataset including operating and monitoring parameters, and removal efficiencies for two pathogens (native Giardia lamblia and seeded Cryptosporidium parvum) and five native microbial indicators (F-RNA bacteriophage, Clostridium perfringens, Escherichia coli, coliforms and enterococci). First we defined the Bayesian network structures for the two pathogen log10 reduction values (LRVs) class nodes discretized into two states (< and ≥ 1 LRV) using two different learning algorithms. Eight metrics, such as Prediction Accuracy (PA) and Area Under the receiver operating Curve (AUC), provided a comparison of model prediction performance, certainty and goodness of fit. This comparison was used to select the optimum models. The optimum Tree Augmented naïve models predicted removal efficiency with high AUC when all system parameters were used simultaneously (AUCs for C. parvum and G. lamblia LRVs of 0.95 and 0.87 respectively). However, metrics for individual system parameters showed only the C. parvum model was reliable. By contrast individual parameters for G. lamblia LRV prediction typically obtained low AUC scores (AUC < 0.81). Useful predictors for C. parvum LRV included solids retention time, turbidity and total coliform LRV. The methodology developed appears applicable for predicting pathogen removal efficiency in water treatment systems generally. Copyright © 2015 Elsevier Ltd. All rights reserved.
Range Performance of Bombers Powered by Turbine-Propeller Power Plants
NASA Technical Reports Server (NTRS)
Cline, Charles W.
1950-01-01
Calculations have been made to find range? attainable by bombers of gross weights from l40,000 to 300,000 pounds powered by turbine-propeller power plants. Only conventional configurations were considered and emphasis was placed upon using data for structural and aerodynamic characteristics which are typical of modern military airplanes. An effort was made to limit the various parameters invoked in the airplane configuration to practical values. Therefore, extremely high wing loadings, large amounts of sweepback, and very high aspect ratios have not been considered. Power-plant performance was based upon the performance of a typical turbine-propeller engine equipped with propellers designed to maintain high efficiencies at high-subsonic speeds. Results indicated, in general, that the greatest range, for a given gross weight, is obtained by airplanes of high wing loading, unless the higher cruising speeds associated with the high-wing-loading airplanes require-the use of thinner wing sections. Further results showed the effect of cruising at-high speeds, of operation at very high altitudes, and of carrying large bomb loads.
Stochastic dosimetry model for radon progeny in the rat lung.
Winkler-HeiI, R; Hofmann, W; Hussain, M
2014-07-01
The stochastic dosimetry model presented here considers the distinctly asymmetric, stochastic branching pattern reported in morphometric measurements. This monopodial structure suggests that an airway diameter is a more appropriate morphometric parameter to classify bronchial dose distributions for inhaled radon progeny than the commonly assigned airway generation numbers. Bronchial doses were calculated for the typical exposure conditions reported for the Pacific Northwest National Laboratory rat inhalation studies, yielding an average bronchial dose of 7.75 mGy WLM(-1). If plotted as functions of airway generations, the resulting dose distributions are highest in the central bronchial airways, while significantly decreasing towards peripheral generations. However, if plotted as functions of airway diameters, doses are much more uniformly distributed among bronchial airways. The comparison between rat and human lungs indicates that dose conversion coefficients for the rat lung are higher than the corresponding values for the human lung by a factor of 1.34 for the experimental PNNL exposure conditions, and of 1.25 for typical human indoor conditions. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α -uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α =2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c =e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c =1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α ≥3 , minimum vertex covers on α -uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c =e /(α -1 ) where the replica symmetry is broken.
da Silva Marques, Rogério; Prado, Adilson Ribeiro; da Costa Antunes, Paulo Fernando; de Brito André, Paulo Sérgio; Ribeiro, Moisés R. N.; Frizera-Neto, Anselmo; Pontes, Maria José
2015-01-01
This article presents a corrosion resistant, maneuverable, and intrinsically safe fiber Bragg grating (FBG)-based temperature optical sensor. Temperature monitoring is a critical activity for the oil and gas industry. It typically involves acquiring the desired parameters in a hazardous and corrosive environment. The use of polytetrafluoroethylene (PTFE) was proposed as a means of simultaneously isolating the optical fiber from the corrosive environment and avoiding undesirable mechanical tensions on the FBGs. The presented sensor head is based on multiple FBGs inscribed in a lengthy single mode fiber. The sensor presents an average thermal sensitivity of 8.82 ± 0.09 pm/°C, resulting in a typical temperature resolution of ~0.1 °C and an average time constant value of 6.25 ± 0.08 s. Corrosion and degradation resistance were verified by infrared spectroscopy and scanning electron microscopy during 90 days exposure to high salinity crude oil samples. The developed sensor was tested in a field pilot test, mimicking the operation of an inland crude tank, demonstrating its abilities to dynamically monitor temperature profile. PMID:26690166
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in the common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α=2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c=e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c=1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α≥3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c=e/(α-1) where the replica symmetry is broken.
Marques, Rogério da Silva; Prado, Adilson Ribeiro; Antunes, Paulo Fernando da Costa; André, Paulo Sérgio de Brito; Ribeiro, Moisés R N; Frizera-Neto, Anselmo; Pontes, Maria José
2015-12-05
This article presents a corrosion resistant, maneuverable, and intrinsically safe fiber Bragg grating (FBG)-based temperature optical sensor. Temperature monitoring is a critical activity for the oil and gas industry. It typically involves acquiring the desired parameters in a hazardous and corrosive environment. The use of polytetrafluoroethylene (PTFE) was proposed as a means of simultaneously isolating the optical fiber from the corrosive environment and avoiding undesirable mechanical tensions on the FBGs. The presented sensor head is based on multiple FBGs inscribed in a lengthy single mode fiber. The sensor presents an average thermal sensitivity of 8.82 ± 0.09 pm/°C, resulting in a typical temperature resolution of ~0.1 °C and an average time constant value of 6.25 ± 0.08 s. Corrosion and degradation resistance were verified by infrared spectroscopy and scanning electron microscopy during 90 days exposure to high salinity crude oil samples. The developed sensor was tested in a field pilot test, mimicking the operation of an inland crude tank, demonstrating its abilities to dynamically monitor temperature profile.
Amichai, Taly; Eylon, Sharon; Berger, Itai; Katz-Leurer, Michal
2018-02-06
To describe the immediate effect of breathing rate on heart rate (HR) and heart rate variability (HRV) in children with cerebral palsy (CP) and a control group of typically developed (TD) age and gender-matched children. Twenty children with CP at gross motor function classification system levels I-III and 20 TD children aged 6-11 participated in the study. HR was monitored at rest and during paced breathing with biofeedback. Respiratory measures were assessed by KoKo spirometry. Children with CP have lower spirometry and HRV values at rest compared to TD children. The mean reduction of breathing rate during paced breathing among children with CP was significantly smaller. Nonetheless, while practicing paced breathing, both groups reduced their breathing rate and increased their HRV. The results of the current work present the immediate effect of paced breathing on HRV parameters in CP and TD children. Further studies are needed to investigate the effect of long-term treatment focusing on paced breathing for children with CP.
Stratified Magnetically Driven Accretion-Disk Winds and Their Relations To Jets
NASA Technical Reports Server (NTRS)
Fukumura, Keigo; Tombesi, Francesco; Kazanas, Demosthenes; Shrader, Chris; Behar, Ehud; Contopoulos, Ioannis
2013-01-01
We explore the poloidal structure of two-dimensional magnetohydrodynamic (MHD) winds in relation to their potential association with the X-ray warm absorbers (WAs) and the highly ionized ultra-fast outflows (UFOs) in active galactic nuclei (AGNs), in a single unifying approach. We present the density n(r, theta), ionization parameter xi(r, theta), and velocity structure v(r, theta) of such ionized winds for typical values of their fluid-to-magnetic flux ratio, F, and specific angular momentum, H, for which wind solutions become super-Alfvenic. We explore the geometrical shape of winds for different values of these parameters and delineate the values that produce the widest and narrowest opening angles of these winds, quantities necessary in the determination of the statistics of AGN obscuration. We find that winds with smaller H show a poloidal geometry of narrower opening angles with their Alfv´en surface at lower inclination angles and therefore they produce the highest line of sight (LoS) velocities for observers at higher latitudes with the respect to the disk plane. We further note a physical and spatial correlation between the X-ray WAs and UFOs that form along the same LoS to the observer but at different radii, r, and distinct values of n, xi, and v consistent with the latest spectroscopic data of radio-quiet Seyfert galaxies. We also show that, at least in the case of 3C 111, the winds' pressure is sufficient to contain the relativistic plasma responsible for its radio emission. Stratified MHD disk winds could therefore serve as a unique means to understand and unify the diverse AGN outflows.
Stratified Magnetically Driven Accretion-disk Winds and Their Relations to Jets
NASA Astrophysics Data System (ADS)
Fukumura, Keigo; Tombesi, Francesco; Kazanas, Demosthenes; Shrader, Chris; Behar, Ehud; Contopoulos, Ioannis
2014-01-01
We explore the poloidal structure of two-dimensional magnetohydrodynamic (MHD) winds in relation to their potential association with the X-ray warm absorbers (WAs) and the highly ionized ultra-fast outflows (UFOs) in active galactic nuclei (AGNs), in a single unifying approach. We present the density n(r, θ), ionization parameter ξ(r, θ), and velocity structure v(r, θ) of such ionized winds for typical values of their fluid-to-magnetic flux ratio, F, and specific angular momentum, H, for which wind solutions become super-Alfvénic. We explore the geometrical shape of winds for different values of these parameters and delineate the values that produce the widest and narrowest opening angles of these winds, quantities necessary in the determination of the statistics of AGN obscuration. We find that winds with smaller H show a poloidal geometry of narrower opening angles with their Alfvén surface at lower inclination angles and therefore they produce the highest line of sight (LoS) velocities for observers at higher latitudes with the respect to the disk plane. We further note a physical and spatial correlation between the X-ray WAs and UFOs that form along the same LoS to the observer but at different radii, r, and distinct values of n, ξ, and v consistent with the latest spectroscopic data of radio-quiet Seyfert galaxies. We also show that, at least in the case of 3C 111, the winds' pressure is sufficient to contain the relativistic plasma responsible for its radio emission. Stratified MHD disk winds could therefore serve as a unique means to understand and unify the diverse AGN outflows.
Reason, emotion and decision-making: risk and reward computation with feeling.
Quartz, Steven R
2009-05-01
Many models of judgment and decision-making posit distinct cognitive and emotional contributions to decision-making under uncertainty. Cognitive processes typically involve exact computations according to a cost-benefit calculus, whereas emotional processes typically involve approximate, heuristic processes that deliver rapid evaluations without mental effort. However, it remains largely unknown what specific parameters of uncertain decision the brain encodes, the extent to which these parameters correspond to various decision-making frameworks, and their correspondence to emotional and rational processes. Here, I review research suggesting that emotional processes encode in a precise quantitative manner the basic parameters of financial decision theory, indicating a reorientation of emotional and cognitive contributions to risky choice.
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal estimates of parameter values are about 140 times greater than and about 0.01 times less than reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters, and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.
Concurrently adjusting interrelated control parameters to achieve optimal engine performance
Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna
2015-12-01
Methods and systems for real-time engine control optimization are provided. A value of an engine performance variable is determined, a value of a first operating condition and a value of a second operating condition of a vehicle engine are detected, and initial values for a first engine control parameter and a second engine control parameter are determined based on the detected first operating condition and the detected second operating condition. The initial values for the first engine control parameter and the second engine control parameter are adjusted based on the determined value of the engine performance variable to cause the engine performance variable to approach a target engine performance variable. In order to cause the engine performance variable to approach the target engine performance variable, adjusting the initial value for the first engine control parameter necessitates a corresponding adjustment of the initial value for the second engine control parameter.
NASA Astrophysics Data System (ADS)
Afzal, Mohammad Atif Faiz; Cheng, Chong; Hachmann, Johannes
2018-06-01
Organic materials with a high index of refraction (RI) are attracting considerable interest due to their potential application in optic and optoelectronic devices. However, most of these applications require an RI value of 1.7 or larger, while typical carbon-based polymers only exhibit values in the range of 1.3-1.5. This paper introduces an efficient computational protocol for the accurate prediction of RI values in polymers to facilitate in silico studies that can guide the discovery and design of next-generation high-RI materials. Our protocol is based on the Lorentz-Lorenz equation and is parametrized by the polarizability and number density values of a given candidate compound. In the proposed scheme, we compute the former using first-principles electronic structure theory and the latter using an approximation based on van der Waals volumes. The critical parameter in the number density approximation is the packing fraction of the bulk polymer, for which we have devised a machine learning model. We demonstrate the performance of the proposed RI protocol by testing its predictions against the experimentally known RI values of 112 optical polymers. Our approach to combine first-principles and data modeling emerges as both a successful and a highly economical path to determining the RI values for a wide range of organic polymers.
Fundamental properties of nearby single early B-type stars
NASA Astrophysics Data System (ADS)
Nieva, María-Fernanda; Przybilla, Norbert
2014-06-01
Aims: Fundamental parameters of a sample of 26 apparently slowly-rotating single early B-type stars in OB associations and in the field within a distance of ≲400 pc from the Sun are presented and compared to high-precision data from detached eclipsing binaries (DEBs). Together with surface abundances for light elements the data are used to discuss the evolutionary status of the stars in context of the most recent Geneva grid of models for core hydrogen-burning stars in the mass-range ~6 to 18 M⊙ at metallicity Z = 0.014. Methods: The fundamental parameters are derived on the basis of accurate and precise atmospheric parameters determined earlier by us from non-LTE analyses of high-quality spectra of the sample stars, utilising the new Geneva stellar evolution models. Results: Evolutionary masses plus radii and luminosities are determined to better than typically 5%, 10%, and 20% uncertainty, respectively, facilitating the mass-radius and mass-luminosity relationships to be recovered for single core hydrogen-burning objects with a similar precision as derived from DEBs. Good agreement between evolutionary and spectroscopic masses is found. Absolute visual and bolometric magnitudes are derived to typically ~0.15-0.20 mag uncertainty. Metallicities are constrained to better than 15-20% uncertainty and tight constraints on evolutionary ages of the stars are provided. Overall, the spectroscopic distances and ages of individual sample stars agree with independently derived values for the host OB associations. Signatures of mixing with CN-cycled material are found in 1/3 of the sample stars. Typically, these are consistent with the amount predicted by the new Geneva models with rotation. The presence of magnetic fields appears to augment the mixing efficiency. In addition, a few objects are possibly the product of binary evolution. In particular, the unusual characteristics of τ Sco point to a blue straggler nature, due to a binary merger. Conclusions: The accuracy and precision achieved in the determination of fundamental stellar parameters from the quantitative spectroscopy of single early B-type stars comes close (within a factor 2-4) to data derived from DEBs. While our fundamental parameters are in good agreement with those derived from DEBs as a function of spectral type, significant systematic differences with data from the astrophysical reference literature are found. Masses are ~10-20% and radii ~25% lower then the recommended values for luminosity class V, resulting in the stars being systematically fainter than assumed usually, by ~0.5 mag in absolute visual and bolometric magnitude. Our sample of giants is too small to derive firm conclusions, but similar trends as for the dwarfs are indicated. Based on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max-Planck Institut für Astronomie and the Instituto de Astrofísica de Andalucía (CSIC), proposals H2001-2.2-011 and H2005-2.2-016. Based on observations collected at the European Southern Observatory, Chile, ESO 074.B-0455(A). Based on spectral data retrieved from the ELODIE archive at Observatoire de Haute-Provence (OHP). Based on observations made with the Nordic Optical Telescope, operated on the island of La Palma jointly by Denmark, Finland, Iceland, Norway, and Sweden, in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias.
Jing, Nan; Li, Chuang; Chong, Yaqin
2017-01-20
An estimation method for indirectly observable parameters for a typical low dynamic vehicle (LDV) is presented. The estimation method utilizes apparent magnitude, azimuth angle, and elevation angle to estimate the position and velocity of a typical LDV, such as a high altitude balloon (HAB). In order to validate the accuracy of the estimated parameters gained from an unscented Kalman filter, two sets of experiments are carried out to obtain the nonresolved photometric and astrometric data. In the experiments, a HAB launch is planned; models of the HAB dynamics and kinematics and observation models are built to use as time update and measurement update functions, respectively. When the HAB is launched, a ground-based optoelectronic detector is used to capture the object images, which are processed using aperture photometry technology to obtain the time-varying apparent magnitude of the HAB. Two sets of actual and estimated parameters are given to clearly indicate the parameter differences. Two sets of errors between the actual and estimated parameters are also given to show how the estimated position and velocity differ with respect to the observation time. The similar distribution curve results from the two scenarios, which agree within 3σ, verify that nonresolved photometric and astrometric data can be used to estimate the indirectly observable state parameters (position and velocity) for a typical LDV. This technique can be applied to small and dim space objects in the future.
Inference of R 0 and Transmission Heterogeneity from the Size Distribution of Stuttering Chains
Blumberg, Seth; Lloyd-Smith, James O.
2013-01-01
For many infectious disease processes such as emerging zoonoses and vaccine-preventable diseases, and infections occur as self-limited stuttering transmission chains. A mechanistic understanding of transmission is essential for characterizing the risk of emerging diseases and monitoring spatio-temporal dynamics. Thus methods for inferring and the degree of heterogeneity in transmission from stuttering chain data have important applications in disease surveillance and management. Previous researchers have used chain size distributions to infer , but estimation of the degree of individual-level variation in infectiousness (as quantified by the dispersion parameter, ) has typically required contact tracing data. Utilizing branching process theory along with a negative binomial offspring distribution, we demonstrate how maximum likelihood estimation can be applied to chain size data to infer both and the dispersion parameter that characterizes heterogeneity. While the maximum likelihood value for is a simple function of the average chain size, the associated confidence intervals are dependent on the inferred degree of transmission heterogeneity. As demonstrated for monkeypox data from the Democratic Republic of Congo, this impacts when a statistically significant change in is detectable. In addition, by allowing for superspreading events, inference of shifts the threshold above which a transmission chain should be considered anomalously large for a given value of (thus reducing the probability of false alarms about pathogen adaptation). Our analysis of monkeypox also clarifies the various ways that imperfect observation can impact inference of transmission parameters, and highlights the need to quantitatively evaluate whether observation is likely to significantly bias results. PMID:23658504
Parameter Estimation and Model Selection in Computational Biology
Lillacci, Gabriele; Khammash, Mustafa
2010-01-01
A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as extended Kalman filter, to arrive at estimates of the model parameters. The proposed method follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Secondly, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it should not be accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262
Ecophysiological parameters for Pacific Northwest trees.
Amy E. Hessl; Cristina Milesi; Michael A. White; David L. Peterson; Robert E. Keane
2004-01-01
We developed a species- and location-specific database of published ecophysiological variables typically used as input parameters for biogeochemical models of coniferous and deciduous forested ecosystems in the Western United States. Parameters are based on the requirements of Biome-BGC, a widely used biogeochemical model that was originally parameterized for the...
A global sensitivity analysis of crop virtual water content
NASA Astrophysics Data System (ADS)
Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.
2015-12-01
The concepts of virtual water and water footprint are becoming widely used in the scientific literature and they are proving their usefulness in a number of multidisciplinary contexts. With such growing interest a measure of data reliability (and uncertainty) is becoming pressing but, as of today, assessments of data sensitivity to model parameters, performed at the global scale, are not known. This contribution aims at filling this gap. Starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (i.e. wheat, rice, maize, and soybean) at a global high resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the crop actual yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at a 5x5 arc minutes and it improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are exerted to one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of crop planting date. VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for other crops. The sensitivity to the reference evapotranspiration is highly variable with the considered crop and ranges from positive values (for soybean), to negative values (for rice and maize) and near-zero values for wheat. This variability reflects the different yield response factors of crops, which expresses their tolerance to water stress.
Morphology and solubility of multiple crystal forms of Taka-amylase A
NASA Astrophysics Data System (ADS)
Ninomiya, Kumiko; Yamamoto, Tenyu; Oheda, Tadashi; Sato, Kiyotaka; Sazaki, Gen; Matsuura, Yoshiki
2001-01-01
An α-amylase originating from a mold Aspergillus oryzae, Taka-amylase A (Mr of 52 kDa, pI of 3.8), has been purified to an electrophoretically single band grade. Crystallization behaviors were investigated using ammonium sulfate and polyethleneglycol 8000 as precipitants. The variations in the morphology of the crystals obtained with changing crystallization parameters are described. Five apparently different crystal forms were obtained, and their morphology and crystallographic data have been determined. Solubility values of four typical forms were measured using a Michelson-type two-beam interferometer. The results of these experiments showed that this protein can be a potentially interesting and useful model for crystal growth study with a gram-amount availability of pure protein sample.
Evaluate error correction ability of magnetorheological finishing by smoothing spectral function
NASA Astrophysics Data System (ADS)
Wang, Jia; Fan, Bin; Wan, Yongjian; Shi, Chunyan; Zhuo, Bin
2014-08-01
Power Spectral Density (PSD) has been entrenched in optics design and manufacturing as a characterization of mid-high spatial frequency (MHSF) errors. Smoothing Spectral Function (SSF) is a newly proposed parameter that based on PSD to evaluate error correction ability of computer controlled optical surfacing (CCOS) technologies. As a typical deterministic and sub-aperture finishing technology based on CCOS, magnetorheological finishing (MRF) leads to MHSF errors inevitably. SSF is employed to research different spatial frequency error correction ability of MRF process. The surface figures and PSD curves of work-piece machined by MRF are presented. By calculating SSF curve, the correction ability of MRF for different spatial frequency errors will be indicated as a normalized numerical value.
NASA Astrophysics Data System (ADS)
Drury, Luke O.'C.; Strong, Andrew W.
2017-01-01
We make quantitative estimates of the power supplied to the Galactic cosmic ray population by second-order Fermi acceleration in the interstellar medium, or as it is usually termed in cosmic ray propagation studies, diffusive reacceleration. Using recent results on the local interstellar spectrum, following Voyager 1's crossing of the heliopause, we show that for parameter values, in particular the Alfvén speed, typically used in propagation codes such as GALPROP to fit the B/C ratio, the power contributed by diffusive reacceleration is significant and can be of order 50% of the total Galactic cosmic ray power. The implications for the damping of interstellar turbulence are briefly considered.
Inverse gas chromatographic determination of solubility parameters of excipients.
Adamska, Katarzyna; Voelkel, Adam
2005-11-04
The principle aim of this work was an application of inverse gas chromatography (IGC) for the estimation of solubility parameter for pharmaceutical excipients. The retention data of number of test solutes were used to calculate Flory-Huggins interaction parameter (chi1,2infinity) and than solubility parameter (delta2), corrected solubility parameter (deltaT) and its components (deltad, deltap, deltah) by using different procedures. The influence of different values of test solutes solubility parameter (delta1) over calculated values was estimated. The solubility parameter values obtained for all excipients from the slope, from Guillet and co-workers' procedure are higher than that obtained from components according Voelkel and Janas procedure. It was found that solubility parameter's value of the test solutes influences, but not significantly, values of solubility parameter of excipients.
Simulation of MAD Cow Disease Propagation
NASA Astrophysics Data System (ADS)
Magdoń-Maksymowicz, M. S.; Maksymowicz, A. Z.; Gołdasz, J.
Computer simulation of dynamic of BSE disease is presented. Both vertical (to baby) and horizontal (to neighbor) mechanisms of the disease spread are considered. The game takes place on a two-dimensional square lattice Nx×Ny = 1000×1000 with initial population randomly distributed on the net. The disease may be introduced either with the initial population or by a spontaneous development of BSE in an item, at a small frequency. Main results show a critical probability of the BSE transmission above which the disease is present in the population. This value is vulnerable to possible spatial clustering of the population and it also depends on the mechanism responsible for the disease onset, evolution and propagation. A threshold birth rate below which the population is extinct is seen. Above this threshold the population is disease free at equilibrium until another birth rate value is reached when the disease is present in population. For typical model parameters used for the simulation, which may correspond to the mad cow disease, we are close to the BSE-free case.
Plasma chemistry in booted eagle (Hieraaetus pennatus) during breeding season.
Casado, Eva; Balbontin, Javier; Ferrer, Miguel
2002-02-01
Most studies that have examined raptor plasma chemistry have been conducted on birds living in captivity. In this study, we describe typical plasma chemistry values indicators of body condition in free-living Booted Eagles, Hieraaetus pennatus, from Doñana National Park (Spain). Values are compared with those of other raptors. Mean concentrations of creatinine, uric acid and urea were lower in adults than in nestlings, while glucose, DAT and AAT were lower in nestlings than in adults. Interactions of age/sex affected plasma mean levels of creatine kinase, glucose, AAT, uric acid and urea. Adult females showed significantly lower levels of creatine kinase, uric acid and urea than adult males and nestlings. Adult males had significantly higher levels of AAT than the other groups. The lowest levels of glucose and the highest levels of uric acid were found in nestling females. We think the differences in blood parameters can be explained by differences in size of species, of individuals (because of both body condition and sexual dimorphism) and diet.
The variance of the locally measured Hubble parameter explained with different estimators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odderskov, Io; Hannestad, Steen; Brandbyge, Jacob, E-mail: isho07@phys.au.dk, E-mail: sth@phys.au.dk, E-mail: jacobb@phys.au.dk
We study the expected variance of measurements of the Hubble constant, H {sub 0}, as calculated in either linear perturbation theory or using non-linear velocity power spectra derived from N -body simulations. We compare the variance with that obtained by carrying out mock observations in the N-body simulations, and show that the estimator typically used for the local Hubble constant in studies based on perturbation theory is different from the one used in studies based on N-body simulations. The latter gives larger weight to distant sources, which explains why studies based on N-body simulations tend to obtain a smaller variancemore » than that found from studies based on the power spectrum. Although both approaches result in a variance too small to explain the discrepancy between the value of H {sub 0} from CMB measurements and the value measured in the local universe, these considerations are important in light of the percent determination of the Hubble constant in the local universe.« less
Nonlinear Dynamics of Silicon Nanowire Resonator Considering Nonlocal Effect.
Jin, Leisheng; Li, Lijie
2017-12-01
In this work, nonlinear dynamics of silicon nanowire resonator considering nonlocal effect has been investigated. For the first time, dynamical parameters (e.g., resonant frequency, Duffing coefficient, and the damping ratio) that directly influence the nonlinear dynamics of the nanostructure have been derived. Subsequently, by calculating their response with the varied nonlocal coefficient, it is unveiled that the nonlocal effect makes more obvious impacts at the starting range (from zero to a small value), while the impact of nonlocal effect becomes weaker when the nonlocal term reaches to a certain threshold value. Furthermore, to characterize the role played by nonlocal effect in exerting influence on nonlinear behaviors such as bifurcation and chaos (typical phenomena in nonlinear dynamics of nanoscale devices), we have calculated the Lyapunov exponents and bifurcation diagram with and without nonlocal effect, and results shows the nonlocal effect causes the most significant effect as the device is at resonance. This work advances the development of nanowire resonators that are working beyond linear regime.
Food-service establishment wastewater characterization.
Lesikar, B J; Garza, O A; Persyn, R A; Kenimer, A L; Anderson, M T
2006-08-01
Food-service establishments that use on-site wastewater treatment systems are experiencing pretreatment system and/or drain field hydraulic and/or organic overloading. This study included characterization of four wastewater parameters (five-day biochemical oxygen demand [BOD5]; total suspended solids [TSS]; food, oil, and grease [FOG]; and flow) from 28 restaurants located in Texas during June, July, and August 2002. The field sampling methodology included taking a grab sample from each restaurant for 6 consecutive days at approximately the same time each day, followed by a 2-week break, and then sampling again for another 6 consecutive days, for a total of 12 samples per restaurant and 336 total observations. The analysis indicates higher organic (BOD5) and hydraulic values for restaurants than those typically found in the literature. The design values for this study for BOD5, TSS, FOG, and flow were 1523, 664, and 197 mg/L, and 96 L/day-seat respectively, which captured over 80% of the data collected.
Twin jet shielding. [for aircraft noise reduction
NASA Technical Reports Server (NTRS)
Parthasarathy, S. P.; Cuffel, R. F.; Massier, P. F.
1979-01-01
For an over-the-wing/under-the-wing engine configuration on an airplane, the noise produced by the upper jet flow is partially reflected by the lower jet. An analysis has been performed which can be used to predict the distribution of perceived noise levels along the ground plane at take-off for an airplane which is designed to take advantage of the over/under shielding concept. Typical contours of PNL, the shielding benefit in the shadow zone, and the EPNL values at 3.5 nautical miles from brake release as well as EPNL values at sideline at 0.35 nautical miles have been calculated. This has been done for a range of flow parameters characteristic of engines producing inverted velocity profile jets suitable for use in a supersonic cruise vehicle. Reductions up to 6.0 EPNdB in community noise levels can be realized when the over engines are operated at higher thrust and the lower engines simultaneously operated with reduced thrust keeping the total thrust constant.
Determination of meteor parameters using laboratory simulation techniques
NASA Technical Reports Server (NTRS)
Friichtenicht, J. F.; Becker, D. G.
1973-01-01
Atmospheric entry of meteoritic bodies is conveniently and accurately simulated in the laboratory by techniques which employ the charging and electrostatic acceleration of macroscopic solid particles. Velocities from below 10 to above 50 km/s are achieved for particle materials which are elemental meteoroid constituents or mineral compounds with characteristics similar to those of meteoritic stone. The velocity, mass, and kinetic energy of each particle are measured nondestructively, after which the particle enters a target gas region. Because of the small particle size, free molecule flow is obtained. At typical operating pressures (0.1 to 0.5 torr), complete particle ablation occurs over distances of 25 to 50 cm; the spatial extent of the atmospheric interaction phenomena is correspondingly small. Procedures have been developed for measuring the spectrum of light from luminous trails and the values of fundamental quantities defined in meteor theory. It is shown that laboratory values for iron are in excellent agreement with those for 9 to 11 km/s artificial meteors produced by rocket injection of iron bodies into the atmosphere.
Soil-to-plant halogens transfer studies 2. Root uptake of radiochlorine by plants.
Kashparov, V; Colle, C; Zvarich, S; Yoschenko, V; Levchuk, S; Lundin, S
2005-01-01
Long-term field experiments have been carried out in the Chernobyl exclusion zone in order to determine the parameters governing radiochlorine (36Cl) transfer to plants from four types of soil, namely, podzoluvisol, greyzem, and typical and meadow chernozem. Radiochlorine concentration ratios (CR) in radish roots (15+/-10), lettuce leaves (30+/-15), bean pods (15+/-11) and wheat seed (23+/-11) and straw (210+/-110) for fresh weight of plants were obtained. These values correlate well with stable chlorine values for the same plants. One year after injection, 36Cl reached a quasi-equilibrium with stable chlorine in the agricultural soils and its behavior in the soil-plant system mimicked the behavior of stable chlorine (this behavior was determined by soil moisture transport in the investigated soils). In the absence of intensive vertical migration, more than half of 36Cl activity in arable layer of soil passes into the radish, lettuce and the aboveground parts of wheat during a single vegetation period.
Measurement of biochemical oxygen demand of the leachates.
Fulazzaky, Mohamad Ali
2013-06-01
Biochemical oxygen demand (BOD) of the leachates originally from the different types of landfill sites was studied based on the data measured using the two manometric methods. The measurements of BOD using the dilution method were carried out to assess the typical physicochemical and biological characteristics of the leachates together with some other parameters. The linear regression analysis was used to predict rate constants for biochemical reactions and ultimate BOD values of the different leachates. The rate of a biochemical reaction implicated in microbial biodegradation of pollutants depends on the leachate characteristics, mass of contaminant in the leachate, and nature of the leachate. Character of leachate samples for BOD analysis of using the different methods may differ significantly during the experimental period, resulting in different BOD values. This work intends to verify effect of the different dilutions for the manometric method tests on the BOD concentrations of the leachate samples to contribute to the assessment of reaction rate and microbial consumption of oxygen.
Dynamics of a neuron model in different two-dimensional parameter-spaces
NASA Astrophysics Data System (ADS)
Rech, Paulo C.
2011-03-01
We report some two-dimensional parameter-space diagrams numerically obtained for the multi-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and we show that regardless of the combination of parameters, a typical scenario is preserved: for all choice of two parameters, the parameter-space presents a comb-shaped chaotic region immersed in a large periodic region. We also show that exist regions close these chaotic region, separated by the comb teeth, organized themselves in period-adding bifurcation cascades.
Melo, Roberta Michelon; Mota, Helena Bolli; Berti, Larissa Cristina
2017-06-08
This study used acoustic and articulatory analyses to characterize the contrast between alveolar and velar stops with typical speech data, comparing the parameters (acoustic and articulatory) of adults and children with typical speech development. The sample consisted of 20 adults and 15 children with typical speech development. The analyzed corpus was organized through five repetitions of each target-word (/'kap ə/, /'tapə/, /'galo/ e /'daɾə/). These words were inserted into a carrier phrase and the participant was asked to name them spontaneously. Simultaneous audio and video data were recorded (tongue ultrasound images). The data was submitted to acoustic analyses (voice onset time; spectral peak and burst spectral moments; vowel/consonant transition and relative duration measures) and articulatory analyses (proportion of significant axes of the anterior and posterior tongue regions and description of tongue curves). Acoustic and articulatory parameters were effective to indicate the contrast between alveolar and velar stops, mainly in the adult group. Both speech analyses showed statistically significant differences between the two groups. The acoustic and articulatory parameters provided signals to characterize the phonic contrast of speech. One of the main findings in the comparison between adult and child speech was evidence of articulatory refinement/maturation even after the period of segment acquisition.
Rivard, Mark J; Davis, Stephen D; DeWerd, Larry A; Rusch, Thomas W; Axelrod, Steve
2006-11-01
A new x-ray source, the model S700 Axxent X-Ray Source (Source), has been developed by Xoft Inc. for electronic brachytherapy. Unlike brachytherapy sources containing radionuclides, this Source may be turned on and off at will and may be operated at variable currents and voltages to change the dose rate and penetration properties. The in-water dosimetry parameters for this electronic brachytherapy source have been determined from measurements and calculations at 40, 45, and 50 kV settings. Monte Carlo simulations of radiation transport utilized the MCNP5 code and the EPDL97-based mcplib04 cross-section library. Inter-tube consistency was assessed for 20 different Sources, measured with a PTW 34013 ionization chamber. As the Source is intended to be used for a maximum of ten treatment fractions, tube stability was also assessed. Photon spectra were measured using a high-purity germanium (HPGe) detector, and calculated using MCNP. Parameters used in the two-dimensional (2D) brachytherapy dosimetry formalism were determined. While the Source was characterized as a point due to the small anode size, < 1 mm, use of the one-dimensional (1D) brachytherapy dosimetry formalism is not recommended due to polar anisotropy. Consequently, 1D brachytherapy dosimetry parameters were not sought. Calculated point-source model radial dose functions at gP(5) were 0.20, 0.24, and 0.29 for the 40, 45, and 50 kV voltage settings, respectively. For 1
Improving the representation of Arctic photosynthesis in Earth System Models
NASA Astrophysics Data System (ADS)
Rogers, A.; Serbin, S.; Sloan, V. L.; Norby, R. J.; Wullschleger, S. D.
2014-12-01
The primary goal of Earth System Models (ESMs) is to improve understanding and projection of future global change. In order to do this models must accurately represent the terrestrial carbon cycle. Although Arctic carbon fluxes are small relative to global carbon fluxes, uncertainty is large. Photosynthetic CO2 uptake is well described by the Farquhar, von Caemmerer and Berry (FvCB) model of photosynthesis and most ESMs use a derivation of the FvCB model to calculate gross primary productivity. Two key parameters required by the FvCB model are an estimate of the maximum rate of carboxylation by the enzyme Rubisco (Vc,max) and the maximum rate of electron transport (Jmax). In ESMs the parameter Vc,max is typically fixed for a given plant functional type (PFT). Only four ESMs currently have an explicit Arctic PFT and the data used to derive Vc,max in these models relies on small data sets and unjustified assumptions. We examined the derivation of Vc,max and Jmax in current Arctic PFTs and estimated Vc,max and Jmax for a range of Arctic PFTs growing on the Barrow Environmental Observatory, Barrow, AK. We found that the values of Vc,max currently used to represent Arctic plants in ESMs are 70% lower than the values we measured, and contemporary temperature response functions for Vc,max also appear to underestimate Vc,max at low temperature. ESMs typically use a single multiplier (JVratio) to convert Vc,max to Jmax, however we found that the JVratio of Arctic plants is higher than current estimates suggesting that Arctic PFTs will be more responsive to rising carbon dioxide than currently projected. In addition we are exploring remotely sensed methods to scale up key biochemical (e.g. leaf N, leaf mass area) and physiological (e.g. Vc,max and Jmax) properties that drive model representation of photosynthesis in the Arctic. Our data suggest that the Arctic tundra has a much greater capacity for CO2 uptake, particularly at low temperature, and will be more CO2 responsive than is currently represented in ESMs. As we build robust relationships between physiology and spectral signatures we hope to provide spatially and temporally resolved trait maps of key model parameters that can be ingested by new model frameworks, or used to validate emergent model properties.
Dynamics of massive black holes as a possible candidate of Galactic dark matter
NASA Technical Reports Server (NTRS)
Xu, Guohong; Ostriker, Jeremiah P.
1994-01-01
If the dark halo of the Galaxy is comprised of massive black holes (MBHs), then those within approximately 1 kpc will spiral to the center, where they will interact with one another, forming binaries which contract, owing to further dynamical friction, and then possibly merge to become more massive objects by emission of gravitational radiation. If successive mergers would invariably lead, as has been proposed by various authors, to the formation of a very massive nucleus of 10(exp 8) solar mass, then the idea of MBHs as a dark matter candidate could be excluded on observational grounds, since the observed limit (or value) for a Galactic central black hole is approximately 10(exp 6.5) solar mass. But, if successive mergers are delayed or prevented by other processes, such as the gravitational slingshot or rocket effect of gravitational radiation, then a large mass accumulation will not occur. In order to resolve this issue, we perform detailed N-body simulations using a modfied Aarseth code to explore the dynamical behavior of the MBHs, and we find that for a 'best estimate' model of the Galaxy a runaway does not occur. The code treates the MBHs as subject to the primary gravitational forces of one another and to the smooth stellar distribution, as well as the secondary perturbations in their orbits due to another and to the smooth stellar distribution, as well as the secondary perturbations in their orbits due to dynamical friction and gravitational radiation. Instead of a runaway, three-body interactions between hard binaries and single MBHs eject massive objects before accumulation of more than a few units, so that typically the center will contain zero, one, or two MBHs. We study how the situation depends in detail on the mass per MBH, the rotation of the halo, the mass distribution within the Galaxy, and other parameters. A runaway will most sensitively depend on the ratio of initial (spheroid/halo) central mass densities and secondarily on the typical values for the mass per MBH, with the rough dividing line, using Galactic parameters, being M(sub BH) less than or = 10(exp 6.5) solar mass. Using parameters from Lacey & Ostriker (1985) and our most accurate model for Galaxy, no runaway occurs.
NASA Astrophysics Data System (ADS)
Kouroussis, G.; Verlinden, O.; Conti, C.
2012-04-01
A study is performed on the influence of some typical railway vehicle and track parameters on the level of ground vibrations induced in the neighbourhood. The results are obtained from a previously validated simulation framework considering in a first step the vehicle/track subsystem and, in a second step, the response of the soil to the forces resulting from the first analysis. The vehicle is reduced to a simple vertical 3-dof model, corresponding to the superposition of the wheelset, the bogie and the car body. The rail is modelled as a succession of beam elements elastically supported by the sleepers, lying themselves on a flexible foundation representing the ballast and the subgrade. The connection between the wheels and the rails is realised through a non-linear Hertzian contact. The soil motion is obtained from a finite/infinite element model. The investigated vehicle parameters are its type (urban, high speed, freight, etc.) and its speed. For the track, the rail flexural stiffness, the railpad stiffness, the spacing between sleepers and the rail and sleeper masses are considered. In all cases, the parameter value range is defined from a bibliographic browsing. At the end, the paper proposes a table summarising the influence of each studied parameter on three indicators: the vehicle acceleration, the rail velocity and the soil velocity. It namely turns out that the vehicle has a serious influence on the vibration level and should be considered in prediction models.
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
Metal Standards for Waveguide Characterization of Materials
NASA Technical Reports Server (NTRS)
Lambert, Kevin M.; Kory, Carol L.
2009-01-01
Rectangular-waveguide inserts that are made of non-ferromagnetic metals and are sized and shaped to function as notch filters have been conceived as reference standards for use in the rectangular- waveguide method of characterizing materials with respect to such constitutive electromagnetic properties as permittivity and permeability. Such standards are needed for determining the accuracy of measurements used in the method, as described below. In this method, a specimen of a material to be characterized is cut to a prescribed size and shape and inserted in a rectangular- waveguide test fixture, wherein the specimen is irradiated with a known source signal and detectors are used to measure the signals reflected by, and transmitted through, the specimen. Scattering parameters [also known as "S" parameters (S11, S12, S21, and S22)] are computed from ratios between the transmitted and reflected signals and the source signal. Then the permeability and permittivity of the specimen material are derived from the scattering parameters. Theoretically, the technique for calculating the permeability and permittivity from the scattering parameters is exact, but the accuracy of the results depends on the accuracy of the measurements from which the scattering parameters are obtained. To determine whether the measurements are accurate, it is necessary to perform comparable measurements on reference standards, which are essentially specimens that have known scattering parameters. To be most useful, reference standards should provide the full range of scattering-parameter values that can be obtained from material specimens. Specifically, measurements of the backscattering parameter (S11) from no reflection to total reflection and of the forward-transmission parameter (S21) from no transmission to total transmission are needed. A reference standard that functions as a notch (band-stop) filter can satisfy this need because as the signal frequency is varied across the frequency range for which the filter is designed, the scattering parameters vary over the ranges of values between the extremes of total reflection and total transmission. A notch-filter reference standard in the form of a rectangular-waveguide insert that has a size and shape similar to that of a material specimen is advantageous because the measurement configuration used for the reference standard can be the same as that for a material specimen. Typically a specimen is a block of material that fills a waveguide cross-section but occupies only a small fraction of the length of the waveguide. A reference standard of the present type (see figure) is a metal block that fills part of a waveguide cross section and contains a slot, the long dimension of which can be chosen to tailor the notch frequency to a desired value. The scattering parameters and notch frequency can be estimated with high accuracy by use of commercially available electromagnetic-field-simulating software. The block can be fabricated to the requisite precision by wire electrical-discharge machining. In use, the accuracy of measurements is determined by comparison of (1) the scattering parameters calculated from the measurements with (2) the scattering parameters calculated by the aforementioned software.
Uncertainties in Galactic Chemical Evolution Models
Cote, Benoit; Ritter, Christian; Oshea, Brian W.; ...
2016-06-15
Here we use a simple one-zone galactic chemical evolution model to quantify the uncertainties generated by the input parameters in numerical predictions for a galaxy with properties similar to those of the Milky Way. We compiled several studies from the literature to gather the current constraints for our simulations regarding the typical value and uncertainty of the following seven basic parameters: the lower and upper mass limits of the stellar initial mass function (IMF), the slope of the high-mass end of the stellar IMF, the slope of the delay-time distribution function of Type Ia supernovae (SNe Ia), the number ofmore » SNe Ia per M ⊙ formed, the total stellar mass formed, and the final mass of gas. We derived a probability distribution function to express the range of likely values for every parameter, which were then included in a Monte Carlo code to run several hundred simulations with randomly selected input parameters. This approach enables us to analyze the predicted chemical evolution of 16 elements in a statistical manner by identifying the most probable solutions along with their 68% and 95% confidence levels. Our results show that the overall uncertainties are shaped by several input parameters that individually contribute at different metallicities, and thus at different galactic ages. The level of uncertainty then depends on the metallicity and is different from one element to another. Among the seven input parameters considered in this work, the slope of the IMF and the number of SNe Ia are currently the two main sources of uncertainty. The thicknesses of the uncertainty bands bounded by the 68% and 95% confidence levels are generally within 0.3 and 0.6 dex, respectively. When looking at the evolution of individual elements as a function of galactic age instead of metallicity, those same thicknesses range from 0.1 to 0.6 dex for the 68% confidence levels and from 0.3 to 1.0 dex for the 95% confidence levels. The uncertainty in our chemical evolution model does not include uncertainties relating to stellar yields, star formation and merger histories, and modeling assumptions.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cote, Benoit; Ritter, Christian; Oshea, Brian W.
Here we use a simple one-zone galactic chemical evolution model to quantify the uncertainties generated by the input parameters in numerical predictions for a galaxy with properties similar to those of the Milky Way. We compiled several studies from the literature to gather the current constraints for our simulations regarding the typical value and uncertainty of the following seven basic parameters: the lower and upper mass limits of the stellar initial mass function (IMF), the slope of the high-mass end of the stellar IMF, the slope of the delay-time distribution function of Type Ia supernovae (SNe Ia), the number ofmore » SNe Ia per M ⊙ formed, the total stellar mass formed, and the final mass of gas. We derived a probability distribution function to express the range of likely values for every parameter, which were then included in a Monte Carlo code to run several hundred simulations with randomly selected input parameters. This approach enables us to analyze the predicted chemical evolution of 16 elements in a statistical manner by identifying the most probable solutions along with their 68% and 95% confidence levels. Our results show that the overall uncertainties are shaped by several input parameters that individually contribute at different metallicities, and thus at different galactic ages. The level of uncertainty then depends on the metallicity and is different from one element to another. Among the seven input parameters considered in this work, the slope of the IMF and the number of SNe Ia are currently the two main sources of uncertainty. The thicknesses of the uncertainty bands bounded by the 68% and 95% confidence levels are generally within 0.3 and 0.6 dex, respectively. When looking at the evolution of individual elements as a function of galactic age instead of metallicity, those same thicknesses range from 0.1 to 0.6 dex for the 68% confidence levels and from 0.3 to 1.0 dex for the 95% confidence levels. The uncertainty in our chemical evolution model does not include uncertainties relating to stellar yields, star formation and merger histories, and modeling assumptions.« less
Summary of groundwater-recharge estimates for Pennsylvania
Stuart O. Reese,; Risser, Dennis W.
2010-01-01
Groundwater recharge is water that infiltrates through the subsurface to the zone of saturation beneath the water table. Because recharge is a difficult parameter to quantify, it is typically estimated from measurements of other parameters like streamflow and precipitation. This report provides a general overview of processes affecting recharge in Pennsylvania and presents estimates of recharge rates from studies at various scales.The most common method for estimating recharge in Pennsylvania has been to estimate base flow from measurements of streamflow and assume that base flow (expressed in inches over the basin) approximates recharge. Statewide estimates of mean annual groundwater recharge were developed by relating base flow to basin characteristics of HUC10 watersheds (a fifth-level classification that uses 10 digits to define unique hydrologic units) using a regression equation. The regression analysis indicated that mean annual precipitation, average daily maximum temperature, percent of sand in soil, percent of carbonate rock in the watershed, and average stream-channel slope were significant factors in the explaining the variability of groundwater recharge across the Commonwealth.Several maps are included in this report to illustrate the principal factors affecting recharge and provide additional information about the spatial distribution of recharge in Pennsylvania. The maps portray the patterns of precipitation, temperature, prevailing winds across Pennsylvania’s varied physiography; illustrate the error associated with recharge estimates; and show the spatial variability of recharge as a percent of precipitation. National, statewide, regional, and local values of recharge, based on numerous studies, are compiled to allow comparison of estimates from various sources. Together these plates provide a synopsis of groundwater-recharge estimations and factors in Pennsylvania.Areas that receive the most recharge are typically those that get the most rainfall, have favorable surface conditions for infiltration, and are less susceptible to the influences of high temperatures, and thus, evapotranspiration. Areas that have less recharge in Pennsylvania are typically those with less precipitation, less permeable soils, and higher temperatures that are conducive to greater rates of evapotranspiration.
A GUI-based Tool for Bridging the Gap between Models and Process-Oriented Studies
NASA Astrophysics Data System (ADS)
Kornfeld, A.; Van der Tol, C.; Berry, J. A.
2014-12-01
Models used for simulation of photosynthesis and transpiration by canopies of terrestrial plants typically have subroutines such as STOMATA.F90, PHOSIB.F90 or BIOCHEM.m that solve for photosynthesis and associated processes. Key parameters such as the Vmax for Rubisco and temperature response parameters are required by these subroutines. These are often taken from the literature or determined by separate analysis of gas exchange experiments. It is useful to note however that subroutines can be extracted and run as standalone models to simulate leaf responses collected in gas exchange experiments. Furthermore, there are excellent non-linear fitting tools that can be used to optimize the parameter values in these models to fit the observations. Ideally the Vmax fit in this way should be the same as that determined by a separate analysis, but it may not because of interactions with other kinetic constants and the temperature dependence of these in the full subroutine. We submit that it is more useful to fit the complete model to the calibration experiments rather as disaggregated constants. We designed a graphical user interface (GUI) based tool that uses gas exchange photosynthesis data to directly estimate model parameters in the SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) model and, at the same time, allow researchers to change parameters interactively to visualize how variation in model parameters affect predicted outcomes such as photosynthetic rates, electron transport, and chlorophyll fluorescence. We have also ported some of this functionality to an Excel spreadsheet, which could be used as a teaching tool to help integrate process-oriented and model-oriented studies.
User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Coleman, Kayla; Gilkey, Lindsay N.
Sandia’s Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility. In its simplest mode, Dakota can automate typical parameter variation studies through a generic interface to a physics-based computational model. This can lend efficiency and rigor to manual parameter perturbation studies already being conducted by analysts. However, Dakota also delivers advanced parametric analysis techniques enabling design exploration, optimization, model calibration, riskmore » analysis, and quantification of margins and uncertainty with such models. It directly supports verification and validation activities. Dakota algorithms enrich complex science and engineering models, enabling an analyst to answer crucial questions of - Sensitivity: Which are the most important input factors or parameters entering the simulation, and how do they influence key outputs?; Uncertainty: What is the uncertainty or variability in simulation output, given uncertainties in input parameters? How safe, reliable, robust, or variable is my system? (Quantification of margins and uncertainty, QMU); Optimization: What parameter values yield the best performing design or operating condition, given constraints? Calibration: What models and/or parameters best match experimental data? In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers.« less
Robust characterization of small grating boxes using rotating stage Mueller matrix polarimeter
NASA Astrophysics Data System (ADS)
Foldyna, M.; De Martino, A.; Licitra, C.; Foucher, J.
2010-03-01
In this paper we demonstrate the robustness of the Mueller matrix polarimetry used in multiple-azimuth configuration. We first demonstrate the efficiency of the method for the characterization of small pitch gratings filling 250 μm wide square boxes. We used a Mueller matrix polarimeter directly installed in the clean room has motorized rotating stage allowing the access to arbitrary conical grating configurations. The projected beam spot size could be reduced to 60x25 μm, but for the measurements reported here this size was 100x100 μm. The optimal values of parameters of a trapezoidal profile model, acquired for each azimuthal angle separately using a non-linear least-square minimization algorithm, are shown for a typical grating. Further statistical analysis of the azimuth-dependent dimensional parameters provided realistic estimates of the confidence interval giving direct information about the accuracy of the results. The mean values and the standard deviations were calculated for 21 different grating boxes featuring in total 399 measured spectra and fits. The results for all boxes are summarized in a table which compares the optical method to the 3D-AFM. The essential conclusion of our work is that the 3D-AFM values always fall into the confidence intervals provided by the optical method, which means that we have successfully estimated the accuracy of our results without using direct comparison with another, non-optical, method. Moreover, this approach may provide a way to improve the accuracy of grating profile modeling by minimizing the standard deviations evaluated from multiple-azimuths results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, King C.; Liu, Da-Jiang; Thiel, Patricia A.
Diffusion coefficients, D N, for 2D vacancy nanopits are compared with those for 2D homoepitaxial adatom nanoislands on metal(100) surfaces, focusing on the variation of DN with size, N. Here, N is measured in missing atoms for pits and adatoms for islands. Analysis of D N is based on kinetic Monte Carlo simulations of a tailored stochastic lattice-gas model, where pit and island diffusion are mediated by periphery diffusion, i.e., by edge atom hopping. Precise determination of D N versus N for typical parameters reveals a cyclical variation with an overall decrease in magnitude for increasing moderate O(10 2) ≤more » N ≤ O(10 3). Monotonic decay, D N~ N -β, is found for N ≥ O(10 2) with effective exponents, β = β eff, for both pits and islands, both well below the macroscopic value of βmacro = 3/2. D N values for vacancy pits are significantly lower (higher) than for adatom islands for moderate N in the case of low (high) kink rounding barrier. However, D N values for pits and islands slowly merge, and β eff → 3/2 for sufficiently large N. The latter feature is expected from continuum Langevin formulations appropriate for large sizes. Finally, we compare predictions from our model incorporating appropriate energetic parameters for Ag(100) with different sets of experimental data for diffusivity at 300 K, including assessment of β eff, for experimentally observed sizes N from ~100 to ~1000.« less
Pulsating Hydrodynamic Instability in a Dynamic Model of Liquid-Propellant Combustion
NASA Technical Reports Server (NTRS)
Margolis, Stephen B.; Sacksteder, Kurt (Technical Monitor)
1999-01-01
Hydrodynamic (Landau) instability in combustion is typically associated with the onset of wrinkling of a flame surface, corresponding to the formation of steady cellular structures as the stability threshold is crossed. In the context of liquid-propellant combustion, such instability has recently been shown to occur for critical values of the pressure sensitivity of the burning rate and the disturbance wavenumber, significantly generalizing previous classical results for this problem that assumed a constant normal burning rate. Additionally, however, a pulsating form of hydrodynamic instability has been shown to occur as well, corresponding to the onset of temporal oscillations in the location of the liquid/gas interface. In the present work, we consider the realistic influence of a nonzero temperature sensitivity in the local burning rate on both types of stability thresholds. It is found that for sufficiently small values of this parameter, there exists a stable range of pressure sensitivities for steady, planar burning such that the classical cellular form of hydrodynamic instability and the more recent pulsating form of hydrodynamic instability can each occur as the corresponding stability threshold is crossed. For larger thermal sensitivities, however, the pulsating stability boundary evolves into a C-shaped curve in the disturbance-wavenumber/ pressure-sensitivity plane, indicating loss of stability to pulsating perturbations for all sufficiently large disturbance wavelengths. It is thus concluded, based on characteristic parameter values, that an equally likely form of hydrodynamic instability in liquid-propellant combustion is of a nonsteady, long-wave nature, distinct from the steady, cellular form originally predicted by Landau.
NASA Technical Reports Server (NTRS)
Margolis, Stephen B.; Sacksteder, Kurt (Technical Monitor)
1999-01-01
Hydrodynamic (Landau) instability in combustion is typically associated with the onset of wrinkling of a flame surface, corresponding to the formation of steady cellular structures as the stability threshold is crossed. In the context of liquid-propellant combustion, such instability has recently been shown to occur for critical values of the pressure sensitivity of the burning rate and the disturbance wavenumber, significantly generalizing previous classical results for this problem that assumed a constant normal burning rate. Additionally, however, a pulsating form of hydrodynamic instability has been shown to occur as well, corresponding to the onset of temporal oscillations in the location of the liquid/gas interface. In the present work, we consider the realistic influence of a non-zero temperature sensitivity in the local burning rate on both types of stability thresholds. It is found that for sufficiently small values of this parameter, there exists a stable range of pressure sensitivities for steady, planar burning such that the classical cellular form of hydrodynamic instability and the more recent pulsating form of hydrodynamic instability can each occur as the corresponding stability threshold is crossed. For larger thermal sensitivities, however, the pulsating stability boundary evolves into a C-shaped curve in the (disturbance-wavenumber, pressure-sensitivity) plane, indicating loss of stability to pulsating perturbations for all sufficiently large disturbance wavelengths. It is thus concluded, based on characteristic parameter values, that an equally likely form of hydrodynamic instability in liquid-propellant combustion is of a non-steady, long-wave nature, distinct from the steady, cellular form originally predicted by Landau.
Lai, King C.; Liu, Da-Jiang; Thiel, Patricia A.; ...
2018-02-22
Diffusion coefficients, D N, for 2D vacancy nanopits are compared with those for 2D homoepitaxial adatom nanoislands on metal(100) surfaces, focusing on the variation of DN with size, N. Here, N is measured in missing atoms for pits and adatoms for islands. Analysis of D N is based on kinetic Monte Carlo simulations of a tailored stochastic lattice-gas model, where pit and island diffusion are mediated by periphery diffusion, i.e., by edge atom hopping. Precise determination of D N versus N for typical parameters reveals a cyclical variation with an overall decrease in magnitude for increasing moderate O(10 2) ≤more » N ≤ O(10 3). Monotonic decay, D N~ N -β, is found for N ≥ O(10 2) with effective exponents, β = β eff, for both pits and islands, both well below the macroscopic value of βmacro = 3/2. D N values for vacancy pits are significantly lower (higher) than for adatom islands for moderate N in the case of low (high) kink rounding barrier. However, D N values for pits and islands slowly merge, and β eff → 3/2 for sufficiently large N. The latter feature is expected from continuum Langevin formulations appropriate for large sizes. Finally, we compare predictions from our model incorporating appropriate energetic parameters for Ag(100) with different sets of experimental data for diffusivity at 300 K, including assessment of β eff, for experimentally observed sizes N from ~100 to ~1000.« less
Decadal water quality variations at three typical basins of Mekong, Murray and Yukon
NASA Astrophysics Data System (ADS)
Khan, Afed U.; Jiang, Jiping; Wang, Peng
2018-02-01
Decadal distribution of water quality parameters is essential for surface water management. Decadal distribution analysis was conducted to assess decadal variations in water quality parameters at three typical watersheds of Murray, Mekong and Yukon. Right distribution shifts were observed for phosphorous and nitrogen parameters at the Mekong watershed monitoring sites while left shifts were noted at the Murray and Yukon monitoring sites. Nutrients pollution increases with time at the Mekong watershed while decreases at the Murray and Yukon watershed monitoring stations. The results implied that watershed located in densely populated developing area has higher risk of water quality deterioration in comparison to thinly populated developed area. The present study suggests best management practices at watershed scale to modulate water pollution.
Asymptotic solutions for the case of nearly symmetric gravitational lens systems
NASA Astrophysics Data System (ADS)
Wertz, O.; Pelgrims, V.; Surdej, J.
2012-08-01
Gravitational lensing provides a powerful tool to determine the Hubble parameter H0 from the measurement of the time delay Δt between two lensed images of a background variable source. Nevertheless, knowledge of the deflector mass distribution constitutes a hurdle. We propose in the present work interesting solutions for the case of nearly symmetric gravitational lens systems. For the case of a small misalignment between the source, the deflector and the observer, we first consider power-law (ɛ) axially symmetric models for which we derive an analytical relation between the amplification ratio and source position which is independent of the power-law slope ɛ. According to this relation, we deduce an expression for H0 also irrespective of the value ɛ. Secondly, we consider the power-law axially symmetric lens models with an external large-scale gravitational field, the shear γ, resulting in the so-called ɛ-γ models, for which we deduce simple first-order equations linking the model parameters and the lensed image positions, the latter being observable quantities. We also deduce simple relations between H0 and observables quantities only. From these equations, we may estimate the value of the Hubble parameter in a robust way. Nevertheless, comparison between the ɛ-γ and singular isothermal ellipsoid (SIE) models leads to the conclusion that these models remain most often distinct. Therefore, even for the case of a small misalignment, use of the first-order equations and precise astrometric measurements of the positions of the lensed images with respect to the centre of the deflector enables one to discriminate between these two families of models. Finally, we confront the models with numerical simulations to evaluate the intrinsic error of the first-order expressions used when deriving the model parameters under the assumption of a quasi-alignment between the source, the deflector and the observer. From these same simulations, we estimate for the case of the ɛ-γ family of models that the standard deviation affecting H0 is ? which merely reflects the adopted astrometric uncertainties on the relative image positions, typically ? arcsec. In conclusions, we stress the importance of getting very accurate measurements of the relative positions of the multiple lensed images and of the time delays for the case of nearly symmetric gravitational lens systems, in order to derive robust and precise values of the Hubble parameter.
Boudreau, Mathieu; Pike, G Bruce
2018-05-07
To develop and validate a regularization approach of optimizing B 1 insensitivity of the quantitative magnetization transfer (qMT) pool-size ratio (F). An expression describing the impact of B 1 inaccuracies on qMT fitting parameters was derived using a sensitivity analysis. To simultaneously optimize for robustness against noise and B 1 inaccuracies, the optimization condition was defined as the Cramér-Rao lower bound (CRLB) regularized by the B 1 -sensitivity expression for the parameter of interest (F). The qMT protocols were iteratively optimized from an initial search space, with and without B 1 regularization. Three 10-point qMT protocols (Uniform, CRLB, CRLB+B 1 regularization) were compared using Monte Carlo simulations for a wide range of conditions (e.g., SNR, B 1 inaccuracies, tissues). The B 1 -regularized CRLB optimization protocol resulted in the best robustness of F against B 1 errors, for a wide range of SNR and for both white matter and gray matter tissues. For SNR = 100, this protocol resulted in errors of less than 1% in mean F values for B 1 errors ranging between -10 and 20%, the range of B 1 values typically observed in vivo in the human head at field strengths of 3 T and less. Both CRLB-optimized protocols resulted in the lowest σ F values for all SNRs and did not increase in the presence of B 1 inaccuracies. This work demonstrates a regularized optimization approach for improving the robustness of auxiliary measurements (e.g., B 1 ) sensitivity of qMT parameters, particularly the pool-size ratio (F). Predicting substantially less B 1 sensitivity using protocols optimized with this method, B 1 mapping could even be omitted for qMT studies primarily interested in F. © 2018 International Society for Magnetic Resonance in Medicine.
Regan, R. Steven; Markstrom, Steven L.; Hay, Lauren E.; Viger, Roland J.; Norton, Parker A.; Driscoll, Jessica M.; LaFontaine, Jacob H.
2018-01-08
This report documents several components of the U.S. Geological Survey National Hydrologic Model of the conterminous United States for use with the Precipitation-Runoff Modeling System (PRMS). It provides descriptions of the (1) National Hydrologic Model, (2) Geospatial Fabric for National Hydrologic Modeling, (3) PRMS hydrologic simulation code, (4) parameters and estimation methods used to compute spatially and temporally distributed default values as required by PRMS, (5) National Hydrologic Model Parameter Database, and (6) model extraction tool named Bandit. The National Hydrologic Model Parameter Database contains values for all PRMS parameters used in the National Hydrologic Model. The methods and national datasets used to estimate all the PRMS parameters are described. Some parameter values are derived from characteristics of topography, land cover, soils, geology, and hydrography using traditional Geographic Information System methods. Other parameters are set to long-established default values and computation of initial values. Additionally, methods (statistical, sensitivity, calibration, and algebraic) were developed to compute parameter values on the basis of a variety of nationally-consistent datasets. Values in the National Hydrologic Model Parameter Database can periodically be updated on the basis of new parameter estimation methods and as additional national datasets become available. A companion ScienceBase resource provides a set of static parameter values as well as images of spatially-distributed parameters associated with PRMS states and fluxes for each Hydrologic Response Unit across the conterminuous United States.
Multi-scale curvature for automated identification of glaciated mountain landscapes
NASA Astrophysics Data System (ADS)
Prasicek, Günther; Otto, Jan-Christoph; Montgomery, David R.; Schrott, Lothar
2014-03-01
Erosion by glacial and fluvial processes shapes mountain landscapes in a long-recognized and characteristic way. Upland valleys incised by fluvial processes typically have a V-shaped cross-section with uniform and moderately steep slopes, whereas glacial valleys tend to have a U-shaped profile with a changing slope gradient. We present a novel regional approach to automatically differentiate between fluvial and glacial mountain landscapes based on the relation of multi-scale curvature and drainage area. Sample catchments are delineated and multiple moving window sizes are used to calculate per-cell curvature over a variety of scales ranging from the vicinity of the flow path at the valley bottom to catchment sections fully including valley sides. Single-scale curvature can take similar values for glaciated and non-glaciated catchments but a comparison of multi-scale curvature leads to different results according to the typical cross-sectional shapes. To adapt these differences for automated classification of mountain landscapes into areas with V- and U-shaped valleys, curvature values are correlated with drainage area and a new and simple morphometric parameter, the Difference of Minimum Curvature (DMC), is developed. At three study sites in the western United States the DMC thresholds determined from catchment analysis are used to automatically identify 5 × 5 km quadrats of glaciated and non-glaciated landscapes and the distinctions are validated by field-based geological and geomorphological maps. Our results demonstrate that DMC is a good predictor of glacial imprint, allowing automated delineation of glacially and fluvially incised mountain landscapes.
Population pharmacokinetics and pharmacodynamics of bivalirudin in young healthy Chinese volunteers.
Zhang, Dong-mei; Wang, Kun; Zhao, Xia; Li, Yun-fei; Zheng, Qing-shan; Wang, Zi-ning; Cui, Yi-min
2012-11-01
To investigate the population pharmacokinetics (PK) and pharmacodynamics (PD) of bivalirudin, a synthetic bivalent direct thrombin inhibitor, in young healthy Chinese subjects. Thirty-six young healthy volunteers were randomly assigned into 4 groups received bivalirudin 0.5 mg/kg, 0.75 mg/kg, and 1.05 mg/kg intravenous bolus, 0.75 mg/kg intravenous bolus followed by 1.75 mg/kg intravenous infusion per hour for 4 h. Blood samples were collected to measure bivalirudin plasma concentration and activated clotting time (ACT). Population PK-PD analysis was performed using the nonlinear mixed-effects model software NONMEM. The final models were validated with bootstrap and prediction-corrected visual predictive check (pcVPC) approaches. The final PK model was a two-compartment model without covariates. The typical PK population values of clearance (CL), apparent distribution volume of the central-compartment (V(1)), inter-compartmental clearance (Q) and apparent distribution volume of the peripheral compartment (V(2)) were 0.323 L·h(-1)·kg(-1), 0.086 L/kg, 0.0957 L·h(-1)·kg(-1), and 0.0554 L/kg, respectively. The inter-individual variabilities of these parameters were 14.8%, 24.2%, fixed to 0% and 15.6%, respectively. The final PK-PD model was a sigmoid E(max) model without the Hill coefficient. In this model, a covariate, red blood cell count (RBC(*)), had a significant effect on the EC(50) value. The typical PD population values of maximum effect (E(max)), EC(50), baseline ACT value (E(0)) and the coefficient of RBC(*) on EC(50) were 318 s, 2.44 mg/L, 134 s and 1.70, respectively. The inter-individual variabilities of E(max), EC(50), and E(0) were 6.80%, 46.4%, and 4.10%, respectively. Population PK-PD models of bivalirudin in healthy young Chinese subjects have been developed, which may provide a reference for future use of bivalirudin in China.
Crack curving in a ductile pressurized fuselage
NASA Astrophysics Data System (ADS)
Lam, Paul W.
Moire interferometry was used to study crack tip displacement fields of a biaxially loaded cruciform type 0.8mm thick 2024-T3 aluminum specimen with various tearstrap reinforcement configurations: Unreinforced, Bonded, Bonded+Riveted, and Machined Pad-up. A program was developed using the commercially available code Matlab to derive strain, stress, and integral parameters from the experimental displacements. An FEM model of the crack tip area, with experimental displacements as boundary conditions, was used to validate FEM calculations of crack tip parameters. The results indicate that T*-integral parameter reaches a value of approximately 120 MPa-m0.5 during stable crack propagation which agrees with previously published values for straight cracks in the same material. The approximate computation method employed in this study uses a partial contour around the crack tip that neglects the contribution from the portion behind the crack tip where there is significant unloading. Strain distributions around the crack tip were obtained from experimental displacements and indicate that Maximum Principal Strain or Equivalent Strain can predict the direction of crack propagation, and is generally comparable with predictions using the Erdogan-Sih and Kosai-Ramulu-Kobayashi criteria. The biaxial tests to failure showed that the Machined Pad-up specimen carried the highest load, with the Bonded specimen next, at 78% of the Machined Pad-up value. The Bonded+Riveted specimen carried a lower load than the Bonded, at 67% of the Machined Pad-up value, which was the same as that carried by the Unreinforced specimen. The tearstraps of the bonded specimens remained intact after the specimen failed while the integrally machined reinforcement broke with the specimen. FEM studies were also made of skin flapping in typical Narrow and Wide-body fuselage sections, both containing the same crack path from a full-scale fatigue test of a Narrow-body fuselage. Results indicate that the magnitude of CTOA and CTOD depends on the structural geometry, and including plasticity increases the crack tip displacements. An estimate of the strain in the skin flaps at the crack tip may indicate the tendency for flapping. Out-of-plane effects become significant as the crack propagates and curves.
Parameters of triggered-lightning flashes in Florida and Alabama
NASA Astrophysics Data System (ADS)
Fisher, R. J.; Schnetzer, G. H.; Thottappillil, R.; Rakov, V. A.; Uman, M. A.; Goldberg, J. D.
1993-12-01
Channel base currents from triggered lightning were measured at the NASA Kennedy Space Center, Florida, during summer 1990 and at Fort McClellan, Alabama, during summer 1991. Additionally, 16-mm cinematic records with 3- or 5-ms resolution were obtained for all flashes, and streak camera records were obtained for three of the Florida flashes. The 17 flashes analyzed here contained 69 strokes, all lowering negative charge from cloud to ground. Statistics on interstroke interval, no-current interstroke interval, total stroke duration, total stroke charge, total stroke action integral (∫ i2dt), return stroke current wave front characteristics, time to half peak value, and return stroke peak current are presented. Return stroke current pulses, characterized by rise times of the order of a few microseconds or less and peak values in the range of 4 to 38 kA, were found not to occur until after any preceding current at the bottom of the lightning channel fell below the noise level of less than 2 A. Current pulses associated with M components, characterized by slower rise times (typically tens to hundreds of microseconds) and peak values generally smaller than those of the return stroke pulses, occurred during established channel current flow of some tens to some hundreds of amperes. A relatively strong positive correlation was found between return stroke current average rate of rise and current peak. There was essentially no correlation between return stroke current peak and 10-90% rise time or between return stroke peak and the width of the current waveform at half of its peak value. Parameters of the lightning flashes triggered in Florida and Alabama are similar to each other but are different from those of triggered lightning recorded in New Mexico during the 1981 Thunderstorm Research International Program. Continuing currents that follow return stroke current peaks and last for more than 10 ms exhibit a variety of wave shapes that we have subdivided into four categories. All such continuing currents appear to start with a current pulse presumably associated with an M component. A brief summary of lightning parameters important for lightning protection, in a form convenient for practical use, is presented in an appendix.
NASA Astrophysics Data System (ADS)
Williams, Q. C.; Manghnani, M. H.
2017-12-01
The convective style of planetary cores is critically dependent on the thermal properties of iron alloys. In particular, the relation between the adiabatic gradient and the melting curve governs whether planetary cores solidify from their top down (when the adiabat is steeper than the melting curve) or the bottom up (the converse). Molten iron alloys, in general, have large, ambient pressure thermal expansions: values in excess of 1.2 x 10^-4/K are dictated by data derived from levitated and sessile drop techniques. These high values of the thermal expansion imply that the adiabatic gradients within early planetesimals and present day moons that have comparatively low-pressure, iron-rich cores are steep (typically greater than 35 K/GPa at low pressures): values, at low pressures, that are greater than the slope of the melting curve, and hence show that the cores of small solar system objects probably crystallize from the top-down. Here, we deploy a different manifestation of these large values of thermal expansion to determine the pressure dependence of thermal expansion in iron-rich liquids: a difficult parameter to experimentally measure, and critical for determining the size range of cores in which top-down core solidification predominates. In particular, the difference between the adiabatic and isothermal bulk moduli of iron liquids is in the 20-30% range at the melting temperature, and scales as the product of the thermal expansion, the Grüneisen parameter, and the temperature. Hence, ultrasonic (and adiabatic) moduli of iron alloy liquids, when coupled with isothermal sink-float measurements, can yield quantitative constraints on the pressure dependence of thermal expansion. For liquid iron alloys containing 17 wt% Si, we find that the thermal expansion is reduced by 50% over the first 8 GPa of compression. This "squeezing out" of the anomalously high low-pressure thermal expansion of iron-rich alloys at relatively modest conditions likely limits the size range over which top-down crystallizing cores are anticipated within planetary bodies.
NOMINAL VALUES FOR SELECTED SOLAR AND PLANETARY QUANTITIES: IAU 2015 RESOLUTION B3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prša, Andrej; Harmanec, Petr; Torres, Guillermo
In this brief communication we provide the rationale for and the outcome of the International Astronomical Union (IAU) resolution vote at the XXIXth General Assembly in Honolulu, Hawaii, in 2015, on recommended nominal conversion constants for selected solar and planetary properties. The problem addressed by the resolution is a lack of established conversion constants between solar and planetary values and SI units: a missing standard has caused a proliferation of solar values (e.g., solar radius, solar irradiance, solar luminosity, solar effective temperature, and solar mass parameter) in the literature, with cited solar values typically based on best estimates at the timemore » of paper writing. As precision of observations increases, a set of consistent values becomes increasingly important. To address this, an IAU Working Group on Nominal Units for Stellar and Planetary Astronomy formed in 2011, uniting experts from the solar, stellar, planetary, exoplanetary, and fundamental astronomy, as well as from general standards fields to converge on optimal values for nominal conversion constants. The effort resulted in the IAU 2015 Resolution B3, passed at the IAU General Assembly by a large majority. The resolution recommends the use of nominal solar and planetary values, which are by definition exact and are expressed in SI units. These nominal values should be understood as conversion factors only, not as the true solar/planetary properties or current best estimates. Authors and journal editors are urged to join in using the standard values set forth by this resolution in future work and publications to help minimize further confusion.« less
Nominal Values for Selected Solar and Planetary Quantities: IAU 2015 Resolution B3
NASA Astrophysics Data System (ADS)
Prša, Andrej; Harmanec, Petr; Torres, Guillermo; Mamajek, Eric; Asplund, Martin; Capitaine, Nicole; Christensen-Dalsgaard, Jørgen; Depagne, Éric; Haberreiter, Margit; Hekker, Saskia; Hilton, James; Kopp, Greg; Kostov, Veselin; Kurtz, Donald W.; Laskar, Jacques; Mason, Brian D.; Milone, Eugene F.; Montgomery, Michele; Richards, Mercedes; Schmutz, Werner; Schou, Jesper; Stewart, Susan G.
2016-08-01
In this brief communication we provide the rationale for and the outcome of the International Astronomical Union (IAU) resolution vote at the XXIXth General Assembly in Honolulu, Hawaii, in 2015, on recommended nominal conversion constants for selected solar and planetary properties. The problem addressed by the resolution is a lack of established conversion constants between solar and planetary values and SI units: a missing standard has caused a proliferation of solar values (e.g., solar radius, solar irradiance, solar luminosity, solar effective temperature, and solar mass parameter) in the literature, with cited solar values typically based on best estimates at the time of paper writing. As precision of observations increases, a set of consistent values becomes increasingly important. To address this, an IAU Working Group on Nominal Units for Stellar and Planetary Astronomy formed in 2011, uniting experts from the solar, stellar, planetary, exoplanetary, and fundamental astronomy, as well as from general standards fields to converge on optimal values for nominal conversion constants. The effort resulted in the IAU 2015 Resolution B3, passed at the IAU General Assembly by a large majority. The resolution recommends the use of nominal solar and planetary values, which are by definition exact and are expressed in SI units. These nominal values should be understood as conversion factors only, not as the true solar/planetary properties or current best estimates. Authors and journal editors are urged to join in using the standard values set forth by this resolution in future work and publications to help minimize further confusion.
Adaptive Value Normalization in the Prefrontal Cortex Is Reduced by Memory Load
Burke, C. J.; Seifritz, E.; Tobler, P. N.
2017-01-01
Abstract Adaptation facilitates neural representation of a wide range of diverse inputs, including reward values. Adaptive value coding typically relies on contextual information either obtained from the environment or retrieved from and maintained in memory. However, it is unknown whether having to retrieve and maintain context information modulates the brain’s capacity for value adaptation. To address this issue, we measured hemodynamic responses of the prefrontal cortex (PFC) in two studies on risky decision-making. In each trial, healthy human subjects chose between a risky and a safe alternative; half of the participants had to remember the risky alternatives, whereas for the other half they were presented visually. The value of safe alternatives varied across trials. PFC responses adapted to contextual risk information, with steeper coding of safe alternative value in lower-risk contexts. Importantly, this adaptation depended on working memory load, such that response functions relating PFC activity to safe values were steeper with presented versus remembered risk. An independent second study replicated the findings of the first study and showed that similar slope reductions also arose when memory maintenance demands were increased with a secondary working memory task. Formal model comparison showed that a divisive normalization model fitted effects of both risk context and working memory demands on PFC activity better than alternative models of value adaptation, and revealed that reduced suppression of background activity was the critical parameter impairing normalization with increased memory maintenance demand. Our findings suggest that mnemonic processes can constrain normalization of neural value representations. PMID:28462394
Determination of sustainable values for the parameters of the construction of residential buildings
NASA Astrophysics Data System (ADS)
Grigoreva, Larisa; Grigoryev, Vladimir
2018-03-01
For the formation of programs for housing construction and planning of capital investments, when developing the strategic planning companies by construction companies, the norms or calculated indicators of the duration of the construction of high-rise residential buildings and multifunctional complexes are mandatory. Determination of stable values of the parameters for the high-rise construction residential buildings provides an opportunity to establish a reasonable duration of construction at the planning and design stages of residential complexes, taking into account the influence of market conditions factors. The concept of the formation of enlarged models for the high-rise construction residential buildings is based on a real mapping in time and space of the most significant redistribution with their organizational and technological interconnection - the preparatory period, the underground part, the above-ground part, external engineering networks, landscaping. The total duration of the construction of a residential building, depending on the duration of each redistribution and the degree of their overlapping, can be determined by one of the proposed four options. At the same time, a unified approach to determining the overall duration of construction on the basis of the provisions of a streamlined construction organization with the testing of results on the example of high-rise residential buildings of the typical I-155B series was developed, and the coefficients for combining the work and the main redevelopment of the building were determined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaczmarski, Krzysztof; Guiochon, Georges A
The adsorption isotherms of selected compounds are our main source of information on the mechanisms of adsorption processes. Thus, the selection of the methods used to determine adsorption isotherm data and to evaluate the errors made is critical. Three chromatographic methods were evaluated, frontal analysis (FA), frontal analysis by characteristic point (FACP), and the pulse or perturbation method (PM), and their accuracies were compared. Using the equilibrium-dispersive (ED) model of chromatography, breakthrough curves of single components were generated corresponding to three different adsorption isotherm models: the Langmuir, the bi-Langmuir, and the Moreau isotherms. For each breakthrough curve, the best conventionalmore » procedures of each method (FA, FACP, PM) were used to calculate the corresponding data point, using typical values of the parameters of each isotherm model, for four different values of the column efficiency (N = 500, 1000, 2000, and 10,000). Then, the data points were fitted to each isotherm model and the corresponding isotherm parameters were compared to those of the initial isotherm model. When isotherm data are derived with a chromatographic method, they may suffer from two types of errors: (1) the errors made in deriving the experimental data points from the chromatographic records; (2) the errors made in selecting an incorrect isotherm model and fitting to it the experimental data. Both errors decrease significantly with increasing column efficiency with FA and FACP, but not with PM.« less
Royle, J. Andrew; Sutherland, Christopher S.; Fuller, Angela K.; Sun, Catherine C.
2015-01-01
We develop a likelihood analysis framework for fitting spatial capture-recapture (SCR) models to data collected on class structured or stratified populations. Our interest is motivated by the necessity of accommodating the problem of missing observations of individual class membership. This is particularly problematic in SCR data arising from DNA analysis of scat, hair or other material, which frequently yields individual identity but fails to identify the sex. Moreover, this can represent a large fraction of the data and, given the typically small sample sizes of many capture-recapture studies based on DNA information, utilization of the data with missing sex information is necessary. We develop the class structured likelihood for the case of missing covariate values, and then we address the scaling of the likelihood so that models with and without class structured parameters can be formally compared regardless of missing values. We apply our class structured model to black bear data collected in New York in which sex could be determined for only 62 of 169 uniquely identified individuals. The models containing sex-specificity of both the intercept of the SCR encounter probability model and the distance coefficient, and including a behavioral response are strongly favored by log-likelihood. Estimated population sex ratio is strongly influenced by sex structure in model parameters illustrating the importance of rigorous modeling of sex differences in capture-recapture models.
Adaptive Sampling-Based Information Collection for Wireless Body Area Networks.
Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui
2016-08-31
To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of wireless link, the uploading of sensed data has an upper frequency. To reduce upload frequency, most of the existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee precision of the collected data, but they are not able to ensure that the upload frequency is within the upper frequency. Some traditional sampling based approaches can control upload frequency directly, however, they usually have a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. Then we propose Adaptive Sampling-based Information Collection (ASIC) which consists of two algorithms. An adaptive sampling probability algorithm is proposed to compute sampling probabilities of different sensed values. A multiple uniform sampling algorithm provides uniform samplings for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings and the discussion shows the underlying reason of high performance in the proposed approach.
Adaptive Sampling-Based Information Collection for Wireless Body Area Networks
Xu, Xiaobin; Zhao, Fang; Wang, Wendong; Tian, Hui
2016-01-01
To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of wireless link, the uploading of sensed data has an upper frequency. To reduce upload frequency, most of the existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee precision of the collected data, but they are not able to ensure that the upload frequency is within the upper frequency. Some traditional sampling based approaches can control upload frequency directly, however, they usually have a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. Then we propose Adaptive Sampling-based Information Collection (ASIC) which consists of two algorithms. An adaptive sampling probability algorithm is proposed to compute sampling probabilities of different sensed values. A multiple uniform sampling algorithm provides uniform samplings for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings and the discussion shows the underlying reason of high performance in the proposed approach. PMID:27589758
Roidoung, Sunisa; Dolan, Kirk D; Siddiq, Muhammad
2017-04-01
Color degradation in cranberry juice during storage is the most common consumer complaint. To enhance nutritional quality, juice is typically fortified with vitamin C. This study determined effect of gallic acid, a natural antioxidant, for the preservation of anthocyanins (ACYs) and color, and estimated kinetics of ACYs and color degradation. Juice, fortified with 40-80mg/100mL vitamin C and 0-320mg/100mL gallic acid, was pasteurized at 85°C for 1min and stored at 23°C for 16days. Total monomeric anthocyanins and red color intensity were evaluated spectrophotometrically and data were used to determine degradation rate constants (k values) and order of reaction (n) of ACYs and color. Due to high correlation, k and n could not be estimated simultaneously. To overcome this difficulty, both n and k were held at different constant values in separate analyses to allow accurate estimation of each. Parameters n and k were modeled empirically as functions of vitamin C, and of vitamin C and gallic acid, respectively. Reaction order n ranged from 1.2 to 4.4, and decreased with increasing vitamin C concentration. The final model offers an effective tool that could be used for predicting ACYs and color retention in cranberry juice during storage. Copyright © 2017. Published by Elsevier Ltd.
Modeling the effect of glutamate diffusion and uptake on NMDA and non-NMDA receptor saturation.
Holmes, W R
1995-01-01
One- and two-dimensional models of glutamate diffusion, uptake, and binding in the synaptic cleft were developed to determine if the release of single vesicles of glutamate would saturate NMDA and non-NMDA receptors. Ranges of parameter values were used in the simulations to determine the conditions when saturation could occur. Single vesicles of glutamate did not saturate NMDA receptors unless diffusion was very slow and the number of glutamate molecules in a vesicle was large. However, the release of eight vesicles at 400 Hz caused NMDA receptor saturation for all parameter values tested. Glutamate uptake was found to reduce NMDA receptor saturation, but the effect was smaller than that of changes in the diffusion coefficient or in the number of glutamate molecules in a vesicle. Non-NMDA receptors were not saturated unless diffusion was very slow and the number of glutamate molecules in a vesicle was large. The release of eight vesicles at 400 Hz caused significant non-NMDA receptor desensitization. The results suggest that NMDA and non-NMDA receptors are not saturated by single vesicles of glutamate under usual conditions, and that tetanic input, of the type typically used to induce long-term potentiation, will increase calcium influx by increasing receptor binding as well as by reducing voltage-dependent block of NMDA receptors. Images FIGURE 1 PMID:8580317
Wave packet analysis and break-up length calculations for an accelerating planar liquid jet
NASA Astrophysics Data System (ADS)
Turner, M. R.; Healey, J. J.; Sazhin, S. S.; Piazzesi, R.
2012-02-01
This paper examines the process of transition to turbulence within an accelerating planar liquid jet. By calculating the propagation and spatial evolution of disturbance wave packets generated at a nozzle where the jet emerges, we are able to estimate break-up lengths and break-up times for different magnitudes of acceleration and different liquid to air density ratios. This study uses a basic jet velocity profile that has shear layers in both air and the liquid either side of the fluid interface. The shear layers are constructed as functions of velocity which behave in line with our CFD simulations of injecting diesel jets. The non-dimensional velocity of the jet along the jet centre-line axis is assumed to take the form V (t) = tanh(at), where the parameter a determines the magnitude of the acceleration. We compare the fully unsteady results obtained by solving the unsteady Rayleigh equation to those of a quasi-steady jet to determine when the unsteady effects are significant and whether the jet can be regarded as quasi-steady in typical operating conditions for diesel engines. For a heavy fluid injecting into a lighter fluid (density ratio ρair/ρjet = q < 1), it is found that unsteady effects are mainly significant at early injection times where the jet velocity profile is changing fastest. When the shear layers in the jet thin with time, the unsteady effects cause the growth rate of the wave packet to be smaller than the corresponding quasi-steady jet, whereas for thickening shear layers the unsteady growth rate is larger than that of the quasi-steady jet. For large accelerations (large a), the unsteady effect remains at later times but its effect on the growth rate of the wave packet decreases as the time after injection increases. As the rate of acceleration is reduced, the range of velocity values for which the jet can be considered as quasi-steady increases until eventually the whole jet can be considered quasi-steady. For a homogeneous jet (q = 1), the range of values of a for which the jet can be considered completely quasi-steady increases to larger values of a. Finally, we investigate approximating the wave packet break-up length calculations with a method that follows the most unstable disturbance wave as the jet accelerates. This approach is similar to that used in CFD simulations as it greatly reduces computational time. We investigate whether or not this is a good approximation for the parameter values typically used in diesel engines.
Momentum broadening in unstable quark-gluon plasma
Carrington, M. E.; Mrówczyński, St.; Schenke, B.
2017-02-01
We present that quark-gluon plasma produced at the early stage of ultrarelativistic heavy-ion collisions is unstable, if weakly coupled, due to the anisotropy of its momentum distribution. Chromomagnetic fields are spontaneously generated and can reach magnitudes much exceeding typical values of the fields in equilibrated plasma. We consider a high-energy test parton traversing an unstable plasma that is populated with strong fields. We study the momentum broadening parametermore » $$ˆ\\atop{q}$$ which determines the radiative energy loss of the test parton. We develop a formalism which gives $$ˆ\\atop{q}$$ as the solution of an initial value problem, and we focus on extremely oblate plasmas which are physically relevant for relativistic heavy-ion collisions. The parameter $$ˆ\\atop{q}$$ is found to be strongly dependent on time. For short times it is of the order of the equilibrium value, but at later times $$ˆ\\atop{q}$$ grows exponentially due to the interaction of the test parton with unstable modes and becomes much bigger than the value in equilibrium. The momentum broadening is also strongly directionally dependent and is largest when the test parton velocity is transverse to the beam axis. Lastly, consequences of our findings for the phenomenology of jet quenching in relativistic heavy-ion collisions are briefly discussed.« less
Low temperature electrical properties of some Pb-free solders
NASA Astrophysics Data System (ADS)
Kisiel, Ryszard; Pekala, Marek
2006-03-01
The electronic industry is engaged in developing Pb-free technologies for more than ten years. However till now not all properties of new solders are described. The aim of the paper is to present some electrical properties of new series of Pb-free solders (eutectic SnAg, near eutectic SnAgCu with and without Bi) in low temperature ranges 10 K to 273K. The following parameters were analyzed: electrical resistivity, temperature coefficient of resistance and thermoelectric power. The electrical resistivity at temperatures above 50 K is a monotonically rising function of temperature for Pb-free solders studied. The electrical resistivity of the Bi containing alloys is higher as compared to the remaining ones. The thermoelectric power values at room temperature are about -8 μV/K to -6 μV/K for Pb-free solders studied, being higher as compared to typical values -3 μVK of SnPb solder. The relatively low absolute values as well as the smooth and weak temperature variation of electrical resistivity in lead free solders enable the possible low temperature application. The moderate values of thermoelectric power around and above the room temperature show that when applying the solders studied the temperature should be kept as uniform as possible, in order to avoid spurious or noise voltages.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heifetz, Alexander; Vilim, Richard
Super-critical carbon dioxide (S-CO2) is a promising thermodynamic cycle for advanced nuclear reactors and solar energy conversion applications. Dynamic control of the proposed recompression S-CO2 cycle is accomplished with input from resistance temperature detector (RTD) measurements of the process fluid. One of the challenges in practical implementation of S-CO2 cycle is high corrosion rate of component and sensor materials. In this paper, we develop a mathematical model of RTD sensing using eigendecomposition model of radial heat transfer in a layered long cylinder. We show that the value of RTD time constant primarily depends on the rate of heat transfer frommore » the fluid to the outer wall of RTD. We also show that for typical material properties, RTD time constant can be calculated as the sum of reciprocal eigen-values of the heat transfer matrix. Using the computational model and a set of RTD and CO2 fluid thermophysical parameter values, we calculate the value of time constant of thermowell-mounted RTD sensor at the hot side of the precooler in the S-CO2 cycle. The eigendecomposition model of RTD will be used in future studies to model sensor degradation and its impact on control of S-CO2. (C) 2016 Elsevier B.V. All rights reserved.« less
Threat evaluation for impact assessment in situation analysis systems
NASA Astrophysics Data System (ADS)
Roy, Jean; Paradis, Stephane; Allouche, Mohamad
2002-07-01
Situation analysis is defined as a process, the examination of a situation, its elements, and their relations, to provide and maintain a product, i.e., a state of situation awareness, for the decision maker. Data fusion is a key enabler to meeting the demanding requirements of military situation analysis support systems. According to the data fusion model maintained by the Joint Directors of Laboratories' Data Fusion Group, impact assessment estimates the effects on situations of planned or estimated/predicted actions by the participants, including interactions between action plans of multiple players. In this framework, the appraisal of actual or potential threats is a necessary capability for impact assessment. This paper reviews and discusses in details the fundamental concepts of threat analysis. In particular, threat analysis generally attempts to compute some threat value, for the individual tracks, that estimates the degree of severity with which engagement events will potentially occur. Presenting relevant tracks to the decision maker in some threat list, sorted from the most threatening to the least, is clearly in-line with the cognitive demands associated with threat evaluation. A key parameter in many threat value evaluation techniques is the Closest Point of Approach (CPA). Along this line of thought, threatening tracks are often prioritized based upon which ones will reach their CPA first. Hence, the Time-to-CPA (TCPA), i.e., the time it will take for a track to reach its CPA, is also a key factor. Unfortunately, a typical assumption for the computation of the CPA/TCPA parameters is that the track velocity will remain constant. When a track is maneuvering, the CPA/TCPA values will change accordingly. These changes will in turn impact the threat value computations and, ultimately, the resulting threat list. This is clearly undesirable from a command decision-making perspective. In this regard, the paper briefly discusses threat value stabilization approaches based on neural networks and other mathematical techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerlach, Joerg; Kessler, Lutz; Paul, Udo
2007-05-17
The concept of forming limit curves (FLC) is widely used in industrial practice. The required data should be delivered for typical material properties (measured on coils with properties in a range of +/- of the standard deviation from the mean production values) by the material suppliers. In particular it should be noted that its use for the validation of forming robustness providing forming limit curves for the variety of scattering in the mechanical properties is impossible. Therefore a forecast of the expected limit strains without expensive cost and time-consuming experiments is necessary. In the paper the quality of a regressionmore » analysis for determining forming limit curves based on tensile test results is presented and discussed.Owing to the specific definition of limit strains with FLCs following linear strain paths, the significance of this failure definition is limited. To consider nonlinear strain path effects, different methods are given in literature. One simple method is the concept of limit stresses. It should be noted that the determined value of the critical stress is dependent on the extrapolation of the tensile test curve. When the yield curve extrapolation is very similar to an exponential function, the definition of the critical stress value is very complicated due to the low slope of the hardening function at large strains.A new method to determine general failure behavior in sheet metal forming is the common use and interpretation of three criteria: onset on material instability (comparable with FLC concept), value of critical shear fracture and the value of ductile fracture. This method seems to be particularly successful for newly developed high strength steel grades in connection with more complex strain paths for some specific material elements. Nevertheless the identification of the different failure material parameters or functions will increase and the user has to learn with the interpretation of the numerical results.« less
Lv, Jun; Huang, Wenjian; Zhang, Jue; Wang, Xiaoying
2018-06-01
In free-breathing multi-b-value diffusion-weighted imaging (DWI), a series of images typically requires several minutes to collect. During respiration the kidney is routinely displaced and may also undergo deformation. These respiratory motion effects generate artifacts and these are the main sources of error in the quantification of intravoxel incoherent motion (IVIM) derived parameters. This work proposes a fully automated framework that combines a kidney segmentation to improve the registration accuracy. 10 healthy subjects were recruited to participate in this experiment. For the segmentation, U-net was adopted to acquire the kidney's contour. The segmented kidney then served as a region of interest (ROI) for the registration method, known as pyramidal Lucas-Kanade. Our proposed framework confines the kidney's solution range, thus increasing the pyramidal Lucas-Kanade's accuracy. To demonstrate the feasibility of our presented framework, eight regions of interest were selected in the cortex and medulla, and data stability was estimated by comparing the normalized root-mean-square error (NRMSE) values of the fitted data from the bi-exponential intravoxel incoherent motion model pre- and post- registration. The results show that the NRMSE was significantly lower after registration both in the cortex (p < 0.05) and medulla (p < 0.01) during free-breathing measurements. In addition, expert visual scoring of the derived apparent diffusion coefficient (ADC), f, D and D* maps indicated there were significant improvements in the alignment of the kidney in the post-registered image. The proposed framework can effectively reduce the motion artifacts of misaligned multi-b-value DWIs and the inaccuracies of the ADC, f, D and D* estimations. Advances in knowledge: This study demonstrates the feasibility of our proposed fully automated framework combining U-net based segmentation and pyramidal Lucas-Kanade registration method for improving the alignment of multi-b-value diffusion-weighted MRIs and reducing the inaccuracy of parameter estimation during free-breathing.
Bautista, A I N; Necchi-Júnior, O
2008-02-01
Photoacclimation of photosynthesis was investigated in a tropical population of C. glomerata (São Paulo State, southeastern Brazil, 20 degrees 48' 24" S and 49 degrees 22' 24" W) by chlorophyll fluorescence parameters and chlorophyll a content. Plants were acclimated to two levels of irradiance: low (65 +/- 5 micromol.m(-2).s(-1)) and high (300 +/- 10 micromol.m(-2).s(-1)) and exposed short-term (4 days) and long-term (28 days) under a light-dark cycle of 12:12 hours. Photosynthesis-irradiance (PI) curves revealed distinct strategies of photoacclimation. In long-term exposure, plants acclimated by altering the photosynthetic units (PSU) number and keeping fixed the PSU size, revealed by increased rates of maximum photosynthesis (Pmax), lower photosynthetic efficiency (alpha) and higher values of the saturation parameter (Ik) under high irradiance. The short-term acclimation strategy consisted of changing the PSU size, with a fixed number of PSUs, as revealed by similar Pmax but higher alpha and lower Ik under low irradiance. Chlorophyll a contents followed the general pattern reported in green algae of higher concentrations under lower irradiance. Dark/light induction curves revealed consistently higher values of potential quantum yield under low irradiance. Initial and final values showed a higher recovery capacity in the short (84.4-90.6%) term exposure than in the long-term case (81.4-81.5%). ETR (electron transport rate) and NPQ (non-photochemical quenching) values were consistently higher under low irradiance. ETR showed a continuous and steady increase along the light exposure period in the short and long-term experiments, whereas NPQ values revealed a rapid increase after 15 seconds of light exposure, kept a slightly increasing trend and stabilized in most treatments. Lower photosynthetic performance (ETR) and recovery capacity of potential quantum yield were observed, particularly in long-term exposure, suggesting that this population is constrained by the typical high light environment of tropical regions.
Singh, Tarini; Laub, Ruth; Burgard, Jan Pablo; Frings, Christian
2018-05-01
Selective attention refers to the ability to selectively act upon relevant information at the expense of irrelevant information. Yet, in many experimental tasks, what happens to the representation of the irrelevant information is still debated. Typically, 2 approaches to distractor processing have been suggested, namely distractor inhibition and distractor-based retrieval. However, it is also typical that both processes are hard to disentangle. For instance, in the negative priming literature (for a review Frings, Schneider, & Fox, 2015) this has been a continuous debate since the early 1980s. In the present study, we attempted to prove that both processes exist, but that they reflect distractor processing at different levels of representation. Distractor inhibition impacts stimulus representation, whereas distractor-based retrieval impacts mainly motor processes. We investigated both processes in a distractor-priming task, which enables an independent measurement of both processes. For our argument that both processes impact different levels of distractor representation, we estimated the exponential parameter (τ) and Gaussian components (μ, σ) of the exponential Gaussian reaction-time (RT) distribution, which have previously been used to independently test the effects of cognitive and motor processes (e.g., Moutsopoulou & Waszak, 2012). The distractor-based retrieval effect was evident for the Gaussian component, which is typically discussed as reflecting motor processes, but not for the exponential parameter, whereas the inhibition component was evident for the exponential parameter, which is typically discussed as reflecting cognitive processes, but not for the Gaussian parameter. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; Mayes, Melanie; Parker, Jack C
2010-01-01
We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) couldmore » be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.« less
Acceptable Tolerances for Matching Icing Similarity Parameters in Scaling Applications
NASA Technical Reports Server (NTRS)
Anderson, David N.
2003-01-01
This paper reviews past work and presents new data to evaluate how changes in similarity parameters affect ice shapes and how closely scale values of the parameters should match reference values. Experimental ice shapes presented are from tests by various researchers in the NASA Glenn Icing Research Tunnel. The parameters reviewed are the modified inertia parameter (which determines the stagnation collection efficiency), accumulation parameter, freezing fraction, Reynolds number, and Weber number. It was demonstrated that a good match of scale and reference ice shapes could sometimes be achieved even when values of the modified inertia parameter did not match precisely. Consequently, there can be some flexibility in setting scale droplet size, which is the test condition determined from the modified inertia parameter. A recommended guideline is that the modified inertia parameter be chosen so that the scale stagnation collection efficiency is within 10 percent of the reference value. The scale accumulation parameter and freezing fraction should also be within 10 percent of their reference values. The Weber number based on droplet size and water properties appears to be a more important scaling parameter than one based on model size and air properties. Scale values of both the Reynolds and Weber numbers need to be in the range of 60 to 160 percent of the corresponding reference values. The effects of variations in other similarity parameters have yet to be established.
NASA Astrophysics Data System (ADS)
da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho
2018-04-01
A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying parameters having the most influence facilitates establishing the best values for parameters of models, providing useful implications in species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records, and 17 fitting parameters, including growth and stress parameters, comparisons were made in model performance by altering one parameter value at a time, in comparison to the best-fit parameter values. Parameters that were found to have a greater effect on the model results are termed "sensitive". Through the use of two species, we show that even when the Ecoclimatic Index has a major change through upward or downward parameter value alterations, the effect on the species is dependent on the selection of suitability categories and regions of modelling. Two parameters were shown to have the greatest sensitivity, dependent on the suitability categories of each species in the study. Results enhance user understanding of which climatic factors had a greater impact on both species distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed by higher or lower values, compared to the best-fit parameter values. Thus, the sensitivity analyses have the potential to provide additional information for end users, in terms of improving management, by identifying the climatic variables that are most sensitive.
NASA Astrophysics Data System (ADS)
Ghezzi, Luan; Dutra-Ferreira, Letícia; Lorenzo-Oliveira, Diego; Porto de Mello, Gustavo F.; Santiago, Basílio X.; De Lee, Nathan; Lee, Brian L.; da Costa, Luiz N.; Maia, Marcio A. G.; Ogando, Ricardo L. C.; Wisniewski, John P.; González Hernández, Jonay I.; Stassun, Keivan G.; Fleming, Scott W.; Schneider, Donald P.; Mahadevan, Suvrath; Cargile, Phillip; Ge, Jian; Pepper, Joshua; Wang, Ji; Paegert, Martin
2014-12-01
Studies of Galactic chemical, and dynamical evolution in the solar neighborhood depend on the availability of precise atmospheric parameters (effective temperature T eff, metallicity [Fe/H], and surface gravity log g) for solar-type stars. Many large-scale spectroscopic surveys operate at low to moderate spectral resolution for efficiency in observing large samples, which makes the stellar characterization difficult due to the high degree of blending of spectral features. Therefore, most surveys employ spectral synthesis, which is a powerful technique, but relies heavily on the completeness and accuracy of atomic line databases and can yield possibly correlated atmospheric parameters. In this work, we use an alternative method based on spectral indices to determine the atmospheric parameters of a sample of nearby FGK dwarfs and subgiants observed by the MARVELS survey at moderate resolving power (R ~ 12,000). To avoid a time-consuming manual analysis, we have developed three codes to automatically normalize the observed spectra, measure the equivalent widths of the indices, and, through a comparison of those with values calculated with predetermined calibrations, estimate the atmospheric parameters of the stars. The calibrations were derived using a sample of 309 stars with precise stellar parameters obtained from the analysis of high-resolution FEROS spectra, permitting the low-resolution equivalent widths to be directly related to the stellar parameters. A validation test of the method was conducted with a sample of 30 MARVELS targets that also have reliable atmospheric parameters derived from the high-resolution spectra and spectroscopic analysis based on the excitation and ionization equilibria method. Our approach was able to recover the parameters within 80 K for T eff, 0.05 dex for [Fe/H], and 0.15 dex for log g, values that are lower than or equal to the typical external uncertainties found between different high-resolution analyses. An additional test was performed with a subsample of 138 stars from the ELODIE stellar library, and the literature atmospheric parameters were recovered within 125 K for T eff, 0.10 dex for [Fe/H], and 0.29 dex for log g. These precisions are consistent with or better than those provided by the pipelines of surveys operating with similar resolutions. These results show that the spectral indices are a competitive tool to characterize stars with intermediate resolution spectra. Based on observations obtained with the 2.2 m MPG telescope at the European Southern Observatory (La Silla, Chile), under the agreement ESO-Observatório Nacional/MCT, and the Sloan Digital Sky Survey, which is owned and operated by the Astrophysical Research Consortium.
NASA Technical Reports Server (NTRS)
Palmer, Michael T.; Abbott, Kathy H.
1994-01-01
This study identifies improved methods to present system parameter information for detecting abnormal conditions and to identify system status. Two workstation experiments were conducted. The first experiment determined if including expected-value-range information in traditional parameter display formats affected subject performance. The second experiment determined if using a nontraditional parameter display format, which presented relative deviation from expected value, was better than traditional formats with expected-value ranges included. The inclusion of expected-value-range information onto traditional parameter formats was found to have essentially no effect. However, subjective results indicated support for including this information. The nontraditional column deviation parameter display format resulted in significantly fewer errors compared with traditional formats with expected-value-ranges included. In addition, error rates for the column deviation parameter display format remained stable as the scenario complexity increased, whereas error rates for the traditional parameter display formats with expected-value ranges increased. Subjective results also indicated that the subjects preferred this new format and thought that their performance was better with it. The column deviation parameter display format is recommended for display applications that require rapid recognition of out-of-tolerance conditions, especially for a large number of parameters.
Electronic Polarizability and the Effective Pair Potentials of Water
Leontyev, I. V.; Stuchebrukhov, A. A.
2014-01-01
Employing the continuum dielectric model for electronic polarizability, we have developed a new consistent procedure for parameterization of the effective nonpolarizable potential of liquid water. The model explains the striking difference between the value of water dipole moment μ~3D reported in recent ab initio and experimental studies with the value μeff~2.3D typically used in the empirical potentials, such as TIP3P or SPC/E. It is shown that the consistency of the parameterization scheme can be achieved if the magnitude of the effective dipole of water is understood as a scaled value μeff=μ∕εel, where εel =1.78 is the electronic (high-frequency) dielectric constant of water, and a new electronic polarization energy term, missing in the previous theories, is included. The new term is evaluated by using Kirkwood - Onsager theory. The new scheme is fully consistent with experimental data on enthalpy of vaporization, density, diffusion coefficient, and static dielectric constant. The new theoretical framework provides important insights into the nature of the effective parameters, which is crucial when the computational models of liquid water are used for simulations in different environments, such as proteins, or for interaction with solutes. PMID:25383062
Parametric Studies Of Lightweight Reflectors Supported On Linear Actuator Arrays
NASA Astrophysics Data System (ADS)
Seibert, George E.
1987-10-01
This paper presents the results of numerous design studies carried out at Perkin-Elmer in support of the design of large diameter controllable mirrors for use in laser beam control, surveillance, and astronomy programs. The results include relationships between actuator location and spacing and the associated degree of correctability attainable for a variety of faceplate configurations subjected to typical disturbance environments. Normalizations and design curves obtained from closed-form equations based on thin shallow shell theory and computer based finite-element analyses are presented for use in preliminary design estimates of actuator count, faceplate structural properties, system performance prediction and weight assessments. The results of the analyses were obtained from a very wide range of mirror configurations, including both continuous and segmented mirror geometries. Typically, the designs consisted of a thin facesheet controlled by point force actuators which in turn were mounted on a structurally efficient base panel, or "reaction structure". The faceplate materials considered were fused silica, ULE fused silica, Zerodur, aluminum and beryllium. Thin solid faceplates as well as rib-reinforced cross-sections were treated, with a wide variation in thickness and/or rib patterns. The magnitude and spatial frequency distribution of the residual or uncorrected errors were related to the input error functions for mirrors of many different diameters and focal ratios. The error functions include simple sphere-to-sphere corrections, "parabolization" of spheres, and higher spatial frequency input error maps ranging from 0.5 to 7.5 cycles per diameter. The parameter which dominates all of the results obtained to date, is a structural descriptor of thin shell behavior called the characteristic length. This parameter is a function of the shell's radius of curvature, thickness, and Poisson's ratio of the material used. The value of this constant, in itself, describes the extent to which the deflection under a point force is localized by the shell's curvature. The deflection shape is typically a near-gaussian "bump" with a zero-crossing at a local radius of approximately 3.5 characteristic lengths. The amplitude is a function of the shells elastic modulus, radius, and thickness, and is linearly proportional to the applied force. This basic shell behavior is well-treated in an excellent set of papers by Eric Reissner entitled "Stresses and Small Displacements of Shallow Spherical Shells".1'2 Building on the insight offered by these papers, we developed our design tools around two derived parameters, the ratio of the mirror's diameter to its characteristic length (D/l), and the ratio of the actuator spacing to the characteristic length (b/l). The D/1 ratio determines the "finiteness" of the shell, or its dependence on edge boundary conditions. For D/1 values greater than 10, the influence of edges is almost totally absent on interior behavior. The b/1 ratio, the basis of all our normalizations is the most universal term in the description of correctability or ratio of residual/input errors. The data presented in the paper, shows that the rms residual error divided by the peak amplitude of the input error function is related to the actuator spacing to characteristic length ratio by the following expression RMS Residual Error b 3.5 k (I) (1) Initial Error Ampl. The value of k ranges from approximately 0.001 for low spatial frequency initial errors up to 0.05 for higher error frequencies (e.g. 5 cycles/diameter). The studies also yielded insight to the forces required to produce typical corrections at both the center and edges of the mirror panels. Additionally, the data lends itself to rapid evaluation of the effects of trading faceplate weight for increased actuator count,
Trame, MN; Lesko, LJ
2015-01-01
A systems pharmacology model typically integrates pharmacokinetic, biochemical network, and systems biology concepts into a unifying approach. It typically consists of a large number of parameters and reaction species that are interlinked based upon the underlying (patho)physiology and the mechanism of drug action. The more complex these models are, the greater the challenge of reliably identifying and estimating respective model parameters. Global sensitivity analysis provides an innovative tool that can meet this challenge. CPT Pharmacometrics Syst. Pharmacol. (2015) 4, 69–79; doi:10.1002/psp4.6; published online 25 February 2015 PMID:27548289
The Influence of Normalization Weight in Population Pharmacokinetic Covariate Models.
Goulooze, Sebastiaan C; Völler, Swantje; Välitalo, Pyry A J; Calvier, Elisa A M; Aarons, Leon; Krekels, Elke H J; Knibbe, Catherijne A J
2018-03-23
In covariate (sub)models of population pharmacokinetic models, most covariates are normalized to the median value; however, for body weight, normalization to 70 kg or 1 kg is often applied. In this article, we illustrate the impact of normalization weight on the precision of population clearance (CL pop ) parameter estimates. The influence of normalization weight (70, 1 kg or median weight) on the precision of the CL pop estimate, expressed as relative standard error (RSE), was illustrated using data from a pharmacokinetic study in neonates with a median weight of 2.7 kg. In addition, a simulation study was performed to show the impact of normalization to 70 kg in pharmacokinetic studies with paediatric or obese patients. The RSE of the CL pop parameter estimate in the neonatal dataset was lowest with normalization to median weight (8.1%), compared with normalization to 1 kg (10.5%) or 70 kg (48.8%). Typical clearance (CL) predictions were independent of the normalization weight used. Simulations showed that the increase in RSE of the CL pop estimate with 70 kg normalization was highest in studies with a narrow weight range and a geometric mean weight away from 70 kg. When, instead of normalizing with median weight, a weight outside the observed range is used, the RSE of the CL pop estimate will be inflated, and should therefore not be used for model selection. Instead, established mathematical principles can be used to calculate the RSE of the typical CL (CL TV ) at a relevant weight to evaluate the precision of CL predictions.
Common-Cause Failure Treatment in Event Assessment: Basis for a Proposed New Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana Kelly; Song-Hua Shen; Gary DeMoss
2010-06-01
Event assessment is an application of probabilistic risk assessment in which observed equipment failures and outages are mapped into the risk model to obtain a numerical estimate of the event’s risk significance. In this paper, we focus on retrospective assessments to estimate the risk significance of degraded conditions such as equipment failure accompanied by a deficiency in a process such as maintenance practices. In modeling such events, the basic events in the risk model that are associated with observed failures and other off-normal situations are typically configured to be failed, while those associated with observed successes and unchallenged components aremore » assumed capable of failing, typically with their baseline probabilities. This is referred to as the failure memory approach to event assessment. The conditioning of common-cause failure probabilities for the common cause component group associated with the observed component failure is particularly important, as it is insufficient to simply leave these probabilities at their baseline values, and doing so may result in a significant underestimate of risk significance for the event. Past work in this area has focused on the mathematics of the adjustment. In this paper, we review the Basic Parameter Model for common-cause failure, which underlies most current risk modelling, discuss the limitations of this model with respect to event assessment, and introduce a proposed new framework for common-cause failure, which uses a Bayesian network to model underlying causes of failure, and which has the potential to overcome the limitations of the Basic Parameter Model with respect to event assessment.« less
Escape driven by alpha-stable white noises.
Dybiec, B; Gudowska-Nowak, E; Hänggi, P
2007-02-01
We explore the archetype problem of an escape dynamics occurring in a symmetric double well potential when the Brownian particle is driven by white Lévy noise in a dynamical regime where inertial effects can safely be neglected. The behavior of escaping trajectories from one well to another is investigated by pointing to the special character that underpins the noise-induced discontinuity which is caused by the generalized Brownian paths that jump beyond the barrier location without actually hitting it. This fact implies that the boundary conditions for the mean first passage time (MFPT) are no longer determined by the well-known local boundary conditions that characterize the case with normal diffusion. By numerically implementing properly the set up boundary conditions, we investigate the survival probability and the average escape time as a function of the corresponding Lévy white noise parameters. Depending on the value of the skewness beta of the Lévy noise, the escape can either become enhanced or suppressed: a negative asymmetry parameter beta typically yields a decrease for the escape rate while the rate itself depicts a non-monotonic behavior as a function of the stability index alpha that characterizes the jump length distribution of Lévy noise, exhibiting a marked discontinuity at alpha=1. We find that the typical factor of 2 that characterizes for normal diffusion the ratio between the MFPT for well-bottom-to-well-bottom and well-bottom-to-barrier-top no longer holds true. For sufficiently high barriers the survival probabilities assume an exponential behavior versus time. Distinct non-exponential deviations occur, however, for low barrier heights.
Huang, Yu; Guo, Feng; Li, Yongling; Liu, Yufeng
2015-01-01
Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and could be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic increases the calculation of each generation exponentially. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, individual optimal quantum rotation angle, and global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm. PMID:25603158
Horobin, R W; Stockert, J C; Rashid-Doubell, F
2015-05-01
We discuss a variety of biological targets including generic biomembranes and the membranes of the endoplasmic reticulum, endosomes/lysosomes, Golgi body, mitochondria (outer and inner membranes) and the plasma membrane of usual fluidity. For each target, we discuss the access of probes to the target membrane, probe uptake into the membrane and the mechanism of selectivity of the probe uptake. A statement of the QSAR decision rule that describes the required physicochemical features of probes that enable selective staining also is provided, followed by comments on exceptions and limits. Examples of probes typically used to demonstrate each target structure are noted and decision rule tabulations are provided for probes that localize in particular targets; these tabulations show distribution of probes in the conceptual space defined by the relevant structure parameters ("parameter space"). Some general implications and limitations of the QSAR models for probe targeting are discussed including the roles of certain cell and protocol factors that play significant roles in lipid staining. A case example illustrates the predictive ability of QSAR models. Key limiting values of the head group hydrophilicity parameter associated with membrane-probe interactions are discussed in an appendix.
Model parameter learning using Kullback-Leibler divergence
NASA Astrophysics Data System (ADS)
Lin, Chungwei; Marks, Tim K.; Pajovic, Milutin; Watanabe, Shinji; Tung, Chih-kuan
2018-02-01
In this paper, we address the following problem: For a given set of spin configurations whose probability distribution is of the Boltzmann type, how do we determine the model coupling parameters? We demonstrate that directly minimizing the Kullback-Leibler divergence is an efficient method. We test this method against the Ising and XY models on the one-dimensional (1D) and two-dimensional (2D) lattices, and provide two estimators to quantify the model quality. We apply this method to two types of problems. First, we apply it to the real-space renormalization group (RG). We find that the obtained RG flow is sufficiently good for determining the phase boundary (within 1% of the exact result) and the critical point, but not accurate enough for critical exponents. The proposed method provides a simple way to numerically estimate amplitudes of the interactions typically truncated in the real-space RG procedure. Second, we apply this method to the dynamical system composed of self-propelled particles, where we extract the parameter of a statistical model (a generalized XY model) from a dynamical system described by the Viscek model. We are able to obtain reasonable coupling values corresponding to different noise strengths of the Viscek model. Our method is thus able to provide quantitative analysis of dynamical systems composed of self-propelled particles.
Coupling effects on turning points of infectious diseases epidemics in scale-free networks.
Kim, Kiseong; Lee, Sangyeon; Lee, Doheon; Lee, Kwang Hyung
2017-05-31
Pandemic is a typical spreading phenomenon that can be observed in the human society and is dependent on the structure of the social network. The Susceptible-Infective-Recovered (SIR) model describes spreading phenomena using two spreading factors; contagiousness (β) and recovery rate (γ). Some network models are trying to reflect the social network, but the real structure is difficult to uncover. We have developed a spreading phenomenon simulator that can input the epidemic parameters and network parameters and performed the experiment of disease propagation. The simulation result was analyzed to construct a new marker VRTP distribution. We also induced the VRTP formula for three of the network mathematical models. We suggest new marker VRTP (value of recovered on turning point) to describe the coupling between the SIR spreading and the Scale-free (SF) network and observe the aspects of the coupling effects with the various of spreading and network parameters. We also derive the analytic formulation of VRTP in the fully mixed model, the configuration model, and the degree-based model respectively in the mathematical function form for the insights on the relationship between experimental simulation and theoretical consideration. We discover the coupling effect between SIR spreading and SF network through devising novel marker VRTP which reflects the shifting effect and relates to entropy.
Alcalá-Quintana, Rocío; García-Pérez, Miguel A
2013-12-01
Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge the order of presentation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or also jointly for the three tasks (for common cases in which two or even the three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine is included that obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.
Mechanistic equivalent circuit modelling of a commercial polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.
2018-03-01
Electrochemical impedance spectroscopy (EIS) has been widely used in the fuel cell field since it allows deconvolving the different physic-chemical processes that affect the fuel cell performance. Typically, EIS spectra are modelled using electric equivalent circuits. In this work, EIS spectra of an individual cell of a commercial PEM fuel cell stack were obtained experimentally. The goal was to obtain a mechanistic electric equivalent circuit in order to model the experimental EIS spectra. A mechanistic electric equivalent circuit is a semiempirical modelling technique which is based on obtaining an equivalent circuit that does not only correctly fit the experimental spectra, but which elements have a mechanistic physical meaning. In order to obtain the aforementioned electric equivalent circuit, 12 different models with defined physical meanings were proposed. These equivalent circuits were fitted to the obtained EIS spectra. A 2 step selection process was performed. In the first step, a group of 4 circuits were preselected out of the initial list of 12, based on general fitting indicators as the determination coefficient and the fitted parameter uncertainty. In the second step, one of the 4 preselected circuits was selected on account of the consistency of the fitted parameter values with the physical meaning of each parameter.
Jacobsen, Svein; Stauffer, Paul R
2007-02-21
The total thermal dose that can be delivered during hyperthermia treatments is frequently limited by temperature heterogeneities in the heated tissue volume. Reliable temperature information on the heated area is thus vital for the optimization of clinical dosimetry. Microwave radiometry has been proposed as an accurate, quick and painless temperature sensing technique for biological tissue. Advantages include the ability to sense volume-averaged temperatures from subsurface tissue non-invasively, rather than with a limited set of point measurements typical of implanted temperature probes. We present a procedure to estimate the maximum tissue temperature from a single radiometric brightness temperature which is based on a numerical simulation of 3D tissue temperature distributions induced by microwave heating at 915 MHz. The temperature retrieval scheme is evaluated against errors arising from unknown variations in thermal, electromagnetic and design model parameters. Whereas realistic deviations from base values of dielectric and thermal parameters have only marginal impact on performance, pronounced deviations in estimated maximum tissue temperature are observed for unanticipated variations of the temperature or thickness of the bolus compartment. The need to pay particular attention to these latter applicator construction parameters in future clinical implementation of the thermometric method is emphasized.
NASA Astrophysics Data System (ADS)
Zhu, Guangtun Ben; Barrera-Ballesteros, Jorge K.; Heckman, Timothy M.; Zakamska, Nadia L.; Sánchez, Sebastian F.; Yan, Renbin; Brinkmann, Jonathan
2017-07-01
We revisit the relation between the stellar surface density, the gas surface density and the gas-phase metallicity of typical disc galaxies in the local Universe with the SDSS-IV/MaNGA survey, using the star formation rate surface density as an indicator for the gas surface density. We show that these three local parameters form a tight relationship, confirming previous works (e.g. by the PINGS and CALIFA surveys), but with a larger sample. We present a new local leaky-box model, assuming star-formation history and chemical evolution is localized except for outflowing materials. We derive closed-form solutions for the evolution of stellar surface density, gas surface density and gas-phase metallicity, and show that these parameters form a tight relation independent of initial gas density and time. We show that, with canonical values of model parameters, this predicted relation match the observed one well. In addition, we briefly describe a pathway to improving the current semi-analytic models of galaxy formation by incorporating the local leaky-box model in the cosmological context, which can potentially explain simultaneously multiple properties of Milky Way-type disc galaxies, such as the size growth and the global stellar mass-gas metallicity relation.
NASA Astrophysics Data System (ADS)
Jacobsen, Svein; Stauffer, Paul R.
2007-02-01
The total thermal dose that can be delivered during hyperthermia treatments is frequently limited by temperature heterogeneities in the heated tissue volume. Reliable temperature information on the heated area is thus vital for the optimization of clinical dosimetry. Microwave radiometry has been proposed as an accurate, quick and painless temperature sensing technique for biological tissue. Advantages include the ability to sense volume-averaged temperatures from subsurface tissue non-invasively, rather than with a limited set of point measurements typical of implanted temperature probes. We present a procedure to estimate the maximum tissue temperature from a single radiometric brightness temperature which is based on a numerical simulation of 3D tissue temperature distributions induced by microwave heating at 915 MHz. The temperature retrieval scheme is evaluated against errors arising from unknown variations in thermal, electromagnetic and design model parameters. Whereas realistic deviations from base values of dielectric and thermal parameters have only marginal impact on performance, pronounced deviations in estimated maximum tissue temperature are observed for unanticipated variations of the temperature or thickness of the bolus compartment. The need to pay particular attention to these latter applicator construction parameters in future clinical implementation of the thermometric method is emphasized.
González-Tomás, L; Costell, E
2006-12-01
Consumers' perceptions of the color and texture of 8 commercial vanilla dairy desserts were studied and related to color and rheological measurements. First, the 8 desserts were evaluated by a group of consumers by means of the Free Choice Profile. For both color and texture, a 2-dimensional solution was chosen, with dimension 1 highly related to yellow color intensity in the case of color and to thickness in the case of texture. Second, mechanical spectra, flow behavior, and instrumental color were determined. All the samples showed a time-dependent and shear-thinning flow and a mechanical spectrum typical of a weak gel. Differences were found in the flow index, in the apparent viscosity at 10 s(-1), and in the values of the storage modulus, the loss modulus, the loss angle tangent, and the complex viscosity at 1 Hz, as well as in the color parameters. Finally, sensory and instrumental relationships were investigated by a generalized Procrustes analysis. For both color and texture, a 3-dimensional solution explained a high percentage of the total variance (>80%). In these particular samples, the instrumental color parameters provided more accurate information on consumers' color perceptions than was provided by the rheological parameters of consumers' perceptions of texture.
Parameter identification of material constants in a composite shell structure
NASA Technical Reports Server (NTRS)
Martinez, David R.; Carne, Thomas G.
1988-01-01
One of the basic requirements in engineering analysis is the development of a mathematical model describing the system. Frequently comparisons with test data are used as a measurement of the adequacy of the model. An attempt is typically made to update or improve the model to provide a test verified analysis tool. System identification provides a systematic procedure for accomplishing this task. The terms system identification, parameter estimation, and model correlation all refer to techniques that use test information to update or verify mathematical models. The goal of system identification is to improve the correlation of model predictions with measured test data, and produce accurate, predictive models. For nonmetallic structures the modeling task is often difficult due to uncertainties in the elastic constants. A finite element model of the shell was created, which included uncertain orthotropic elastic constants. A modal survey test was then performed on the shell. The resulting modal data, along with the finite element model of the shell, were used in a Bayes estimation algorithm. This permitted the use of covariance matrices to weight the confidence in the initial parameter values as well as confidence in the measured test data. The estimation procedure also employed the concept of successive linearization to obtain an approximate solution to the original nonlinear estimation problem.
Malins, Alex; Kurikami, Hiroshi; Kitamura, Akihiro; Machida, Masahiko
2016-10-01
Calculations are reported for ambient dose equivalent rates [H˙*(10)] at 1 m height above the ground surface before and after remediating radiocesium-contaminated soil at wide and open sites. The results establish how the change in H˙*(10) upon remediation depends on the initial depth distribution of radiocesium within the ground, on the size of the remediated area, and on the mass per unit area of remediated soil. The remediation strategies considered were topsoil removal (with and without recovering with a clean soil layer), interchanging a topsoil layer with a subsoil layer, and in situ mixing of the topsoil. The results show the ratio of the radiocesium components of H˙*(10) post-remediation relative to their initial values (residual dose factors). It is possible to use the residual dose factors to gauge absolute changes in H˙*(10) upon remediation. The dependency of the residual dose factors on the number of years elapsed after fallout deposition is analyzed when remediation parameters remain fixed and radiocesium undergoes typical downward migration within the soil column.
Surface modification by electrolytic plasma processing for high Nb-TiAl alloys
NASA Astrophysics Data System (ADS)
Gui, Wanyuan; Hao, Guojian; Liang, Yongfeng; Li, Feng; Liu, Xiao; Lin, Junpin
2016-12-01
Metal surface modification by electrolytic plasma processing (EPP) is an innovative treatment widely commonly applied to material processing and pretreatment process of coating and galvanization. EPP involves complex processes and a great deal of parameters, such as preset voltage, current, solution temperature and processing time. Several characterization methods are presented in this paper for evaluating the micro-structure surfaces of Ti45Al8Nb alloys: SEM, EDS, XRD and 3D topography. The results showed that the oxide scale and other contaminants on the surface of Ti45Al8Nb alloys can be effectively removed via EPP. The typical micro-crater structure of the surface of Ti45Al8Nb alloys were observed by 3D topography after EPP to find that the mean diameter of the surface structure and roughness value can be effectively controlled by altering the processing parameters. The mechanical properties of the surface according to nanomechanical probe testing exhibited slight decrease in microhardness and elastic modulus after EPP, but a dramatic increase in surface roughness, which is beneficial for further processing or coating.
Using unassisted ecosystem development to restore marginal land case study of post mining areas
NASA Astrophysics Data System (ADS)
Frouz, Jan
2017-04-01
When we evaluate efficiency of individual restoration measures we typically compare individual restoration treatments or compare them with initial state or similar ecosystem in surrounding landscape. We argue that sensible way to show added value of restoration measure is to compare them with unassisted ecosystem development. Case study of ecosystem development in Sokolov post mining district (Czech Republic) show that spontaneous succession of ecosystem can be, in many parameters, comparable with various reclamation approaches. In suitable substrates the succession is driven mainly by site topography. In sites which were leveled grassy vegetation develops. In sites where original wave like topography was preserved the ecosystem develops towards forest. In forest sites the development on most of the investigated ecosystem parameters (cower, biomass soil developments, water holding capacity, carbon storage) in succession sites is little bit slower compare to reclaimed plantation during first 15-20 years. However in older sites differences disappear and succession sites show similarity with restored sites. Despite similarity in these ecosystem functions possibilities of spontaneous sites for commercial use has to be explored.
Anisotropic mesoscale eddy transport in ocean general circulation models
NASA Astrophysics Data System (ADS)
Reckinger, Scott; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank; Dennis, John; Danabasoglu, Gokhan
2014-11-01
In modern climate models, the effects of oceanic mesoscale eddies are introduced by relating subgrid eddy fluxes to the resolved gradients of buoyancy or other tracers, where the proportionality is, in general, governed by an eddy transport tensor. The symmetric part of the tensor, which represents the diffusive effects of mesoscale eddies, is universally treated isotropically. However, the diffusive processes that the parameterization approximates, such as shear dispersion and potential vorticity barriers, typically have strongly anisotropic characteristics. Generalizing the eddy diffusivity tensor for anisotropy extends the number of parameters from one to three: major diffusivity, minor diffusivity, and alignment. The Community Earth System Model (CESM) with the anisotropic eddy parameterization is used to test various choices for the parameters, which are motivated by observations and the eddy transport tensor diagnosed from high resolution simulations. Simply setting the ratio of major to minor diffusivities to a value of five globally, while aligning the major axis along the flow direction, improves biogeochemical tracer ventilation and reduces temperature and salinity biases. These effects can be improved by parameterizing the oceanic anisotropic transport mechanisms.
Avila, Manuel; Graterol, Eduardo; Alezones, Jesús; Criollo, Beisy; Castillo, Dámaso; Kuri, Victoria; Oviedo, Norman; Moquete, Cesar; Romero, Marbella; Hanley, Zaida; Taylor, Margie
2012-06-01
The appearance of rice grain is a key aspect in quality determination. Mainly, this analysis is performed by expert analysts through visual observation; however, due to the subjective nature of the analysis, the results may vary among analysts. In order to evaluate the concordance between analysts from Latin-American rice quality laboratories for rice grain appearance through digital images, an inter-laboratory test was performed with ten analysts and images of 90 grains captured with a high resolution scanner. Rice grains were classified in four categories including translucent, chalky, white belly, and damaged grain. Data was categorized using statistic parameters like mode and its frequency, the relative concordance, and the reproducibility parameter kappa. Additionally, a referential image gallery of typical grain for each category was constructed based on mode frequency. Results showed a Kappa value of 0.49, corresponding to a moderate reproducibility, attributable to subjectivity in the visual analysis of grain images. These results reveal the need for standardize the evaluation criteria among analysts to improve the confidence of the determination of rice grain appearance.
Analytically optimal parameters of dynamic vibration absorber with negative stiffness
NASA Astrophysics Data System (ADS)
Shen, Yongjun; Peng, Haibo; Li, Xianghong; Yang, Shaopu
2017-02-01
In this paper the optimal parameters of a dynamic vibration absorber (DVA) with negative stiffness is analytically studied. The analytical solution is obtained by Laplace transform method when the primary system is subjected to harmonic excitation. The research shows there are still two fixed points independent of the absorber damping in the amplitude-frequency curve of the primary system when the system contains negative stiffness. Then the optimum frequency ratio and optimum damping ratio are respectively obtained based on the fixed-point theory. A new strategy is proposed to obtain the optimum negative stiffness ratio and make the system remain stable at the same time. At last the control performance of the presented DVA is compared with those of three existing typical DVAs, which were presented by Den Hartog, Ren and Sims respectively. The comparison results in harmonic and random excitation show that the presented DVA in this paper could not only reduce the peak value of the amplitude-frequency curve of the primary system significantly, but also broaden the efficient frequency range of vibration mitigation.
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Scheid, R. E., Jr.
1986-01-01
This paper outlines methods for modeling, identification and estimation for static determination of flexible structures. The shape estimation schemes are based on structural models specified by (possibly interconnected) elliptic partial differential equations. The identification techniques provide approximate knowledge of parameters in elliptic systems. The techniques are based on the method of maximum-likelihood that finds parameter values such that the likelihood functional associated with the system model is maximized. The estimation methods are obtained by means of a function-space approach that seeks to obtain the conditional mean of the state given the data and a white noise characterization of model errors. The solutions are obtained in a batch-processing mode in which all the data is processed simultaneously. After methods for computing the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the related estimation error is conducted. In addition to outlining the above theoretical results, the paper presents typical flexible structure simulations illustrating performance of the shape determination methods.
On the probability distribution of daily streamflow in the United States
Blum, Annalise G.; Archfield, Stacey A.; Vogel, Richard M.
2017-01-01
Daily streamflows are often represented by flow duration curves (FDCs), which illustrate the frequency with which flows are equaled or exceeded. FDCs have had broad applications across both operational and research hydrology for decades; however, modeling FDCs has proven elusive. Daily streamflow is a complex time series with flow values ranging over many orders of magnitude. The identification of a probability distribution that can approximate daily streamflow would improve understanding of the behavior of daily flows and the ability to estimate FDCs at ungaged river locations. Comparisons of modeled and empirical FDCs at nearly 400 unregulated, perennial streams illustrate that the four-parameter kappa distribution provides a very good representation of daily streamflow across the majority of physiographic regions in the conterminous United States (US). Further, for some regions of the US, the three-parameter generalized Pareto and lognormal distributions also provide a good approximation to FDCs. Similar results are found for the period of record FDCs, representing the long-term hydrologic regime at a site, and median annual FDCs, representing the behavior of flows in a typical year.