NASA Astrophysics Data System (ADS)
Avanzi, Francesco; De Michele, Carlo; Gabriele, Salvatore; Ghezzi, Antonio; Rosso, Renzo
2015-04-01
Here, we show how atmospheric circulation and topography govern the variability of the parameters of depth-duration-frequency (DDF) curves, and we discuss the physical implications of this variability for the formation of extreme precipitation at high elevations. A DDF curve gives the maximum annual precipitation depth H as a function of duration D and probability level F. We consider around 1500 stations over the Italian territory, each with at least 20 years of data on maximum annual precipitation depth at different durations. We estimated the DDF parameters at each location using the asymptotic distribution of extreme values, i.e. the so-called Generalized Extreme Value (GEV) distribution, under a simple scale invariance hypothesis. A DDF curve therefore depends on five parameters: a first set relates H to the duration (namely, the mean annual maximum precipitation depth for unit duration and the scaling exponent), while a second set links H to F (namely, a scale, a position and a shape parameter). The value of the shape parameter determines the type of random variable (unbounded, upper bounded or lower bounded). This extensive analysis shows that the variability of the mean annual maximum precipitation depth for unit duration obeys the coupled effect of topography and the modal direction of moisture flux during extreme events. Median values of this parameter decrease with elevation. We call this phenomenon the "reverse orographic effect" on extreme precipitation of short duration, since it contrasts with the accepted understanding of the orographic effect on mean precipitation. Moreover, the scaling exponent is driven mainly by topography alone, with its values increasing with elevation. Therefore, the quantiles of H(D,F) at durations greater than unit turn out to be more variable at high elevations than at low elevations.
Additionally, the analysis of the variability of the shape parameter with elevation shows that extreme events at high elevations appear to be distributed according to an upper-bounded probability distribution. This evidence could be a characteristic signature of the formation of extreme precipitation events at high elevations.
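The DDF construction described above, maximum depth as a product of a duration scaling term and a GEV growth factor, can be sketched as follows. All parameter values are hypothetical placeholders chosen for illustration, not the study's estimates; the sign convention follows the abstract (negative shape gives an upper-bounded variable).

```python
import math

def gev_quantile(F, loc, scale, shape):
    """GEV quantile. Climatological sign convention: shape < 0 gives an
    upper-bounded (Weibull-type) tail, shape > 0 a heavy (Frechet) tail."""
    if shape == 0.0:
        return loc - scale * math.log(-math.log(F))
    return loc + (scale / shape) * ((-math.log(F)) ** (-shape) - 1.0)

# Hypothetical parameter values for illustration only (not from the study)
mu1, n = 25.0, 0.45                   # unit-duration mean depth (mm), scaling exponent
loc, scale, shape = 0.9, 0.25, -0.1   # GEV growth-factor parameters

def ddf_depth(D, F):
    # Simple scale invariance: H(D, F) = mu1 * D**n * K(F)
    return mu1 * (D ** n) * gev_quantile(F, loc, scale, shape)
```

With this form, a larger scaling exponent n (as found at higher elevations) amplifies the spread of quantiles across durations, which is the variability effect the abstract describes.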
Extremes in ecology: Avoiding the misleading effects of sampling variation in summary analyses
Link, W.A.; Sauer, J.R.
1996-01-01
Surveys such as the North American Breeding Bird Survey (BBS) produce large collections of parameter estimates. One's natural inclination when confronted with lists of parameter estimates is to look for the extreme values: in the BBS, these correspond to the species that appear to have the greatest changes in population size through time. Unfortunately, extreme estimates are liable to correspond to the most poorly estimated parameters. Consequently, the most extreme parameters may not match up with the most extreme parameter estimates. Ranking parameter values on the basis of their estimates is a difficult statistical problem. We use data from the BBS and simulations to illustrate the potentially misleading effects of sampling variation on rankings of parameters. We describe empirical Bayes and constrained empirical Bayes procedures that provide partial solutions to the problem of ranking in the presence of sampling variation.
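The empirical Bayes idea behind such procedures can be sketched in a few lines: shrink each estimate toward the grand mean, with poorly estimated (high-variance) parameters shrunk the most, so noisy extremes no longer dominate the ranking. This is a minimal parametric sketch, not the paper's exact procedure.

```python
def eb_shrink(estimates, variances):
    """Shrink each estimate toward the grand mean in proportion to its
    sampling variance (simple parametric empirical Bayes sketch)."""
    k = len(estimates)
    m = sum(estimates) / k
    # method-of-moments estimate of the between-parameter variance, floored at 0
    s2 = max(sum((e - m) ** 2 for e in estimates) / (k - 1)
             - sum(variances) / k, 0.0)
    return [m + (s2 / (s2 + v)) * (e - m) if s2 + v > 0 else m
            for e, v in zip(estimates, variances)]
```

A precisely estimated trend barely moves, while a noisy, extreme-looking trend is pulled substantially toward the mean, which changes which parameters rank as "most extreme".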
Exchangeability, extreme returns and Value-at-Risk forecasts
NASA Astrophysics Data System (ADS)
Huang, Chun-Kai; North, Delia; Zewotir, Temesgen
2017-07-01
In this paper, we propose a new approach to extreme value modelling for the forecasting of Value-at-Risk (VaR). In particular, the block maxima and the peaks-over-threshold methods are generalised to exchangeable random sequences. This caters for the dependencies, such as serial autocorrelation, of financial returns observed empirically. In addition, this approach allows for parameter variations within each VaR estimation window. Empirical prior distributions of the extreme value parameters are attained by using resampling procedures. We compare the results of our VaR forecasts to those of the unconditional extreme value theory (EVT) approach and the conditional GARCH-EVT model for robust conclusions.
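The two classical data-extraction steps that the paper generalises can be sketched as follows; these are the standard textbook constructions, not the authors' exchangeable variants.

```python
def block_maxima(series, block_size):
    """Maxima over consecutive, non-overlapping blocks (e.g. yearly or
    monthly blocks of returns), the input to a GEV fit."""
    return [max(series[i:i + block_size])
            for i in range(0, len(series) - block_size + 1, block_size)]

def peaks_over_threshold(series, threshold):
    """Excesses over a high threshold, the input to a generalized
    Pareto (GPD) fit."""
    return [x - threshold for x in series if x > threshold]
```

In the EVT approach to VaR, one of these samples is fitted with a GEV or GPD model and the VaR is read off as a high quantile of the fitted loss distribution.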
Min and Max Exponential Extreme Interval Values and Statistics
ERIC Educational Resources Information Center
Jance, Marsha; Thomopoulos, Nick
2009-01-01
The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g[subscript a] is defined as a…
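For unit-rate exponentials, some of the extreme statistics mentioned above have simple closed forms that can be checked directly. The formulas below are standard results (the maximum has mean equal to the harmonic number H_n; the minimum of n Exp(1) variables is itself Exp(n)), not values taken from the truncated abstract.

```python
import math

def expected_max_exp(n):
    """E[max of n iid Exp(1)] equals the harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

def expected_min_exp(n):
    """The min of n iid Exp(1) is itself Exp(n), so its mean is 1/n."""
    return 1.0 / n

def median_max_exp(n):
    """Median of the max: solve (1 - e^{-x})^n = 1/2 for x."""
    return -math.log(1.0 - 0.5 ** (1.0 / n))
```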
Nonstationary Extreme Value Analysis in a Changing Climate: A Software Package
NASA Astrophysics Data System (ADS)
Cheng, L.; AghaKouchak, A.; Gilleland, E.
2013-12-01
Numerous studies show that climatic extremes increased substantially in the second half of the 20th century. For this reason, analysis of extremes under a nonstationary assumption has received a great deal of attention. This paper presents a software package developed for estimating return levels, return periods, and risks of climatic extremes in a changing climate. This MATLAB software package offers tools for the analysis of climate extremes under both stationary and nonstationary assumptions. The Nonstationary Extreme Value Analysis package (hereafter, NEVA) provides an efficient and generalized framework for analysing extremes using Bayesian inference. NEVA estimates the extreme value parameters with a Differential Evolution Markov Chain (DE-MC) approach, which combines the genetic algorithm Differential Evolution (DE) for global optimization over the real parameter space with the Markov Chain Monte Carlo (MCMC) approach, and offers advantages in simplicity, speed of calculation and convergence over conventional MCMC. NEVA also provides confidence intervals and uncertainty bounds for estimated return levels based on the sampled parameters. NEVA integrates extreme value design concepts, data analysis tools, optimization and visualization, explicitly designed to facilitate the analysis of extremes in the geosciences. The generalized input and output files of this software package make it attractive to users across different fields. Both the stationary and nonstationary components of the package are validated for a number of case studies using empirical return levels. The results show that NEVA reliably describes extremes and their return levels.
Quinn, Terrance; Sinkala, Zachariah
2014-01-01
We develop a general method for computing extreme value distribution (Gumbel, 1958) parameters for gapped alignments. Our approach uses mixture distribution theory to obtain associated BLOSUM matrices for gapped alignments, which in turn are used for determining significance of gapped alignment scores for pairs of biological sequences. We compare our results with parameters already obtained in the literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jiali; Han, Yuefeng; Stein, Michael L.
2016-02-10
The Weather Research and Forecast (WRF) model downscaling skill in extreme maximum daily temperature is evaluated by using the generalized extreme value (GEV) distribution. While the GEV distribution has been used extensively in climatology and meteorology for estimating probabilities of extreme events, accurately estimating GEV parameters based on data from a single pixel can be difficult, even with fairly long data records. This work proposes a simple method assuming that the shape parameter, the most difficult of the three parameters to estimate, does not vary over a relatively large region. This approach is applied to evaluate 31-year WRF-downscaled extreme maximum temperature through comparison with North American Regional Reanalysis (NARR) data. Uncertainty in GEV parameter estimates and the statistical significance in the differences of estimates between WRF and NARR are accounted for by conducting bootstrap resampling. Despite certain biases over parts of the United States, overall, WRF shows good agreement with NARR in the spatial pattern and magnitudes of GEV parameter estimates. Both WRF and NARR show a significant increase in extreme maximum temperature over the southern Great Plains and southeastern United States in January and over the western United States in July. The GEV model shows clear benefits from the regionally constant shape parameter assumption, for example, leading to estimates of the location and scale parameters of the model that show coherent spatial patterns.
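The bootstrap-resampling step used above to quantify uncertainty in the parameter estimates can be sketched generically: resample the data with replacement, recompute the statistic of interest each time, and take percentiles of the replicates as a confidence interval. This is a plain percentile bootstrap, a simplified stand-in for the study's procedure.

```python
import random

def bootstrap_ci(data, stat, n_boot=1000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for any statistic of the data."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    reps = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[min(int((1 - alpha / 2) * n_boot), n_boot - 1)]
    return lo, hi
```

In the study's setting, `stat` would be a GEV parameter estimator applied to the resampled annual maxima; overlap (or not) of the WRF and NARR intervals then indicates whether their differences are statistically significant.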
Surface atmospheric extremes (Launch and transportation areas)
NASA Technical Reports Server (NTRS)
1972-01-01
The effects of extreme values of surface and low altitude atmospheric parameters on space vehicle design, tests, and operations are discussed. Atmospheric extremes from the surface to 150 meters for geographic locations of interest to NASA are given. Thermal parameters (temperature and solar radiation), humidity, pressure, and atmospheric electricity (lightning and static) are presented. Weather charts and tables are included.
Spatial variation of statistical properties of extreme water levels along the eastern Baltic Sea
NASA Astrophysics Data System (ADS)
Pindsoo, Katri; Soomere, Tarmo; Rocha, Eugénio
2016-04-01
Most existing projections of future extreme water levels rely on the use of classic generalised extreme value distributions. The choice of a particular distribution is often made based on the value of the shape parameter of the Generalised Extreme Value distribution: if this parameter is close to zero, the Gumbel distribution is most appropriate, while otherwise the Weibull or Frechet distribution may be used. We demonstrate that the alongshore variation in the statistical properties of numerically simulated high water levels along the eastern coast of the Baltic Sea is so large that the use of a single distribution for projections of extreme water levels is highly questionable. The analysis is based on two simulated data sets produced at the Swedish Meteorological and Hydrological Institute. The output of the Rossby Centre Ocean model is sampled with a resolution of 6 h and the output of the circulation model NEMO with a resolution of 1 h. As the maxima of water levels of subsequent years may be correlated in the Baltic Sea, we also employ maxima for stormy seasons. We provide a detailed analysis of the spatial variation of the parameters of the family of extreme value distributions along an approximately 600 km long coastal section from the north-western shore of Latvia in the Baltic Proper to the eastern Gulf of Finland. The parameters are evaluated using the maximum likelihood method and the method of moments. The analysis also covers the entire Gulf of Riga. The core parameter of this family of distributions, the shape parameter of the Generalised Extreme Value distribution, exhibits extensive variation in the study area. Its values, evaluated using the Hydrognomon software and the maximum likelihood method, vary from about -0.1 near the north-western coast of Latvia in the Baltic Proper up to about 0.05 in the eastern Gulf of Finland. This parameter is very close to zero near Tallinn in the western Gulf of Finland.
Thus, it is natural that the Gumbel distribution gives adequate projections of extreme water levels for the vicinity of Tallinn. More importantly, this feature indicates that the use of a single distribution for the projections of extreme water levels and their return periods for the entire Baltic Sea coast is inappropriate. The physical reason is the interplay of the complex shape of large subbasins (such as the Gulf of Riga and Gulf of Finland) of the sea and highly anisotropic wind regime. The 'impact' of this anisotropy on the statistics of water level is amplified by the overall anisotropy of the distributions of the frequency of occurrence of high and low water levels. The most important conjecture is that long-term behaviour of water level extremes in different coastal sections of the Baltic Sea may be fundamentally different.
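The distribution-selection rule discussed above can be written down explicitly. The tolerance below is an arbitrary illustrative choice, not a threshold taken from the study; the sign convention matches the abstract (positive shape: Frechet, heavy tail; negative shape: Weibull, upper bounded).

```python
def suggested_family(shape, tol=0.05):
    """Map an estimated GEV shape parameter to the distribution family
    conventionally used for projections. `tol` is an illustrative cutoff."""
    if abs(shape) < tol:
        return "Gumbel"
    return "Frechet" if shape > 0 else "Weibull"
```

Applying such a rule to shape estimates ranging from about -0.1 to 0.05 along a single coastline yields different families at different locations, which is precisely why a single distribution for the whole Baltic Sea coast is inappropriate.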
Modelling hydrological extremes under non-stationary conditions using climate covariates
NASA Astrophysics Data System (ADS)
Vasiliades, Lampros; Galiatsatou, Panagiota; Loukas, Athanasios
2013-04-01
Extreme value theory is a probabilistic framework for estimating the future probabilities of occurrence of extreme events (e.g. extreme precipitation and streamflow) from past observed records. Traditionally, extreme value theory requires the assumption of temporal stationarity. This assumption implies that the historical patterns of recurrence of extreme events are static over time. However, the hydroclimatic system is nonstationary on time scales that are relevant to extreme value analysis, due to human-mediated and natural environmental change. In this study the generalized extreme value (GEV) distribution is used to assess nonstationarity in annual maximum daily rainfall and streamflow time series at selected meteorological and hydrometric stations in Greece and Cyprus. The GEV distribution parameters (location, scale, and shape) are specified as functions of time-varying covariates and estimated using the conditional density network (CDN) as proposed by Cannon (2010). The CDN is a probabilistic extension of the multilayer perceptron neural network. Model parameters are estimated via the generalized maximum likelihood (GML) approach using the quasi-Newton BFGS optimization algorithm, and the appropriate GEV-CDN model architecture for the selected meteorological and hydrometric stations is chosen by fitting increasingly complicated models and selecting the one that minimizes the Akaike information criterion with small-sample-size correction. For all case studies in Greece and Cyprus, different formulations are tested, combining stationary and nonstationary parameters of the GEV distribution, linear and non-linear architectures of the CDN, and combinations of the input climatic covariates.
Climatic indices are used to express the GEV parameters as functions of the covariates: the Southern Oscillation Index (SOI), which describes atmospheric circulation in the eastern tropical Pacific related to the El Niño Southern Oscillation (ENSO); the Pacific Decadal Oscillation (PDO) index, which varies on an interdecadal rather than interannual time scale; and the North Atlantic Oscillation (NAO) index, which expresses atmospheric circulation patterns. Results show that the nonstationary GEV model can be an efficient tool for taking into account the dependencies between extreme value random variables and the temporal evolution of the climate.
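The core of a covariate-dependent extreme value model is a likelihood whose parameters vary with the covariate. The sketch below uses a Gumbel model (GEV with shape fixed at zero) whose location is a linear function of a single climate index; it is a much-simplified stand-in for the GEV-CDN, which replaces this linear map with a neural network and handles all three parameters.

```python
import math

def gumbel_nll(params, covariate, maxima):
    """Negative log-likelihood of a Gumbel model with covariate-dependent
    location mu_t = b0 + b1 * x_t. Minimising this over (b0, b1, log_scale)
    fits the nonstationary model."""
    b0, b1, log_scale = params
    s = math.exp(log_scale)  # log-parameterised so the scale stays positive
    nll = 0.0
    for x_t, y_t in zip(covariate, maxima):
        z = (y_t - (b0 + b1 * x_t)) / s
        nll += math.log(s) + z + math.exp(-z)
    return nll
```

Model selection then proceeds as in the abstract: fit increasingly complicated parameter dependencies and keep the model with the smallest corrected AIC.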
Future Projection of Summer Extreme Precipitation from High Resolution Multi-RCMs over East Asia
NASA Astrophysics Data System (ADS)
Kim, Gayoung; Park, Changyong; Cha, Dong-Hyun; Lee, Dong-Kyou; Suh, Myoung-Seok; Ahn, Joong-Bae; Min, Seung-Ki; Hong, Song-You; Kang, Hyun-Suk
2017-04-01
Recently, the frequency and intensity of natural hazards have been increasing due to human-induced climate change. Because most damages from natural hazards over East Asia have been related to extreme precipitation events, it is important to estimate future changes in extreme precipitation characteristics caused by climate change. We investigate future changes in extreme values of summer precipitation simulated by five regional climate models participating in the CORDEX-East Asia project (i.e., HadGEM3-RA, RegCM4, MM5, WRF, and GRIMs) over East Asia. The 100-year return value calculated from the generalized extreme value (GEV) parameters is analysed as an indicator of extreme intensity. In the future climate, the mean values as well as the extreme values of daily precipitation tend to increase over land regions. The increase in the 100-year return value can be significantly associated with changes in the location (intensity) and scale (variability) GEV parameters for extreme precipitation. The results of this study can serve as useful references when formulating disaster-management policy. Acknowledgements: The research was supported by the Ministry of Public Safety and Security of the Korean government under grant MPSS-NH-2013-63 and by the National Research Foundation of Korea Grant funded by the Ministry of Science, ICT and Future Planning of Korea (NRF-2016M3C4A7952637).
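The link between the GEV parameters and the 100-year return value can be made concrete with the standard return-level formula. The parameter values below are illustrative placeholders, not fits from the study; the point is only that increases in either the location (intensity) or the scale (variability) raise the return level, as the abstract argues.

```python
import math

def gev_return_level(T, loc, scale, shape):
    """Level exceeded on average once every T years, from annual-maximum
    GEV parameters."""
    p = 1.0 - 1.0 / T
    if shape == 0.0:
        return loc - scale * math.log(-math.log(p))
    return loc + (scale / shape) * ((-math.log(p)) ** (-shape) - 1.0)

# Illustrative baseline and perturbed parameter sets (mm/day, say)
base = gev_return_level(100, 40.0, 10.0, 0.1)
shift_loc = gev_return_level(100, 45.0, 10.0, 0.1)    # more intense events
shift_scale = gev_return_level(100, 40.0, 12.0, 0.1)  # more variable events
```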
Bivariate extreme value distributions
NASA Technical Reports Server (NTRS)
Elshamy, M.
1992-01-01
In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.
New Insights into the Estimation of Extreme Geomagnetic Storm Occurrences
NASA Astrophysics Data System (ADS)
Ruffenach, Alexis; Winter, Hugo; Lavraud, Benoit; Bernardara, Pietro
2017-04-01
Space weather events such as intense geomagnetic storms are major disturbances of the near-Earth environment that may lead to serious impacts on our modern society. As such, it is of great importance to estimate their probability, and in particular that of extreme events. One approach widely used in the statistical sciences for estimating the probability of extreme events is Extreme Value Analysis (EVA). Using this rigorous statistical framework, estimations of the occurrence of extreme geomagnetic storms are performed here based on the most relevant global parameters related to geomagnetic storms, such as ground parameters (e.g. the geomagnetic Dst and aa indices) and space parameters related to the characteristics of Coronal Mass Ejections (CMEs) (velocity, southward magnetic field component, electric field). Using our fitted model, we estimate the annual probability of a Carrington-type event (Dst = -850 nT) to be on the order of 10^-3, with a lower limit of the uncertainties on the return period of ˜500 years. Our estimate is significantly higher than that of most past studies, which typically found a return period of a few hundred years at most. Thus precautions are required when extrapolating to intense values. Currently, the complexity of the processes and the length of available data inevitably lead to significant uncertainties in return period estimates for the occurrence of extreme geomagnetic storms. However, our application of extreme value models for extrapolating into the tail of the distribution provides a mathematically justified framework for the estimation of extreme return periods, thereby enabling the determination of more accurate estimates and reduced associated uncertainties.
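The tail extrapolation underlying such return-period estimates can be sketched with a peaks-over-threshold model: a generalized Pareto (GPD) tail fitted to excesses over a high threshold, scaled by the annual exceedance rate. All parameter values below are illustrative inventions, not the paper's fitted model for Dst.

```python
import math

def gpd_annual_exceedance(x, u, sigma, xi, rate):
    """Annual probability of exceeding level x, from a GPD fit to excesses
    over threshold u; `rate` is the mean number of threshold exceedances
    per year. Parameter values here are illustrative only."""
    if x <= u:
        raise ValueError("x must lie above the threshold u")
    if xi == 0.0:
        tail = math.exp(-(x - u) / sigma)
    else:
        tail = max(1.0 + xi * (x - u) / sigma, 0.0) ** (-1.0 / xi)
    return rate * tail

def return_period_years(annual_prob):
    """An annual probability of 10^-3 corresponds to a ~1000-year return period."""
    return 1.0 / annual_prob
```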
Towards a General Theory of Extremes for Observables of Chaotic Dynamical Systems.
Lucarini, Valerio; Faranda, Davide; Wouters, Jeroen; Kuna, Tobias
2014-01-01
In this paper we provide a connection between the geometrical properties of the attractor of a chaotic dynamical system and the distribution of extreme values. We show that the extremes of so-called physical observables are distributed according to the classical generalised Pareto distribution and derive explicit expressions for the scaling and the shape parameter. In particular, we derive that the shape parameter does not depend on the chosen observables, but only on the partial dimensions of the invariant measure on the stable, unstable, and neutral manifolds. The shape parameter is negative and is close to zero when high-dimensional systems are considered. This result agrees with what was derived recently using the generalized extreme value approach. Combining the results obtained using such physical observables and the properties of the extremes of distance observables, it is possible to derive estimates of the partial dimensions of the attractor along the stable and the unstable directions of the flow. Moreover, by writing the shape parameter in terms of moments of the extremes of the considered observable and by using linear response theory, we relate the sensitivity to perturbations of the shape parameter to the sensitivity of the moments, of the partial dimensions, and of the Kaplan-Yorke dimension of the attractor. Preliminary numerical investigations provide encouraging results on the applicability of the theory presented here. The results presented here do not apply for all combinations of Axiom A systems and observables, but the breakdown seems to be related to very special geometrical configurations.
NASA Astrophysics Data System (ADS)
Hasan, Husna; Salam, Norfatin; Kassim, Suraiya
2013-04-01
Extreme temperature at several stations in Malaysia is modeled by fitting the annual maxima to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey Fuller (ADF) and Phillips Perron (PP) tests are used to detect stochastic trends among the stations. The Mann-Kendall (MK) test suggests a non-stationary model. Three models are considered for stations with trend, and the Likelihood Ratio test is used to determine the best-fitting model. The results show that the Subang and Bayan Lepas stations favour a model that is linear in the location parameter, while the Kota Kinabalu and Sibu stations suit a model that is linear in the logarithm of the scale parameter. The return level, i.e. the level of events (maximum temperature) expected to be exceeded once, on average, in a given number of years, is also obtained.
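The likelihood ratio test used to pick between nested stationary and nonstationary GEV models reduces to comparing twice the log-likelihood gain against a chi-square critical value. The sketch below assumes one extra parameter (e.g. a linear trend in the location) and the 5% level; 3.841 is the corresponding chi-square quantile for one degree of freedom.

```python
def likelihood_ratio_test(ll_stationary, ll_nonstationary, crit=3.841):
    """Deviance test of a nested GEV model pair. Returns the test statistic
    and whether the nonstationary model is a significant improvement at the
    level implied by `crit` (default: 5%, df = 1)."""
    deviance = 2.0 * (ll_nonstationary - ll_stationary)
    return deviance, deviance > crit
```

For models differing by more than one parameter, the critical value would come from a chi-square distribution with the matching degrees of freedom.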
40 CFR 80.49 - Fuels to be used in augmenting the complex emission model through vehicle testing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... shall be within the blending tolerances defined in this paragraph (a)(4) relative to the values... be within the blending tolerances defined in this paragraph (c) relative to the values specified in... “candidate” level of the parameter shall refer to the most extreme value of the parameter, relative to...
Assessing the features of extreme smog in China and the differentiated treatment strategy
NASA Astrophysics Data System (ADS)
Deng, Lu; Zhang, Zhengjun
2018-01-01
Extreme smog can have potentially harmful effects on human health, the economy and daily life. However, average (mean) values do not provide strategically useful information on the hazard analysis and control of extreme smog. This article investigates China's smog extremes by applying extreme value analysis to hourly PM2.5 data from 2014 to 2016 obtained from monitoring stations across China. By fitting a generalized extreme value (GEV) distribution to exceedances over a station-specific extreme smog level at each monitoring location, all study stations are grouped into eight categories based on the estimated mean and shape parameter values of the fitted GEV distributions. The extreme features characterized by the mean of the fitted extreme value distribution, the maximum frequency and the tail index of extreme smog at each location are analysed. These features can provide useful information for central and local governments to apply differentiated treatments to cities in different categories, and to set similar prevention goals and control strategies among cities belonging to the same category across a range of areas. Furthermore, hazardous hours, breaking probability and the 1-year return level of each station are demonstrated by category, based on which future control and reduction targets for extreme smog are proposed for the cities of Beijing, Tianjin and Hebei as an example.
NASA Astrophysics Data System (ADS)
Wintoft, Peter; Viljanen, Ari; Wik, Magnus
2016-05-01
High-frequency ( ≈ minutes) variability of ground magnetic fields is caused by ionospheric and magnetospheric processes driven by the changing solar wind. The varying magnetic fields induce electric fields that cause currents to flow in man-made conductors like power grids and pipelines. Under extreme conditions the geomagnetically induced currents (GIC) may be harmful to the power grids. Increasing our understanding of extreme events is thus important for solar-terrestrial science and space weather. In this work, 1-min resolution time series of the time derivative of measured local magnetic fields (|dBh/dt|) and of computed electric fields (Eh), for locations in Europe, have been analysed with extreme value analysis (EVA). The EVA results in an estimate of the generalized extreme value probability distribution, which is described by three parameters: location, width, and shape. The shape parameter controls the extreme behaviour. The stations cover geomagnetic latitudes from 40 to 70° N. All stations included in the study have contiguous coverage of 18 years or more with 1-min resolution data. As expected, the EVA shows that the higher-latitude stations have higher probability of large |dBh/dt| and |Eh| compared to stations further south. However, the EVA also shows that the shape of the distribution changes with magnetic latitude. The high latitudes have distributions that fall off to zero faster than the low latitudes, and upper-bounded distributions cannot be ruled out. The transition occurs around 59-61° N magnetic latitude. Thus, the EVA shows that the observed series north of ≈ 60° N have already recorded values close to the expected maximum values, while stations south of ≈ 60° N will measure larger values in the future.
On alternative q-Weibull and q-extreme value distributions: Properties and applications
NASA Astrophysics Data System (ADS)
Zhang, Fode; Ng, Hon Keung Tony; Shi, Yimin
2018-01-01
Tsallis statistics and Tsallis distributions have been attracting a significant amount of research work in recent years. Importantly, Tsallis statistics and q-distributions have been applied in different disciplines. Yet, a relationship between some existing q-Weibull distributions and q-extreme value distributions that parallels the well-established relationship between the conventional Weibull and extreme value distributions through a logarithmic transformation has not been established. In this paper, we propose an alternative q-Weibull distribution that leads to a q-extreme value distribution via the q-logarithm transformation. Some important properties of the proposed q-Weibull and q-extreme value distributions are studied. Maximum likelihood and least squares estimation methods are used to estimate the parameters of the q-Weibull distribution, and their performances are investigated through a Monte Carlo simulation study. The methodologies and the usefulness of the proposed distributions are illustrated by fitting the 2014 traffic fatalities data from the National Highway Traffic Safety Administration.
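The q-logarithm transformation at the heart of this construction is the standard Tsallis deformation of the logarithm, which recovers the ordinary log (and hence the classical Weibull-to-extreme-value link) in the limit q → 1. The functions below are the textbook definitions, not the paper's specific distributions.

```python
import math

def q_log(x, q):
    """Tsallis q-logarithm: (x^(1-q) - 1) / (1 - q); recovers ln(x) as q -> 1."""
    if q == 1.0:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_exp(x, q):
    """Inverse of q_log on its support; recovers exp(x) as q -> 1."""
    if q == 1.0:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0
```

Applying q_log to a q-Weibull variable is what carries it to the associated q-extreme value distribution, in analogy with log-transforming a Weibull variable to obtain a Gumbel-type variable.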
The critical role of uncertainty in projections of hydrological extremes
NASA Astrophysics Data System (ADS)
Meresa, Hadush K.; Romanowicz, Renata J.
2017-08-01
This paper aims to quantify the uncertainty in projections of future hydrological extremes in the Biala Tarnowska River at Koszyce gauging station, south Poland. The approach followed is based on several climate projections obtained from the EURO-CORDEX initiative, raw and bias-corrected realizations of catchment precipitation, and flow simulations derived using multiple hydrological model parameter sets. The projections cover the 21st century. Three sources of uncertainty are considered: the first related to climate projection ensemble spread, the second related to the uncertainty in hydrological model parameters, and the third related to the error in fitting theoretical distribution models to annual extreme flow series. The uncertainty of projected extreme indices related to hydrological model parameters was conditioned on flow observations from the reference period using the generalized likelihood uncertainty estimation (GLUE) approach, with separate criteria for high- and low-flow extremes. Extreme (low and high) flow quantiles were estimated using the generalized extreme value (GEV) distribution at different return periods and were based on two different lengths of the flow time series. A sensitivity analysis based on the analysis of variance (ANOVA) shows that, for the low-flow extremes, the uncertainty introduced by the hydrological model parameters can be larger than the climate model variability and the distribution fit uncertainty, whilst for the high-flow extremes the uncertainty from climate models is higher than that from hydrological parameters and distribution fit. This implies that ignoring any of the three uncertainty sources may pose great risk to future adaptation to hydrological extremes and to water resource planning and management.
Exact extreme-value statistics at mixed-order transitions.
Bar, Amir; Majumdar, Satya N; Schehr, Grégory; Mukamel, David
2016-05-01
We study extreme-value statistics for spatially extended models exhibiting mixed-order phase transitions (MOT). These are phase transitions that exhibit features common to both first-order (discontinuity of the order parameter) and second-order (diverging correlation length) transitions. We consider here the truncated inverse distance squared Ising model, which is a prototypical model exhibiting MOT, and study analytically the extreme-value statistics of the domain lengths. The lengths of the domains are identically distributed random variables, except for the global constraint that their sum equals the total system size L. In addition, the number of such domains is itself a fluctuating variable, not fixed. In the paramagnetic phase, we show that the distribution of the largest domain length l_{max} converges, in the large L limit, to a Gumbel distribution. However, at the critical point (for a certain range of parameters) and in the ferromagnetic phase, we show that the fluctuations of l_{max} are governed by novel distributions, which we compute exactly. Our main analytical results are verified by numerical simulations.
NASA Astrophysics Data System (ADS)
Otto, F. E. L.; Mitchell, D.; Sippel, S.; Black, M. T.; Dittus, A. J.; Harrington, L. J.; Mohd Saleh, N. H.
2014-12-01
A shift in the distribution of socially relevant climate variables, such as daily minimum winter temperatures and daily precipitation extremes, has been attributed to anthropogenic climate change for various mid-latitude regions. However, while there are many process-based arguments also suggesting a change in the shape of these distributions, attribution studies demonstrating this have not yet been undertaken. Here we use a very large initial-condition ensemble of ~40,000 members simulating the European winter 2013/2014, run on the distributed computing infrastructure of the weather@home project. Two separate scenarios are used: (1) current climate conditions, and (2) a counterfactual scenario of a "world that might have been" without anthropogenic forcing. Focusing specifically on extreme events, we assess how the estimated parameters of the Generalized Extreme Value (GEV) distribution vary depending on variable type, sampling frequency (daily, monthly, …) and geographical region. We find that the location parameter changes for most variables but, depending on the region and variable, we also find significant changes in the scale and shape parameters. The very large ensemble furthermore allows us to assess whether such findings in the fitted GEV distributions are consistent with an empirical analysis of the model data, and whether the most extreme data still follow a known underlying distribution that, in a small sample, might otherwise be dismissed as an outlier. The ~40,000-member ensemble is simulated using 12 different SST patterns (1 observed, and 11 best guesses of SSTs without anthropogenic warming). The range in SSTs, along with the corresponding changes in the NAO and high-latitude blocking, informs on the dynamics governing some of these extreme events. 
While strong teleconnection patterns are not found in this particular experiment, the high number of simulated extreme events allows for a more thorough analysis of the dynamics than has been performed before. Combining extreme value theory with very large ensemble simulations therefore allows us to understand the dynamics of changes in extreme events, which is not possible with extreme value theory alone, and also shows in which cases statistics combined with smaller ensembles give results as valid as those from very large initial-condition ensembles.
Extremely cold events and sudden air temperature drops during winter season in the Czech Republic
NASA Astrophysics Data System (ADS)
Crhová, Lenka; Valeriánová, Anna; Holtanová, Eva; Müller, Miloslav; Kašpar, Marek; Stříž, Martin
2014-05-01
Today, great attention is paid to the analysis of extreme weather events and the frequency of their occurrence under a changing climate. In most cases, these studies focus on extremely warm events in the summer season. However, extremely low air temperatures during winter can have serious impacts on many sectors as well (e.g. power engineering, transportation, industry, agriculture, human health). Therefore, in the present contribution we focus on extremely and abnormally cold air temperature events in the winter season in the Czech Republic. Besides the seasonal extremes of minimum air temperature determined from station data, standardized data with the annual cycle removed are used as well. The distribution of extremely cold events over the season and the temporal evolution of their frequency of occurrence during the period 1961-2010 are analyzed. Furthermore, the connection of cold events with extreme sudden temperature drops is studied. The extreme air temperature events and events of extreme sudden temperature drop are assessed using the Weather Extremity Index, which evaluates the extremity (based on return periods) and spatial extent of the meteorological extreme event of interest. The parameters of the generalized extreme value distribution are used to estimate return periods of daily temperature values. The work has been supported by grant P209/11/1990 funded by the Czech Science Foundation.
Stationary and non-stationary extreme value modeling of extreme temperature in Malaysia
NASA Astrophysics Data System (ADS)
Hasan, Husna; Salleh, Nur Hanim Mohd; Kassim, Suraiya
2014-09-01
Extreme annual temperature of eighteen stations in Malaysia is fitted to the Generalized Extreme Value distribution. Stationary and non-stationary models with trend are considered for each station and the Likelihood Ratio test is used to determine the best-fitting model. Results show that three out of eighteen stations i.e. Bayan Lepas, Labuan and Subang favor a model which is linear in the location parameter. A hierarchical cluster analysis is employed to investigate the existence of similar behavior among the stations. Three distinct clusters are found in which one of them consists of the stations that favor the non-stationary model. T-year estimated return levels of the extreme temperature are provided based on the chosen models.
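The stationary-versus-trend comparison via a likelihood ratio test can be sketched as follows. The data are synthetic, and the hand-written negative log-likelihood optimized with Nelder-Mead is one implementation choice among many, not the authors' code; SciPy's shape `c` is the negative of the conventional ξ.

```python
# Sketch: likelihood-ratio test of a stationary GEV against a GEV with
# a linear trend in the location parameter (synthetic annual maxima).
import numpy as np
from scipy import optimize, stats

years = np.arange(50)
# Synthetic maxima whose location parameter drifts upward over time.
x = stats.genextreme.rvs(c=0.1, loc=30.0 + 0.05 * years, scale=1.5,
                         size=50, random_state=2)

def nll_stationary(theta):
    c, loc, scale = theta
    if scale <= 0:
        return np.inf
    return -np.sum(stats.genextreme.logpdf(x, c, loc=loc, scale=scale))

def nll_trend(theta):
    c, mu0, mu1, scale = theta
    if scale <= 0:
        return np.inf
    return -np.sum(stats.genextreme.logpdf(x, c, loc=mu0 + mu1 * years,
                                           scale=scale))

fit0 = optimize.minimize(nll_stationary, x0=[0.1, x.mean(), x.std()],
                         method="Nelder-Mead")
# Start the larger model at the stationary optimum so it can only improve.
fit1 = optimize.minimize(nll_trend,
                         x0=[fit0.x[0], fit0.x[1], 0.0, fit0.x[2]],
                         method="Nelder-Mead")

# Likelihood-ratio statistic; the nested models differ by one parameter.
lr = 2.0 * (fit0.fun - fit1.fun)
p_value = stats.chi2.sf(max(lr, 0.0), df=1)
```

A small `p_value` would favor the non-stationary model, mirroring the model-selection step the abstract describes for stations such as Bayan Lepas, Labuan and Subang.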
Correlation dimension and phase space contraction via extreme value theory
NASA Astrophysics Data System (ADS)
Faranda, Davide; Vaienti, Sandro
2018-04-01
We show how to obtain theoretical and numerical estimates of the correlation dimension and phase space contraction by using extreme value theory. The maxima of suitable observables sampled along the trajectory of a chaotic dynamical system converge asymptotically to classical extreme value laws where: (i) the inverse of the scale parameter gives the correlation dimension and (ii) the extremal index is associated with the rate of phase space contraction for backward iteration, which in dimensions 1 and 2 is closely related to the positive Lyapunov exponent, and in higher dimensions is related to the metric entropy. We call the latter the Dynamical Extremal Index. Numerical estimates are straightforward to obtain, as they require only a simple fit to a univariate distribution. Numerical tests range from low-dimensional maps to generalized Henon maps and climate data. The estimates of the indicators are particularly robust even with relatively short time series.
Statistical Modeling of Extreme Values and Evidence of Presence of Dragon King (DK) in Solar Wind
NASA Astrophysics Data System (ADS)
Gomes, T.; Ramos, F.; Rempel, E. L.; Silva, S.; C-L Chian, A.
2017-12-01
The solar wind constitutes a nonlinear dynamical system, presenting intermittent turbulence, multifractality and chaotic dynamics. One characteristic shared by many such complex systems is the presence of extreme events, which play an important role in several geophysical phenomena; their statistical characterization is a problem of great practical relevance. This work investigates the presence of extreme events in time series of the modulus of the interplanetary magnetic field measured by the Cluster spacecraft on February 2, 2002. One of the main results is that the solar wind near the Earth's bow shock can be modeled by the Generalized Pareto (GP) and Generalized Extreme Value (GEV) distributions. Both models present a statistically significant positive shape parameter, which implies a heavy tail in the probability distribution functions and an unbounded growth in return values as return periods become arbitrarily long. There is evidence that current sheets are mainly responsible for positive values of the shape parameter. It is also shown that magnetic reconnection at the interface between two interplanetary magnetic flux ropes in the solar wind can be considered a Dragon King (DK), a class of extreme events whose formation mechanisms are fundamentally different from those of other extremes. If magnetic reconnection can be classified as a Dragon King, then its identification and even its prediction become possible. Dragon Kings have previously been identified in time series of financial crashes, nuclear power generation accidents, the stock market and so on. They are believed to be associated with the occurrence of extreme events in dynamical systems at phase transitions, bifurcations, crises or tipping points.
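The link between a positive GPD shape parameter and unbounded return values can be sketched with SciPy's `genpareto`. The heavy-tailed draws below are illustrative stand-ins, not the Cluster magnetometer data.

```python
# Sketch: fitting a Generalized Pareto tail to synthetic threshold
# exceedances (illustrative draws, not the Cluster |B| measurements).
import numpy as np
from scipy import stats

exceedances = stats.genpareto.rvs(c=0.3, scale=1.0, size=5000,
                                  random_state=7)

# Fix the location at the threshold (zero for excesses) and fit by MLE.
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0.0)

# A positive shape implies a heavy (power-law) tail, so return values
# keep growing without bound as the return period lengthens.
rl_short = stats.genpareto.ppf(1.0 - 1e-2, shape, loc=loc, scale=scale)
rl_long = stats.genpareto.ppf(1.0 - 1e-4, shape, loc=loc, scale=scale)
```

With a negative fitted shape, by contrast, the quantiles would saturate toward a finite upper endpoint rather than diverge.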
Parameter uncertainty in simulations of extreme precipitation and attribution studies.
NASA Astrophysics Data System (ADS)
Timmermans, B.; Collins, W. D.; O'Brien, T. A.; Risser, M. D.
2017-12-01
The attribution of extreme weather events, such as heavy rainfall, to anthropogenic influence involves the analysis of their probability in simulations of climate. The climate models used however, such as the Community Atmosphere Model (CAM), employ approximate physics that gives rise to "parameter uncertainty"—uncertainty about the most accurate or optimal values of numerical parameters within the model. In particular, approximate parameterisations for convective processes are well known to be influential in the simulation of precipitation extremes. Towards examining the impact of this source of uncertainty on attribution studies, we investigate the importance of components—through their associated tuning parameters—of parameterisations relating to deep and shallow convection, and cloud and aerosol microphysics in CAM. We hypothesise that as numerical resolution is increased the change in proportion of variance induced by perturbed parameters associated with the respective components is consistent with the decreasing applicability of the underlying hydrostatic assumptions. For example, that the relative influence of deep convection should diminish as resolution approaches that where convection can be resolved numerically (~10 km). We quantify the relationship between the relative proportion of variance induced and numerical resolution by conducting computer experiments that examine precipitation extremes over the contiguous U.S. In order to mitigate the enormous computational burden of running ensembles of long climate simulations, we use variable-resolution CAM and employ both extreme value theory and surrogate modelling techniques ("emulators"). We discuss the implications of the relationship between parameterised convective processes and resolution both in the context of attribution studies and progression towards models that fully resolve convection.
More tornadoes in the most extreme U.S. tornado outbreaks
NASA Astrophysics Data System (ADS)
Tippett, Michael K.; Lepore, Chiara; Cohen, Joel E.
2016-12-01
Tornadoes and severe thunderstorms kill people and damage property every year. Estimated U.S. insured losses due to severe thunderstorms in the first half of 2016 were $8.5 billion (US). The largest U.S. effects of tornadoes result from tornado outbreaks, which are sequences of tornadoes that occur in close succession. Here, using extreme value analysis, we find that the frequency of U.S. outbreaks with many tornadoes is increasing and that it is increasing faster for more extreme outbreaks. We model this behavior by extreme value distributions with parameters that are linear functions of time or of some indicators of multidecadal climatic variability. Extreme meteorological environments associated with severe thunderstorms show consistent upward trends, but the trends do not resemble those currently expected to result from global warming.
NASA Astrophysics Data System (ADS)
da Costa, Diogo Ricardo; Hansen, Matheus; Guarise, Gustavo; Medrano-T, Rene O.; Leonel, Edson D.
2016-04-01
We show that extreme orbits, trajectories that connect local maximum and minimum values of one-dimensional maps, play a major role in the parameter space of dissipative systems, dictating the organization of the windows of periodicity and hence producing sets of shrimp-like structures. Here we solve three fundamental problems regarding the distribution of these sets and give: (i) their precise localization in the parameter space, even for sets of very high periods; (ii) their local and global distributions along cascades; and (iii) the association of these cascades with complicated sets of periodicity. The extreme orbits are proved to be a powerful indicator for investigating the organization of windows of periodicity in parameter planes. As applications of the theory, we obtain some results for the circle map and the perturbed logistic map. The formalism presented here can be extended to many other nonlinear and dissipative systems.
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
Assessment of extreme value distributions for maximum temperature in the Mediterranean area
NASA Astrophysics Data System (ADS)
Beck, Alexander; Hertig, Elke; Jacobeit, Jucundus
2015-04-01
Extreme maximum temperatures highly affect the natural as well as the societal environment. Heat stress has great effects on flora, fauna and humans, and culminates in heat-related morbidity and mortality. Agriculture and various industries are severely affected by extreme air temperatures. Especially under climate change conditions, it is necessary to detect potential hazards arising from changes in the distributional parameters of extreme values; this is particularly relevant for the Mediterranean region, which is characterized as a climate change hot spot. Therefore, statistical approaches are developed to estimate these parameters, with a focus on non-stationarities emerging in the relationship between regional climate variables and their large-scale predictors, such as sea level pressure, geopotential heights, atmospheric temperatures and relative humidity. Gridded maximum temperature data from the daily E-OBS dataset (Haylock et al., 2008) with a spatial resolution of 0.25° x 0.25° from January 1950 until December 2012 are the predictands for the present analyses. An s-mode principal component analysis (PCA) has been performed in order to reduce data dimensionality and to retain different regions of similar maximum temperature variability. The grid box with the highest PC loading represents the corresponding principal component. A central part of the analyses is the model development for temperature extremes using extreme value statistics. A combined model is derived, consisting of a Generalized Pareto Distribution (GPD) model and a quantile regression (QR) model which determines the GPD location parameters. The QR model as well as the scale parameters of the GPD model are conditioned on various large-scale predictor variables. In order to account for potential non-stationarities in the predictor-temperature relationships, a special calibration and validation scheme is applied. Haylock, M. R., N. Hofstra, A. M. G. Klein Tank, E. J. Klok, P. 
D. Jones, and M. New (2008), A European daily high-resolution gridded data set of surface temperature and precipitation for 1950 - 2006, J. Geophys. Res., 113, D20119, doi:10.1029/2008JD010201.
Dealing with Non-stationarity in Intensity-Frequency-Duration Curve
NASA Astrophysics Data System (ADS)
Rengaraju, S.; Rajendran, V.; C T, D.
2017-12-01
Extremes like floods and droughts are becoming more frequent and more severe in recent times, which is generally attributed to climate change. One of the main concerns is whether present infrastructure such as dams and storm water drainage networks, which was designed under the so-called 'stationarity' assumption, can withstand the expected severe extremes. The stationarity assumption holds that extremes do not change with respect to time. However, recent studies have shown that climate change has altered climate extremes both temporally and spatially. Traditionally, the observed non-stationarity in extreme precipitation is incorporated into the extreme value distributions in terms of changing parameters. Nevertheless, this raises the question of which parameter needs to change, i.e. location, scale or shape, since one or more of these parameters may vary at a given location. Hence, this study aims to detect the changing parameters, to reduce the complexity involved in the development of non-stationary IDF curves, and to provide the uncertainty bound of the estimated return level using the Bayesian Differential Evolutionary Monte Carlo (DE-MC) algorithm. Firstly, the extreme precipitation series is extracted using Peaks Over Threshold. Then, the time-varying parameter(s) is(are) detected for the extracted series using Generalized Additive Models for Location Scale and Shape (GAMLSS). The IDF curve is then constructed using the Generalized Pareto Distribution, incorporating non-stationarity only if the parameter(s) is(are) changing with respect to time; otherwise the IDF curve follows the stationarity assumption. Finally, the posterior probability intervals of the estimated return level are computed through the Bayesian DE-MC approach and the non-stationary IDF curve is compared with the stationary one. 
The results of this study emphasize that the time-varying parameters also change spatially, and that IDF curves should incorporate non-stationarity only if there is a change in the parameters, even though there may be significant change in the extreme rainfall series. Our results underscore the importance of updating infrastructure design strategies for the changing climate by adopting non-stationary IDF curves.
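The peaks-over-threshold extraction that starts the workflow above can be sketched with a simple runs-declustering rule. The synthetic gamma rainfall series, the 95th-percentile threshold and the 3-day separation rule are all illustrative assumptions, not the study's settings.

```python
# Sketch: peaks-over-threshold extraction with runs declustering on a
# synthetic daily precipitation series (assumed threshold and gap rule).
import numpy as np

rng = np.random.default_rng(3)
precip = rng.gamma(shape=0.5, scale=8.0, size=3650)  # ~10 years of days

threshold = np.quantile(precip, 0.95)
exceed_idx = np.flatnonzero(precip > threshold)

# Runs declustering: exceedances separated by more than `min_gap` days
# start a new cluster; only each cluster's maximum is kept as a peak.
min_gap = 3
clusters, current = [], [exceed_idx[0]]
for i in exceed_idx[1:]:
    if i - current[-1] > min_gap:
        clusters.append(current)
        current = [i]
    else:
        current.append(i)
clusters.append(current)

peaks = np.array([max(c, key=lambda j: precip[j]) for c in clusters])
excesses = precip[peaks] - threshold  # input to a GPD fit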
Spatial distribution of precipitation extremes in Norway
NASA Astrophysics Data System (ADS)
Verpe Dyrrdal, Anita; Skaugen, Thomas; Lenkoski, Alex; Thorarinsdottir, Thordis; Stordal, Frode; Førland, Eirik J.
2015-04-01
Estimates of extreme precipitation, in terms of return levels, are crucial in the planning and design of important infrastructure. Through two separate studies, we have examined the levels and spatial distribution of daily extreme precipitation over catchments in Norway, and of hourly extreme precipitation at a point. The analyses were carried out through the development of two new methods for estimating extreme precipitation in Norway. For daily precipitation we fit the Generalized Extreme Value (GEV) distribution to areal time series from a gridded dataset, consisting of daily precipitation during the period 1957-today with a resolution of 1x1 km². This grid-based method is more objective and less manual and time-consuming than the existing method at MET Norway. In addition, estimates in ungauged catchments are easier to obtain, and the GEV approach includes a measure of uncertainty, which is a requirement in climate studies today. Further, we go into depth on the debated GEV shape parameter, which plays an important role for longer return periods. We show that it varies according to dominating precipitation types, having positive values in the southeast and negative values in the southwest. We also find indications that the degree of orographic enhancement might affect the shape parameter. For hourly precipitation, we estimate return levels on a 1x1 km² grid by linking GEV distributions with latent Gaussian fields in a Bayesian hierarchical model (BHM). Generalized linear models on the GEV parameters, estimated from observations, are able to incorporate location-specific geographic and meteorological information and thereby accommodate these effects on extreme precipitation. Gaussian fields capture additional unexplained spatial heterogeneity and overcome the sparse grid on which observations are collected, while a Bayesian model averaging component directly assesses model uncertainty. 
We find that mean summer precipitation, mean summer temperature, latitude, longitude, mean annual precipitation and elevation are good covariate candidates for hourly precipitation in our model. Summer indices succeed because hourly precipitation extremes often occur during the convective season. The spatial distribution of hourly and daily precipitation differs in Norway. Daily precipitation extremes are larger along the southwestern coast, where large-scale frontal systems dominate during fall season and the mountain ridge generates strong orographic enhancement. The largest hourly precipitation extremes are mostly produced by intense convective showers during summer, and are thus found along the entire southern coast, including the Oslo-region.
Generalized extreme gust wind speeds distributions
Cheng, E.; Yeung, C.
2002-01-01
Since the summer of 1996, US wind engineers have used the extreme gust (or 3-s gust) as the basic wind speed to quantify the destructiveness of extreme winds. In order to better understand these destructive wind forces, it is important to know the appropriate representations of these extreme gust wind speeds. Therefore, the purpose of this study is to determine the most suitable extreme value distributions for the annual extreme gust wind speeds recorded in large selected areas. To achieve this objective, we use the generalized Pareto distribution as the diagnostic tool for determining the types of extreme gust wind speed distributions. The three-parameter generalized extreme value distribution function is thus reduced to either the Type I Gumbel, Type II Frechet or Type III reverse Weibull distribution function for the annual extreme gust wind speeds recorded at a specific site. With consideration of the quality and homogeneity of gust wind data collected at more than 750 weather stations throughout the United States, annual extreme gust wind speeds at 143 selected stations in the contiguous United States were used in the study.
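The reduction of the three-parameter GEV to Type I/II/III by the sign of the fitted shape parameter can be sketched as a crude classifier. The tolerance band around zero and the synthetic Gumbel "gust" sample are assumptions for illustration; SciPy's shape `c` is the negative of the conventional ξ.

```python
# Sketch: classifying a fitted GEV by the sign of its shape parameter
# (illustrative tolerance and synthetic data, not the 143-station study).
import numpy as np
from scipy import stats

def classify_gev(annual_maxima, tol=0.05):
    """Return the Fisher-Tippett type suggested by the fitted shape."""
    c, loc, scale = stats.genextreme.fit(annual_maxima)
    xi = -c  # convert SciPy's convention to the usual xi
    if xi > tol:
        return "Type II (Frechet, heavy upper tail)"
    if xi < -tol:
        return "Type III (reverse Weibull, bounded above)"
    return "Type I (Gumbel)"

gusts = stats.gumbel_r.rvs(loc=35.0, scale=4.0, size=200, random_state=11)
label = classify_gev(gusts)
```

In practice the estimation uncertainty of the shape parameter is substantial for short records, which is why a diagnostic tool such as the GPD tail fit is used alongside this kind of point classification.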
400 Years of summer hydroclimate from stable isotopes in Iberian trees
NASA Astrophysics Data System (ADS)
Andreu-Hayles, Laia; Ummenhofer, Caroline C.; Barriendos, Mariano; Schleser, Gerhard H.; Helle, Gerhard; Leuenberger, Markus; Gutiérrez, Emilia; Cook, Edward R.
2017-07-01
Tree rings are natural archives that annually record distinct types of past climate variability depending on the parameters measured. Here, we use ring-width and stable isotopes in cellulose of trees from the northwestern Iberian Peninsula (IP) to understand regional summer hydroclimate over the last 400 years and the associated atmospheric patterns. Correlations between tree rings and climate data demonstrate that isotope signatures in the targeted Iberian pine forests are very sensitive to water availability during the summer period, and are mainly controlled by stomatal conductance. Non-linear methods based on extreme events analysis allow for capturing distinct seasonal climatic variability recorded by tree-ring parameters and asymmetric signals of the associated atmospheric features. Moreover, years with extreme high (low) values in the tree-ring records were characterised by coherent large-scale atmospheric circulation patterns with reduced (enhanced) moisture transport onto the northwestern IP. These analyses of extremes revealed that high/low proxy values do not necessarily correspond to mirror images in the atmospheric anomaly patterns, suggesting different drivers of these patterns and the corresponding signature recorded in the proxies. Regional hydroclimate features across the broader IP and western Europe during extreme wet/dry summers detected by the northwestern IP trees compare favourably to independent multicentury sea level pressure and drought reconstructions for Europe. Historical records also validate our findings that attribute non-linear moisture signals recorded by extreme tree-ring values to distinct large-scale atmospheric patterns and allow for 400-year reconstructions of the frequency of occurrence of extreme conditions in late spring and summer hydroclimate.
400 years of summer hydroclimate from stable isotopes in Iberian trees
NASA Astrophysics Data System (ADS)
Andreu-Hayles, Laia; Ummenhofer, Caroline C.; Barriendos, Mariano; Schleser, Gerhard H.; Helle, Gerhard; Leuenberger, Markus; Gutierrez, Emilia; Cook, Edward R.
2017-04-01
Tree rings are natural archives that annually record distinct types of past climate variability depending on the parameters measured. Here, we use ring-width and stable isotopes in cellulose of trees from the northwestern Iberian Peninsula (IP) to understand regional summer hydroclimate over the last 400 years and the associated atmospheric patterns. Correlations between tree rings and climate data demonstrate that isotope signatures in the targeted Iberian pine forests are very sensitive to water availability during the summer period, and are mainly controlled by stomatal conductance. Non-linear methods based on extreme events analysis allow for capturing distinct seasonal climatic variability recorded by tree-ring parameters and asymmetric signals of the associated atmospheric features. Moreover, years with extreme high (low) values in the tree-ring records were characterised by coherent large-scale atmospheric circulation patterns with reduced (enhanced) moisture transport onto the northwestern IP. These analyses of extremes revealed that high/low proxy values do not necessarily correspond to mirror images in the atmospheric anomaly patterns, suggesting different drivers of these patterns and the corresponding signature recorded in the proxies. Regional hydroclimate features across the broader IP and western Europe during extreme wet/dry summers detected by the northwestern IP trees compare favourably to an independent multicentury sea level pressure and drought reconstruction for Europe. Historical records also validate our findings that attribute non-linear moisture signals recorded by extreme tree-ring values to distinct large-scale atmospheric patterns and allow for 400-yr reconstructions of the frequency of occurrence of extreme conditions in summer hydroclimate. We will discuss how the results for Lillo compare with other records.
NASA Astrophysics Data System (ADS)
Lazoglou, Georgia; Anagnostopoulou, Christina; Tolika, Konstantia; Kolyva-Machera, Fotini
2018-04-01
The increasing trend in the intensity and frequency of temperature and precipitation extremes during the past decades has substantial environmental and socioeconomic impacts. Thus, the objective of the present study is the comparison of several statistical methods of extreme value theory (EVT) in order to identify which is the most appropriate for analyzing the behavior of extreme precipitation and of high- and low-temperature events in the Mediterranean region. Extremes were selected using both the block maxima and the peaks-over-threshold (POT) techniques, and consequently both the generalized extreme value (GEV) and generalized Pareto distributions (GPDs) were used to fit them. The results were compared in order to select the most appropriate distribution for characterizing extremes. Moreover, this study evaluates the maximum likelihood estimation, the L-moments and the Bayesian method, based on both graphical and statistical goodness-of-fit tests. It was revealed that the GPD can accurately characterize both precipitation and temperature extreme events. Additionally, the GEV distribution with the Bayesian method is proven to be appropriate, especially for the greatest values of extremes. Another important objective of this investigation was the estimation of the precipitation and temperature return levels for three return periods (50, 100, and 150 years), classifying the data into groups with similar characteristics. Finally, the return level values were estimated with both GEV and GPD and with the three different estimation methods, revealing that the selected method can affect the return level values for both precipitation and temperature.
Slice sampling technique in Bayesian extreme of gold price modelling
NASA Astrophysics Data System (ADS)
Rostami, Mohammad; Adam, Mohd Bakri; Ibrahim, Noor Akma; Yahya, Mohamed Hisham
2013-09-01
In this paper, a simulation study of Bayesian extreme values using Markov Chain Monte Carlo via the slice sampling algorithm is implemented. We compared the accuracy of slice sampling with other methods for a Gumbel model. This study revealed that the slice sampling algorithm offers more accurate and closer estimates, with lower RMSE, than the other methods. Finally, we successfully employed this procedure to estimate the parameters of extreme Malaysian gold prices from 2000 to 2011.
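A univariate slice sampler of the stepping-out-and-shrinkage type (Neal, 2003) applied to a Gumbel location parameter can be sketched as follows. The flat prior, the known scale, and the synthetic data are simplifying assumptions for illustration, not the paper's full model.

```python
# Sketch: slice sampling the location parameter of a Gumbel likelihood
# under a flat prior, with the scale held fixed (a simplification).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = stats.gumbel_r.rvs(loc=10.0, scale=2.0, size=100, random_state=rng)

def log_post(mu):
    # Flat prior on mu, so the log-posterior is the Gumbel log-likelihood.
    return np.sum(stats.gumbel_r.logpdf(data, loc=mu, scale=2.0))

def slice_sample(logp, x0, n_iter=500, w=1.0, rng=rng):
    """Univariate slice sampler with stepping-out and shrinkage."""
    xs = np.empty(n_iter)
    x = x0
    for t in range(n_iter):
        log_y = logp(x) + np.log(rng.random())  # vertical slice level
        left = x - w * rng.random()             # randomly placed bracket
        right = left + w
        while logp(left) > log_y:               # step out to cover slice
            left -= w
        while logp(right) > log_y:
            right += w
        while True:                             # sample with shrinkage
            x_new = rng.uniform(left, right)
            if logp(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        xs[t] = x
    return xs

samples = slice_sample(log_post, x0=data.mean())
posterior_mean = samples[100:].mean()  # drop burn-in draws
```

Slice sampling needs no tuning of a proposal scale, which is one reason it compares favorably with plain Metropolis updates on this kind of low-dimensional posterior.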
NASA Astrophysics Data System (ADS)
Alahmadi, F.; Rahman, N. A.; Abdulrazzak, M.
2014-09-01
Rainfall frequency analysis is an essential tool for the design of water-related infrastructure. It can be used to estimate the magnitude of extreme rainfall events for a given frequency, which in turn informs the prediction of future flood magnitudes. This study analyses the application of rainfall partial duration series (PDS) in the fast-growing city of Madinah, located in the western part of Saudi Arabia. Different statistical distributions were applied (i.e. Normal, Log Normal, Extreme Value Type I, Generalized Extreme Value, Pearson Type III, Log Pearson Type III) and their distribution parameters were estimated using L-moments methods. Several model selection criteria were also applied, e.g. the Akaike Information Criterion (AIC), Corrected Akaike Information Criterion (AICc), Bayesian Information Criterion (BIC) and Anderson-Darling Criterion (ADC). The analysis indicated the advantage of the Generalized Extreme Value distribution as the best-fitting statistical distribution for the Madinah partial duration daily rainfall series. The outcome of such an evaluation can contribute toward better design criteria for flood management, especially flood protection measures.
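Candidate-distribution ranking by AIC, of the kind described, can be sketched with SciPy maximum-likelihood fits. The sample is synthetic, the candidate set is a subset of the distributions named in the abstract, and only AIC (not AICc, BIC, or ADC) is computed here.

```python
# Sketch: ranking candidate distributions for an extreme-rainfall series
# by AIC (synthetic sample, reduced candidate set; illustrative only).
import numpy as np
from scipy import stats

data = stats.genextreme.rvs(c=-0.15, loc=20.0, scale=8.0, size=300,
                            random_state=9)

candidates = {
    "normal": stats.norm,
    "log-normal": stats.lognorm,
    "EV1 (Gumbel)": stats.gumbel_r,
    "GEV": stats.genextreme,
}

aic = {}
for name, dist in candidates.items():
    params = dist.fit(data)                     # maximum-likelihood fit
    log_lik = np.sum(dist.logpdf(data, *params))
    aic[name] = 2 * len(params) - 2 * log_lik   # AIC = 2k - 2 ln L

best = min(aic, key=aic.get)
```

Note that the study estimated parameters by L-moments rather than maximum likelihood; AIC as written here requires the maximized likelihood, so this is a simplified stand-in for the full comparison.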
Inter-model variability in hydrological extremes projections for Amazonian sub-basins
NASA Astrophysics Data System (ADS)
Andres Rodriguez, Daniel; Garofolo, Lucas; Lázaro de Siqueira Júnior, José; Samprogna Mohor, Guilherme; Tomasella, Javier
2014-05-01
Irreducible uncertainties, due to the limitations of knowledge, the chaotic nature of the climate system and the human decision-making process, drive uncertainties in climate change projections. Such uncertainties affect impact studies, especially those concerned with extreme events, and hinder the decision-making process aimed at mitigation and adaptation. However, these uncertainties also open the possibility of exploratory analyses of a system's vulnerability to different scenarios. Using projections from different climate models allows these uncertainty issues to be addressed through multiple runs that explore a wide range of potential impacts and their implications for potential vulnerabilities. Statistical approaches for the analysis of extreme values are usually based on stationarity assumptions. However, nonstationarity is relevant at the time scales considered for extreme value analyses and can have great implications in dynamic complex systems, especially under climate change. For this reason, nonstationarity must be considered in the statistical distribution parameters. We carried out a study of the dispersion in hydrological extremes projections, using climate change projections from several climate models to feed the Distributed Hydrological Model of the National Institute for Space Research, MHD-INPE, applied to Amazonian sub-basins. This model is a large-scale hydrological model that uses a TopModel approach to solve runoff generation processes at the grid-cell scale. The MHD-INPE model was calibrated for 1970-1990 using observed meteorological data and comparing observed and simulated discharges with several performance coefficients. Hydrological model integrations were performed for the historical period (1970-1990) and for the future period (2010-2100). 
Because climate models reproduce the variability of the climate system in statistical terms rather than the historical behavior of climate variables, the hydrological model's performance during the historical period, when forced with climate model data, was tested using descriptors of the flow duration curves. The projected extreme values were analyzed allowing for nonstationarity in the GEV distribution parameters and compared with present-day extreme events. Results show that inter-model variability translates into a broad dispersion in projected extreme values, and this dispersion implies different degrees of socio-economic impact associated with extreme hydrological events. Although no single optimum result exists, the spread enables the analysis of adaptation strategies and their potential vulnerabilities.
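The nonstationary GEV analysis described above can be sketched numerically. The following is a minimal illustration (not the MHD-INPE workflow): a GEV with a linear temporal covariate in the location parameter, mu(t) = mu0 + mu1*t, fitted by maximizing the log-likelihood. The annual-maximum series is synthetic and all names are illustrative.

```python
# Sketch: nonstationary GEV fit with a linear trend in the location
# parameter, mu(t) = mu0 + mu1 * t. Data are synthetic.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(42)
t = np.arange(50)                                  # years (covariate)
# synthetic annual maxima with an upward trend in location (true slope 0.8)
x = genextreme.rvs(c=-0.1, loc=100 + 0.8 * t, scale=15, random_state=rng)

def nll(params):
    mu0, mu1, log_sigma, xi = params
    mu = mu0 + mu1 * t
    # note scipy's shape convention: c = -xi
    return -genextreme.logpdf(x, c=-xi, loc=mu, scale=np.exp(log_sigma)).sum()

res = minimize(nll, x0=[x.mean(), 0.0, np.log(x.std()), 0.1],
               method="Nelder-Mead", options={"maxiter": 4000})
mu0, mu1, log_sigma, xi = res.x
print(f"estimated trend in location: {mu1:.2f} per year")
```

The same machinery extends to covariates in the scale or shape parameters by adding terms to `nll`.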
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz
2015-02-01
In this study, two series of extreme rainfall events are generated, based on the annual maximum and partial duration methods, from 102 rain-gauge stations in Peninsular Malaysia covering 1982-2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected as the one with the smallest MSE. The mean annual frequency is also checked to ensure that it lies between one and five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively, with parameters estimated by maximum likelihood and by the L-moments method. Two goodness-of-fit tests are then used to select the best-fitted distribution. The results show that the partial duration series with the Generalized Pareto distribution and maximum likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are derived and spatial maps are constructed to characterize the distribution of extreme rainfall in Peninsular Malaysia.
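The two series constructions described above can be illustrated in a few lines. This is a hedged sketch, not the study's code: the daily "rainfall" is synthetic, and the threshold is taken as a fixed high quantile rather than the adapted-Hill/bootstrap selection used in the paper.

```python
# Illustrative sketch of the two extreme-rainfall series: annual maxima
# fitted with a GEV, and threshold excesses fitted with a GPD, both by
# maximum likelihood. Data and threshold choice are illustrative only.
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(0)
daily = rng.gamma(shape=0.5, scale=8.0, size=30 * 365)   # 30 "years" of daily rain
years = np.repeat(np.arange(30), 365)

# Annual Maximum series -> GEV
am = np.array([daily[years == y].max() for y in range(30)])
c_gev, loc_gev, scale_gev = genextreme.fit(am)

# Partial Duration series -> GPD on excesses over a high threshold
u = np.quantile(daily, 0.99)                 # stand-in for the optimal threshold
excess = daily[daily > u] - u
c_gpd, _, scale_gpd = genpareto.fit(excess, floc=0.0)
print(len(excess), "exceedances above u =", round(u, 1))
```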
Extreme value modelling of Ghana stock exchange index.
Nortey, Ezekiel N N; Asare, Kwabena; Mettle, Felix Okoe
2015-01-01
Modelling of extreme events has long been of interest in fields such as hydrology and meteorology. After the recent global financial crises, however, appropriate models for the rare events leading to such crises have become essential in finance and risk management. This paper models the extreme values of the Ghana Stock Exchange All-Share Index (2000-2010) by applying extreme value theory (EVT) to fit a model to the tails of the daily stock return data. A conditional approach to EVT was preferred, so an ARMA-GARCH model was first fitted to the data to correct for autocorrelation and conditional heteroscedasticity in the return series before the EVT method was applied. The Peaks Over Threshold approach of EVT, which fits a Generalized Pareto Distribution (GPD) to excesses above a selected threshold, was employed. Maximum likelihood estimates of the model parameters were obtained, and the model's goodness of fit was assessed graphically using Q-Q, P-P and density plots. The findings indicate that the GPD provides an adequate fit to the excess data. The size of extreme daily Ghanaian stock market movements was then computed using the Value-at-Risk and Expected Shortfall risk measures at high quantiles, based on the fitted GPD model.
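The POT risk measures mentioned above follow in closed form once the GPD is fitted; the standard (McNeil-Frey) formulas give VaR and Expected Shortfall at a high quantile. The sketch below uses synthetic heavy-tailed "losses" rather than the Ghanaian returns, and skips the ARMA-GARCH pre-filtering step.

```python
# Sketch: fit a GPD to losses above a threshold, then read off
# Value-at-Risk and Expected Shortfall at the 99% level.
import numpy as np
from scipy.stats import genpareto, t as student_t

rng = np.random.default_rng(1)
losses = student_t.rvs(df=4, size=5000, random_state=rng)  # heavy-tailed losses
u = np.quantile(losses, 0.95)                              # threshold (95th pct)
excess = losses[losses > u] - u
xi, _, beta = genpareto.fit(excess, floc=0.0)

n, n_u, q = len(losses), len(excess), 0.99
# VaR_q = u + (beta/xi) * [ ((1-q) n / n_u)^(-xi) - 1 ]
var_q = u + (beta / xi) * (((1 - q) * n / n_u) ** (-xi) - 1)
# ES_q = VaR_q/(1-xi) + (beta - xi*u)/(1-xi), valid for xi < 1
es_q = var_q / (1 - xi) + (beta - xi * u) / (1 - xi)
print(f"VaR_99 = {var_q:.2f}, ES_99 = {es_q:.2f}")
```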
NASA Technical Reports Server (NTRS)
Ashouri, Hamed; Sorooshian, Soroosh; Hsu, Kuo-Lin; Bosilovich, Michael G.; Lee, Jaechoul; Wehner, Michael F.; Collow, Allison
2016-01-01
This study evaluates the performance of NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) precipitation product in reproducing the trend and distribution of extreme precipitation events. Utilizing extreme value theory, time-invariant and time-variant extreme value distributions are developed to model the trends and changes in the patterns of extreme precipitation events over the contiguous United States during 1979-2010. The Climate Prediction Center (CPC) U.S. Unified gridded observation data are used as the observational dataset. The CPC analysis shows that the eastern and western parts of the United States are experiencing positive and negative trends in annual maxima, respectively. The continental-scale patterns of change found in MERRA reasonably mirror the observed patterns of change found in CPC. This was not necessarily expected, given the difficulty of constraining precipitation in reanalysis products. MERRA tends to overestimate the frequency at which the 99th percentile of precipitation is exceeded, because this threshold tends to be lower in MERRA and is therefore easier to exceed; this feature is most pronounced during the summer months. MERRA reproduces the spatial patterns of the scale and location parameters of the generalized extreme value and generalized Pareto distributions, but underestimates these parameters, particularly over the Gulf Coast states, leading to lower magnitudes of extreme precipitation events. Two issues in MERRA are identified: 1) MERRA shows a spurious negative trend in Nebraska and Kansas, most likely related to changes in the satellite observing system over time that have apparently affected the water cycle in the central United States, and 2) the patterns of positive trend over the Gulf Coast states and along the East Coast seem to be correlated with tropical cyclones in these regions. 
The analysis of the trends in the seasonal precipitation extremes indicates that the hurricane and winter seasons are contributing the most to these trend patterns in the southeastern United States. In addition, the increasing annual trend simulated by MERRA in the Gulf Coast region is due to an incorrect trend in winter precipitation extremes.
Probability distribution of extreme share returns in Malaysia
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Safari, Muhammad Aslam Mohd; Jaaman, Saiful Hafizah; Yie, Wendy Ling Shin
2014-09-01
The objective of this study is to identify a suitable probability distribution to model extreme share returns in Malaysia. To achieve this, weekly and monthly maxima of daily share returns are derived from share price data obtained from Bursa Malaysia over the period 2000 to 2012. The study starts with summary statistics of the data, which provide a clue to the likely candidates for the best-fitting distribution. Next, the suitability of six extreme value distributions, namely the Gumbel, Generalized Extreme Value (GEV), Generalized Logistic (GLO), Generalized Pareto (GPA), Lognormal (GNO) and Pearson Type III (PE3) distributions, is evaluated. The method of L-moments is used for parameter estimation. Based on several goodness-of-fit tests and the L-moment ratio diagram, the Generalized Pareto distribution and the Pearson Type III distribution are found to best represent the weekly and monthly maximum share returns in the Malaysian stock market during the studied period, respectively.
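The method of L-moments used above can be sketched concretely: sample L-moments are computed from probability-weighted moments, and (for the GEV case) Hosking's closed-form approximation maps them to parameters. The data below are synthetic Gumbel draws, a limiting case of the GEV; note that this code uses Hosking's shape convention k (k = -xi relative to the usual GEV shape).

```python
# Sketch: method of L-moments for the GEV, via probability-weighted
# moments and Hosking's (1990) closed-form estimators. Data synthetic.
import numpy as np
from math import gamma, log

def sample_lmoments(x):
    x = np.sort(x)
    n = len(x)
    j = np.arange(n)
    b0 = x.mean()
    b1 = np.sum(j * x) / (n * (n - 1))
    b2 = np.sum(j * (j - 1) * x) / (n * (n - 1) * (n - 2))
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2              # mean, L-scale, L-skewness

def gev_from_lmoments(l1, l2, t3):
    c = 2.0 / (3.0 + t3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c ** 2    # Hosking's approximation for the shape
    sigma = l2 * k / ((1 - 2.0 ** (-k)) * gamma(1 + k))
    mu = l1 - sigma * (1 - gamma(1 + k)) / k
    return mu, sigma, k

rng = np.random.default_rng(7)
x = rng.gumbel(loc=50, scale=10, size=2000)   # Gumbel = GEV with k -> 0
mu, sigma, k = gev_from_lmoments(*sample_lmoments(x))
print(mu, sigma, k)
```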
XDGMM: eXtreme Deconvolution Gaussian Mixture Modeling
NASA Astrophysics Data System (ADS)
Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.
2017-08-01
XDGMM uses Gaussian mixtures to perform density estimation of noisy, heterogeneous, and incomplete data using extreme deconvolution (XD) algorithms, and is compatible with scikit-learn machine learning methods. It implements both the astroML and Bovy et al. (2011) algorithms, and extends scikit-learn's BaseEstimator class so that cross-validation methods work. It allows the user to produce a conditioned model when the values of some parameters are known.
Climatic extremes improve predictions of spatial patterns of tree species
Zimmermann, N.E.; Yoccoz, N.G.; Edwards, T.C.; Meier, E.S.; Thuiller, W.; Guisan, Antoine; Schmatz, D.R.; Pearman, P.B.
2009-01-01
Understanding niche evolution, dynamics, and the response of species to climate change requires knowledge of the determinants of the environmental niche and species range limits. Mean values of climatic variables are often used in such analyses. In contrast, the increasing frequency of climate extremes suggests the importance of understanding their additional influence on range limits. Here, we assess how measures representing climate extremes (i.e., interannual variability in climate parameters) explain and predict spatial patterns of 11 tree species in Switzerland. We find clear, although comparably small, improvement (+20% in adjusted D2, +8% and +3% in cross-validated True Skill Statistic and area under the receiver operating characteristics curve values) in models that use measures of extremes in addition to means. The primary effect of including information on climate extremes is a correction of local overprediction and underprediction. Our results demonstrate that measures of climate extremes are important for understanding the climatic limits of tree species and assessing species niche characteristics. The inclusion of climate variability likely will improve models of species range limits under future conditions, where changes in mean climate and increased variability are expected.
Extreme Value Theory and the New Sunspot Number Series
NASA Astrophysics Data System (ADS)
Acero, F. J.; Carrasco, V. M. S.; Gallego, M. C.; García, J. A.; Vaquero, J. M.
2017-04-01
Extreme value theory was employed to study solar activity using the new sunspot number index. The block maxima approach was used at yearly (1700-2015), monthly (1749-2016), and daily (1818-2016) scales, selecting the maximum sunspot number value for each solar cycle, and the peaks-over-threshold (POT) technique was used, after a declustering process, for the daily data only. Both techniques led to negative values of the shape parameter, implying that the distribution of extreme sunspot number values has an upper bound. The return level (RL) values obtained from the POT approach were greater than those from the block maxima technique. Under the POT approach, the 110-year RL was lower, and the 550- and 1100-year RLs higher, than the maximum observed daily sunspot number of 528. Furthermore, according to the block maxima approach, the 10-cycle RL lay within the range of the daily block maxima, as expected, but strikingly the 50- and 100-cycle RLs were also within that range. The RL thus appears to be reaching a plateau and, although one must be cautious, it would be difficult to attain sunspot number values greater than 550. The extreme value trends from the four series (yearly, monthly, and daily maxima per solar cycle, and POT after declustering the daily data) were analyzed with the Mann-Kendall test and Sen's method. Only the negative trend of the daily data with the POT technique was statistically significant.
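The bounded-tail behavior described above follows directly from the GEV quantile function: with negative shape xi the distribution has a finite upper end point mu - sigma/xi, and return levels flatten toward it as the return period grows. The parameter values below are illustrative, not the paper's fitted estimates.

```python
# Sketch: GEV return levels for an upper-bounded (xi < 0) distribution.
import numpy as np

def gev_return_level(mu, sigma, xi, T):
    # z_T satisfies F(z_T) = 1 - 1/T for GEV(mu, sigma, xi), xi != 0
    y = -np.log(1.0 - 1.0 / T)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# illustrative parameters with a negative shape
mu, sigma, xi = 150.0, 80.0, -0.25
upper_bound = mu - sigma / xi           # finite end point of the distribution
for T in (10, 50, 100, 1000):
    print(T, round(gev_return_level(mu, sigma, xi, T), 1))
print("upper bound:", upper_bound)
```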
Modeling Spatial Dependence of Rainfall Extremes Across Multiple Durations
NASA Astrophysics Data System (ADS)
Le, Phuong Dong; Leonard, Michael; Westra, Seth
2018-03-01
Determining the probability of a flood event in a catchment, given that another flood has occurred in a nearby catchment, is useful in the design of infrastructure such as road networks with multiple river crossings. These conditional flood probabilities can be estimated by calculating conditional probabilities of extreme rainfall and then transforming rainfall to runoff through a hydrologic model. Because each catchment's hydrological response time is unlikely to be the same, estimating these conditional probabilities requires considering the dependence of extreme rainfall both across space and across critical storm durations. To represent these types of dependence, this study proposes a new approach for combining extreme rainfall across different durations within a spatial extreme value model using max-stable process theory. This is achieved in a stepwise manner: the first step defines a set of common parameters for the marginal distributions across multiple durations; the parameters are then spatially interpolated to develop a spatial field; finally, storm-level dependence is represented through the max-stable process for rainfall extremes across different durations. The dependence model shows a reasonable fit between the observed pairwise extremal coefficients and the theoretical pairwise extremal coefficient function across all durations. The study demonstrates how the approach can be applied to develop conditional maps of the return period and return level across different durations.
Ensemble-based evaluation of extreme water levels for the eastern Baltic Sea
NASA Astrophysics Data System (ADS)
Eelsalu, Maris; Soomere, Tarmo
2016-04-01
The risks and damages associated with coastal flooding, which naturally grow with the magnitude of extreme storm surges, are among the largest concerns of countries with extensive low-lying nearshore areas. The relevant risks are even more pronounced for semi-enclosed water bodies such as the Baltic Sea, where subtidal (weekly-scale) variations in the water volume of the sea substantially contribute to the water level and lead to a large spread in projections of future extreme water levels. We explore the options for using large ensembles of projections to evaluate return periods of extreme water levels more reliably. Single projections of the ensemble are constructed by fitting several sets of block maxima with various extreme value distributions. The ensemble is based on two simulated data sets produced at the Swedish Meteorological and Hydrological Institute: a hindcast by the Rossby Centre Ocean model sampled at a resolution of 6 h, and a similar hindcast by the circulation model NEMO at a resolution of 1 h. As the annual maxima of water levels in the Baltic Sea are not always uncorrelated, we employ maxima both for calendar years and for stormy seasons. Because the shape parameter of the Generalised Extreme Value distribution changes sign and varies substantially in magnitude along the eastern coast of the Baltic Sea, the use of a single distribution for the entire coast is inappropriate. The ensemble therefore involves projections based on the Generalised Extreme Value, Gumbel and Weibull distributions, with parameters evaluated in three different ways: by the maximum likelihood method and by the method of moments based on both biased and unbiased estimates. The total number of projections in the ensemble is 40. As some of the resulting estimates contain limited additional information, the members of pairs of highly correlated projections are assigned weights of 0.6. 
A comparison of the ensemble-based projections of extreme water levels and their return periods with similar estimates derived from local observations reveals an interesting pattern of match and mismatch. The match is almost perfect at measurement sites where local effects (e.g., wave-induced set-up, or local surge in very shallow areas that are not resolved by circulation models) do not contribute to the observed water levels. There is, however, substantial mismatch between projected and observed extreme values for most of the Estonian coast. The mismatch is largest for coastal sections that are open to high waves and for several bays that cut deeply into the mainland but are open to the predominant strong wind directions. Detailed quantification of this mismatch eventually makes it possible to develop substantially improved estimates of extreme water levels in sections where local effects contribute considerably to the total water level.
Trends in hydrological extremes in the Senegal and the Niger Rivers
NASA Astrophysics Data System (ADS)
Wilcox, C.; Bodian, A.; Vischel, T.; Panthou, G.; Quantin, G.
2017-12-01
In recent years, West Africa has witnessed several floods of unprecedented magnitude. Although the evolution of hydrological extremes in the region has been evaluated to some extent, existing results lack regional coverage, significance levels, uncertainty estimates, model selection criteria, or a combination of the above. In this study, Generalized Extreme Value (GEV) distributions with and without various nonstationary temporal covariates are applied to annual maxima of daily discharge (AMAX) data sets in the Sudano-Guinean part of the Senegal River basin and the Sahelian part of the Niger River basin, with data ranging from the 1950s to the 2010s. The two best-fitting models most often selected (at the alpha = 0.05 significance level) were 1) a double-linear model for the central-tendency parameter (μ) with stationary dispersion (σ), and 2) a double-linear model for both parameters. Change points are relatively consistent for the Senegal basin, with stations switching from a decreasing to an increasing streamflow trend in the early 1980s. In the Niger basin the trend in μ was generally positive, with an increase in slope after the change point, but the change-point location was less consistent. The study clearly demonstrates significant trends in extreme discharge values in West Africa over the past six decades. Moreover, it proposes a clear methodology for comparing GEV models and selecting the best one for use. The return levels generated from the chosen models can be applied to river basin management and the sizing of hydraulic works. The results provide a first evaluation of nonstationarity in extreme hydrological values in West Africa accompanied by significance levels, uncertainties, and nonstationary return level estimations.
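Model selection between nested stationary and nonstationary GEV fits, as described above, is commonly done with a likelihood-ratio (deviance) test. The sketch below compares a stationary GEV against one with a simple linear trend in μ (the study's double-linear models add change points, which are omitted here); the AMAX series is synthetic.

```python
# Sketch: likelihood-ratio test between a stationary GEV and a GEV with
# a linear trend in the location parameter, at alpha = 0.05. Data synthetic.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme, chi2

rng = np.random.default_rng(3)
t = np.arange(60, dtype=float)
amax = genextreme.rvs(c=0.1, loc=200 + 1.5 * t, scale=40, random_state=rng)

def nll(params, trend):
    if trend:
        mu0, mu1, log_sig, xi = params
        mu = mu0 + mu1 * t
    else:
        mu0, log_sig, xi = params
        mu = mu0
    return -genextreme.logpdf(amax, c=-xi, loc=mu, scale=np.exp(log_sig)).sum()

m0 = minimize(nll, [amax.mean(), np.log(amax.std()), 0.0], args=(False,),
              method="Nelder-Mead", options={"maxiter": 4000})
m1 = minimize(nll, [amax.mean(), 0.0, np.log(amax.std()), 0.0], args=(True,),
              method="Nelder-Mead", options={"maxiter": 4000})
lr = 2.0 * (m0.fun - m1.fun)            # deviance; ~ chi2(1) under H0
p_value = chi2.sf(lr, df=1)
print(f"deviance = {lr:.1f}, p = {p_value:.2g}")
```

A deviance above the chi-squared critical value (3.84 for one extra parameter at alpha = 0.05) favors the nonstationary model.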
Lateral position detection and control for friction stir systems
Fleming, Paul; Lammlein, David H.; Cook, George E.; Wilkes, Don Mitchell; Strauss, Alvin M.; Delapp, David R.; Hartman, Daniel A.
2012-06-05
An apparatus and computer program are disclosed for processing at least one workpiece using a rotary tool with rotating member for contacting and processing the workpiece. The methods include oscillating the rotary tool laterally with respect to a selected propagation path for the rotating member with respect to the workpiece to define an oscillation path for the rotating member. The methods further include obtaining force signals or parameters related to the force experienced by the rotary tool at least while the rotating member is disposed at the extremes of the oscillation. The force signals or parameters associated with the extremes can then be analyzed to determine a lateral position of the selected path with respect to a target path and a lateral offset value can be determined based on the lateral position. The lateral distance between the selected path and the target path can be decreased based on the lateral offset value.
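The control idea in the claims above can be sketched in a few lines. This is a hypothetical illustration, not the patented implementation: it assumes the force asymmetry between the two oscillation extremes grows with the lateral error, and uses an illustrative proportional gain and force model.

```python
# Hypothetical sketch: compare force signals at the two lateral extremes
# of the tool oscillation; the asymmetry indicates the offset of the
# propagation path from the target (seam) line, and a proportional
# correction reduces it. Gains and force model are illustrative only.

def lateral_offset(force_left, force_right, gain=0.005):
    """Estimate a lateral correction (mm) from forces (N) sampled at the
    two oscillation extremes; zero asymmetry means the tool is centered."""
    return gain * (force_right - force_left)

path = 2.0                     # current lateral error w.r.t. the seam, mm
for _ in range(20):
    # illustrative assumption: asymmetry proportional to lateral error
    f_left, f_right = 1000.0 - 50.0 * path, 1000.0 + 50.0 * path
    path -= lateral_offset(f_left, f_right)
print(round(path, 6))          # error shrinks toward zero
```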
Lateral position detection and control for friction stir systems
Fleming, Paul [Boulder, CO; Lammlein, David H [Houston, TX; Cook, George E [Brentwood, TN; Wilkes, Don Mitchell [Nashville, TN; Strauss, Alvin M [Nashville, TN; Delapp, David R [Ashland City, TN; Hartman, Daniel A [Fairhope, AL
2011-11-08
Friction stir methods are disclosed for processing at least one workpiece using a rotary tool with rotating member for contacting and processing the workpiece. The methods include oscillating the rotary tool laterally with respect to a selected propagation path for the rotating member with respect to the workpiece to define an oscillation path for the rotating member. The methods further include obtaining force signals or parameters related to the force experienced by the rotary tool at least while the rotating member is disposed at the extremes of the oscillation. The force signals or parameters associated with the extremes can then be analyzed to determine a lateral position of the selected path with respect to a target path and a lateral offset value can be determined based on the lateral position. The lateral distance between the selected path and the target path can be decreased based on the lateral offset value.
NASA Astrophysics Data System (ADS)
Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.
2017-06-01
We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms, with functionality not available in other XD tools. It allows the user to select between the astroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model on the known values of a subset of parameters, producing a tool that can predict unknown parameters from a model conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
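The conditioning operation described above rests on a standard identity: for each Gaussian component, the distribution of the unknown dimensions given the known ones follows the multivariate-normal conditioning formulas (a full mixture also reweights its components). The sketch below shows the single-Gaussian core of that operation; it is an illustration, not the XDGMM API.

```python
# Sketch: condition a multivariate Gaussian on known values of a subset
# of its dimensions, mu_b|a = mu_b + S_ba S_aa^-1 (x_a - mu_a),
# S_bb|a = S_bb - S_ba S_aa^-1 S_ab.
import numpy as np

def condition_gaussian(mu, cov, known_idx, known_val):
    """Return mean/cov of the remaining dimensions given the known ones."""
    a = np.isin(np.arange(len(mu)), known_idx)   # known dims (boolean mask)
    b = ~a                                       # unknown dims
    S_aa, S_ba = cov[np.ix_(a, a)], cov[np.ix_(b, a)]
    S_bb, S_ab = cov[np.ix_(b, b)], cov[np.ix_(a, b)]
    w = np.linalg.solve(S_aa, known_val - mu[a])
    mu_cond = mu[b] + S_ba @ w
    cov_cond = S_bb - S_ba @ np.linalg.solve(S_aa, S_ab)
    return mu_cond, cov_cond

mu = np.array([1.0, 2.0])
cov = np.array([[2.0, 1.2], [1.2, 1.5]])
m, c = condition_gaussian(mu, cov, known_idx=[0], known_val=np.array([2.0]))
print(m, c)   # mean 2 + (1.2/2)*(2-1) = 2.6, variance 1.5 - 1.2^2/2 = 0.78
```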
Extreme sea storm in the Mediterranean Sea. Trends during the 2nd half of the 20th century.
NASA Astrophysics Data System (ADS)
Pino, C.; Lionello, P.; Galati, M. B.
2009-04-01
The analysis of extreme Significant Wave Height (SWH) values and their trend is crucial for planning and managing coastal defences and off-shore activities. This study covers a 44-year period (1958-2001). First, the WW3 (Wave Watch 3) model, forced with wind fields from the REMO-Hipocas regional model, was used to hindcast extreme SWH values over the Mediterranean basin at 0.25 deg lat-lon resolution. The model results were then processed with ad hoc software to detect storms. A GEV analysis was performed and a set of indicators for extreme SWH was computed, using the Mann-Kendall test to assess the statistical significance of trends in parameters such as the number of extreme events, their duration and their intensity. Results suggest a transition towards weaker extremes and a milder climate over most of the Mediterranean Sea.
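The Mann-Kendall test used above is straightforward to implement: the S statistic counts concordant minus discordant pairs in time, and a normal approximation gives the significance. The sketch below uses the no-ties variance formula and a synthetic declining series of yearly storm counts.

```python
# Sketch: Mann-Kendall trend test (no-ties variance, normal approximation).
import numpy as np
from math import erf, sqrt

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[i + 1:] - x[i]).sum() for i in range(n - 1))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0          # variance, no ties
    z = (s - np.sign(s)) / sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))   # two-sided
    return s, z, p

rng = np.random.default_rng(5)
declining = 10.0 - 0.1 * np.arange(44) + rng.normal(0, 0.5, 44)  # 44 "years"
s, z, p = mann_kendall(declining)
print(f"S = {s:.0f}, z = {z:.2f}, p = {p:.3g}")
```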
An Extreme-Value Approach to Anomaly Vulnerability Identification
NASA Technical Reports Server (NTRS)
Everett, Chris; Maggio, Gaspare; Groen, Frank
2010-01-01
The objective of this paper is to present a method for importance analysis in parametric probabilistic modeling where the result of interest is the identification of potential engineering vulnerabilities associated with postulated anomalies in system behavior. In the context of Accident Precursor Analysis (APA), under which this method has been developed, these vulnerabilities, designated as anomaly vulnerabilities, are conditions that produce high risk in the presence of anomalous system behavior. The method defines a parameter-specific Parameter Vulnerability Importance measure (PVI), which identifies anomaly risk-model parameter values that indicate the potential presence of anomaly vulnerabilities, and allows them to be prioritized for further investigation. This entails analyzing each uncertain risk-model parameter over its credible range of values to determine where it produces the maximum risk. A parameter that produces high system risk for a particular range of values suggests that the system is vulnerable to the modeled anomalous conditions, if indeed the true parameter value lies in that range. Thus, PVI analysis provides a means of identifying and prioritizing anomaly-related engineering issues that at the very least warrant improved understanding to reduce uncertainty, such that true vulnerabilities may be identified and proper corrective actions taken.
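The parameter scan implied by the PVI measure above can be sketched as follows. The risk model, parameter names, and credible ranges below are hypothetical stand-ins, not taken from the paper; the point is only the mechanic of sweeping each uncertain parameter over its range and ranking parameters by the maximum risk they can produce.

```python
# Hypothetical sketch of a PVI-style scan: sweep each uncertain parameter
# over its credible range (others at nominal values), record where risk
# peaks, and rank parameters by that peak risk.
import numpy as np

def risk_model(leak_rate, detection_prob):
    # illustrative stand-in risk model: worse with leaks, better with detection
    return leak_rate * (1.0 - detection_prob)

ranges = {
    "leak_rate": np.linspace(0.001, 0.1, 100),        # credible range
    "detection_prob": np.linspace(0.5, 0.999, 100),
}
nominal = {"leak_rate": 0.01, "detection_prob": 0.9}

pvi = {}
for name, grid in ranges.items():
    risks = [risk_model(**{**nominal, name: v}) for v in grid]
    i = int(np.argmax(risks))
    pvi[name] = (grid[i], risks[i])     # parameter value producing max risk

ranked = sorted(pvi.items(), key=lambda kv: kv[1][1], reverse=True)
print(ranked[0][0])                     # parameter with the largest peak risk
```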
NASA Astrophysics Data System (ADS)
Rychlik, Igor; Mao, Wengang
2018-02-01
The wind speed variability in the North Atlantic has been successfully modelled using a spatio-temporal transformed Gaussian field. However, this type of model does not correctly describe the extreme wind speeds attributed to tropical storms and hurricanes. In this study, the transformed Gaussian model is further developed to include the occurrence of severe storms. In this new model, random components are added to the transformed Gaussian field to model rare events with extreme wind speeds. The resulting random field is locally stationary and homogeneous. The localized dependence structure is described by time- and space-dependent parameters. The parameters have a natural physical interpretation. To exemplify its application, the model is fitted to the ECMWF ERA-Interim reanalysis data set. The model is applied to compute long-term wind speed distributions and return values, e.g., 100- or 1000-year extreme wind speeds, and to simulate random wind speed time series at a fixed location or spatio-temporal wind fields around that location.
Statistical distributions of extreme dry spell in Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Jemain, Abdul Aziz
2010-11-01
Statistical distributions of annual extreme (AE) series and partial duration (PD) series of dry-spell events are analyzed for a database of daily rainfall records from 50 rain-gauge stations in Peninsular Malaysia, with recording periods extending from 1975 to 2004. The three-parameter generalized extreme value (GEV) and generalized Pareto (GP) distributions are considered to model both series. In both cases, the parameters of the two distributions are fitted by the L-moments method, which provides robust estimates. The goodness of fit between the empirical data and the theoretical distributions is then evaluated by means of the L-moment ratio diagram and several goodness-of-fit tests for each of the 50 stations. For the majority of stations, the AE and PD series are well fitted by the GEV and GP models, respectively. Based on the identified models, the risks associated with extreme dry spells can reasonably be predicted for various return periods.
Neighboring extremals of dynamic optimization problems with path equality constraints
NASA Technical Reports Server (NTRS)
Lee, A. Y.
1988-01-01
Neighboring extremals of dynamic optimization problems with path equality constraints and with an unknown parameter vector are considered in this paper. With some simplifications, the problem is reduced to solving a linear, time-varying two-point boundary-value problem with integral path equality constraints. A modified backward sweep method is used to solve this problem. Two example problems are solved to illustrate the validity and usefulness of the solution technique.
NASA Astrophysics Data System (ADS)
Tomas, A.; Menendez, M.; Mendez, F. J.; Coco, G.; Losada, I. J.
2012-04-01
In the last decades, freak or rogue waves have become an important topic in engineering and science. Forecasting the occurrence probability of freak waves is a challenge for oceanographers, engineers, physicists and statisticians. There are several mechanisms responsible for the formation of freak waves, and different theoretical formulations (primarily based on numerical models with simplifying assumptions) have been proposed to predict the occurrence probability of freak waves in a sea state as a function of N (number of individual waves) and kurtosis (k). On the other hand, different parameterizations of k as a function of spectral parameters, such as the Benjamin-Feir Index (BFI) and the directional spreading (Mori et al., 2011), have been proposed. The objective of this work is twofold: (1) develop a statistical model to describe the uncertainty of the maximum individual wave height, Hmax, considering N and k as covariates; (2) obtain a predictive formulation to estimate k as a function of aggregated sea state spectral parameters. For both purposes, we use free surface measurements (more than 300,000 20-minute sea states) from the Spanish deep water buoy network (Puertos del Estado, Spanish Ministry of Public Works). Non-stationary extreme value models are nowadays widely used to analyze the time-dependent or directional-dependent behavior of extreme values of geophysical variables such as significant wave height (Izaguirre et al., 2010). In this work, a Generalized Extreme Value (GEV) statistical model for the dimensionless maximum wave height (x=Hmax/Hs) in every sea state is used to assess the probability of freak waves. We allow the location, scale and shape parameters of the GEV distribution to vary as a function of k and N. The kurtosis dependency is parameterized using third-order polynomials and the model is fitted using standard log-likelihood theory, yielding very good performance in predicting the occurrence probability of freak waves (x>2). 
Regarding the second objective of this work, we apply different algorithms using three spectral parameters (wave steepness, directional dispersion, frequency dispersion) as predictors to estimate the probability density function of the kurtosis for a given sea state. ACKNOWLEDGMENTS: The authors thank Puertos del Estado (Spanish Ministry of Public Works) for providing the free surface measurement database.
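A covariate-dependent extreme value fit of the kind described above can be sketched in simplified form. The snippet below fits a Gumbel model (the zero-shape limit of the GEV) whose location varies linearly with a single covariate, by direct likelihood minimization; the actual study instead uses third-order polynomials in kurtosis for all three GEV parameters, and the data here are synthetic:

```python
import numpy as np
from scipy.optimize import minimize

def fit_ns_gumbel(x, cov):
    """Maximum likelihood fit of a Gumbel model whose location is
    mu = b0 + b1 * cov, with constant scale (fitted on a log scale
    so it stays positive). Returns (b0, b1, scale)."""
    def nll(p):
        b0, b1, log_s = p
        s = np.exp(log_s)
        z = (x - (b0 + b1 * cov)) / s
        return np.sum(np.log(s) + z + np.exp(-z))   # negative log-likelihood
    res = minimize(nll, [x.mean(), 0.0, np.log(x.std())],
                   method="Nelder-Mead")
    b0, b1, log_s = res.x
    return b0, b1, np.exp(log_s)
```

A likelihood-ratio test between this model and the b1 = 0 (stationary) fit is the standard way to decide whether the covariate dependence is warranted.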
Boer, H M T; Butler, S T; Stötzel, C; Te Pas, M F W; Veerkamp, R F; Woelders, H
2017-11-01
A recently developed mechanistic mathematical model of the bovine estrous cycle was parameterized to fit empirical data sets collected during one estrous cycle of 31 individual cows, with the main objective of further validating the model. The a priori criteria for validation were (1) the resulting model can simulate the measured data correctly (i.e., goodness of fit), and (2) this is achieved without needing extreme, probably non-physiological parameter values. We used a least squares optimization procedure to identify parameter configurations for the mathematical model to fit the empirical in vivo measurements of follicle and corpus luteum sizes, and the plasma concentrations of progesterone, estradiol, FSH and LH for each cow. The model was capable of accommodating normal variation in estrous cycle characteristics of individual cows. With the parameter sets estimated for the individual cows, the model behavior changed for 21 cows, with improved fit of the simulated output curves for 18 of these 21 cows. Moreover, the number of follicular waves was predicted correctly for 18 of the 25 two-wave and three-wave cows, without extreme parameter value changes. Estimation of specific parameters confirmed results of previous model simulations indicating that parameters involved in luteolytic signaling are very important for regulation of general estrous cycle characteristics, and are likely responsible for differences in estrous cycle characteristics between cows.
CAN A NANOFLARE MODEL OF EXTREME-ULTRAVIOLET IRRADIANCES DESCRIBE THE HEATING OF THE SOLAR CORONA?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tajfirouze, E.; Safari, H.
2012-01-10
Nanoflares, the basic units of impulsive energy release, may produce much of the solar background emission. Extrapolation of the energy frequency distribution of observed microflares, which follows a power law, to lower energies can give an estimate of the importance of nanoflares for heating the solar corona. If the power-law index is greater than 2, then the nanoflare contribution is dominant. We model a time series of extreme-ultraviolet emission radiance as random flares with a power-law exponent of the flare event distribution. The model is based on three key parameters: the flare rate, the flare duration, and the power-law exponent of the flare intensity frequency distribution. We use this model to simulate emission line radiance detected at 171 Å, observed by the Solar Terrestrial Relations Observatory/Extreme-Ultraviolet Imager and the Solar Dynamics Observatory/Atmospheric Imaging Assembly. The observed light curves are matched with simulated light curves using an artificial neural network, and the parameter values are determined across the active region, quiet Sun, and coronal hole. The damping rate of nanoflares is compared with the radiative-losses cooling time. The effects of background emission, data cadence, and network sensitivity on the key parameters of the model are studied. Most of the observed light curves have a power-law exponent, α, greater than the critical value 2. At these sites, nanoflare heating could be significant.
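A toy version of such a flare-superposition model can be sketched as follows; the rate, decay time and power-law exponent below are illustrative values, not the fitted parameters of the study:

```python
import numpy as np

def powerlaw_sample(rng, n, alpha, xmin=1.0):
    """Inverse-transform draws from p(E) ~ E^-alpha for E >= xmin (alpha > 1)."""
    return xmin * (1.0 - rng.uniform(size=n)) ** (-1.0 / (alpha - 1.0))

def flare_light_curve(rng, t_max=1000, rate=0.1, tau=5.0, alpha=2.4):
    """Superpose randomly timed flares with power-law peak intensities
    and exponential decay time tau; returns the summed radiance series."""
    t = np.arange(t_max, dtype=float)
    n_flares = rng.poisson(rate * t_max)
    t0 = rng.uniform(0.0, t_max, n_flares)     # flare onset times
    peaks = powerlaw_sample(rng, n_flares, alpha)
    lc = np.zeros(t_max)
    for s, p in zip(t0, peaks):
        mask = t >= s
        lc[mask] += p * np.exp(-(t[mask] - s) / tau)
    return lc
```

Light curves generated this way, over a grid of (rate, tau, alpha), are the kind of training set one would feed to a neural network matcher as described in the abstract.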
GPS FOM Chimney Analysis using Generalized Extreme Value Distribution
NASA Technical Reports Server (NTRS)
Ott, Rick; Frisbee, Joe; Saha, Kanan
2004-01-01
Often an objective of a statistical analysis is to estimate a limit value, such as a 3-sigma 95% confidence upper limit, from a data sample. The Generalized Extreme Value (GEV) distribution method can be profitably employed in many such situations. It is well known that, according to the Central Limit Theorem, the mean value of a large data set is normally distributed irrespective of the distribution of the data from which the mean is derived. In a somewhat similar fashion, the extreme value of a data set often has a distribution that can be formulated as a generalized extreme value distribution. In Space Shuttle entry with 3-string GPS navigation, the Figure Of Merit (FOM) value gives a measure of GPS navigated state accuracy. A GPS navigated state with a FOM of 6 or higher is deemed unacceptable and is said to form a FOM chimney: a period of time during which the FOM value stays above 5. A longer period with FOM values of 6 or higher causes the navigated state to accumulate more error for lack of state updates. For an acceptable landing it is imperative that the state error remain low; hence, at low altitude during entry, GPS data with FOM greater than 5 must not last more than 138 seconds. To test GPS performance, many entry test cases were simulated at the Avionics Development Laboratory. Only high-value FOM chimneys are consequential. The extreme value statistical technique is applied to analyze high-value FOM chimneys. The maximum likelihood method is used to determine the parameters that characterize the GEV distribution, and the limit value statistics are then estimated.
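The GEV limit-value estimation described here can be sketched with `scipy`; the chimney-duration data below are synthetic stand-ins, not Shuttle simulation output:

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic stand-in for per-run maxima of FOM-6 chimney durations (seconds):
rng = np.random.default_rng(42)
maxima = rng.gumbel(loc=60.0, scale=15.0, size=200)

# Maximum likelihood fit of the GEV, then the 95% limit value:
c, loc, scale = genextreme.fit(maxima)
limit_95 = genextreme.ppf(0.95, c, loc=loc, scale=scale)
```

The fitted `limit_95` is the duration that the worst chimney in a run is expected to stay under in 95% of runs, which is the kind of limit statistic the abstract describes.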
Empirical Bayes estimation of proportions with application to cowbird parasitism rates
Link, W.A.; Hahn, D.C.
1996-01-01
Bayesian models provide a structure for studying collections of parameters such as are considered in the investigation of communities, ecosystems, and landscapes. This structure allows for improved estimation of individual parameters, by considering them in the context of a group of related parameters. Individual estimates are differentially adjusted toward an overall mean, with the magnitude of their adjustment based on their precision. Consequently, Bayesian estimation allows for a more credible identification of extreme values in a collection of estimates. Bayesian models regard individual parameters as values sampled from a specified probability distribution, called a prior. The requirement that the prior be known is often regarded as an unattractive feature of Bayesian analysis and may be the reason why Bayesian analyses are not frequently applied in ecological studies. Empirical Bayes methods provide an alternative approach that incorporates the structural advantages of Bayesian models while requiring a less stringent specification of prior knowledge. Rather than requiring that the prior distribution be known, empirical Bayes methods require only that it be in a certain family of distributions, indexed by hyperparameters that can be estimated from the available data. This structure is of interest per se, in addition to its value in allowing for improved estimation of individual parameters; for example, hypotheses regarding the existence of distinct subgroups in a collection of parameters can be considered under the empirical Bayes framework by allowing the hyperparameters to vary among subgroups. Though empirical Bayes methods have been applied in a variety of contexts, they have received little attention in the ecological literature. We describe the empirical Bayes approach in application to estimation of proportions, using data obtained in a community-wide study of cowbird parasitism rates for illustration. 
Since observed proportions based on small sample sizes are heavily adjusted toward the mean, extreme values among empirical Bayes estimates identify those species for which there is the greatest evidence of extreme parasitism rates. Applying a subgroup analysis to our data on cowbird parasitism rates, we conclude that parasitism rates for Neotropical Migrants as a group are no greater than those of Resident/Short-distance Migrant species in this forest community. Our data and analyses demonstrate that the parasitism rates for certain Neotropical Migrant species are remarkably low (Wood Thrush and Rose-breasted Grosbeak) while those for others are remarkably high (Ovenbird and Red-eyed Vireo).
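A minimal beta-binomial version of the empirical Bayes shrinkage described above can be sketched as follows; the moment-based hyperparameter estimate is deliberately crude (it ignores binomial sampling noise) and the counts are illustrative, not the cowbird data:

```python
import numpy as np

def eb_shrink(y, n):
    """Empirical Bayes estimates of proportions under a Beta(a, b) prior.
    Hyperparameters come from a rough method of moments on the observed
    proportions; the posterior means shrink small-sample rates toward
    the overall mean, more strongly the smaller the sample."""
    p = y / n
    m, v = p.mean(), p.var()
    common = m * (1.0 - m) / v - 1.0   # crude Beta moment matching
    a, b = m * common, (1.0 - m) * common
    return (y + a) / (n + a + b), a, b
```

Because the posterior mean (y + a) / (n + a + b) weights the prior by a + b and the data by n, species observed on few nests are pulled hard toward the community mean, exactly the behavior the abstract uses to flag genuinely extreme parasitism rates.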
Comparison of Observed Surface Temperatures of 4 Vesta to the KRC Thermal Model
NASA Technical Reports Server (NTRS)
Titus, T. N.; Becker, K. J.; Anderson, J. A.; Capria, M. T.; Tosi, F.; DeSanctis, M. C.; Palomba, E.; Grassi, D.; Capaccioni, F.; Ammannito, E.;
2012-01-01
In this work, we will compare observed temperatures of the surface of Vesta using data acquired by the Dawn [1] Visible and Infrared Mapping Spectrometer (VIR-MS) [2] during the approach phase to model results from the KRC thermal model. High thermal inertia materials, such as bedrock, resist changes in temperature, while temperatures of low thermal inertia materials, such as dust, respond quickly to changes in solar insolation. The surface of Vesta is expected to have low to medium thermal inertia values, with the most commonly used value being extremely low at 15 TIU [4]. There are several parameters that affect observed temperatures in addition to thermal inertia: bond albedo, slope, and surface roughness. In addition to these parameters, real surfaces are rarely uniform monoliths that can be described by a single thermal inertia value. Real surfaces are often vertically layered or are mixtures of dust and rock. For Vesta's surface, with temperature extremes ranging from 50 K to 275 K and no atmosphere, even a uniform monolithic surface may have non-uniform thermal inertia due to temperature-dependent thermal conductivity.
NASA Astrophysics Data System (ADS)
Dhakal, N.; Jain, S.
2013-12-01
Rare and unusually large events (such as hurricanes and floods) can create unusual and interesting features in statistics. The Generalized Extreme Value (GEV) distribution is usually used to statistically describe extreme rainfall events. A number of recent studies have shown that the frequency of extreme rainfall events has increased over the last century and that, as a result, the parameters of the GEV distribution have changed over time (non-stationarity). But what impact does a single unusually large rainfall event (e.g., Hurricane Irene) have on the GEV parameters, and consequently on the level of risk, or the return periods, used in designing civil infrastructure? In other words, if such a large event occurs today, how will it influence the level of risk (estimated from past rainfall records) for civil infrastructure? To answer these questions, we performed a sensitivity analysis of the GEV distribution parameters, as well as the return periods, to unusually large outlier events. Long-term precipitation records over the period 1981-2010 from 12 USHCN stations across the state of Maine were used for the analysis. For most of the stations, the addition of each outlier event caused an increase in the shape parameter with a large decrease in the corresponding return period. This is a key consideration for time-varying engineering design. These isolated extreme weather events should be considered alongside traditional statistical methodology for extreme events when designing civil infrastructure (such as dams, bridges, and culverts). Such analysis is also useful in understanding the statistical uncertainty of projecting extreme events into the future.
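The sensitivity experiment can be sketched as follows: fit a GEV to a synthetic 30-year record, append one outlier, and compare return levels. The record and the outlier magnitude are illustrative stand-ins, not the Maine station data:

```python
import numpy as np
from scipy.stats import genextreme

def return_level(params, T):
    """T-year return level: the (1 - 1/T) quantile of a fitted GEV."""
    c, loc, scale = params
    return genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)

rng = np.random.default_rng(1)
annual_max = rng.gumbel(loc=50.0, scale=10.0, size=30)  # stand-in record (mm)
with_outlier = np.append(annual_max, 160.0)             # one hurricane-like event

base = genextreme.fit(annual_max)
pert = genextreme.fit(with_outlier)
# The outlier typically drags the fitted shape toward a heavier tail,
# raising the 100-year level, i.e. shortening the return period
# associated with a fixed rainfall depth.
```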
NASA Astrophysics Data System (ADS)
Arnaud, Patrick; Cantet, Philippe; Odry, Jean
2017-11-01
Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist, ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case of the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model; here we attempt to estimate the uncertainties due to the estimation of the seven parameters needed to compute flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration. 
Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with the use of a statistical law with two parameters (here generalised extreme value Type I distribution) and clearly lower than those associated with the use of a three-parameter law (here generalised extreme value Type II distribution). For extreme flood quantiles, the uncertainties are mostly due to the rainfall generator because of the progressive saturation of the hydrological model.
NASA Astrophysics Data System (ADS)
Alexandre, E.; Cuadra, L.; Nieto-Borge, J. C.; Candil-García, G.; del Pino, M.; Salcedo-Sanz, S.
2015-08-01
Wave parameters computed from time series measured by buoys (significant wave height Hs, mean wave period, etc.) play a key role in coastal engineering and in the design and operation of wave energy converters. Storms or navigation accidents can make measuring buoys break down, leading to gaps of missing data. In this paper we tackle the problem of locally reconstructing Hs at out-of-operation buoys by using wave parameters from nearby buoys, based on the spatial correlation among values at neighboring buoy locations. The novelty of our approach for its potential application to problems in coastal engineering is twofold. On one hand, we propose a genetic algorithm hybridized with an extreme learning machine that selects, among the available wave parameters from the nearby buoys, a subset FnSP with nSP parameters that minimizes the Hs reconstruction error. On the other hand, we evaluate to what extent the selected parameters in subset FnSP are good enough to assist other machine learning (ML) regressors (extreme learning machines, support vector machines and Gaussian process regression) in reconstructing Hs. The results show that all the ML methods explored achieve a good Hs reconstruction in the two different locations studied (Caribbean Sea and West Atlantic).
Alves, Gelio; Yu, Yi-Kuo
2016-09-01
There is a growing trend for biomedical researchers to extract evidence and draw conclusions from mass spectrometry-based proteomics experiments, the cornerstone of which is peptide identification. Inaccurate assignments of peptide identification confidence thus may have far-reaching and adverse consequences. Although some peptide identification methods report accurate statistics, they have been limited to certain types of scoring function. The extreme value statistics based method, while more general in the scoring functions it allows, demands accurate parameter estimates and requires, at least in its original design, excessive computational resources. Improving the parameter estimate accuracy and reducing the computational cost of this method has two advantages: it provides another feasible route to accurate significance assessment, and it could provide reliable statistics for scoring functions yet to be developed. We have formulated and implemented an efficient algorithm for calculating the extreme value statistics for peptide identification applicable to various scoring functions, bypassing the need to search large random databases. The source code, implemented in C++ on a Linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Contact: yyu@ncbi.nlm.nih.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
Modeling Source Water Threshold Exceedances with Extreme Value Theory
NASA Astrophysics Data System (ADS)
Rajagopalan, B.; Samson, C.; Summers, R. S.
2016-12-01
Variability in surface water quality, influenced by seasonal and long-term climate changes, can impact drinking water quality and treatment. In particular, temperature and precipitation can impact surface water quality directly or through their influence on streamflow and dilution capacity. Furthermore, they also impact land surface factors, such as soil moisture and vegetation, which can in turn affect surface water quality, in particular levels of organic matter in surface waters, which are of concern. All of these will be exacerbated by anthropogenic climate change. While some source water quality parameters, particularly Total Organic Carbon (TOC) and bromide concentrations, are not directly regulated for drinking water, these parameters are precursors to the formation of disinfection byproducts (DBPs), which are regulated in drinking water distribution systems. These DBPs form when a disinfectant, added to the water to protect public health against microbial pathogens, most commonly chlorine, reacts with dissolved organic matter (DOM), measured as TOC or dissolved organic carbon (DOC), and inorganic precursor materials, such as bromide. Therefore, understanding and modeling the extremes of TOC and bromide concentrations is of critical interest for drinking water utilities. In this study we develop nonstationary extreme value analysis models for threshold exceedances of source water quality parameters, specifically TOC and bromide concentrations. Here, the threshold exceedances are modeled with a Generalized Pareto Distribution (GPD) whose parameters vary as a function of climate and land surface variables, enabling the model to capture temporal nonstationarity. We apply these models to threshold exceedances of source water TOC and bromide concentrations at two locations with different climates and find very good performance.
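The peaks-over-threshold step (without the nonstationary covariates) can be sketched with `scipy`; the concentration series below is a synthetic stand-in for TOC, and the threshold is illustrative:

```python
import numpy as np
from scipy.stats import genpareto

def fit_exceedances(x, threshold):
    """Peaks-over-threshold: fit a GPD (location fixed at 0) to the
    exceedances above `threshold`; also return the exceedance rate."""
    exc = x[x > threshold] - threshold
    shape, _, scale = genpareto.fit(exc, floc=0.0)
    return shape, scale, len(exc) / len(x)
```

In the nonstationary version described in the abstract, the fitted shape and scale would themselves be functions of climate and land surface covariates rather than constants.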
Seasonal extreme value statistics for precipitation in Germany
NASA Astrophysics Data System (ADS)
Fischer, Madlen; Rust, Henning W.; Ulbrich, Uwe
2013-04-01
Extreme precipitation has a strong influence on the environment, society and the economy. It leads to large damage due to floods, mudslides, increased erosion or hail. While standard annual return levels are important for hydrological structures, seasonally resolved return levels provide additional information for risk management, e.g., for the agricultural sector. For 1208 stations in Germany, we calculate monthly resolved return levels. Instead of estimating parameters separately for every month of the year, we use a non-stationary approach and benefit from smoothly varying return levels throughout the year. This natural approach is more suitable for characterising the seasonal variability of extreme precipitation and leads to more accurate return level estimates. Harmonic functions of different orders are used to describe the seasonal variation of the GEV parameters, and cross-validation is used to determine a suitable model for all stations. Finally, particularly vulnerable regions and the associated months are investigated in more detail.
NASA Astrophysics Data System (ADS)
Pierini, J. O.; Restrepo, J. C.; Aguirre, J.; Bustamante, A. M.; Velásquez, G. J.
2017-04-01
A measure of the variability in seasonal extreme streamflow was estimated for the Colombian Caribbean coast, using monthly time series of freshwater discharge from ten watersheds. The aim was to detect modifications in the monthly streamflow distribution, seasonal trends, variance and extreme monthly values. A 20-year moving window, shifted in successive 1-year steps, was applied to the monthly series to analyze the seasonal variability of streamflow. The seasonal windowed data were statistically fitted with the Gamma distribution function. Scale and shape parameters were computed using Maximum Likelihood Estimation (MLE) and the bootstrap method with 1000 resamples. A trend analysis was performed for each windowed series, allowing detection of the window with the maximum absolute trend values. Significant temporal shifts in the seasonal streamflow distribution and quantiles (QT) were obtained for different frequencies. Wet and dry extreme periods increased significantly in the last decades. These increases did not occur simultaneously throughout the region. Some locations exhibited continuous increases only in the minimum QT.
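The Gamma-plus-bootstrap estimation for a single window might be sketched as follows; the discharge series is synthetic and the resample count is reduced for speed (the study used 1000 resamples per 20-year window):

```python
import numpy as np
from scipy.stats import gamma

def gamma_bootstrap(flow, n_boot=200, rng=None):
    """Fit Gamma(shape, scale) by MLE (location fixed at 0) and build
    bootstrap percentile CIs for the shape, mirroring the windowed
    fitting described above for one window."""
    rng = rng or np.random.default_rng()
    a, _, scale = gamma.fit(flow, floc=0.0)
    boot_shapes = []
    for _ in range(n_boot):
        resample = rng.choice(flow, size=len(flow), replace=True)
        ab, _, _ = gamma.fit(resample, floc=0.0)
        boot_shapes.append(ab)
    lo, hi = np.percentile(boot_shapes, [2.5, 97.5])
    return a, scale, (lo, hi)
```

Sliding this fit across successive 20-year windows, then trend-testing the windowed parameters and quantiles, reproduces the analysis pipeline of the abstract.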
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaffer, Richard, E-mail: rickyshaffer@yahoo.co.u; Department of Clinical Oncology, Imperial College London National Health Service Trust, London; Pickles, Tom
Purpose: Prior studies have derived low values of the alpha/beta ratio (α/β) for prostate cancer of approximately 1-2 Gy. These studies used poorly matched groups, differing definitions of biochemical failure, and insufficient follow-up. Methods and Materials: National Comprehensive Cancer Network low- or low-intermediate-risk prostate cancer patients, treated with external beam radiotherapy or permanent prostate brachytherapy, were matched for prostate-specific antigen, Gleason score, T-stage, percentage of positive cores, androgen deprivation therapy, and era, yielding 118 patient pairs. The Phoenix definition of biochemical failure was used. The best-fitting value for α/β was found for up to 90-month follow-up using maximum likelihood analysis, and the 95% confidence interval using the profile likelihood method. The linear quadratic formalism was applied with the radiobiological parameters relative biological effectiveness = 1.0, potential doubling time = 45 days, and repair half-time = 1 hour. Bootstrap analysis was performed to estimate uncertainties in outcomes, and hence in α/β. Sensitivity analysis was performed by varying the values of the radiobiological parameters to extreme values. Results: The value of α/β best fitting the outcomes data was >30 Gy, with a lower 95% confidence limit of 5.2 Gy. This was confirmed on bootstrap analysis. Varying parameters to extreme values still yielded a best-fit α/β of >30 Gy, although the lower 95% confidence interval limit was reduced to 0.6 Gy. Conclusions: Using carefully matched groups, long follow-up, the Phoenix definition of biochemical failure, and well-established statistical methods, the best estimate of α/β for low and low-tier intermediate-risk prostate cancer is likely to be higher than that of normal tissues, although a low value cannot be excluded.
NASA Astrophysics Data System (ADS)
Luke, Adam; Vrugt, Jasper A.; AghaKouchak, Amir; Matthew, Richard; Sanders, Brett F.
2017-07-01
Nonstationary extreme value analysis (NEVA) can improve the statistical representation of observed flood peak distributions compared to stationary (ST) analysis, but management of flood risk relies on predictions of out-of-sample distributions for which NEVA has not been comprehensively evaluated. In this study, we apply split-sample testing to 1250 annual maximum discharge records in the United States and compare the predictive capabilities of NEVA relative to ST extreme value analysis using a log-Pearson Type III (LPIII) distribution. The parameters of the LPIII distribution in the ST and nonstationary (NS) models are estimated from the first half of each record using Bayesian inference. The second half of each record is reserved to evaluate the predictions under the ST and NS models. The NS model is applied for prediction by (1) extrapolating the trend of the NS model parameters throughout the evaluation period and (2) using the NS model parameter values at the end of the fitting period to predict with an updated ST model (uST). Our analysis shows that the ST predictions are preferred, overall. NS model parameter extrapolation is rarely preferred. However, if fitting period discharges are influenced by physical changes in the watershed, for example from anthropogenic activity, the uST model is strongly preferred relative to ST and NS predictions. The uST model is therefore recommended for evaluation of current flood risk in watersheds that have undergone physical changes. Supporting information includes a MATLAB® program that estimates the (ST/NS/uST) LPIII parameters from annual peak discharge data through Bayesian inference.
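The split-sample evaluation of a stationary LPIII model can be sketched as follows, using `scipy`'s Pearson III on log10 discharge; the record here is synthetic and the study's Bayesian estimation is replaced by a simple MLE fit:

```python
import numpy as np
from scipy.stats import pearson3

def split_sample_score(q):
    """Fit a log-Pearson III (Pearson III on log10 discharge) to the
    first half of a record and return the predictive log-likelihood
    of the reserved second half under that stationary fit."""
    logq = np.log10(np.asarray(q, dtype=float))
    half = len(logq) // 2
    skew, loc, scale = pearson3.fit(logq[:half])
    return pearson3.logpdf(logq[half:], skew, loc=loc, scale=scale).sum()
```

Computing this score for the ST, NS-extrapolated, and uST variants of a model, then comparing across the 1250 records, is the shape of the evaluation the abstract describes.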
A Generalized Framework for Non-Stationary Extreme Value Analysis
NASA Astrophysics Data System (ADS)
Ragno, E.; Cheng, L.; Sadegh, M.; AghaKouchak, A.
2017-12-01
Empirical trends in climate variables, including precipitation, temperature and snow-water equivalent, at regional to continental scales are evidence of changes in climate over time. The evolving climate conditions and human activity-related factors such as urbanization and population growth can exert further changes in weather and climate extremes. As a result, the scientific community faces an increasing demand for updated appraisal of the time-varying climate extremes. The purpose of this study is to offer a robust and flexible statistical tool for non-stationary extreme value analysis which can better characterize the severity and likelihood of extreme climatic variables. This is critical to ensure a more resilient environment in a changing climate. Following the positive feedback on the first version of the Non-Stationary Extreme Value Analysis (NEVA) toolbox by Cheng et al. (2014), we present an improved version, i.e. NEVA2.0. The upgraded version builds upon a newly developed hybrid evolution Markov Chain Monte Carlo (MCMC) approach for numerical parameter estimation and uncertainty assessment. This addition leads to more robust uncertainty estimates of return levels, return periods, and risks of climatic extremes under both stationary and non-stationary assumptions. Moreover, NEVA2.0 is flexible in incorporating any user-specified covariate other than the default time covariate (e.g., CO2 emissions, large-scale climatic oscillation patterns). The new feature will allow users to examine non-stationarity of extremes induced by physical conditions that underlie the extreme events (e.g., antecedent soil moisture deficit, large-scale climatic teleconnections, urbanization). In addition, the new version offers an option to generate stationary and/or non-stationary rainfall Intensity-Duration-Frequency (IDF) curves that are widely used for risk assessment and infrastructure design. 
Finally, a Graphical User Interface (GUI) of the package is provided, making NEVA accessible to a broader audience.
White, L J; Mandl, J N; Gomes, M G M; Bodley-Tickell, A T; Cane, P A; Perez-Brena, P; Aguilar, J C; Siqueira, M M; Portes, S A; Straliotto, S M; Waris, M; Nokes, D J; Medley, G F
2007-09-01
The nature and role of re-infection and partial immunity are likely to be important determinants of the transmission dynamics of human respiratory syncytial virus (hRSV). We propose a single model structure that captures four possible host responses to infection and subsequent reinfection: partial susceptibility, altered infection duration, reduced infectiousness and temporary immunity (which might be partial). The magnitude of these responses is determined by four homotopy parameters, and by setting some of these parameters to extreme values we generate a set of eight nested, deterministic transmission models. In order to investigate hRSV transmission dynamics, we applied these models to incidence data from eight international locations. Seasonality is included as cyclic variation in transmission. Parameters associated with the natural history of the infection were assumed to be independent of geographic location, while others, such as those associated with seasonality, were assumed location specific. Models incorporating either of the two extreme assumptions for immunity (none or solid and lifelong) were unable to reproduce the observed dynamics. Model fits with either waning or partial immunity to disease or both were visually comparable. The best fitting structure was a lifelong partial immunity to both disease and infection. Observed patterns were reproduced by stochastic simulations using the parameter values estimated from the deterministic models.
NASA Astrophysics Data System (ADS)
Buser, R.; Fenkart, R. P.
1990-11-01
This paper presents an extended calibration of the color-magnitude and two-color diagrams and the metal-abundance parameter for the intermediate Population II and the extreme halo dwarfs observed in the Basel Palomar-Schmidt RGU three-color photometric surveys of the Galaxy. The calibration covers the metallicity range between +0.50 and -3.00. It is shown that the calibrations presented are sufficiently accurate to be useful for future analyses of photographic survey data.
Extreme-value statistics of work done in stretching a polymer in a gradient flow.
Vucelja, M; Turitsyn, K S; Chertkov, M
2015-02-01
We analyze the statistics of work generated by a gradient flow to stretch a nonlinear polymer. We obtain the large deviation function (LDF) of the work in the full range of appropriate parameters by combining analytical and numerical tools. The LDF shows two distinct asymptotes: "near tails" are linear in work and dominated by coiled polymer configurations, while "far tails" are quadratic in work and correspond to preferentially fully stretched polymers. We find the extreme value statistics of work for several singular elastic potentials, as well as the mean and the dispersion of work near the coil-stretch transition. The dispersion shows a maximum at the transition.
Long-term statistics of extreme tsunami height at Crescent City
NASA Astrophysics Data System (ADS)
Dong, Sheng; Zhai, Jinjin; Tao, Shanshan
2017-06-01
Historically, Crescent City has been one of the communities most vulnerable to tsunamis along the west coast of the United States, largely owing to its offshore geography. Trans-ocean tsunamis usually produce large wave runup at Crescent Harbor, resulting in catastrophic damage, property loss and loss of life. Determining the return values of tsunami height from relatively short-term observation data is therefore of great significance for assessing tsunami hazards and improving engineering design along the coast of Crescent City. In the present study, the extreme tsunami heights observed along the coast of Crescent City from 1938 to 2015 are fitted using six different probability distributions, namely the Gumbel distribution, the Weibull distribution, the maximum entropy distribution, the lognormal distribution, the generalized extreme value distribution and the generalized Pareto distribution. The maximum likelihood method is applied to estimate the parameters of all of the above distributions. Both the Kolmogorov-Smirnov test and the root-mean-square error method are used for goodness-of-fit testing, and the better-fitting distribution is selected. Assuming that the number of tsunami occurrences in each year follows a Poisson distribution, the Poisson compound extreme value distribution can be used to fit the annual maximum tsunami amplitude, and the point and interval estimates of return tsunami heights are then calculated for structural design. The results show that the Poisson compound extreme value distribution fits the tsunami heights very well and is suitable for determining return tsunami heights for coastal disaster prevention.
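As a rough illustration of the fitting procedure described above, the sketch below fits a GEV to annual-maximum heights by maximum likelihood and derives return levels from the fitted quantiles. The data and parameter values are synthetic stand-ins, not the Crescent City record:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic annual-maximum tsunami heights (m); placeholder for an observed record.
heights = stats.genextreme.rvs(c=-0.1, loc=1.0, scale=0.5, size=78, random_state=rng)

# Maximum-likelihood GEV fit (scipy's shape `c` is the negative of the usual xi).
c, loc, scale = stats.genextreme.fit(heights)

def return_level(n_years):
    """n-year return level: the (1 - 1/n) quantile of the annual-maximum GEV."""
    return stats.genextreme.ppf(1.0 - 1.0 / n_years, c, loc=loc, scale=scale)

rl100 = return_level(100)  # point estimate of the 100-year tsunami height
```

Interval estimates would require, in addition, the sampling uncertainty of the fitted parameters (e.g. via the profile likelihood or the bootstrap).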
Extreme geomagnetically induced currents
NASA Astrophysics Data System (ADS)
Kataoka, Ryuho; Ngwira, Chigomezyo
2016-12-01
We propose an emergency alert framework for geomagnetically induced currents (GICs), based on empirical extreme values and theoretical upper limits of the solar wind parameters and of dB/dt, the time derivative of magnetic field variations at the ground. We expect this framework to be useful for preparing against extreme events. Our analysis is based on a review of various papers, including those presented during the Extreme Space Weather Workshops held in Japan in 2011, 2012, 2013, and 2014. Large-amplitude dB/dt values are the major cause of hazards associated with three different types of GICs: (1) slow dB/dt with ring current evolution (RC-type), (2) fast dB/dt associated with auroral electrojet activity (AE-type), and (3) transient dB/dt of sudden commencements (SC-type). We set "caution," "warning," and "emergency" alert levels during the main phase of superstorms with a peak Dst index below -300 nT (once per 10 years), -600 nT (once per 60 years), or -900 nT (once per 100 years), respectively. The extreme dB/dt values of the AE-type GICs are 2000, 4000, and 6000 nT/min at the caution, warning, and emergency levels, respectively. For the SC-type GICs, a "transient alert" is also proposed for dB/dt values of 40 nT/s at low latitudes and 110 nT/s at high latitudes, especially when the solar energetic particle flux is unusually high.
Improving power and robustness for detecting genetic association with extreme-value sampling design.
Chen, Hua Yun; Li, Mingyao
2011-12-01
Extreme-value sampling design, which samples subjects with extremely large or small quantitative trait values, is commonly used in genetic association studies. Samples in such designs are often treated as "cases" and "controls" and analyzed using logistic regression. Such a case-control analysis ignores the potential dose-response relationship between the quantitative trait and the underlying trait locus and thus may lead to loss of power in detecting genetic association. An alternative approach to analyzing such data is to model the dose-response relationship by a linear regression model. However, parameter estimation from this model can be biased, which may lead to inflated type I errors. We propose a robust and efficient approach that takes into consideration both the biased sampling design and the potential dose-response relationship. Extensive simulations demonstrate that the proposed method is more powerful than the traditional logistic regression analysis and more robust than the linear regression analysis. We applied our method to the analysis of a candidate gene association study on high-density lipoprotein cholesterol (HDL-C), which includes study subjects with extremely high or low HDL-C levels. Using our method, we identified several SNPs showing stronger evidence of association with HDL-C than the traditional case-control logistic regression analysis. Our results suggest that it is important to appropriately model the quantitative trait and to adjust for the biased sampling when a dose-response relationship exists in extreme-value sampling designs. © 2011 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Fallica, Roberto; Stowers, Jason K.; Grenville, Andrew; Frommhold, Andreas; Robinson, Alex P. G.; Ekinci, Yasin
2016-07-01
The dynamic absorption coefficients of several chemically amplified resists (CAR) and non-CAR extreme ultraviolet (EUV) photoresists are measured experimentally using a specifically developed setup in transmission mode at the x-ray interference lithography beamline of the Swiss Light Source. The absorption coefficient α and the Dill ABC parameters were measured with unprecedented accuracy. In general, the α of the resists matches very closely the theoretical value calculated from elemental densities and absorption coefficients, although exceptions are observed. In addition, through direct measurements of the absorption coefficients and dose-to-clear values, we introduce a new figure of merit, the chemical sensitivity, which accounts for all the postabsorption chemical reactions occurring in the resist. From it, the clearing volume and clearing radius associated with photon absorption in the resist are defined and quantitatively calculated. These parameters may help provide deeper insight into the underlying mechanisms of EUV exposure.
Control of extreme events in the bubbling onset of wave turbulence.
Galuzio, P P; Viana, R L; Lopes, S R
2014-04-01
We show the existence of an intermittent transition from temporal chaos to turbulence in a spatially extended dynamical system, namely, the forced and damped one-dimensional nonlinear Schrödinger equation. For some values of the forcing parameter, the system dynamics intermittently switches between ordered states and turbulent states, which may be seen as extreme events in some contexts. In a Fourier phase space, the intermittency takes place due to the loss of transversal stability of unstable periodic orbits embedded in a low-dimensional subspace. We mapped these transversely unstable regions and perturbed the system in order to significantly reduce the occurrence of extreme events of turbulence.
The end of trend-estimation for extreme floods under climate change?
NASA Astrophysics Data System (ADS)
Schulz, Karsten; Bernhardt, Matthias
2016-04-01
An increased risk of flood events is one of the major threats under future climate change conditions. Many recent studies have therefore investigated trends in the occurrence of flood extremes using historic long-term river discharge data as well as simulations from combined global/regional climate and hydrological models. Severe floods are relatively rare events, and robust estimation of their probability of occurrence requires long time series of data. Following a method outlined by the IPCC research community, trends in extreme floods are calculated from the difference in discharge values exceeding, e.g., a 100-year level (Q100) between two 30-year windows representing prevailing conditions in a reference and a future time period, respectively. Following this approach, we analysed multiple synthetic 2,000-year trend-free yearly maximum runoff series generated using three different extreme value distributions (EVD). The parameters were estimated from long-term runoff data of four large European watersheds (Danube, Elbe, Rhine, Thames). Both the Q100 values estimated from 30-year moving windows and the subsequently derived trends showed enormous variations with time: for example, fitting the Gumbel extreme value distribution to the Danube data, trends of Q100 in the synthetic time series range from -4,480 to 4,028 m³/s per 100 years (Q100 = 10,071 m³/s, for reference). Similar results were found when applying the other extreme value distributions (Weibull and log-normal) to all of the watersheds considered. This variability or "background noise" in estimating trends in flood extremes makes it almost impossible to significantly distinguish any real trend in observed as well as modelled data when such an approach is applied. These uncertainties, even though known in principle, are hardly addressed and discussed by the climate change impact community.
Any decision making or flood risk management based on such studies, including the dimensioning of flood protection measures, might therefore be fundamentally flawed.
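The window-based trend estimation described above can be reproduced in miniature. The sketch below draws a trend-free 2,000-year Gumbel series (with illustrative parameters, not the fitted Danube values) and shows how strongly the windowed Q100 estimates scatter despite the absence of any trend:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# 2,000 years of trend-free synthetic annual maximum runoff (m³/s);
# the Gumbel parameters here are illustrative only.
series = stats.gumbel_r.rvs(loc=6000, scale=1500, size=2000, random_state=rng)

# Estimate Q100 (the 0.99 annual-maximum quantile) in successive 30-year windows.
q100 = []
for start in range(0, len(series) - 30 + 1, 30):
    loc, scale = stats.gumbel_r.fit(series[start:start + 30])
    q100.append(stats.gumbel_r.ppf(0.99, loc=loc, scale=scale))

q100 = np.array(q100)
spread = q100.max() - q100.min()  # large even though the series has no trend
```

Differencing Q100 between any "reference" and "future" window of such a series yields spurious trends of either sign, which is the background noise the abstract describes.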
NASA Astrophysics Data System (ADS)
Mascaro, Giuseppe
2018-04-01
This study uses daily rainfall records of a dense network of 240 gauges in central Arizona to gain insight into (i) the variability of the seasonal distributions of rainfall extremes; (ii) how the seasonal distributions affect the shape of the annual distribution; and (iii) the presence of spatial patterns and orographic control for these distributions. To this aim, recent methodological advancements in peak-over-threshold analysis and application of the Generalized Pareto Distribution (GPD) were used to assess the suitability of the GPD hypothesis and improve the estimation of its parameters while limiting the effect of short sample sizes. The distribution of daily rainfall extremes was found to be heavy-tailed (i.e., GPD shape parameter ξ > 0) during the summer season, dominated by convective monsoonal thunderstorms. The exponential distribution (a special case of the GPD with ξ = 0) was instead shown to be appropriate for modeling wintertime daily rainfall extremes, mainly caused by cold fronts transported by westerly flow. The annual distribution exhibited a mixed behavior, with lighter upper tails than those found in summer. A hybrid model mixing the two seasonal distributions was demonstrated to be capable of reproducing the annual distribution. Organized spatial patterns, mainly controlled by elevation, were observed for the GPD scale parameter, while ξ did not show any clear dependence on location or orography. The quantiles returned by the GPD were found to be very similar to those provided by the National Oceanic and Atmospheric Administration (NOAA) Atlas 14, which used the Generalized Extreme Value (GEV) distribution. The results of this work are useful for improving statistical modeling of daily rainfall extremes at high spatial resolution and provide diagnostic tools for assessing the ability of climate models to simulate extreme events.
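A minimal sketch of the peak-over-threshold step described above, with synthetic gamma-distributed "rainfall" standing in for a gauge record (the threshold choice and distribution parameters are assumptions for illustration, not those of the study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic daily rainfall (mm) standing in for one gauge; ~40 years of days.
rain = rng.gamma(shape=0.4, scale=8.0, size=40 * 365)

# Peaks over threshold: model excesses above a high threshold with the GPD.
threshold = np.quantile(rain, 0.95)
excesses = rain[rain > threshold] - threshold

# Fix the GPD location at zero, since excesses start at the threshold.
xi, loc, sigma = stats.genpareto.fit(excesses, floc=0.0)
# xi > 0 indicates a heavy tail; xi = 0 reduces the GPD to the exponential.
```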
Dolka, B; Włodarczyk, R; Zbikowski, A; Dolka, I; Szeleszczuk, P; Kluciński, W
2014-06-01
Knowledge of the correct morphological and biochemical parameters of mute swans is an important indicator of their health status, body condition and adaptation to habitat, and a useful diagnostic tool in veterinary practice and ecological research. The aim of the study was to obtain hematological parameters in relation to age, sex and serum biochemistry values in wild-living mute swans. We found significant differences in the erythrocyte count, hematocrit, hemoglobin concentration and erythrocyte sedimentation rate in relation to the age of the swans. There were no differences in hematological values between males and females. The leukogram and H/L ratio did not vary with age or sex. Among the biochemical parameters, slightly increased AST, ALP, CK, K and urea values and decreased CHOL and TG values were recorded. As far as we know, this is the first study to present the morphometric parameters of blood cells in mute swans. We found an extremely low concentration of lead in blood (at a subthreshold level). No blood parasites were found in blood smears. The analysis of body mass and biometric parameters revealed significant differences dependent on age and sex. No differences in the scaled mass index were found. Our results provide normal hematologic and blood chemistry values and age- and sex-related changes, as reference values for the mute swan.
NASA Astrophysics Data System (ADS)
Szymczak, Sonja; Hetzer, Timo; Bräuning, Achim; Joachimski, Michael M.; Leuschner, Hanns-Hubert; Kuhlemann, Joachim
2014-10-01
We present a new multi-parameter dataset from Corsican black pine growing on the island of Corsica in the Western Mediterranean basin, covering the period AD 1410-2008. Wood parameters measured include tree-ring width, latewood width, earlywood width, cell lumen area, cell width, cell wall thickness, modelled wood density, as well as stable carbon and oxygen isotopes. We evaluated the relationships between the different parameters and determined the value of the dataset for climate reconstructions. Correlation analyses revealed that carbon isotope ratios are influenced by cell parameters determining cell size, whereas oxygen isotope ratios are influenced by cell parameters determining the amount of transportable water in the xylem. A summer (June to August) precipitation reconstruction dating back to AD 1185 was established based on tree-ring width. No long-term trends or pronounced periods of extremely high or low precipitation are recorded in our reconstruction, indicating relatively stable moisture conditions over the entire time period. By comparing the precipitation reconstruction with a summer temperature reconstruction derived from the carbon isotope chronologies, we identified summers with extreme climate conditions, i.e. warm-dry, warm-wet, cold-dry and cold-wet. Extreme climate conditions during summer months were found to influence cell parameter characteristics. Cold-wet summers promote the production of broad latewood composed of wide and thin-walled tracheids, while warm-wet summers promote the production of latewood with small, thick-walled cells. The presented dataset emphasizes the potential of multi-parameter wood analysis from one tree species over long time scales.
Spatial dependence of extreme rainfall
NASA Astrophysics Data System (ADS)
Radi, Noor Fadhilah Ahmad; Zakaria, Roslinazairimah; Satari, Siti Zanariah; Azman, Muhammad Az-zuhri
2017-05-01
This study aims to model the spatial extreme daily rainfall process using max-stable models, which capture the dependence structure of the spatial properties of extreme rainfall. Three max-stable models are considered, namely the Smith, Schlather and Brown-Resnick models. The methods are applied to 12 selected rainfall stations in Kelantan, Malaysia. Most of the extreme rainfall events occur during the wet season, from October to December, of 1971 to 2012. This period is chosen to ensure that enough data are available to satisfy the assumption of stationarity. The dependence parameters, including the range and smoothness, are estimated using a composite likelihood approach. The bootstrap approach is then applied to generate synthetic extreme rainfall data for all models using the estimated dependence parameters. The goodness of fit between the observed extreme rainfall and the synthetic data is assessed using the composite likelihood information criterion (CLIC). Results show that the Schlather model is the best, followed by the Brown-Resnick and Smith models, based on the smallest CLIC value. Thus, max-stable models are suitable for modelling extreme rainfall in Kelantan. The study of spatial dependence in extreme rainfall modelling is important to reduce the uncertainties of the point estimates for the tail index; if the spatial dependency were estimated individually, the uncertainties would be large. Furthermore, when a joint return level is of interest, taking the spatial dependence properties into account will improve the estimation process.
Self-force as a cosmic censor in the Kerr overspinning problem
NASA Astrophysics Data System (ADS)
Colleoni, Marta; Barack, Leor; Shah, Abhay G.; van de Meent, Maarten
2015-10-01
It is known that a near-extremal Kerr black hole can be spun up beyond its extremal limit by capturing a test particle. Here we show that overspinning is always averted once backreaction from the particle's own gravity is properly taken into account. We focus on nonspinning, uncharged, massive particles thrown in along the equatorial plane and work in the first-order self-force approximation (i.e., we include all relevant corrections to the particle's acceleration through linear order in the ratio, assumed small, between the particle's energy and the black hole's mass). Our calculation is a numerical implementation of a recent analysis by two of us [Phys. Rev. D 91, 104024 (2015)], in which a necessary and sufficient "censorship" condition was formulated for the capture scenario, involving certain self-force quantities calculated on the one-parameter family of unstable circular geodesics in the extremal limit. The self-force information accounts both for radiative losses and for the finite-mass correction to the critical value of the impact parameter. Here we obtain the required self-force data and present strong evidence to suggest that captured particles never drive the black hole beyond its extremal limit. We show, however, that, within our first-order self-force approximation, it is possible to reach the extremal limit with a suitable choice of initial orbital parameters. To rule out such a possibility would require (currently unavailable) information about higher-order self-force corrections.
Modelling audiovisual integration of affect from videos and music.
Gao, Chuanji; Wedell, Douglas H; Kim, Jongwan; Weber, Christine E; Shinkareva, Svetlana V
2018-05-01
Two experiments examined how affective values from the visual and auditory modalities are integrated. Experiment 1 paired music and videos drawn from three levels of valence while holding arousal constant. Experiment 2 included a parallel combination of three levels of arousal while holding valence constant. In each experiment, participants rated their affective states after unimodal and multimodal presentations. Experiment 1 revealed a congruency effect in which stimulus combinations of the same extreme valence resulted in more extreme state ratings than component stimuli presented in isolation. An interaction between music and video valence reflected the greater influence of negative affect. Video valence was found to have a significantly greater effect on combined ratings than music valence. The pattern of data was explained by a five-parameter differential-weight averaging model that attributed greater weight to the visual modality and increased weight with decreasing values of valence. Experiment 2 revealed a congruency effect only for high arousal combinations and no interaction effects. This pattern was explained by a three-parameter constant-weight averaging model with greater weight for the auditory modality and a very low arousal value for the initial state. These results demonstrate key differences in audiovisual integration between valence and arousal.
Juckett, D A; Rosenberg, B
1992-04-21
The distributions for human disease-specific mortality exhibit two striking characteristics: survivorship curves that intersect near the longevity limit, and the clustering of best-fitting Weibull shape parameter values into groups centered on integers. Correspondingly, we have hypothesized that the distribution intersections result from either competitive processes or population partitioning, and that the integral clustering in the shape parameter results from the occurrence of a small number of rare, rate-limiting events in disease progression. In this report we initiate a theoretical examination of these questions by exploring serial chain model dynamics and parametric competing risks theory. The links in our chain models are composed of more than one bond; the number of bonds in a link is denoted the link size and is the number of events necessary to break the link and, hence, the chain. We explored chains with all links of the same size or with segments of the chain composed of different size links (competition). Simulations showed that chain breakage dynamics depended on the weakest-link principle and followed extreme-value kinetics that were very similar to human mortality kinetics. In particular, failure distributions for simple chains were Weibull-type extreme-value distributions with shape parameter values that were identifiable with the integral link size in the limit of infinite chain length. Furthermore, for chains composed of several segments of differing link size, the survival distributions for the various segments converged at a point in the S(t) tails indistinguishable from human data. This was also predicted by parametric competing risks theory using Weibull underlying distributions. In both the competitive chain simulations and the parametric competing risks theory, however, the shape values for the intersecting distributions deviated from the integer values typical of human data.
We conclude that rare events can be the source of integral shapes in human mortality, that convergence is a salient feature of multiple endpoints, but that pure competition may not be the best explanation for the exact type of convergence observable in human mortality. Finally, while the chain models were not motivated by any specific biological structures, interesting biological correlates to them may be useful in gerontological research.
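The weakest-link chain dynamics can be sketched numerically. Assuming exponential bond lifetimes (the abstract does not specify a bond distribution), a chain of multi-bond links fails with approximately Weibull statistics whose shape parameter approaches the link size:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
k = 3            # bonds per link (the "link size")
n_links = 2000   # links per chain
n_chains = 500   # independent chains, i.e. failure-time samples

# A link fails once all k of its (assumed exponential) bonds have failed;
# the chain fails at its weakest link.
bond_times = rng.exponential(size=(n_chains, n_links, k))
link_times = bond_times.max(axis=2)    # last bond to break ends the link
chain_times = link_times.min(axis=1)   # first link to break ends the chain

# For long chains the failure times approach a Weibull with shape ~ k,
# because the lower tail of each link's lifetime distribution scales as t**k.
shape, _, _ = stats.weibull_min.fit(chain_times, floc=0.0)
```

With k = 3, the fitted shape lands near the integer 3, illustrating how rare rate-limiting events can produce the integral shape clustering discussed above.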
NASA Astrophysics Data System (ADS)
Parey, S.
2014-12-01
F. J. Acero (Dpto. Física, Universidad de Extremadura, Avda. de Elvas s/n, 06006 Badajoz), S. Parey and T. T. H. Hoang (EDF/R&D, 6 quai Watier, 78401 Chatou Cedex, France), D. Dacunha-Castelle (Laboratoire de Mathématiques, Université Paris 11, Orsay, France). Trends can already be detected in daily rainfall amounts in the Iberian Peninsula (IP), and these will have an impact on the extreme levels. In this study, we compare different ways of estimating future return levels for heavy rainfall, based on statistical extreme value theory. Both the Peaks over Threshold (POT) approach and block maxima with the Generalized Extreme Value (GEV) distribution are used, and their results compared, when linear trends are assumed in the parameters: the threshold and scale parameter for POT, and the location and scale parameter for GEV. Rainfall over the IP is, however, a special variable in that a large number of the values are 0, so the impact of taking this into account is discussed too. Another approach is then tested, based on the evolutions of the mean and variance obtained from the time series of rainy days only, and of the number of rainy days. A statistical test, similar to that designed for temperature in Parey et al. 2013, is used to assess whether the trends in extremes can be considered as mostly due to these evolutions when considering only rainy days. The results show that this is mainly the case: the extremes of the residuals, after removing the trends in mean and standard deviation, cannot be differentiated from those of a stationary process. Thus, the future return levels can be estimated from the stationary return level of these residuals and an estimate of the future mean and standard deviation. Moreover, an estimate of the future number of rainy days is used to retrieve the return levels for all days.
All of these comparisons are made for an ensemble of high quality rainfall time series observed in the Iberian Peninsula over the period 1961-2010, from which we want to estimate a 20-year return level expected in 2020. The evolutions and the impact of the different approaches will be discussed for 3 seasons: fall, spring and winter. Parey S., Hoang T.T.H., Dacunha-Castelle D.: The importance of mean and variance in predicting changes in temperature extremes, Journal of Geophysical Research: Atmospheres, Vol. 118, 1-12, 2013.
Preliminary research of a novel center-driven robot for upper extremity rehabilitation.
Cao, Wujing; Zhang, Fei; Yu, Hongliu; Hu, Bingshan; Meng, Qiaoling
2018-01-19
Loss of upper limb function often appears after stroke, and robot-assisted systems are becoming increasingly common in upper extremity rehabilitation. A rehabilitation robot provides intensive motor therapy, which can be performed in a repetitive, accurate and controllable manner. This study proposes a novel center-driven robot for upper extremity rehabilitation. A new power transmission mechanism is designed to transfer power to the elbow and shoulder joints from three motors located on the base. The forward and inverse kinematics equations of the center-driven robot (CENTROBOT) are deduced separately. The theoretical ranges of joint movement are obtained with the Denavit-Hartenberg parameters method. A prototype of the CENTROBOT is developed and tested. Elbow flexion/extension, shoulder flexion/extension and shoulder adduction/abduction can all be realized by the robot, and the measured joint angles are in conformity with the theoretical values. By setting all motors on the base, the CENTROBOT reduces the overall size of the robot arm as well as the influence of motor noise, radiation and other adverse factors, while satisfying the requirements of power and movement transmission of the robot arm.
Nearly extremal apparent horizons in simulations of merging black holes
NASA Astrophysics Data System (ADS)
Lovelace, Geoffrey; Scheel, Mark A.; Owen, Robert; Giesler, Matthew; Katebi, Reza; Szilágyi, Béla; Chu, Tony; Demos, Nicholas; Hemberger, Daniel A.; Kidder, Lawrence E.; Pfeiffer, Harald P.; Afshari, Nousha
2015-03-01
The spin angular momentum S of an isolated Kerr black hole is bounded by the surface area A of its apparent horizon: 8πS ≤ A, with equality for extremal black holes. In this paper, we explore the extremality of individual and common apparent horizons for merging, rapidly spinning binary black holes. We consider simulations of merging black holes with equal masses M and initial spin angular momenta aligned with the orbital angular momentum, including new simulations with spin magnitudes up to S/M² = 0.994. We measure the area and (using approximate Killing vectors) the spin on the individual and common apparent horizons, finding that the inequality 8πS < A is satisfied in all cases but is very close to equality on the common apparent horizon at the instant it first appears. We also evaluate the Booth-Fairhurst extremality, whose value for a given apparent horizon depends on the scaling of the horizon's null normal vectors. In particular, we introduce a gauge-invariant lower bound on the extremality by computing the smallest value that Booth and Fairhurst's extremality parameter can take for any scaling. Using this lower bound, we conclude that the common horizons are at least moderately close to extremal just after they appear. Finally, following Lovelace et al (2008 Phys. Rev. D 78 084017), we construct quasiequilibrium binary-black hole initial data with 'overspun' marginally trapped surfaces with 8πS > A. We show that the overspun surfaces are indeed superextremal: our lower bound on their Booth-Fairhurst extremality exceeds unity. However, we confirm that these superextremal surfaces are always surrounded by marginally outer trapped surfaces (i.e., by apparent horizons) with 8πS < A. The extremality lower bound on the enclosing apparent horizon is always less than unity but can exceed the value for an extremal Kerr black hole.
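The area bound 8πS ≤ A can be sanity-checked against the closed-form Kerr horizon area (in G = c = 1 units); this is a check of the stated inequality for isolated Kerr holes, not of the binary simulations themselves:

```python
import numpy as np

def kerr_horizon_area(mass, chi):
    """Kerr event-horizon area in G = c = 1 units, with chi = S / M**2 <= 1."""
    a = chi * mass                           # spin parameter a = S / M
    r_plus = mass + np.sqrt(mass**2 - a**2)  # outer-horizon radius
    return 4.0 * np.pi * (r_plus**2 + a**2)

M = 1.0
for chi in (0.0, 0.5, 0.994, 1.0):
    S = chi * M**2
    A = kerr_horizon_area(M, chi)
    assert 8.0 * np.pi * S <= A + 1e-12  # 8*pi*S <= A, with equality at chi = 1
```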
A geostatistical extreme-value framework for fast simulation of natural hazard events
Stephenson, David B.
2016-01-01
We develop a statistical framework for simulating natural hazard events that combines extreme value theory and geostatistics. Robust generalized additive model forms represent generalized Pareto marginal distribution parameters while a Student’s t-process captures spatial dependence and gives a continuous-space framework for natural hazard event simulations. Efficiency of the simulation method allows many years of data (typically over 10 000) to be obtained at relatively little computational cost. This makes the model viable for forming the hazard module of a catastrophe model. We illustrate the framework by simulating maximum wind gusts for European windstorms, which are found to have realistic marginal and spatial properties, and validate well against wind gust measurements. PMID:27279768
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
A CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function; nonetheless, industry still uses traditional techniques to obtain these values, mainly for lack of familiarity with optimization techniques. Therefore, a simple, easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. The system consists of two stages: modelling and optimization. To model the relationship between the input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied; this modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. In the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between academia and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results quickly.
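A minimal sketch of the Extreme Learning Machine component named above (random, frozen hidden layer; closed-form least-squares output weights), fitted here to toy data rather than real machining measurements:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy regression data standing in for cutting-parameter/response measurements.
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.normal(size=200)

# Extreme Learning Machine: input weights and biases are random and never
# trained; only the output weights are solved for, in closed form.
n_hidden = 50
W = rng.normal(size=(1, n_hidden))  # random input weights (frozen)
b = rng.normal(size=n_hidden)       # random biases (frozen)
H = np.tanh(X @ W + b)              # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares output weights

mse = np.mean((H @ beta - y) ** 2)  # training fit on the toy data
```

The single linear solve is what makes ELM training fast compared with gradient-based network training; in the hybrid system described above, PSO would additionally tune the model's hyperparameters.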
Indexing of exoplanets in search for potential habitability: application to Mars-like worlds
NASA Astrophysics Data System (ADS)
Kashyap Jagadeesh, Madhu; Gudennavar, Shivappa B.; Doshi, Urmi; Safonova, Margarita
2017-08-01
Study of exoplanets is one of the main goals of present research in planetary sciences and astrobiology. Analysis of the huge volume of planetary data from space missions such as CoRoT and Kepler is directed ultimately at finding a planet similar to Earth—the Earth's twin—and at answering the question of potential exo-habitability. The Earth Similarity Index (ESI) is a first step in this quest, ranging from 1 (Earth) to 0 (totally dissimilar to Earth). It was defined for four physical parameters of a planet: radius, density, escape velocity and surface temperature. The ESI is further sub-divided into an interior ESI (geometric mean of radius and density) and a surface ESI (geometric mean of escape velocity and surface temperature). The challenge here is to determine which exoplanet parameters are important in finding this similarity—how exactly the individual parameters entering the interior and surface ESI contribute to the global ESI. Since the surface temperature entering the surface ESI is a non-observable quantity, it is difficult to determine its value. Using the known data for the Solar System objects, we established a calibration relation between surface and equilibrium temperatures to devise an effective way to estimate the surface temperature of exoplanets. The ESI is a first step in determining potential exo-habitability, which may involve life not very similar to terrestrial life. A new approach, called the Mars Similarity Index (MSI), is introduced to identify planets that may be habitable to extreme forms of life. The MSI is defined in the range between 1 (present Mars) and 0 (dissimilar to present Mars) and uses the same physical parameters as the ESI. We are interested in Mars-like planets in order to search for planets that may host extreme life forms, such as those living in extreme environments on Earth; for example, methane on Mars may be a product of the metabolism of methane-specific extremophile life forms.
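A simplified sketch of the similarity-index construction: each parameter contributes a bounded similarity term, and the terms are combined by a geometric mean. The published ESI additionally applies per-parameter weight exponents, omitted here; the Mars values below are approximate and for illustration only:

```python
import math

# Earth reference values: radius, bulk density and escape velocity in Earth
# units, surface temperature in kelvin.
EARTH = {"radius": 1.0, "density": 1.0, "v_escape": 1.0, "T_surface": 288.0}

def similarity(x, x_ref):
    """Single-parameter similarity term, bounded in [0, 1]."""
    return 1.0 - abs(x - x_ref) / (x + x_ref)

def esi(planet, ref=EARTH):
    """Unweighted geometric mean over the four ESI parameters (simplified)."""
    terms = [similarity(planet[key], ref[key]) for key in ref]
    return math.prod(terms) ** (1.0 / len(terms))

# Approximate present-day Mars values in the same units (illustrative).
mars = {"radius": 0.53, "density": 0.71, "v_escape": 0.45, "T_surface": 210.0}
earth_score, mars_score = esi(EARTH), esi(mars)
```

An MSI built the same way would simply swap the reference dictionary from Earth's values to present-day Mars's.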
Manevska, Nevena; Stojanoski, Sinisa; Pop Gjorceva, Daniela; Todorovska, Lidija; Miladinova, Daniela; Zafirova, Beti
2017-09-01
Introduction: Muscle perfusion is a physiologic process that can undergo quantitative assessment, allowing definition of the range of normal values of perfusion indexes and the perfusion reserve. Investigation of the microcirculation has a crucial role in determining muscle perfusion. Materials and method: The study included 30 examinees, 24-74 years of age, without a history of confirmed peripheral artery disease; all had normal findings on Doppler ultrasonography and a normal pedo-brachial index (PBI) of the lower extremities. 99mTc-MIBI tissue muscle perfusion scintigraphy of the lower limbs evaluates tissue perfusion at rest ("rest study") and after workload ("stress study") through quantitative parameters: the inter-extremity indexes left thigh/right thigh (LT/RT) and left calf/right calf (LC/RC) for both studies, and the perfusion reserve (PR) for both thighs and calves. Results: In our investigated group we assessed the normal values of these quantitative perfusion indexes. LT/RT ranged from 0.91 to 1.05 in the rest study and from 0.92 to 1.04 in the stress study; LC/RC ranged from 0.93 to 1.07 at rest and from 0.93 to 1.09 under stress. Examinees older than 50 years had an insignificantly lower perfusion reserve than those younger than 50 (LC, p=0.98; RC, p=0.6). Conclusion: This non-invasive scintigraphic method allows the range of normal values of muscle perfusion at rest and under stress to be determined in individuals without peripheral artery disease, and these values can be clinically implemented in the evaluation of patients with peripheral artery disease to differentiate those with normal from those with impaired lower-limb circulation.
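The quantitative parameters above are count ratios, which can be sketched in a few lines. The inter-extremity indexes follow the abstract's left/right ratios; the stress/rest form of the perfusion reserve is our assumption, as the abstract does not give its formula.

```python
# Inter-extremity index: ratio of scintigraphic counts, left over right
# (LT/RT for thighs, LC/RC for calves)
def interextremity_index(left_counts, right_counts):
    return left_counts / right_counts

# Perfusion reserve sketched as stress/rest counts (assumed definition)
def perfusion_reserve(stress_counts, rest_counts):
    return stress_counts / rest_counts

lt_rt_rest = interextremity_index(98.0, 100.0)
normal_rest_range = (0.91, 1.05)  # reported rest-study LT/RT range
is_normal = normal_rest_range[0] <= lt_rt_rest <= normal_rest_range[1]
```

A measured index is then simply checked against the reported normal range for the corresponding study.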
Guidelines for the Selection of Near-Earth Thermal Environment Parameters for Spacecraft Design
NASA Technical Reports Server (NTRS)
Anderson, B. J.; Justus, C. G.; Batts, G. W.
2001-01-01
Thermal analysis and design of Earth-orbiting systems requires specification of three environmental thermal parameters: the direct solar irradiance, Earth's local albedo, and the outgoing longwave radiance (OLR). In the early 1990s, data sets from the Earth Radiation Budget Experiment were analyzed on behalf of the Space Station Program to provide an accurate description of these parameters as a function of averaging time along the orbital path. This information, documented in SSP 30425 and, in more generic form, in NASA/TM-4527, enabled the specification of the proper thermal parameters for systems of various thermal response time constants. However, working with the engineering community and the SSP 30425 and TM-4527 products over a number of years revealed difficulties in the interpretation and application of this material. For this reason it was decided to develop this guidelines document to help resolve these issues of practical application. In the process, the data were extensively reprocessed and a new computer code, the Simple Thermal Environment Model (STEM), was developed to simplify the selection of parameters for input into extreme hot and cold thermal analyses and design specifications. In doing so, greatly improved values for the cold-case OLR for high-inclination orbits were derived. Thermal parameters for satellites in low-, medium-, and high-inclination low-Earth orbit and with various system thermal time constants are recommended for analysis of extreme hot and cold conditions. Practical information on the interpretation and application of this material and an introduction to STEM are included. Complete documentation for STEM is found in the user's manual, in preparation.
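To show how the three environmental parameters enter a thermal analysis, here is a crude radiative-balance sketch for an isothermal spherical spacecraft in low-Earth orbit. The geometry is collapsed into a single Earth view factor, and the surface properties and environmental values are illustrative assumptions, not STEM outputs; STEM itself is far more detailed.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp(solar, albedo, olr, alpha, eps, view_factor):
    # Absorbed flux per unit total surface area of an isothermal sphere
    # (projected/total area = 1/4); albedo and OLR loads are scaled by a
    # single Earth view factor as a simplification.
    absorbed = (alpha * solar * (1.0 + albedo * view_factor)
                + eps * olr * view_factor) / 4.0
    # Emitted flux is eps * SIGMA * T^4; solve the balance for T.
    return (absorbed / (eps * SIGMA)) ** 0.25

# Hot-case style inputs: solar constant 1414 W/m^2, albedo 0.30, OLR 240 W/m^2,
# with assumed white-paint-like optical properties.
t_hot = equilibrium_temp(1414.0, 0.30, 240.0,
                         alpha=0.2, eps=0.85, view_factor=0.3)
```

Extreme hot and cold cases then amount to evaluating this balance with the appropriate combinations of solar, albedo, and OLR values for the system's thermal time constant.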
Horizon structure of rotating Einstein-Born-Infeld black holes and shadow
NASA Astrophysics Data System (ADS)
Atamurotov, Farruh; Ghosh, Sushant G.; Ahmedov, Bobomurat
2016-05-01
We investigate the horizon structure of the rotating Einstein-Born-Infeld solution, which goes over to the Einstein-Maxwell Kerr-Newman solution as the Born-Infeld parameter goes to infinity (β → ∞). We find that for a given β, mass M, and charge Q, there exist a critical spin parameter aE and horizon radius rHE corresponding to an extremal Einstein-Born-Infeld black hole with degenerate horizons; aE decreases and rHE increases with increasing Born-Infeld parameter β, while a
NASA Astrophysics Data System (ADS)
García-Cueto, O. Rafael; Cavazos, M. Tereza; de Grau, Pamela; Santillán-Soto, Néstor
2014-04-01
The generalized extreme value distribution is applied in this article to model the statistical behavior of the tails of the maximum and minimum temperature distributions in four cities of Baja California in northwestern Mexico, using data from 1950-2010. The approach used block maxima over annual time blocks. Temporal trends were included as covariates in the location parameter (μ), which resulted in significant improvements to the proposed models, particularly for the extreme maximum temperature values in the cities of Mexicali, Tijuana, and Tecate, and the extreme minimum temperature values in Mexicali and Ensenada. These models were used to estimate future probabilities over the next 100 years (2015-2110) for different time periods, and they were compared with changes in the extreme (P90 and P10) percentiles of maximum and minimum temperature scenarios for a set of six general circulation models under low (RCP4.5) and high (RCP8.5) radiative forcings. By the end of the twenty-first century, the scenarios of change in extreme maximum summer temperature are of the same order in both the statistical model and the high radiative scenario (increases of 4-5 °C); the low radiative scenario is more conservative (increases of 2-3 °C). The winter scenario shows that minimum temperatures could become less severe; the temperature increases suggested by the probabilistic model are greater than those projected for the end of the century by the set of global models under the RCP4.5 and RCP8.5 scenarios. The likely impacts on the region are discussed.
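A GEV fit with a temporal trend as a covariate in the location parameter, as described above, can be sketched by minimizing the negative log-likelihood directly. The data here are synthetic annual maxima with an imposed warming trend; all numbers are illustrative assumptions.

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(0)
years = np.arange(61)  # e.g. 1950-2010 as years since start
# synthetic annual maximum temperatures with a linear trend in location
tmax = genextreme.rvs(-0.1, loc=40.0 + 0.05 * years, scale=1.5,
                      random_state=rng)

def nll(params):
    mu0, mu1, log_sigma, xi = params
    mu = mu0 + mu1 * years  # non-stationary location mu(t) = mu0 + mu1 * t
    # scipy's shape convention: c = -xi relative to the usual GEV xi
    return -genextreme.logpdf(tmax, -xi, loc=mu,
                              scale=np.exp(log_sigma)).sum()

start = [tmax.mean(), 0.0, np.log(tmax.std()), 0.0]
res = minimize(nll, start, method="Nelder-Mead")
mu0_hat, mu1_hat, log_sigma_hat, xi_hat = res.x
```

Parametrizing the scale through its logarithm keeps it positive during the search; the fitted mu1 is the estimated trend in the location parameter.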
Edmunds, Kyle; Gíslason, Magnús; Sigurðsson, Sigurður; Guðnason, Vilmundur; Harris, Tamara; Carraro, Ugo; Gargiulo, Paolo
2018-01-01
Sarcopenic muscular degeneration has been consistently identified as an independent risk factor for mortality in aging populations. Recent investigations have realized the quantitative potential of computed tomography (CT) image analysis to describe skeletal muscle volume and composition; however, the optimum approach to assessing these data remains debated. Current literature reports average Hounsfield unit (HU) values and/or segmented soft tissue cross-sectional areas to investigate muscle quality. However, standardized methods for CT analyses and their utility as a comorbidity index remain undefined, and no existing studies compare these methods to the assessment of entire radiodensitometric distributions. The primary aim of this study was to present a comparison of nonlinear trimodal regression analysis (NTRA) parameters of entire radiodensitometric muscle distributions against extant CT metrics and their correlation with lower extremity function (LEF) biometrics (normal/fast gait speed, timed up-and-go, and isometric leg strength) and biochemical and nutritional parameters, such as total solubilized cholesterol (SCHOL) and body mass index (BMI). Data were obtained from 3,162 subjects, aged 66–96 years, from the population-based AGES-Reykjavik Study. 1-D k-means clustering was employed to discretize each biometric and comorbidity dataset into twelve subpopulations, in accordance with Sturges’ Formula for Class Selection. Dataset linear regressions were performed against eleven NTRA distribution parameters and standard CT analyses (fat/muscle cross-sectional area and average HU value). Parameters from NTRA and CT standards were analogously assembled by age and sex. Analysis of specific NTRA parameters with standard CT results showed linear correlation coefficients greater than 0.85, but multiple regression analysis of correlative NTRA parameters yielded a correlation coefficient of 0.99 (P<0.005). 
These results highlight the specificities of each muscle quality metric to LEF biometrics, SCHOL, and BMI, and particularly highlight the value of the connective tissue regime in this regard. PMID:29513690
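The discretization step described above (1-D k-means with the number of clusters set by Sturges' formula) can be sketched as follows. The data are synthetic stand-ins for a biometric such as gait speed, and the rounding choice in Sturges' rule is an assumption; the paper reports twelve subpopulations.

```python
import numpy as np

def sturges_k(n):
    # Sturges' rule for the number of classes (ceiling rounding assumed)
    return int(np.ceil(1 + np.log2(n)))

def kmeans_1d(x, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = np.sort(rng.choice(x, size=k, replace=False))
    for _ in range(iters):
        # assign each point to its nearest center, then recompute means
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        new = np.array([x[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

rng = np.random.default_rng(1)
gait_speed = rng.normal(1.1, 0.25, size=3162)  # synthetic biometric values
k = sturges_k(gait_speed.size)
centers, labels = kmeans_1d(gait_speed, k)
```

Each subpopulation (cluster) can then be regressed against the NTRA distribution parameters as in the study.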
Replica and extreme-value analysis of the Jarzynski free-energy estimator
NASA Astrophysics Data System (ADS)
Palassini, Matteo; Ritort, Felix
2008-03-01
We analyze the Jarzynski estimator of free-energy differences from nonequilibrium work measurements. By a simple mapping onto Derrida's Random Energy Model, we obtain a scaling limit for the expectation of the bias of the estimator. We then derive analytical approximations in three different regimes of the scaling parameter x = log(N)/W, where N is the number of measurements and W the mean dissipated work. Our approach is valid for a generic distribution of the dissipated work, and is based on a replica symmetry breaking scheme for x >> 1, the asymptotic theory of extreme value statistics for x << 1, and a direct approach for x near one. The combination of the three analytic approximations describes well Monte Carlo data for the expectation value of the estimator, for a wide range of values of N, from N=1 to large N, and for different work distributions. Based on these results, we introduce improved free-energy estimators and discuss the application to the analysis of experimental data.
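The Jarzynski estimator analyzed above has a compact form: the free-energy difference is estimated as minus the log of the exponential average of the work over N measurements. The sketch below uses a Gaussian work distribution, for which the fluctuation relation gives the exact answer to compare against; the parameter values are illustrative.

```python
import numpy as np

def jarzynski_estimate(work, beta=1.0):
    # dF_hat = -(1/beta) * log( (1/N) * sum_i exp(-beta * W_i) )
    w = np.asarray(work, dtype=float)
    # log-sum-exp trick for numerical stability at large beta * W
    m = (-beta * w).max()
    return -(m + np.log(np.exp(-beta * w - m).mean())) / beta

rng = np.random.default_rng(0)
# Gaussian work distribution: the fluctuation relation gives
# dF = <W> - beta * var(W) / 2, so the mean dissipated work is var/2 here.
mean_w, var_w = 5.0, 4.0
true_dF = mean_w - var_w / 2.0
est = jarzynski_estimate(rng.normal(mean_w, np.sqrt(var_w), size=100_000))
```

For finite N the estimator is biased toward overestimating dF; the bias grows with the mean dissipated work, which is the regime the replica and extreme-value analysis above addresses.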
Maximum likelihood estimation for life distributions with competing failure modes
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1979-01-01
Systems that are placed on test at time zero, function for a period, and die at some random time are studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of the various stress variables to which the item is subjected. Maximum likelihood estimation methods are discussed, and specific methods are reported for the smallest extreme-value distribution of life. Monte Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
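A minimal sketch of maximum likelihood fitting for the smallest extreme-value distribution (scipy's `gumbel_l`), with competing failure modes represented as a minimum over modes. The location and scale values are assumed, and a full competing-risks MLE would additionally treat the non-failing mode as a censoring mechanism, which is omitted here.

```python
import numpy as np
from scipy.stats import gumbel_l

rng = np.random.default_rng(0)
# synthetic log-lifetimes from a smallest extreme-value distribution
log_life = gumbel_l.rvs(loc=5.0, scale=0.5, size=500, random_state=rng)

# maximum likelihood estimates of the location and scale parameters
loc_hat, scale_hat = gumbel_l.fit(log_life)

# Under competing failure modes, the observed life is the minimum over
# modes; mode B here has a higher location (longer typical life).
mode_b = gumbel_l.rvs(loc=5.5, scale=0.5, size=500, random_state=rng)
observed = np.minimum(log_life, mode_b)
```

Stress dependence, as in the paper, would enter by making the location parameter a function of the stress covariates before maximizing the likelihood.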
[Quantitative Evaluation of Metal Artifacts on CT Images on the Basis of Statistics of Extremes].
Kitaguchi, Shigetoshi; Imai, Kuniharu; Ueda, Suguru; Hashimoto, Naomi; Hattori, Shouta; Saika, Takahiro; Ono, Yoshifumi
2016-05-01
It is well known that metal artifacts have a harmful effect on the image quality of computed tomography (CT) images; however, their physical properties remain largely unknown. In this study, we investigated the relationship between metal artifacts and tube current using statistics of extremes. A commercially available phantom for measuring the CT dose index, 160 mm in diameter, was prepared, and a brass rod 13 mm in diameter was placed at the centerline of the phantom. This phantom was used as a target object for evaluating metal artifacts and was scanned with an area-detector CT scanner at various tube currents under a constant tube voltage of 120 kV. Sixty parallel line segments with a length of 100 pixels were placed to cross the metal artifacts on the CT images, and the largest difference between two adjacent CT values in each of the 60 CT value profiles was employed as a feature variable for measuring metal artifacts; these feature variables were analyzed on the basis of extreme value theory. The CT value variation induced by metal artifacts was statistically characterized by the Gumbel distribution, one of the extreme value distributions; that is, metal artifacts have the same statistical characteristics as streak artifacts. The Gumbel evaluation method therefore makes it possible to analyze not only streak artifacts but also metal artifacts. Furthermore, the location parameter of the Gumbel distribution was shown to be inversely proportional to the square root of the tube current. This result suggests that metal artifacts have the same dose dependence as image noise.
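The feature-extraction and Gumbel-fitting steps described above can be sketched as follows. The profiles here are synthetic noise stand-ins for the 60 CT value profiles, and taking the absolute value of the adjacent difference is our assumption (the abstract says only "largest difference").

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(0)
# synthetic stand-in for 60 CT-value profiles of 100 pixels each,
# crossing the artifact region
profiles = rng.normal(0.0, 20.0, size=(60, 100))

# feature variable: the largest absolute difference between two
# adjacent CT values within each profile
features = np.max(np.abs(np.diff(profiles, axis=1)), axis=1)

# Gumbel (largest extreme value) fit: location and scale parameters;
# the paper finds the location scales as 1/sqrt(tube current)
loc_hat, scale_hat = gumbel_r.fit(features)
```

Repeating the fit across scans at different tube currents would then expose the reported inverse-square-root dependence of the location parameter.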
Magnetic anisotropy and order parameter in nanostructured CoPt particles
NASA Astrophysics Data System (ADS)
Komogortsev, S. V.; Iskhakov, R. S.; Zimin, A. A.; Filatov, E. Yu.; Korenev, S. V.; Shubin, Yu. V.; Chizhik, N. A.; Yurkin, G. Yu.; Eremin, E. V.
2013-10-01
The correlation of the magnetic anisotropy energy with the order parameter in the crystallites of CoPt nanostructured particles prepared by thermal decomposition and further annealing has been studied by investigating the approach-to-saturation magnetization curves and the x-ray powder diffraction pattern profiles. It is shown that the magnetic anisotropy energy in a partially ordered CoPt crystallite can be described as an intermediate case between two extremes, corresponding to either a single c-domain or several c-domains of the L10 phase in the crystallite.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lis, Jakub
In this paper, we investigate the Q-ball Ansatz in the baby Skyrme model. First, the appearance of peakons, i.e. solutions with extremely large absolute values of the second derivative at maxima, is analyzed. It is argued that such solutions are intrinsic to the baby Skyrme model and do not depend on the detailed form of a potential used in calculations. Next, we concentrate on compact nonspinning Q-balls. We show the failure of a small parameter expansion in this case. Finally, we explore the existence and parameter dependence of Q-ball solutions.
How Do Microphysical Processes Influence Large-Scale Precipitation Variability and Extremes?
Hagos, Samson; Ruby Leung, L.; Zhao, Chun; ...
2018-02-10
Convection permitting simulations using the Model for Prediction Across Scales-Atmosphere (MPAS-A) are used to examine how microphysical processes affect large-scale precipitation variability and extremes. An episode of the Madden-Julian Oscillation is simulated using MPAS-A with a refined region at 4-km grid spacing over the Indian Ocean. It is shown that cloud microphysical processes regulate the precipitable water (PW) statistics. Because of the non-linear relationship between precipitation and PW, PW exceeding a certain critical value (PWcr) contributes disproportionately to precipitation variability. However, the frequency of PW exceeding PWcr decreases rapidly with PW, so changes in microphysical processes that shift the column PW statistics relative to PWcr even slightly have large impacts on precipitation variability. Furthermore, precipitation variance and extreme precipitation frequency are approximately linearly related to the difference between the mean and critical PW values. Thus observed precipitation statistics could be used to directly constrain model microphysical parameters as this study demonstrates using radar observations from DYNAMO field campaign.
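The sensitivity argument above (rare exceedances of PWcr dominating precipitation, so small shifts in the PW statistics have large effects) can be illustrated numerically. Both the PW distribution and the quadratic pickup of precipitation above the critical value are assumed functional forms chosen only to show the qualitative behavior, not anything from the MPAS-A simulations.

```python
import numpy as np

rng = np.random.default_rng(0)
pw = rng.gamma(shape=20.0, scale=2.5, size=200_000)  # synthetic column PW (mm)
pw_cr = 65.0                                         # assumed critical value

# schematic nonlinear precipitation pickup above PW_cr (assumed form)
def pickup(pw_values):
    return np.clip(pw_values - pw_cr, 0.0, None) ** 2

frac_exceeding = np.mean(pw > pw_cr)  # exceedances are rare...
base_mean = pickup(pw).mean()
# ...yet a small (2 mm) shift of the PW statistics toward PW_cr
# produces a disproportionately large change in mean precipitation
shift_mean = pickup(pw + 2.0).mean()
```

The same mechanism makes precipitation variance and extreme frequency sensitive to the distance between the mean PW and PWcr, as the study reports.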
NASA Astrophysics Data System (ADS)
Hasan, Husna; Radi, Noor Fadhilah Ahmad; Kassim, Suraiya
2012-05-01
Extreme share returns in Malaysia are studied. The monthly, quarterly, half-yearly and yearly maximum returns are fitted to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests are performed to test for stationarity, while the Mann-Kendall (MK) test is used to check for the presence of a monotonic trend. Maximum likelihood estimation (MLE) is used to estimate the parameters, with L-moments estimates (LMOM) used to initialize the MLE optimization routine for the stationary model. A likelihood ratio test is performed to determine the best model. Sherman's goodness-of-fit test is used to assess the quality of convergence of the monthly, quarterly, half-yearly and yearly maxima to the GEV distribution. Return levels are then estimated for prediction and planning purposes. The results show that all maximum returns for all selection periods are stationary, but the Mann-Kendall test indicates the existence of a trend, so non-stationary models are fitted as well. Model 2, in which the location parameter increases with time, is the best for all selection intervals. Sherman's goodness-of-fit test shows that the monthly, quarterly, half-yearly and yearly maxima converge to the GEV distribution. From the results, it seems reasonable to conclude that the yearly maxima are preferable for convergence to the GEV distribution, especially if longer records are available. The return level estimate, i.e. the return amount that is expected to be exceeded on average once every t time periods, starts to appear in the confidence interval at T = 50 for the quarterly, half-yearly and yearly maxima.
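The model-selection step above (likelihood ratio test between a stationary GEV and a GEV whose location increases with time) can be sketched as follows. The maxima are synthetic with an imposed trend; the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import genextreme, chi2
from scipy.optimize import minimize

rng = np.random.default_rng(2)
t = np.arange(120)  # e.g. 120 block maxima in time order
x = genextreme.rvs(0.1, loc=0.02 * t, scale=1.0, random_state=rng)

# Model 1: stationary GEV, fitted by scipy's built-in MLE
c0, loc0, scale0 = genextreme.fit(x)
nll0 = -genextreme.logpdf(x, c0, loc0, scale0).sum()

# Model 2: location parameter increasing linearly with time
def nll(p):
    mu0, mu1, log_sig, c = p
    return -genextreme.logpdf(x, c, loc=mu0 + mu1 * t,
                              scale=np.exp(log_sig)).sum()

res = minimize(nll, [x.mean(), 0.0, np.log(x.std()), 0.0],
               method="Nelder-Mead")

# likelihood ratio statistic, chi-squared with 1 df (one extra parameter)
lr = 2.0 * (nll0 - res.fun)
p_value = chi2.sf(lr, df=1)
```

A small p-value favors the non-stationary model, mirroring the paper's selection of Model 2.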
A hadronic origin for ultra-high-frequency-peaked BL Lac objects
NASA Astrophysics Data System (ADS)
Cerruti, M.; Zech, A.; Boisson, C.; Inoue, S.
2015-03-01
Current Cherenkov telescopes have identified a population of ultra-high-frequency peaked BL Lac objects (UHBLs), also known as extreme blazars, that exhibit exceptionally hard TeV spectra, including 1ES 0229+200, 1ES 0347-121, RGB J0710+591, 1ES 1101-232, and 1ES 1218+304. Although one-zone synchrotron-self-Compton (SSC) models have been generally successful in interpreting the high-energy emission observed in other BL Lac objects, they are problematic for UHBLs, necessitating very large Doppler factors and/or extremely high minimum Lorentz factors of the emitting leptonic population. In this context, we have investigated alternative scenarios where hadronic emission processes are important, using a newly developed (lepto-)hadronic numerical code to systematically explore the physical parameters of the emission region that reproduces the observed spectra while avoiding the extreme values encountered in pure SSC models. Assuming a fixed Doppler factor δ = 30, two principal parameter regimes are identified, where the high-energy emission is due to: (1) proton-synchrotron radiation, with magnetic fields B ˜ 1-100 G and maximum proton energies Ep; max ≲ 1019 eV; and (2) synchrotron emission from p-γ-induced cascades as well as SSC emission from primary leptons, with B ˜ 0.1-1 G and Ep; max ≲ 1017 eV. This can be realized with plausible, sub-Eddington values for the total (kinetic plus magnetic) power of the emitting plasma, in contrast to hadronic interpretations for other blazar classes that often warrant highly super-Eddington values.
Jonker, Michiel T O
2016-06-01
Octanol-water partition coefficients (KOW) are widely used in fate and effects modeling of chemicals. Still, high-quality experimental KOW data are scarce, in particular for very hydrophobic chemicals. This hampers reliable assessments of several fate and effect parameters and the development and validation of new models. One reason for the limited availability of experimental values may relate to the challenging nature of KOW measurements. In the present study, KOW values for 13 polycyclic aromatic hydrocarbons were determined with the gold standard "slow-stirring" method (log KOW 4.6-7.2). These values were then used as reference data for the development of an alternative method for measuring KOW. This approach combined slow stirring and equilibrium sampling of the extremely low aqueous concentrations with polydimethylsiloxane-coated solid-phase microextraction fibers, applying experimentally determined fiber-water partition coefficients. It resulted in KOW values matching the slow-stirring data very well. The method was therefore subsequently applied to a series of 17 moderately to extremely hydrophobic petrochemical compounds. The obtained KOW values spanned almost 6 orders of magnitude, with the highest value measuring 10^10.6. The present study demonstrates that the hydrophobicity domain within which experimental KOW measurements are possible can be extended with the help of solid-phase microextraction, and that experimentally determined KOW values can exceed the proposed upper limit of 10^9. Environ Toxicol Chem 2016;35:1371-1377. © 2015 SETAC.
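The back-calculation underlying the fiber-based method is simple: the aqueous concentration is inferred from the fiber concentration via the fiber-water partition coefficient, and KOW follows as the octanol-to-water concentration ratio. All numbers below are hypothetical and chosen only to illustrate the arithmetic.

```python
import math

def log_kow_from_fiber(c_octanol, c_fiber, log_kfw):
    # aqueous concentration inferred from the PDMS fiber:
    #   C_water = C_fiber / K_fw  (K_fw: fiber-water partition coefficient)
    c_water = c_fiber / 10 ** log_kfw
    # K_ow = C_octanol / C_water, reported as log10
    return math.log10(c_octanol / c_water)

# hypothetical measurements for a very hydrophobic compound
log_kow = log_kow_from_fiber(c_octanol=50.0, c_fiber=2.0e-3, log_kfw=4.5)
```

Because the fiber concentrates the analyte by a factor of K_fw, extremely low aqueous concentrations become measurable, which is what extends the accessible hydrophobicity domain.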
Physiologic regulation of body energy storage
NASA Technical Reports Server (NTRS)
Pitts, G. C.
1978-01-01
Both new and published data (rats, mice, and human beings) on three parameters - fat mass, fat-free body mass (FFBM), and total body mass in some cases - are evaluated. Steady state values of the parameters are analyzed for changes in response to specific perturbing agents and for their frequency distributions. Temporal sequences of values on individuals are examined for evidence of regulatory responses. The results lead to the hypothesis that the FFBM is regulated, but probably not as a unit, and that mass of fat is regulated with a high priority near the range extremes but with a much lower priority in the mid-range. Properties and advantages of such a mechanism are discussed.
Temperature histories of commercial flights at severe conditions from GASP data
NASA Technical Reports Server (NTRS)
Jasperson, W. H.; Nastrom, G. D.
1983-01-01
The thermal environment of commercial aircraft from a data set gathered during the Global Atmospheric Sampling Program (GASP) is studied. The data set covers a four-year period of measurements. The report presents plots of airplane location and speed and atmospheric temperature as functions of elapsed time for 35 extreme-condition flights, selected by minimum values of several temperature parameters. One of these parameters, the severity factor, is an approximation of the in-flight wing-tank temperature. Representative low-severity-factor flight histories may be useful for actual temperature-profile inputs to design and research studies. Comparison of the GASP atmospheric temperatures to interpolated temperatures from National Meteorological Center and Global Weather Central analysis fields shows that the analysis temperatures are slightly biased toward warmer than actual temperatures, particularly over oceans and at extreme conditions.
NASA Astrophysics Data System (ADS)
Mentaschi, Lorenzo; Vousdoukas, Michalis; Voukouvalas, Evangelos; Sartini, Ludovica; Feyen, Luc; Besio, Giovanni; Alfieri, Lorenzo
2016-09-01
Statistical approaches to study extreme events require, by definition, long time series of data. In many scientific disciplines, these series are often subject to variations at different temporal scales that affect the frequency and intensity of their extremes. Therefore, the assumption of stationarity is violated and alternative methods to conventional stationary extreme value analysis (EVA) must be adopted. Using the example of environmental variables subject to climate change, in this study we introduce the transformed-stationary (TS) methodology for non-stationary EVA. This approach consists of (i) transforming a non-stationary time series into a stationary one, to which the stationary EVA theory can be applied, and (ii) reverse transforming the result into a non-stationary extreme value distribution. As a transformation, we propose and discuss a simple time-varying normalization of the signal and show that it enables a comprehensive formulation of non-stationary generalized extreme value (GEV) and generalized Pareto distribution (GPD) models with a constant shape parameter. A validation of the methodology is carried out on time series of significant wave height, residual water level, and river discharge, which show varying degrees of long-term and seasonal variability. The results from the proposed approach are comparable with the results from (a) a stationary EVA on quasi-stationary slices of non-stationary series and (b) the established method for non-stationary EVA. However, the proposed technique comes with advantages in both cases. For example, in contrast to (a), the proposed technique uses the whole time horizon of the series for the estimation of the extremes, allowing for a more accurate estimation of large return levels. Furthermore, with respect to (b), it decouples the detection of non-stationary patterns from the fitting of the extreme value distribution. 
As a result, the steps of the analysis are simplified and intermediate diagnostics are possible. In particular, the transformation can be carried out by means of simple statistical techniques such as low-pass filters based on the running mean and the standard deviation, and the fitting procedure is a stationary one with a few degrees of freedom and is easy to implement and control. An open-source MATLAB toolbox has been developed to cover this methodology, which is available at https://github.com/menta78/tsEva/ (Mentaschi et al., 2016).
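The transformed-stationary workflow described above (normalize by a running mean and standard deviation, fit a stationary GEV, then reverse-transform) can be sketched on synthetic data. The window length, series construction, and all parameter values are illustrative assumptions, not outputs of the tsEva toolbox.

```python
import numpy as np
from scipy.stats import genextreme

def running_stats(x, window):
    # simple moving-window (low-pass) mean and standard deviation
    kernel = np.ones(window) / window
    mean = np.convolve(x, kernel, mode="same")
    var = np.convolve((x - mean) ** 2, kernel, mode="same")
    return mean, np.sqrt(var)

rng = np.random.default_rng(3)
n = 365 * 30
trend = np.linspace(0.0, 1.0, n)                       # long-term change
season = 0.5 * np.sin(2 * np.pi * np.arange(n) / 365)  # seasonal cycle
x = trend + season + rng.gumbel(0.0, 0.3, size=n)      # non-stationary series

mu_t, sigma_t = running_stats(x, window=365 * 5)
y = (x - mu_t) / sigma_t  # transformed, approximately stationary series

# stationary GEV fit on the annual maxima of the transformed series
ann_max = y.reshape(30, 365).max(axis=1)
c, loc, scale = genextreme.fit(ann_max)

# reverse transformation gives a non-stationary return level:
# x_rl(t) = mu_t + sigma_t * y_rl
y_rl = genextreme.ppf(0.99, c, loc, scale)  # 100-year level in y-space
x_rl = mu_t + sigma_t * y_rl
```

As in the paper, the shape parameter is constant while the location and scale inherit the time dependence from the transformation; edge effects of the simple moving window are ignored in this sketch.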
A Metastatistical Approach to Satellite Estimates of Extreme Rainfall Events
NASA Astrophysics Data System (ADS)
Zorzetto, E.; Marani, M.
2017-12-01
The estimation of the average recurrence interval of intense rainfall events is a central issue for both hydrologic modeling and engineering design. These estimates require inference of the properties of the right tail of the statistical distribution of precipitation, a task often performed using the Generalized Extreme Value (GEV) distribution, estimated either from a sample of annual maxima (AM) or with a peaks-over-threshold (POT) approach. However, these approaches require long and homogeneous rainfall records, which often are not available, especially in the case of remotely sensed rainfall datasets. Here we use an alternative approach, tailored to remotely sensed rainfall estimates, based on the metastatistical extreme value distribution (MEVD), which produces estimates of rainfall extreme values from the probability distribution function (pdf) of all measured 'ordinary' rainfall events. This methodology also accounts for the interannual variations observed in the pdf of daily rainfall by integrating over the sample space of its random parameters. We illustrate the application of this framework to the TRMM Multi-satellite Precipitation Analysis rainfall dataset, where the MEVD optimally exploits the relatively short records of satellite-sensed rainfall while taking full advantage of their high spatial resolution and quasi-global coverage. The accuracy of TRMM precipitation estimates and scale issues are investigated for a case study located in the Little Washita watershed, Oklahoma, using a dense network of rain gauges for independent ground validation. The methodology contributes to our understanding of the risk of extreme rainfall events, as it allows i) an optimal use of the TRMM datasets in estimating the tail of the probability distribution of daily rainfall, and ii) a global mapping of daily rainfall extremes and distributional tail properties, bridging the existing gaps in rain gauge networks.
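The MEVD evaluates the distribution of the annual maximum as an average, over years, of each year's ordinary-event distribution raised to that year's number of events. The sketch below assumes Weibull ordinary-event distributions with year-to-year parameter variability (a common choice for daily rainfall); in practice these parameters would be fitted to each year's wet-day record, and all values here are synthetic.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.optimize import brentq

rng = np.random.default_rng(4)
# per-year ordinary-event (wet-day) Weibull parameters and event counts,
# varying between years to represent interannual variability
years = []
for _ in range(20):
    c = rng.uniform(0.6, 0.9)        # yearly Weibull shape
    scale = rng.uniform(8.0, 14.0)   # yearly Weibull scale (mm)
    n_wet = int(rng.integers(80, 140))  # number of ordinary events
    years.append((c, scale, n_wet))

def mev_cdf(x):
    # MEVD: average over years of F_j(x)^(n_j),
    # with F_j the year-j ordinary-event CDF
    return np.mean([weibull_min.cdf(x, c, scale=s) ** n
                    for c, s, n in years])

# the T-year return level solves mev_cdf(x) = 1 - 1/T
T = 50.0
ret_level = brentq(lambda x: mev_cdf(x) - (1.0 - 1.0 / T), 1.0, 1000.0)
```

Because every ordinary event informs the fit, short records such as the TRMM archive are used far more efficiently than with AM or POT estimation.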
A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation
NASA Astrophysics Data System (ADS)
Byun, K.; Hamlet, A. F.
2017-12-01
There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches to designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (via the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16-degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, or 50 yr). Using these T and P scenarios we simulate daily streamflow with the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, sampling annual extremes from the unique GEV parameters estimated for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show considerable differences in the extreme values for a given percentile between the conventional and non-stationary MC approaches. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of the infrastructure, a sensitivity notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
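The Monte Carlo super-ensemble step above (one annual extreme drawn per year of the design lifespan from that year's GEV, repeated 10,000 times) can be sketched as follows. The year-by-year drift in the GEV parameters is an assumed stand-in for the downscaled-scenario estimates, and all values are illustrative.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(5)
life = 30        # design lifespan (years)
n_real = 10_000  # Monte Carlo realizations

# year-specific GEV parameters (illustrative upward drift over the lifespan)
loc = 100.0 + 0.8 * np.arange(life)
scale = 20.0 + 0.1 * np.arange(life)
shape_c = -0.1  # scipy convention: c = -xi (here a heavy upper tail)

# each realization draws one annual extreme per year from that year's GEV
annual = genextreme.rvs(shape_c, loc=loc, scale=scale,
                        size=(n_real, life), random_state=rng)
design_max = annual.max(axis=1)
q99_nonstat = np.quantile(design_max, 0.99)  # design-life 99th percentile

# stationary benchmark: every year reuses the first year's parameters
stat = genextreme.rvs(shape_c, loc=loc[0], scale=scale[0],
                      size=(n_real, life), random_state=rng)
q99_stat = np.quantile(stat.max(axis=1), 0.99)
```

The gap between the two quantiles illustrates the design-lifespan sensitivity that the study reports is absent from retrospective, stationary analysis.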
NASA Astrophysics Data System (ADS)
Pineda, Luis E.; Willems, Patrick
2017-04-01
Weather and climatic characterization of rainfall extremes is of both scientific and societal value for hydrometeorological risk management, yet discrimination of local and large-scale forcing remains challenging in data-scarce and complex terrain environments. Here, we present an analysis framework that separates weather (seasonal) regimes from climate (inter-annual) influences using data-driven process identification. The approach is based on signal-to-noise separation methods and extreme value (EV) modeling of multisite rainfall extremes. The EV models use semi-automatic parameter learning [1] for model identification across temporal scales. At the weather scale, the EV models are combined with a state-based hidden Markov model [2] to represent the spatio-temporal structure of rainfall as persistent weather states. At the climatic scale, the EV models are used to decode the drivers leading to the shift of weather patterns. The decoding is performed in a climate-to-weather signal subspace, built via dimension reduction of climate model proxies (e.g. sea surface temperature and atmospheric circulation). We apply the framework to the Western Andean Ridge (WAR) in Ecuador and Peru (0-6°S) using ground data from the second half of the 20th century. We find that the meridional component of winds is what matters for the in-year and inter-annual variability of high rainfall intensities along the northern WAR (0-2.5°S). There, low-level southerly winds act as advection drivers of oceanic moisture during the normal rainy season and during weak/moderate El Niño (EN) events; the strong EN type and its unique moisture surplus, in contrast, is advected locally over the lowlands of the central WAR. Moreover, the coastal ridges south of 3°S dampen meridional airflows, leaving local hygrothermal gradients to control the in-year distribution of rainfall extremes and their anomalies.
Overall, we show that the framework, which does not make any prior assumption on the explanatory power of the weather and climate drivers, allows identification of well-known features of the regional climate in a purely data-driven fashion. Thus, this approach shows potential for the characterization of precipitation extremes in data-scarce and orographically complex regions in which model reconstructions are the only climate proxies. References: [1] Mínguez, R., F.J. Méndez, C. Izaguirre, M. Menéndez, and I.J. Losada (2010), Pseudo-optimal parameter selection of non-stationary generalized extreme value models for environmental variables, Environ. Modell. Softw. 25, 1592-1607. [2] Pineda, L., P. Willems (2016), Multisite downscaling of seasonal predictions to daily rainfall characteristics over Pacific-Andean river basins in Ecuador and Peru using a non-homogeneous hidden Markov model, J. Hydrometeor., 17(2), 481-498, doi:10.1175/JHM-D-15-0040.1, http://journals.ametsoc.org/doi/full/10.1175/JHM-D-15-0040.1
SiC JFET Transistor Circuit Model for Extreme Temperature Range
NASA Technical Reports Server (NTRS)
Neudeck, Philip G.
2008-01-01
A technique for simulating extreme-temperature operation of integrated circuits that incorporate silicon carbide (SiC) junction field-effect transistors (JFETs) has been developed. The technique involves modification of NGSPICE, which is an open-source version of the popular Simulation Program with Integrated Circuit Emphasis (SPICE) general-purpose analog-integrated-circuit-simulating software. NGSPICE in its unmodified form is used for simulating and designing circuits made from silicon-based transistors that operate at or near room temperature. Two rapid modifications of NGSPICE source code enable SiC JFETs to be simulated to 500 C using the well-known Level 1 model for silicon metal oxide semiconductor field-effect transistors (MOSFETs). First, the default value of the MOSFET surface potential must be changed. In the unmodified source code, this parameter has a value of 0.6, which corresponds to slightly more than half the bandgap of silicon. In NGSPICE modified to simulate SiC JFETs, this parameter is changed to a value of 1.6, corresponding to slightly more than half the bandgap of SiC. The second modification consists of changing the temperature dependence of MOSFET transconductance and saturation parameters. The unmodified NGSPICE source code implements a T(sup -1.5) temperature dependence for these parameters. In order to mimic the temperature behavior of experimental SiC JFETs, a T(sup -1.3) temperature dependence must be implemented in the NGSPICE source code. Following these two simple modifications, the Level 1 MOSFET model of the NGSPICE circuit simulation program reasonably approximates the measured high-temperature behavior of experimental SiC JFETs properly operated with zero or reverse bias applied to the gate terminal. Modification of additional silicon parameters in the NGSPICE source code was not necessary to model experimental SiC JFET current-voltage performance across the entire temperature range from 25 to 500 C.
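The second modification described above amounts to changing a power-law exponent in the temperature scaling of the transconductance and saturation parameters. A minimal sketch of that scaling law follows; the room-temperature KP value is illustrative, not measured device data.

```python
# SPICE scales the MOSFET transconductance parameter KP with absolute
# temperature as KP(T) = KP(Tnom) * (T/Tnom)**m, with m = -1.5 for silicon.
# The modification for SiC JFETs replaces the exponent with m = -1.3.

def kp_at_temperature(kp_tnom, t_celsius, m=-1.3, tnom_celsius=27.0):
    """Scale the transconductance parameter to temperature t_celsius."""
    t_k = t_celsius + 273.15
    tnom_k = tnom_celsius + 273.15
    return kp_tnom * (t_k / tnom_k) ** m

kp_27 = 1.0e-4                                         # A/V^2, illustrative
kp_500_sic = kp_at_temperature(kp_27, 500.0, m=-1.3)   # SiC-modified exponent
kp_500_si = kp_at_temperature(kp_27, 500.0, m=-1.5)    # unmodified silicon law
print(kp_500_sic, kp_500_si)
```

The weaker T(sup -1.3) dependence decays less with temperature, so the SiC-modified parameter stays larger at 500 C than the unmodified silicon law would predict.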
Methods for Combining Payload Parameter Variations with Input Environment
NASA Technical Reports Server (NTRS)
Merchant, D. H.; Straayer, J. W.
1975-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the methods are also presented.
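The limit-load concept can be illustrated with a small simulation: treat the largest of many load peaks in a mission as the random limit load, and take a quantile of its distribution as the design limit load. The load model and numbers below are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random limit load: the largest of n_events load peaks in a mission.
# Per-peak loads are modeled here as standard normal (illustrative only);
# extreme-value theory says the mission maximum is approximately Gumbel.
n_events = 1000        # load peaks per mission (hypothetical)
n_missions = 20_000    # simulated missions
loads = rng.normal(loc=0.0, scale=1.0, size=(n_missions, n_events))
mission_max = loads.max(axis=1)          # realizations of the random limit load

# Design limit load: a particular value of the random limit load,
# here its 99th percentile.
design_limit_load = np.quantile(mission_max, 0.99)
print(design_limit_load)
```

In practice the per-peak load distribution would come from the time-domain or frequency-domain dynamic load simulations the abstract describes, not from a normal assumption.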
Soft silicone rubber in phononic structures: Correct elastic moduli
NASA Astrophysics Data System (ADS)
Still, Tim; Oudich, M.; Auerhammer, G. K.; Vlassopoulos, D.; Djafari-Rouhani, B.; Fytas, G.; Sheng, P.
2013-09-01
We report on a combination of experiments to determine the elastic moduli of a soft poly(dimethylsiloxane) rubber that was utilized in a seminal experiment on resonant phononic modes [Liu et al., Science 289, 1734 (2000)] and whose reported moduli became widely used as a model system in theoretical calculations of phononic materials. We find that the most peculiar hallmark of these values, an extremely low longitudinal sound velocity, is not supported by our experiments. Nevertheless, performing theoretical band structure calculations, we can reproduce the surprising experimental findings of Liu et al. even when utilizing the correct mechanical parameters. Thus, the physical conclusions derived in the theoretical works do not require the use of an extremely low longitudinal velocity, but can be reproduced assuming only a low value of the shear modulus, in agreement with our experiments.
Kalenik, Tatiana K; Costa, Rui; Motkina, Elena V; Kosenko, Tamara A; Skripko, Olga V; Kadnikova, Irina A
2017-01-01
There is a need to develop new foods for participants of expeditions in extreme conditions, who must be self-sufficient. These foods should be light to carry, with a long shelf life, tasty and with high nutrient density. Currently, protein sources are limited mainly to dried and canned meat. In this work, a protein-rich dried concentrate suitable for extreme expeditions was developed using soya, tomato, milk whey and meat by-products. Protein concentrates were developed using minced beef liver and heart, dehydrated and mixed with a soya protein-lycopene coagulate (SPLC) obtained from a solution prepared with germinated soybeans and mixed with tomato paste in milk whey, and finally dried. The technological parameters of pressing SPLC and of drying the protein concentrate were optimized using response surface methodology. The optimized technological parameters to prepare the protein concentrates were obtained, with 70:30 being the ideal ratio of minced meat to SPLC. The developed protein concentrates are characterized by a high calorific value of 376 kcal/100 g of dry product, with a water content of 98 g·kg-1, and 641-644 g·kg-1 of proteins. The essential amino acid indices are 100, with the minimum essential amino acid content constituting 100-128% of the FAO standard, depending on the raw meat used. These concentrates are also rich in micronutrients such as β-carotene and vitamin C. Analysis of the nutrient content showed that these non-perishable concentrates present a high nutritional value and complement other widely available vegetable concentrates to prepare a two-course meal. The soups and porridges prepared with these concentrates can be classified as functional foods, and comply with army requirements applicable to food products for extreme conditions.
NASA Astrophysics Data System (ADS)
Raschke, Mathias
2016-02-01
In this short note, I comment on the research of Pisarenko et al. (Pure Appl. Geophys 171:1599-1624, 2014) regarding extreme value theory and statistics in the case of earthquake magnitudes. The link between the generalized extreme value distribution (GEVD) as an asymptotic model for the block maxima of a random variable and the generalized Pareto distribution (GPD) as a model for the peaks over threshold (POT) of the same random variable is presented more clearly. Inappropriately, Pisarenko et al. (2014) have neglected to note that the approximations by the GEVD and GPD work only asymptotically in most cases. This is particularly true for the truncated exponential distribution (TED), a popular distribution model for earthquake magnitudes. I explain why the classical models and methods of extreme value theory and statistics do not work well for truncated exponential distributions, and consequently why these classical methods should not be applied directly to the estimation of the upper bound magnitude and corresponding parameters. Furthermore, I comment on various issues of statistical inference in Pisarenko et al. and propose alternatives. I argue why the GPD and GEVD would work for various types of stochastic earthquake processes in time, and not only for the homogeneous (stationary) Poisson process assumed by Pisarenko et al. (2014). The crucial point with earthquake magnitudes is the poor convergence of their tail distribution to the GPD, and not the earthquake process over time.
Çuhadar, Serap; Köseoğlu, Mehmet; Çinpolat, Yasemin; Buğdaycı, Güler; Usta, Murat; Semerci, Tuna
2016-01-01
Extremely high glucose concentrations have been shown to interfere with creatinine assays, especially the Jaffe method in peritoneal dialysate. Because diabetes is the fastest growing chronic disease in the world, laboratories routinely handle samples with widely varying glucose concentrations. We investigated whether different levels of glucose spiked into serum interfere with 21 routine chemistry and thyroid assays at glucose concentrations between 17-51 mmol/L. A baseline (group I) serum pool with a glucose concentration of 5.55 (5.44-5.61) mmol/L was prepared from patient sera. By spiking with 20% dextrose solution, sample groups were obtained with glucose concentrations of 17.09, 34.52, and 50.95 mmol/L (groups II, III, IV, respectively). A total of 21 biochemistry analytes and thyroid tests were studied on Abbott c8000 and i2000sr analyzers with commercial reagents. Bias from the baseline value was checked statistically and clinically. Creatinine increased significantly by 8.74%, 31.66%, and 55.31% in groups II, III, IV, respectively, with P values of < 0.001. At the highest glucose concentration of 50.95 mmol/L, calcium, albumin, chloride and FT4 showed clinically significant bias (-0.85%, 1.63%, 0.65%, 7.4% with P values 0.138, 0.214, 0.004, < 0.001, respectively). The remaining assays were free of interference. Among the numerous biochemical parameters studied, only a few are affected by dramatically increased glucose concentrations. Creatinine measurements obtained in human sera with the Jaffe alkaline method at high glucose concentrations should be interpreted with caution. The other tests affected by extremely high glucose concentrations were calcium, albumin, chloride and FT4; these results should be taken into consideration in patients with poor diabetic control.
2018-01-01
Natural hazards (events that may cause actual disasters) are established in the literature as major causes of various massive and destructive problems worldwide. The occurrences of earthquakes, floods and heat waves affect millions of people through several impacts, including hospitalisation, loss of lives and economic challenges. The focus of this study was on reducing the risk of disasters that occur because of extremely high temperatures and heat waves. Modelling average maximum daily temperature (AMDT) guards against disaster risk and may also help countries prepare for extreme heat. This study discusses the use of the r largest order statistics approach of extreme value theory towards modelling AMDT over a period of 11 years, that is, 2000–2010. A generalised extreme value distribution for r largest order statistics is fitted to the annual maxima in an effort to study the behaviour of the r largest order statistics. The method of maximum likelihood is used to estimate the target parameters, and the frequency of occurrence of the hottest days is assessed. The study presents a case study of South Africa in which data for the non-winter season (September–April of each year) are used. The meteorological data used are the AMDT collected by the South African Weather Service and provided by Eskom. The estimate of the shape parameter reveals evidence of a Weibull class as an appropriate distribution for modelling AMDT in South Africa. The extreme quantiles for specified return periods are estimated using the quantile function, and the best model is chosen through the use of the deviance statistic with the support of graphical diagnostic tools. The Entropy Difference Test (EDT) is used as a specification test for diagnosing the fit of the models to the data.
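For r = 1 the r largest order statistics model reduces to an ordinary GEV fit to annual maxima, which can be sketched with scipy. The temperature series below is synthetic, not the Eskom/SAWS data; note that scipy's shape parameter c is the negative of the usual GEV shape, so c > 0 corresponds to the upper-bounded (Weibull) class reported here.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)

# Synthetic "AMDT" annual maxima in deg C (illustrative only): a Weibull-class
# GEV (scipy c > 0) with a finite upper bound, loosely mimicking temperature.
annual_maxima = genextreme.rvs(c=0.3, loc=34.0, scale=1.5,
                               size=60, random_state=rng)

# Maximum likelihood fit of the GEV to the annual maxima
c_hat, loc_hat, scale_hat = genextreme.fit(annual_maxima)

# Extreme quantile (return level) for a T-year return period via the
# fitted quantile function
T = 50
return_level_50 = genextreme.ppf(1 - 1 / T, c_hat, loc_hat, scale_hat)
print(c_hat, return_level_50)
```

A positive fitted `c_hat` would reproduce the paper's Weibull-class finding; a full r > 1 analysis needs the joint likelihood of the r largest values, which scipy does not provide directly.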
Reliability analysis of a sensitive and independent stabilometry parameter set
Nagymáté, Gergely; Orlovits, Zsanett; Kiss, Rita M
2018-01-01
Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature or not in every stance type used in stabilometry assessments, for example, single leg stances. The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized by using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54–0.79), largest SEM% = 19.2%). Usually, frequency type parameters and extreme value parameters yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals. PMID:29664938
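The reliability indices used above follow standard definitions: ICC(2,1) from a two-way ANOVA decomposition, SEM = SD·sqrt(1 − ICC), and MDC = 1.96·sqrt(2)·SEM. A minimal sketch with synthetic repeated-trial data follows; the CoP path-length values are invented for illustration, not taken from the study.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    data: (n_subjects, k_trials) array of repeated measurements.
    """
    n, k = data.shape
    grand = data.mean()
    subj_means = data.mean(axis=1)
    trial_means = data.mean(axis=0)

    # Two-way ANOVA sums of squares
    ss_subj = k * ((subj_means - grand) ** 2).sum()
    ss_trial = n * ((trial_means - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_subj - ss_trial

    ms_subj = ss_subj / (n - 1)
    ms_trial = ss_trial / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_subj - ms_err) / (
        ms_subj + (k - 1) * ms_err + k * (ms_trial - ms_err) / n)

rng = np.random.default_rng(3)
# Synthetic CoP path lengths (mm) for 30 subjects x 2 repeated trials:
# a stable subject effect plus small trial-to-trial noise -> high ICC.
subject_effect = rng.normal(600.0, 100.0, size=(30, 1))
trials = subject_effect + rng.normal(0.0, 20.0, size=(30, 2))

icc = icc_2_1(trials)
sem = trials.std(ddof=1) * np.sqrt(1.0 - icc)      # standard error of measurement
mdc = 1.96 * np.sqrt(2.0) * sem                    # minimal detectable change
print(icc, sem, mdc)
```

Here SEM is computed from the pooled sample SD; study-specific conventions (e.g., baseline-only SD) vary.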
NASA Astrophysics Data System (ADS)
Li, Yue; Bai, Xiao Yong; Jie Wang, Shi; Qin, Luo Yi; Chao Tian, Yi; Jie Luo, Guang
2017-05-01
Soil loss tolerance (T value) is one of the criteria for determining the necessity of erosion control measures and an ecological restoration strategy. However, the validity of this criterion in subtropical karst regions is strongly disputed. In this study, the T value is calculated based on the soil formation rate by using a digital distribution map of carbonate rock assemblage types. Results indicated spatial heterogeneity and diversity in soil loss tolerance. Instead of only one criterion, a minimum of three criteria should be considered when investigating the carbonate areas of southern China, because the "one region, one T value" concept may not be applicable to this region. The T value is proportionate to the amount of argillaceous material, which determines the surface soil thickness of the formations in homogenous carbonate rock areas. Homogenous carbonate rock areas, carbonate rock intercalated with clastic rock areas, and carbonate/clastic rock alternation areas have T values of 20, 50 and 100 t/(km2 a), respectively, and are extremely, severely and moderately sensitive to soil erosion. Karst rocky desertification (KRD) is defined as extreme soil erosion and reflects the risks of erosion. Thus, the relationship between the T value and erosion risk is determined using KRD as a parameter. The existence of KRD land is unrelated to the T value, although this parameter indicates erosion sensitivity. Erosion risk is strongly dependent on the relationship between real soil loss (RL) and the T value rather than on either erosion intensity or the T value itself. If RL >> T, then the erosion risk is high despite a low RL. Conversely, if T >> RL, then the soil is safe even though RL is high. Overall, these findings may clarify the heterogeneity of the T value and its effect on erosion risk in a karst environment.
Analyzing phenological extreme events over the past five decades in Germany
NASA Astrophysics Data System (ADS)
Schleip, Christoph; Menzel, Annette; Estrella, Nicole; Graeser, Philipp
2010-05-01
As climate change may alter the frequency and intensity of extreme temperatures, we analysed whether the warming of the last five decades has already changed the statistics of phenological extreme events. In this context, two extreme value statistical concepts are discussed and applied to existing phenological datasets of the German Weather Service (DWD) in order to derive probabilities of occurrence for extremely early or late phenological events. We analyse four phenological groups, "beginning of flowering", "leaf foliation", "fruit ripening" and "leaf colouring", as well as the DWD indicator phases of the "phenological year". Additionally, we put an emphasis on a between-species analysis: a comparison of differences in extreme onsets between three common northern conifers. Furthermore, we conducted a within-species analysis with different phases of horse chestnut throughout a year. The first statistical approach fits the data to a Gaussian model using traditional statistical techniques, and then analyses the extreme quantile. The key point of this approach is fitting an appropriate probability density function (PDF) to the observed data and assessing the change of the PDF parameters in time. The full analytical description in terms of the estimated PDF for defined time steps of the observation period allows probability assessments of extreme values for, e.g., annual or decadal time steps. Related to this approach is the possibility of counting the onsets which fall within our defined extreme percentiles. The estimation of the probability of extreme events on the basis of the whole data set is in contrast to analyses with the generalized extreme value (GEV) distribution. The second approach deals with the PDFs of the extremes themselves and fits the GEV distribution to annual minima of phenological series to provide useful estimates of return levels.
For flowering and leaf unfolding phases, exceptionally early extremes are seen since the mid 1980s, especially in the single years 1961, 1990 and 2007, whereas exceptionally late extreme events are seen in the year 1970. Summer phases such as fruit ripening exhibit stronger shifts towards early extremes than spring phases. Leaf colouring phases reveal an increasing probability of late extremes. The GEV-estimated 100-year events of Picea, Pinus and Larix amount to extremely early events of about -27, -31.48 and -32.79 days, respectively. If we assume non-stationary minimum data, we get a more extreme 100-year event of about -35.40 days for Picea, but associated with wider confidence intervals. The GEV is simply another probability distribution, but for purposes of extreme analysis in phenology it should be considered as equally important as (if not more important than) the Gaussian PDF approach.
Shope, J.B.; Storlazzi, Curt; Erikson, Li H.; Hegermiller, C.A.
2015-01-01
Wave heights, periods, and directions were forecast for 2081–2100 using output from four coupled atmosphere–ocean global climate models for representative concentration pathway scenarios RCP4.5 and RCP8.5. Global climate model wind fields were used to drive the global WAVEWATCH-III wave model to generate hourly time-series of bulk wave parameters for 25 islands in the mid to western tropical Pacific. December–February 95th percentile extreme significant wave heights under both climate scenarios decreased by 2100 compared to 1976–2010 historical values. Trends under both scenarios were similar, with the higher-emission RCP8.5 scenario displaying a greater decrease in extreme significant wave heights than where emissions are reduced in the RCP4.5 scenario. Central equatorial Pacific Islands displayed the greatest departure from historical values; significant wave heights decreased there by as much as 0.32 m during December–February and associated wave directions rotated approximately 30° clockwise during June–August compared to hindcast data.
Use of NARCCAP results for extremes: British Columbia case studies
NASA Astrophysics Data System (ADS)
Murdock, T. Q.; Eckstrand, H.; Buerger, G.; Hiebert, J.
2011-12-01
Demand for projections of extremes has arisen out of local infrastructure vulnerability assessments and adaptation planning. Four preliminary analyses of extremes have been undertaken in British Columbia in the past two years in collaboration with users: the BC Ministry of Transportation and Infrastructure, Engineers Canada, the City of Castlegar, and the Columbia Basin Trust. Projects have included analysis of extremes for stormwater management, highways, and community adaptation in different areas of the province. This need for projections of extremes has been met using an ensemble of Regional Climate Model (RCM) results from NARCCAP, in some cases supplemented by and compared to statistical downscaling. Before assessing indices of extremes, each RCM simulation in the NARCCAP ensemble driven by reanalysis (NCEP) was compared to historical observations to assess RCM skill. Next, the anomalies according to each RCM future projection were compared to those of their driving GCM to determine the "value added" by the RCMs. Selected results will be shown for several indices of extremes, including the Climdex set of indices that has been widely used elsewhere (e.g., Stardex) and specific parameters of interest defined by users. Finally, the need for threshold scaling of some indices and the use of as large an ensemble as possible will be illustrated.
Synoptic and meteorological drivers of extreme ozone concentrations over Europe
NASA Astrophysics Data System (ADS)
Otero, Noelia Felipe; Sillmann, Jana; Schnell, Jordan L.; Rust, Henning W.; Butler, Tim
2016-04-01
The present work assesses the relationship between local and synoptic meteorological conditions and surface ozone concentration over Europe in spring and summer months, during the period 1998-2012, using a new interpolated data set of observed surface ozone concentrations over the European domain. Along with local meteorological conditions, the influence of large-scale atmospheric circulation on surface ozone is addressed through a set of airflow indices computed with a novel implementation of a grid-by-grid weather type classification across Europe. Drivers of surface ozone over the full distribution of maximum daily 8-hour average values are investigated, along with drivers of the extreme high percentiles and exceedances of air quality guideline thresholds. Three different regression techniques are applied: multiple linear regression to assess the drivers of maximum daily ozone, logistic regression to assess the probability of threshold exceedances, and quantile regression to estimate the meteorological influence on extreme values, as represented by the 95th percentile. The relative importance of the input parameters (predictors) is assessed by a backward stepwise regression procedure that allows the identification of the most important predictors in each model. Spatial patterns of model performance exhibit distinct variations between regions. The inclusion of ozone persistence is particularly relevant over Southern Europe. In general, the best model performance is found over Central Europe, where maximum temperature plays an important role as a driver of maximum daily ozone as well as its extreme values, especially during the warmer months.
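The quantile-regression step can be sketched by minimizing the pinball (check) loss at the 95th percentile. The temperature-ozone data below are synthetic and purely illustrative of a single-predictor version of the analysis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Synthetic data: daily max "ozone" rising with "temperature"
# (arbitrary units, illustrative only).
n = 500
temp = rng.uniform(10, 35, n)
ozone = 20 + 1.5 * temp + rng.normal(0, 8, n)

def pinball_loss(beta, tau=0.95):
    """Check loss for the tau-th conditional quantile of ozone given temp."""
    resid = ozone - (beta[0] + beta[1] * temp)
    return np.mean(np.where(resid >= 0, tau * resid, (tau - 1) * resid))

res = minimize(pinball_loss, x0=np.array([20.0, 1.5]), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 10_000})
intercept_95, slope_95 = res.x
print(intercept_95, slope_95)
```

With homoskedastic noise the 95th-percentile slope matches the mean slope; in the ozone application the interest is precisely in cases where the extreme-value slope differs from the mean response.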
Statistical assessment of changes in extreme maximum temperatures over Saudi Arabia, 1985-2014
NASA Astrophysics Data System (ADS)
Raggad, Bechir
2018-05-01
In this study, two statistical approaches were adopted in the analysis of observed maximum temperature data collected from fifteen stations over Saudi Arabia during the period 1985-2014. In the first step, the behavior of extreme temperatures was analyzed and their changes were quantified with respect to the Expert Team on Climate Change Detection and Monitoring indices. The results showed a general warming trend over most stations in maximum temperature-related indices during the period of analysis. In the second step, stationary and non-stationary extreme-value analyses were conducted for the temperature data. The results revealed that the non-stationary model with an increasing linear trend in its location parameter outperforms the other models for two-thirds of the stations. Additionally, the 10-, 50-, and 100-year return levels were found to change considerably with time, so that for most stations a given maximum temperature can recur with a different T-year return period than under stationarity. This analysis shows the importance of taking into account the change over time in the estimation of return levels and therefore justifies the use of the non-stationary generalized extreme value distribution model to describe most of the data. Furthermore, these findings are in line with the significant warming trends found in the climate indices analyses.
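A non-stationary fit with a linear trend in the location parameter, mu(t) = mu0 + mu1*t, can be sketched by direct likelihood maximization. For brevity the sketch uses the Gumbel limit (shape = 0) of the GEV; the synthetic annual-maximum series is illustrative, not the Saudi station data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Synthetic annual maximum temperatures (deg C) with a warming trend
t = np.arange(30)                             # years 1985-2014 -> 0..29
x = 42.0 + 0.05 * t + rng.gumbel(0.0, 1.2, size=30)   # Gumbel = GEV, shape 0

def neg_log_lik(params):
    mu0, mu1, log_sigma = params
    sigma = np.exp(log_sigma)                 # keep the scale positive
    z = (x - (mu0 + mu1 * t)) / sigma
    # Gumbel negative log-likelihood; a full GEV fit would add a shape
    # parameter and its support constraint.
    return np.sum(np.log(sigma) + z + np.exp(-z))

res = minimize(neg_log_lik, x0=np.array([40.0, 0.0, 0.0]),
               method="Nelder-Mead")
mu0_hat, mu1_hat, sigma_hat = res.x[0], res.x[1], np.exp(res.x[2])

# Time-varying 50-year return level: mu(t) - sigma * ln(-ln(1 - 1/50))
gumbel_factor = -np.log(-np.log(1 - 1 / 50))
rl_start = mu0_hat + sigma_hat * gumbel_factor            # first year
rl_end = mu0_hat + mu1_hat * 29 + sigma_hat * gumbel_factor  # last year
print(mu1_hat, rl_start, rl_end)
```

The return level inherits the location trend directly, which is why return levels "change with time" under this model.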
Charged boson stars and black holes with nonminimal coupling to gravity
NASA Astrophysics Data System (ADS)
Verbin, Y.; Brihaye, Y.
2018-02-01
We find new spherically symmetric charged boson star solutions of a complex scalar field coupled nonminimally to gravity by a "John-type" term of Horndeski theory, that is, a coupling between the kinetic scalar term and the Einstein tensor. We study the parameter space of the solutions and find two distinct families according to their position in parameter space. More widespread is the family of solutions (which we call branch 1) existing for a finite interval of the central value of the scalar field, starting from zero and ending at some finite maximal value. This branch contains as a special case the charged boson stars of the minimally coupled theory. In some regions of parameter space we find a new second branch ("branch 2") of solutions which are more massive and more stable than those of branch 1. This second branch also exists in a finite interval of the central value of the scalar field, but its end points (either both or, in some cases, only one) are extremal Reissner-Nordström black hole solutions.
Plasma Charge Current for Controlling and Monitoring Electron Beam Welding with Beam Oscillation
Trushnikov, Dmitriy; Belenkiy, Vladimir; Shchavlev, Valeriy; Piskunov, Anatoliy; Abdullin, Aleksandr; Mladenov, Georgy
2012-01-01
Electron beam welding (EBW) shows certain problems with the control of the focus regime. The electron beam focus can be controlled in electron-beam welding based on the parameters of a secondary signal. In this case, parameters like secondary emissions and focus coil current have an extreme-value relationship: there are two values of the focus coil current which produce equal values of the signal parameters. Therefore, adaptive systems of electron beam focus control use low-frequency scanning of the focus, which substantially limits the operation speed of these systems and has a negative effect on weld joint quality. The purpose of this study is to develop a method for operational control of the electron beam focus during welding in the deep penetration mode. The method uses the plasma charge current signal as an additional informational parameter. This parameter allows identification of the electron beam focus regime in electron-beam welding without the application of additional low-frequency scanning of the focus, and can be used to develop operational methods for controlling the electron beam focus precisely during welding. In addition, use of this parameter allows one to observe the shape of the keyhole during the welding process. PMID:23242276
Influence of thermal anisotropy on best-fit estimates of shock normals
NASA Technical Reports Server (NTRS)
Lepping, R. P.
1971-01-01
The influence of thermal anisotropy on the estimates of interplanetary shock parameters and the associated normals is discussed. A practical theorem is presented for quantitatively correcting for anisotropic effects by weighting the before and after magnetic fields by the same anisotropy parameter h. The quantity h depends only on the thermal anisotropies before and after the shock and on the angles between the magnetic fields and the shock normal. The theorem can be applied to most slow shocks, but in those cases h usually should be lower, and sometimes markedly lower, than unity. For the extreme values of h, little change results in the shock parameters or in the shock normal.
Incorporating Nonstationarity into IDF Curves across CONUS from Station Records and Implications
NASA Astrophysics Data System (ADS)
Wang, K.; Lettenmaier, D. P.
2017-12-01
Intensity-duration-frequency (IDF) curves are widely used for engineering design of storm-affected structures. Current practice is to base IDF curves on observed precipitation extremes fit to a stationary probability distribution (e.g., the extreme value family). However, there is increasing evidence of nonstationarity in station records. We apply the Mann-Kendall trend test to over 1000 stations across the CONUS at a 0.05 significance level, and find that about 30% of the stations tested have significant nonstationarity for at least one duration (1-, 2-, 3-, 6-, 12-, 24-, and 48-hour). We fit the station data to a GEV distribution with time-varying location and scale parameters using a Bayesian methodology and compare the fit of stationary versus nonstationary GEV distributions to observed precipitation extremes. Within our fitted nonstationary GEV distributions, we compare distributions with a time-varying location parameter versus distributions with both time-varying location and scale parameters. For distributions with two time-varying parameters, we pay particular attention to instances where the location and scale trends have opposing directions. Finally, we use the mathematical framework based on the work of Koutsoyiannis to generate IDF curves from the fitted GEV distributions and discuss the implications that using time-varying parameters may have on simple scaling relationships. We apply the above methods to evaluate how frequency statistics based on a stationary assumption compare to those that incorporate nonstationarity for both short- and long-term projects. Overall, we find that neglecting nonstationarity can lead to under- or over-estimates (depending on the trend for the given duration and region) of important statistics such as the design storm.
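The station-screening step above rests on the Mann-Kendall trend test. A minimal sketch in Python (not the authors' code; it omits the tie and autocorrelation corrections a production analysis would apply, and the series below is synthetic):

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x, alpha=0.05):
    """Two-sided Mann-Kendall trend test (no tie/autocorrelation correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S statistic: sum of signs over all pairwise differences
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    # Variance of S under the null hypothesis (no ties assumed)
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    # Continuity-corrected standard normal score
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return z, p, p < alpha

# A clearly increasing annual-maximum series should test significant
rng = np.random.default_rng(0)
series = np.arange(50) * 0.5 + rng.normal(0, 2, 50)
z, p, significant = mann_kendall(series)
```

A station would be flagged as nonstationary when the test is significant for at least one of the durations considered.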
NASA Astrophysics Data System (ADS)
Fienen, M.; Hunt, R.; Krabbenhoft, D.; Clemo, T.
2009-08-01
Flow path delineation is a valuable tool for interpreting the subsurface hydrogeochemical environment. Different types of data, such as groundwater flow and transport, inform different aspects of hydrogeologic parameter values (hydraulic conductivity in this case) which, in turn, determine flow paths. This work combines flow and transport information to estimate a unified set of hydrogeologic parameters using the Bayesian geostatistical inverse approach. Parameter flexibility is allowed by using a highly parameterized approach with the level of complexity informed by the data. Despite the effort to adhere to the ideal of minimal a priori structure imposed on the problem, extreme contrasts in parameters can result in the need to censor correlation across hydrostratigraphic bounding surfaces. These partitions segregate parameters into facies associations. With an iterative approach in which partitions are based on inspection of initial estimates, flow path interpretation is progressively refined through the inclusion of more types of data. Head observations, stable oxygen isotopes (18O/16O ratios), and tritium are all used to progressively refine flow path delineation on an isthmus between two lakes in the Trout Lake watershed, northern Wisconsin, United States. Despite allowing significant parameter freedom by estimating many distributed parameter values, a smooth field is obtained.
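The Bayesian geostatistical inverse approach described above can be illustrated, in drastically simplified form, with a linear 1-D sketch: a smooth parameter field is estimated from a few noisy indirect measurements using a prior spatial covariance. All operators and numbers here are illustrative, not the authors' actual highly parameterized setup:

```python
import numpy as np

n, m = 50, 12
x = np.linspace(0.0, 1.0, n)
rng = np.random.default_rng(4)

# "True" smooth field and a simple local-averaging forward operator G
s_true = np.sin(2 * np.pi * x)
G = np.zeros((m, n))
for i, c in enumerate(np.linspace(0.05, 0.95, m)):
    w = np.exp(-((x - c) ** 2) / 0.005)
    G[i] = w / w.sum()                      # each datum is a local weighted average
d = G @ s_true + rng.normal(0, 0.02, m)     # noisy observations

# Exponential prior covariance (spatial correlation) and noise covariance
Q = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)
R = 0.02 ** 2 * np.eye(m)

# Posterior mean of the field: s_hat = Q G^T (G Q G^T + R)^{-1} d
s_hat = Q @ G.T @ np.linalg.solve(G @ Q @ G.T + R, d)
rmse = np.sqrt(np.mean((s_hat - s_true) ** 2))
```

Despite the many unknowns (50 nodes from 12 data), the prior covariance yields a smooth estimate, mirroring the abstract's observation that a highly parameterized estimate can still be a smooth field.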
Heavy Tail Behavior of Rainfall Extremes across Germany
NASA Astrophysics Data System (ADS)
Castellarin, A.; Kreibich, H.; Vorogushyn, S.; Merz, B.
2017-12-01
Distributions are termed heavy-tailed if extreme values are more likely than would be predicted by probability distributions with exponential asymptotic behavior. Heavy-tail behavior often leads to surprise, because historical observations can be a poor guide to the future. Heavy-tail behavior appears to be widespread for hydro-meteorological extremes, such as extreme rainfall and flood events, yet to date there are only vague hints about the conditions under which these extremes show heavy-tail behavior. We use an observational data set consisting of 11 climate variables at 1,440 stations across Germany. This homogenized, gap-free data set covers 110 years (1901-2010) at daily resolution. We estimate the upper tail behavior, including its uncertainty interval, of daily precipitation extremes for the 1,440 stations at the annual and seasonal time scales. Different tail indicators are tested, including the shape parameter of the Generalized Extreme Value distribution, the upper tail ratio, and the obesity index. In a further step, we explore to what extent the tail behavior can be explained by geographical and climate factors. A large number of characteristics are derived, such as station elevation, degree of continentality, aridity, measures quantifying the variability of humidity and wind velocity, and the event-triggering large-scale atmospheric situation. The link between the upper tail behavior and these characteristics is investigated via data mining methods capable of detecting non-linear relationships in large data sets. This exceptionally rich observational data set, in terms of number of stations, length of time series, and number of explanatory variables, allows insights into upper tail behavior that are rarely possible with the observational data sets typically available.
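Two of the tail indicators named above can be sketched on synthetic data (this is an illustration of the indicators, not the study's estimator or its uncertainty intervals; the obesity index is estimated here by Monte Carlo resampling):

```python
import numpy as np
from scipy.stats import genextreme

def obesity_index(x, n_draws=20000, seed=1):
    """Monte Carlo estimate of the obesity index:
    P(x1 + x4 > x2 + x3) for sorted draws x1 <= x2 <= x3 <= x4."""
    rng = np.random.default_rng(seed)
    quads = np.sort(rng.choice(x, size=(n_draws, 4), replace=True), axis=1)
    return float(np.mean(quads[:, 0] + quads[:, 3] > quads[:, 1] + quads[:, 2]))

rng = np.random.default_rng(42)
light = rng.exponential(10.0, 2000)        # exponential: exponential-type tail
heavy = rng.pareto(2.0, 2000) * 10.0       # Pareto-type: heavy tail

# The obesity index grows with tail heaviness (~0.75 for an exponential)
ob_light, ob_heavy = obesity_index(light), obesity_index(heavy)

# GEV shape fitted to block maxima; note scipy's shape `c` equals minus
# the conventional xi, so xi > 0 flags a heavy (Frechet-type) upper tail
maxima = heavy.reshape(40, 50).max(axis=1)
c, loc, scale = genextreme.fit(maxima)
xi = -c
```

In the study these indicators are computed per station and season and then related to geographical and climatic covariates.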
Inflight thermodynamic properties
NASA Technical Reports Server (NTRS)
Brown, S. C.; Daniels, G. E.; Johnson, D. L.; Smith, O. E.
1973-01-01
The inflight thermodynamic parameters (temperature, pressure, and density) of the atmosphere are presented. The mean and extreme values of these parameters can be applied to many aerospace problems, such as: (1) research, planning, and engineering design of remote earth sensing systems; (2) vehicle design and development; and (3) vehicle trajectory analysis, dealing with vehicle thrust, dynamic pressure, aerodynamic drag, aerodynamic heating, vibration, structural and guidance limitations, and reentry analysis. Atmospheric density plays a very important role in most of the above problems. A subsection on reentry is presented, giving atmospheric models to be used for reentry heating, trajectory, and related analyses.
Local instability driving extreme events in a pair of coupled chaotic electronic circuits
NASA Astrophysics Data System (ADS)
de Oliveira, Gilson F.; Di Lorenzo, Orlando; de Silans, Thierry Passerat; Chevrollier, Martine; Oriá, Marcos; Cavalcante, Hugo L. D. de Souza
2016-06-01
For a long time, extreme events happening in complex systems, such as financial markets, earthquakes, and neurological networks, were thought to follow power-law size distributions. More recently, evidence suggests that in many systems the largest and rarest events differ from the other ones. They are dragon kings, outliers that make the distribution deviate from a power law in the tail. Understanding the processes of formation of extreme events and what circumstances lead to dragon kings or to a power-law distribution is an open question and it is a very important one to assess whether extreme events will occur too often in a specific system. In the particular system studied in this paper, we show that the rate of occurrence of dragon kings is controlled by the value of a parameter. The system under study here is composed of two nearly identical chaotic oscillators which fail to remain in a permanently synchronized state when coupled. We analyze the statistics of the desynchronization events in this specific example of two coupled chaotic electronic circuits and find that modifying a parameter associated to the local instability responsible for the loss of synchronization reduces the occurrence of dragon kings, while preserving the power-law distribution of small- to intermediate-size events with the same scaling exponent. Our results support the hypothesis that the dragon kings are caused by local instabilities in the phase space.
A Firefly-Inspired Method for Protein Structure Prediction in Lattice Models
Maher, Brian; Albrecht, Andreas A.; Loomes, Martin; Yang, Xin-She; Steinhöfel, Kathleen
2014-01-01
We introduce a Firefly-inspired algorithmic approach for protein structure prediction over two different lattice models in three-dimensional space. In particular, we consider three-dimensional cubic and three-dimensional face-centred-cubic (FCC) lattices. The underlying energy models are the Hydrophobic-Polar (H-P) model, the Miyazawa–Jernigan (M-J) model and a related matrix model. The implementation of our approach is tested on ten H-P benchmark problems of length 48 and ten M-J benchmark problems of lengths ranging from 48 to 61. The key complexity parameter we investigate is the total number of objective function evaluations required to achieve the optimum energy values for the H-P model, or competitive results in comparison to published values for the M-J model. For H-P instances and cubic lattices, where data for comparison are available, we obtain an average speed-up over eight instances of 2.1, leaving out two extreme values (otherwise, 8.8). For six M-J instances, data for comparison are available for cubic lattices and runs with a population size of 100, where, a priori, the minimum free energy is a termination criterion. The average speed-up over four instances is 1.2 (leaving out two extreme values, otherwise 1.1), which is achieved with a population size of only eight. The present study is a test case with initial results for ad hoc parameter settings, with the aim of justifying future research on larger instances within lattice model settings, eventually leading to the ultimate goal of implementations for off-lattice models. PMID:24970205
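The firefly metaheuristic underlying the study can be sketched in its standard continuous form (Yang's formulation on a toy test function; this is not the authors' lattice-model implementation, and all parameter values are illustrative):

```python
import numpy as np

def firefly_minimize(f, dim, n=15, iters=100, beta0=1.0, gamma=0.01,
                     alpha=0.2, bounds=(-5.0, 5.0), seed=0):
    """Minimal continuous firefly algorithm: dimmer fireflies move toward
    brighter (lower-cost) ones, with distance-decaying attractiveness."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))
    cost = np.array([f(xi) for xi in x])
    for t in range(iters):
        step = alpha * (1 - t / iters)              # shrink the random walk
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:               # j is brighter than i
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness
                    x[i] += beta * (x[j] - x[i]) + step * rng.uniform(-0.5, 0.5, dim)
                    x[i] = np.clip(x[i], lo, hi)
                    # each move costs one objective evaluation: this count is
                    # the complexity parameter tracked in the study
                    cost[i] = f(x[i])
    best = int(np.argmin(cost))
    return x[best], cost[best]

# Usage: minimize the sphere function; the optimum is 0 at the origin
xbest, fbest = firefly_minimize(lambda v: float(np.sum(v ** 2)), dim=3)
```

For the lattice protein problems, the continuous move step would be replaced by discrete conformation changes scored by the H-P or M-J energy.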
NASA Astrophysics Data System (ADS)
Yokozawa, M.; Kawai, Y.; Toda, M.
2016-12-01
The increase in extreme climate episodes associated with ongoing climate change may induce extensive damage to terrestrial ecosystems, changing plant functional traits that regulate the ecosystem carbon budget. Over the last two decades, advances in tower-based eddy covariance observation have enhanced our ability to understand spatial and temporal features of ecosystem carbon exchange worldwide. In contrast, several issues remain unresolved regarding plant functional responses to extreme climate episodes and the resulting effects on the terrestrial carbon balance. In this work, we examined the effects of an extreme climatic event (a typhoon) on plant functional traits of a cool-temperate forest in Japan using a model-data fusion technique. We used a semi-process-based model that describes the time changes in net ecosystem exchange (NEE) of CO2 between atmosphere and ecosystem based on the distributions of foliage and individual size in a plant population, assuming the diameter profile and the pipe model theory (Shinozaki et al., 1964). The canopy photosynthesis model (Yokozawa et al., 1996) provides the vertical distribution of gross photosynthetic rates within the stand, allowing us to examine differences in photosynthetic rate associated with plant functional traits changed by climate disturbance. The DREAM(ZS) algorithm (ter Braak & Vrugt, 2008) was used to estimate the model parameters. To reduce the effects of heteroscedastic error, a generalized likelihood function was adopted (Schoups & Vrugt, 2010). The estimated annual parameter representing the initial slope of the light-photosynthetic rate curve changed significantly after the typhoon disturbance in 2004. Time changes in the profile of the maximum photosynthetic rate also show a strong response to the disturbance: after the disturbance, values in the upper foliage layers were higher than in the lower layers, in contrast to the pattern before the disturbance. Specifically, just after the disturbance (2004b-5a), the value at the uppermost foliage layer was estimated to be the highest, implying that the plant population recovered from the damage by changing the distribution of leaves having different functional traits, i.e., resilient behavior.
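The model-data fusion step can be illustrated with a much simpler stand-in: a random-walk Metropolis sampler (in place of DREAM(ZS) and the generalized likelihood) calibrating a rectangular-hyperbola light-response curve on synthetic data. The model form, parameter values, and data below are all assumptions for illustration only:

```python
import numpy as np

def log_posterior(theta, light, nee_obs, sigma=0.5):
    """Gaussian log-likelihood for NEE = -a*I*Pmax/(a*I + Pmax);
    flat prior restricted to positive parameters."""
    a, pmax = theta
    if a <= 0 or pmax <= 0:
        return -np.inf
    model = -a * light * pmax / (a * light + pmax)
    return -0.5 * np.sum((nee_obs - model) ** 2) / sigma ** 2

def metropolis(logp, x0, n=5000, step=(0.002, 0.4), seed=0):
    """Basic random-walk Metropolis sampler (a simple stand-in for DREAM(ZS))."""
    rng = np.random.default_rng(seed)
    x, lp = np.array(x0, float), logp(x0)
    step = np.asarray(step)
    chain = []
    for _ in range(n):
        prop = x + rng.normal(0, step)
        lp_prop = logp(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

# Synthetic "observations" with known a = 0.05, Pmax = 20 (illustrative units)
rng = np.random.default_rng(1)
light = rng.uniform(0, 1500, 200)
nee = -0.05 * light * 20 / (0.05 * light + 20) + rng.normal(0, 0.5, 200)

chain = metropolis(lambda th: log_posterior(th, light, nee), [0.08, 15.0])
a_hat, pmax_hat = chain[2500:].mean(axis=0)    # discard burn-in
```

Estimating the initial-slope parameter `a` separately for each year, as in the study, would reveal shifts such as the post-typhoon change reported above.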
High northern latitude temperature extremes, 1400-1999
NASA Astrophysics Data System (ADS)
Tingley, M. P.; Huybers, P.; Hughen, K. A.
2009-12-01
There is often an interest in determining which interval features the most extreme value of a reconstructed climate field, such as the warmest year or decade in a temperature reconstruction. Previous approaches to this type of question have not fully accounted for the spatial and temporal covariance in the climate field when assessing the significance of extreme values. Here we present results from applying BARSAT, a new, Bayesian approach to reconstructing climate fields, to a 600 year multiproxy temperature data set that covers land areas between 45N and 85N. The end result of the analysis is an ensemble of spatially and temporally complete realizations of the temperature field, each of which is consistent with the observations and the estimated values of the parameters that define the assumed spatial and temporal covariance functions. In terms of the spatial average temperature, 1990-1999 was the warmest decade in the 1400-1999 interval in each of 2000 ensemble members, while 1995 was the warmest year in 98% of the ensemble members. A similar analysis at each node of a regular 5 degree grid gives insight into the spatial distribution of warm temperatures, and reveals that 1995 was anomalously warm in Eurasia, whereas 1998 featured extreme warmth in North America. In 70% of the ensemble members, 1601 featured the coldest spatial average, indicating that the eruption of Huaynaputina in Peru in 1600 (with a volcanic explosivity index of 6) had a major cooling impact on the high northern latitudes. Repeating this analysis at each node reveals the varying impacts of major volcanic eruptions on the distribution of extreme cooling. Finally, we use the ensemble to investigate extremes in the time evolution of centennial temperature trends, and find that in more than half the ensemble members, the greatest rate of change in the spatial mean time series was a cooling centered at 1600. 
The largest rate of centennial scale warming, however, occurred in the 20th Century in more than 98% of the ensemble members.
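The ensemble logic above reduces to a simple computation: the probability that a given year is the warmest is the fraction of ensemble members in which that year attains the maximum. A sketch on synthetic ensemble members (illustrative trend and noise, not the BARSAT reconstruction):

```python
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1400, 2000)
n_members = 2000
# Toy reconstructions: slow warming plus member-specific noise
base = 0.002 * (years - 1400)
ensemble = base + rng.normal(0, 0.3, (n_members, years.size))

# Warmest year in each ensemble member
warmest_year = years[np.argmax(ensemble, axis=1)]
frac_1995 = np.mean(warmest_year == 1995)
# Fraction of members whose warmest year falls in 1990-1999
frac_1990s = np.mean((warmest_year >= 1990) & (warmest_year <= 1999))
```

Repeating the same argmax/fraction computation at each grid node (or with argmin for cold extremes) yields the spatial maps of extreme warmth and volcanic cooling described in the abstract.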
Entropy-based derivation of generalized distributions for hydrometeorological frequency analysis
NASA Astrophysics Data System (ADS)
Chen, Lu; Singh, Vijay P.
2018-02-01
Frequency analysis of hydrometeorological and hydrological extremes is needed for the design of hydraulic and civil infrastructure facilities as well as water resources management. A multitude of distributions have been employed for frequency analysis of these extremes. However, no single distribution has been accepted as a global standard. Employing the entropy theory, this study derived five generalized distributions for frequency analysis that used different kinds of information encoded as constraints. These distributions were the generalized gamma (GG), the generalized beta distribution of the second kind (GB2), and the Halphen type A (Hal-A), Halphen type B (Hal-B) and Halphen type inverse B (Hal-IB) distributions, among which the GG and GB2 distributions were previously derived by Papalexiou and Koutsoyiannis (2012), while the Halphen family is first derived using entropy theory in this paper. The entropy theory allowed the distribution parameters to be estimated in terms of the constraints used for their derivation. The distributions were tested using extreme daily and hourly rainfall data. Results show that the root mean square error (RMSE) values were very small, indicating that the five generalized distributions fitted the extreme rainfall data well. Among them, according to the Akaike information criterion (AIC) values, the GB2 and Halphen family generally gave a better fit. These generalized distributions are therefore strong candidates for frequency analysis. The entropy-based derivation provides a new avenue for frequency analysis of hydrometeorological extremes.
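The model-comparison step (fit candidate distributions, then rank by AIC) can be sketched with the distributions SciPy provides. SciPy has the generalized gamma but not GB2 or the Halphen family, so gamma and lognormal stand in as additional candidates, and the rainfall sample is synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical extreme daily rainfall sample (mm)
rain = rng.gamma(shape=2.0, scale=15.0, size=500)

candidates = {
    "gengamma": stats.gengamma,   # generalized gamma (GG)
    "gamma": stats.gamma,
    "lognorm": stats.lognorm,
}
aic = {}
for name, dist in candidates.items():
    params = dist.fit(rain)                       # maximum-likelihood fit
    ll = np.sum(dist.logpdf(rain, *params))
    aic[name] = 2 * len(params) - 2 * ll          # Akaike information criterion

best = min(aic, key=aic.get)                      # smaller AIC = better trade-off
```

AIC penalizes the extra shape parameter of the more general family, which is why a nested special case can win when it is adequate.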
Improving the Statistical Modeling of the TRMM Extreme Precipitation Monitoring System
NASA Astrophysics Data System (ADS)
Demirdjian, L.; Zhou, Y.; Huffman, G. J.
2016-12-01
This project improves upon an existing extreme precipitation monitoring system based on the Tropical Rainfall Measuring Mission (TRMM) daily product (3B42) using new statistical models. The proposed system utilizes a regional modeling approach, where data from similar grid locations are pooled to increase the quality and stability of the resulting model parameter estimates to compensate for the short data record. The regional frequency analysis is divided into two stages. In the first stage, the region defined by the TRMM measurements is partitioned into approximately 27,000 non-overlapping clusters using a recursive k-means clustering scheme. In the second stage, a statistical model is used to characterize the extreme precipitation events occurring in each cluster. Instead of utilizing the block-maxima approach used in the existing system, where annual maxima are fit to the Generalized Extreme Value (GEV) probability distribution at each cluster separately, the present work adopts the peak-over-threshold (POT) method of classifying points as extreme if they exceed a pre-specified threshold. Theoretical considerations motivate the use of the Generalized-Pareto (GP) distribution for fitting threshold exceedances. The fitted parameters can be used to construct simple and intuitive average recurrence interval (ARI) maps which reveal how rare a particular precipitation event is given its spatial location. The new methodology eliminates much of the random noise that was produced by the existing models due to a short data record, producing more reasonable ARI maps when compared with NOAA's long-term Climate Prediction Center (CPC) ground based observations. The resulting ARI maps can be useful for disaster preparation, warning, and management, as well as increased public awareness of the severity of precipitation events. Furthermore, the proposed methodology can be applied to various other extreme climate records.
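The peaks-over-threshold stage described above can be sketched as follows (synthetic daily series and an illustrative threshold choice, not the TRMM clusters or the operational system's settings):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)
# Hypothetical daily rainfall for one cluster (mm), ~20 years
daily = rng.exponential(8.0, 20 * 365)

# Peaks-over-threshold: keep exceedances above a high percentile
threshold = np.quantile(daily, 0.99)
excess = daily[daily > threshold] - threshold

# Fit the Generalized Pareto to the excesses (location fixed at 0)
c, loc, scale = genpareto.fit(excess, floc=0)

# Average recurrence interval (years) of a given depth from the GP survival
rate = len(excess) / 20.0                      # mean exceedances per year
def ari_years(depth):
    p = genpareto.sf(depth - threshold, c, loc=0, scale=scale)
    return 1.0 / (rate * p)

ari_100mm = ari_years(100.0)
```

Mapping `ari_years` of an observed event depth at every cluster gives exactly the kind of ARI map the abstract describes.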
An overabundance of low-density Neptune-like planets
NASA Astrophysics Data System (ADS)
Cubillos, Patricio; Erkaev, Nikolai V.; Juvan, Ines; Fossati, Luca; Johnstone, Colin P.; Lammer, Helmut; Lendl, Monika; Odert, Petra; Kislyakova, Kristina G.
2017-04-01
We present a uniform analysis of the atmospheric escape rate of Neptune-like planets with estimated radius and mass (restricted to Mp < 30 M⊕). For each planet, we compute the restricted Jeans escape parameter, Λ, for a hydrogen atom evaluated at the planetary mass, radius, and equilibrium temperature. Values of Λ ≲ 20 suggest extremely high mass-loss rates. We identify 27 planets (out of 167) that are simultaneously consistent with hydrogen-dominated atmospheres and are expected to exhibit extreme mass-loss rates. We further estimate the mass-loss rates (Lhy) of these planets with tailored atmospheric hydrodynamic models. We compare Lhy to the energy-limited (maximum-possible high-energy-driven) mass-loss rates. We confirm that 25 planets (15 per cent of the sample) exhibit extremely high mass-loss rates (Lhy > 0.1 M⊕ Gyr-1), well in excess of the energy-limited mass-loss rates. This constitutes a contradiction, since the hydrogen envelopes could not be retained under such high mass-loss rates. We hypothesize that these planets are not truly undergoing such high mass loss. Instead, either hydrodynamic models overestimate the mass-loss rates, transit-timing-variation measurements underestimate the planetary masses, optical transit observations overestimate the planetary radii (due to high-altitude clouds), or Neptunes have consistently higher albedos than Jupiter-like planets. We conclude that at least one of these established estimation techniques is consistently producing biased values for Neptune-like planets. Such an important fraction of exoplanets with misinterpreted parameters can significantly bias population studies, such as the observed mass-radius distribution of exoplanets.
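The screening criterion is a one-line formula: the restricted Jeans escape parameter Λ = G·Mp·m_H / (k_B·Teq·Rp) for a hydrogen atom at the planetary radius. A sketch with illustrative (not catalogued) planet values:

```python
# Restricted Jeans escape parameter for a hydrogen atom. Values below ~20
# suggest extreme, boil-off-like escape. Planet parameters are illustrative.
G   = 6.674e-11   # m^3 kg^-1 s^-2
k_B = 1.381e-23   # J K^-1
m_H = 1.673e-27   # kg (hydrogen atom)
M_EARTH, R_EARTH = 5.972e24, 6.371e6

def jeans_lambda(mp_earth, rp_earth, teq_k):
    mp, rp = mp_earth * M_EARTH, rp_earth * R_EARTH
    return G * mp * m_H / (k_B * teq_k * rp)

# A hypothetical warm sub-Neptune: 20 Earth masses, 4 Earth radii, 1000 K
lam = jeans_lambda(20.0, 4.0, 1000.0)
extreme_escape = lam < 20.0

# A hypothetical low-density "puffy" Neptune: 5 Earth masses, 6 Earth radii
lam_puffy = jeans_lambda(5.0, 6.0, 1200.0)
```

The second example shows why low-density Neptunes dominate the flagged subsample: at fixed temperature, lowering Mp/Rp drives Λ below the critical value.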
Lower extremity EMG-driven modeling of walking with automated adjustment of musculoskeletal geometry
Meyer, Andrew J.; Patten, Carolynn
2017-01-01
Neuromusculoskeletal disorders affecting walking ability are often difficult to manage, in part due to limited understanding of how a patient’s lower extremity muscle excitations contribute to the patient’s lower extremity joint moments. To assist in the study of these disorders, researchers have developed electromyography (EMG) driven neuromusculoskeletal models utilizing scaled generic musculoskeletal geometry. While these models can predict individual muscle contributions to lower extremity joint moments during walking, the accuracy of the predictions can be hindered by errors in the scaled geometry. This study presents a novel EMG-driven modeling method that automatically adjusts surrogate representations of the patient’s musculoskeletal geometry to improve prediction of lower extremity joint moments during walking. In addition to commonly adjusted neuromusculoskeletal model parameters, the proposed method adjusts model parameters defining muscle-tendon lengths, velocities, and moment arms. We evaluated our EMG-driven modeling method using data collected from a high-functioning hemiparetic subject walking on an instrumented treadmill at speeds ranging from 0.4 to 0.8 m/s. EMG-driven model parameter values were calibrated to match inverse dynamic moments for five degrees of freedom in each leg while keeping musculoskeletal geometry close to that of an initial scaled musculoskeletal model. We found that our EMG-driven modeling method incorporating automated adjustment of musculoskeletal geometry predicted net joint moments during walking more accurately than did the same method without geometric adjustments. Geometric adjustments improved moment prediction errors by 25% on average and up to 52%, with the largest improvements occurring at the hip. Predicted adjustments to musculoskeletal geometry were comparable to errors reported in the literature between scaled generic geometric models and measurements made from imaging data. 
Our results demonstrate that with appropriate experimental data, joint moment predictions for walking generated by an EMG-driven model can be improved significantly when automated adjustment of musculoskeletal geometry is included in the model calibration process. PMID:28700708
Probabilistic description of probable maximum precipitation
NASA Astrophysics Data System (ADS)
Ben Alaya, Mohamed Ali; Zwiers, Francis W.; Zhang, Xuebin
2017-04-01
Probable Maximum Precipitation (PMP) is the key parameter used to estimate the Probable Maximum Flood (PMF). PMP and PMF are important for dam safety and civil engineering purposes. Although current knowledge of storm mechanisms remains insufficient to properly evaluate limiting values of extreme precipitation, PMP estimation methods are still based on deterministic considerations and give only single values. This study aims to provide a probabilistic description of the PMP based on the commonly used method, so-called moisture maximization. To this end, a probabilistic bivariate extreme-value model is proposed to address the limitations of traditional PMP estimates via moisture maximization, namely: (i) the inability to evaluate uncertainty and to provide a range of PMP values; (ii) the interpretation of the maximum of a data series as a physical upper limit; and (iii) the assumption that a PMP event has maximum moisture availability. Results from simulation outputs of the Canadian Regional Climate Model CanRCM4 over North America reveal the high uncertainty inherent in PMP estimates and the non-validity of the assumption that PMP events have maximum moisture availability. This latter assumption leads to overestimation of the PMP by an average of about 15% over North America, which may have serious implications for engineering design.
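Moisture maximization itself is a simple scaling: each large observed storm is multiplied by the ratio of the climatological maximum precipitable water to the storm's own precipitable water, and the largest scaled depth is taken as the PMP. A sketch with illustrative numbers, plus a crude bootstrap to convey the abstract's point that a single value hides large uncertainty (this bootstrap is not the authors' bivariate extreme-value model):

```python
import numpy as np

rng = np.random.default_rng(11)
storm_depth = rng.gamma(3.0, 25.0, 30)          # mm, 30 large observed storms
storm_pw = rng.uniform(20.0, 60.0, 30)          # precipitable water per storm (mm)
pw_max = 65.0                                   # climatological maximum PW (mm)

# Traditional moisture maximization: scale each storm to maximum moisture
maximized = storm_depth * pw_max / storm_pw
pmp_deterministic = maximized.max()             # the single-value estimate

# A probabilistic alternative in spirit: resample the maximized depths and
# report an uncertainty range for the maximum instead of one number
boot = np.array([rng.choice(maximized, 30).max() for _ in range(2000)])
pmp_range = np.percentile(boot, [5, 95])
```

The spread of `pmp_range` relative to `pmp_deterministic` illustrates why a deterministic PMP can be misleading for design.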
Using modified fruit fly optimisation algorithm to perform the function test and case studies
NASA Astrophysics Data System (ADS)
Pan, Wen-Tsao
2013-06-01
Evolutionary computation is a common research method that computationally simulates natural evolutionary processes based on Darwinian theory; it has grown to include concepts of animal foraging and group behaviour. The main contribution of this paper is to strengthen the search capability of the fruit fly optimization algorithm (FOA) in order to avoid convergence to local extremum solutions. This study discussed three common evolutionary computation methods and compared them with the modified fruit fly optimization algorithm (MFOA). It further investigated the algorithms' ability to compute extreme values of three mathematical functions, their execution speed, and the forecast ability of the forecasting model built using the optimised general regression neural network (GRNN) parameters. The findings indicated that there was no obvious difference between particle swarm optimization and the MFOA with regard to the ability to compute extreme values; however, both were better than the artificial fish swarm algorithm and the FOA. In addition, the MFOA performed better than particle swarm optimization with regard to algorithm execution speed, and the forecast ability of the forecasting model built using the MFOA's GRNN parameters was better than that of the other three forecasting models.
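The baseline FOA being modified can be sketched in its original one-variable form (Pan's formulation, in which candidate solutions are smell concentrations S = 1/D with D the fly's distance from the origin); this is the plain FOA, not the modified MFOA, and the objective is a toy function:

```python
import numpy as np

def foa_minimize(f, iters=200, pop=20, step=1.0, seed=0):
    """Minimal fruit fly optimization (FOA) sketch for one variable."""
    rng = np.random.default_rng(seed)
    x_axis, y_axis = rng.uniform(0, 10, 2)       # swarm location
    best_s, best_cost = None, np.inf
    for _ in range(iters):
        # Each fly searches randomly around the swarm location
        xs = x_axis + step * rng.uniform(-1, 1, pop)
        ys = y_axis + step * rng.uniform(-1, 1, pop)
        d = np.sqrt(xs ** 2 + ys ** 2) + 1e-12   # distance to origin
        s = 1.0 / d                              # smell concentration
        cost = np.array([f(si) for si in s])
        i = int(np.argmin(cost))
        if cost[i] < best_cost:                  # swarm flies to the best smell
            best_cost, best_s = cost[i], s[i]
            x_axis, y_axis = xs[i], ys[i]
    return best_s, best_cost

# Usage: find s minimizing (s - 0.2)^2; the optimum lies on the circle D = 5
s_best, c_best = foa_minimize(lambda s: (s - 0.2) ** 2)
```

The greedy swarm relocation here is exactly what makes plain FOA prone to local extrema on harder objectives, which is what the MFOA modification targets.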
Stochastic Generation of Spatiotemporal Rainfall Events for Flood Risk Assessment
NASA Astrophysics Data System (ADS)
Diederen, D.; Liu, Y.; Gouldby, B.; Diermanse, F.
2017-12-01
Current flood risk analyses that consider only the peaks of hydrometeorological forcing variables have limitations regarding their representation of reality. Simplistic assumptions regarding antecedent conditions are required, different sources of flooding are often considered in isolation, and the complex temporal and spatial evolution of events is not considered. For example, mid-latitude storms, governed by large-scale climatic conditions, often exhibit a high degree of temporal dependency. For sustainable flood risk management that accounts appropriately for climate change, it is desirable for flood risk analyses to reflect reality more appropriately. Analysis of risk mitigation measures and comparison of their relative performance is then likely to be more robust and lead to improved solutions. We provide a new framework for the provision of boundary conditions to flood risk analyses that more appropriately reflects reality. The boundary conditions capture the temporal dependencies of complex storms whilst preserving the extreme values and associated spatial dependencies. We demonstrate the application of this framework by generating a synthetic rainfall event time-series boundary-condition set from reanalysis rainfall data (CFSR) at the continental scale. We define spatiotemporal clusters of rainfall as events, extract hydrological parameters for each event, generate synthetic parameter sets from a multivariate distribution with a focus on the joint tail probability [Heffernan and Tawn, 2004], and finally create synthetic events from the generated synthetic parameters. We highlight the stochastic integration of (a) spatiotemporal features, e.g. event occurrence intensity over space-time, or time to previous event, which we use for the spatial placement and sequencing of the synthetic events, and (b) value-specific parameters, e.g. peak intensity and event extent.
We contrast this to more traditional approaches to highlight the significant improvements in terms of representing the reality of extreme flood events.
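The "generate synthetic parameter sets from a multivariate distribution" step can be sketched with a Gaussian copula over two event parameters. Note the hedge: Heffernan and Tawn (2004) use a conditional exceedance model better suited to joint tails; the Gaussian copula and the data below are simplifying illustrations only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n = 500
# Illustrative observed event parameters: peak intensity and spatial extent
peak = rng.gumbel(30.0, 8.0, n)                  # mm/h
extent = 50 + 2.0 * peak + rng.normal(0, 20, n)  # km, correlated with peak

# 1. Transform observed margins to normal scores via ranks
u = np.column_stack([stats.rankdata(peak), stats.rankdata(extent)]) / (n + 1)
z = stats.norm.ppf(u)
corr = np.corrcoef(z.T)                          # dependence on the normal scale

# 2. Sample correlated normals, map back through the empirical margins
z_new = rng.multivariate_normal([0.0, 0.0], corr, 1000)
u_new = stats.norm.cdf(z_new)
synth_peak = np.quantile(peak, u_new[:, 0])
synth_extent = np.quantile(extent, u_new[:, 1])
```

The synthetic pairs preserve both the observed marginal distributions and their dependence, after which the events can be placed and sequenced in space-time as the framework describes.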
NASA Astrophysics Data System (ADS)
Otero, L. J.; Ortiz-Royero, J. C.; Ruiz-Merchan, J. K.; Higgins, A. E.; Henriquez, S. A.
2016-02-01
The aim of this study is to determine the contribution and importance of cold fronts and storms to extreme waves in different areas of the Colombian Caribbean, in an attempt to determine the extent of the threat posed by the flood processes to which these coastal populations are exposed. The study also seeks to establish the design requirements to which coastal engineering structures should be subject. In the design of maritime structures, the most important parameter is wave height. For this reason, it is necessary to establish the design wave height that a coastal engineering structure should withstand. This wave height varies according to the return period considered. The significant wave height values for the areas focused on in the study were calculated in accordance with Gumbel's extreme value methodology. The methodology was evaluated using data from the reanalysis of the spectral National Oceanic and Atmospheric Administration (NOAA) WAVEWATCH III® (WW3) model for 15 points along the 1600 km of the Colombian Caribbean coastline (continental and insular) between the years 1979 and 2009. The results demonstrated that the extreme waves caused by tropical cyclones and those caused by cold fronts have different effects along the Colombian Caribbean coast. Storms and hurricanes are of greater importance in the Guajira Peninsula (Alta Guajira). In the central area (consisting of Baja Guajira and the cities of Santa Marta, Barranquilla, and Cartagena), the strong impact of cold fronts on extreme waves is evident. However, in the southern region of the Colombian Caribbean coast (ranging from the Gulf of Morrosquillo to the Gulf of Urabá), the extreme values of wave heights are lower than in the previously mentioned regions, despite being dominated mainly by the passage of cold fronts. Extreme waves in the San Andrés and Providencia insular region present a different dynamic from that in the continental area due to their geographic location.
The wave heights in the extreme regime are similar in magnitude to those found in Alta Guajira, but the extreme waves associated with the passage of cold fronts in this region have lower return periods than those associated with the hurricane season.
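The Gumbel-based design-height calculation described above can be sketched as follows; the annual-maximum sample and return periods are hypothetical, not values from the WW3 reanalysis.

```python
# Sketch: fit a Gumbel (extreme value type I) distribution to annual
# maxima of significant wave height and derive design return levels.
# Sample values are illustrative, not data from the study.
import numpy as np
from scipy import stats

annual_max_hs = np.array([3.1, 2.8, 3.6, 4.2, 2.9,
                          3.3, 3.8, 4.5, 3.0, 3.4])  # metres (hypothetical)

# Fit location and scale by maximum likelihood.
loc, scale = stats.gumbel_r.fit(annual_max_hs)

def return_level(T):
    """Design wave height exceeded on average once every T years."""
    # Gumbel quantile at non-exceedance probability 1 - 1/T.
    return stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)

for T in (10, 50, 100):
    print(f"{T:>3}-yr design Hs: {return_level(T):.2f} m")
```

The design wave height grows with the chosen return period, which is why the structure's required resistance depends on the return period considered.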
Modeling Earth's Ring Current Using The CIMI Model
NASA Astrophysics Data System (ADS)
Craven, J. D., II; Perez, J. D.; Buzulukova, N.; Fok, M. C. H.
2015-12-01
Earth's ring current results from the injection of charged particles into the magnetosphere during solar storms, where they become trapped. The enhancement of the ring current particles produces magnetic depressions and disturbances to the Earth's magnetic field known as geomagnetic storms, which have been modeled using the Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model. The purpose of this model is to identify and understand the physical processes that control the dynamics of geomagnetic storms. The basic procedure was to use the CIMI model to simulate 15 storms since 2009. Some of the storms were run multiple times, but with varying parameters relating to the dynamics of the Earth's magnetic field, particle fluxes, and boundary conditions of the inner magnetosphere. Results and images were placed in the TWINS online catalog page for further analysis and discussion. Particular areas of interest were extreme storm events. A majority of the simulated storms had average Dst values of -100 nT; the extreme storms exceeded Dst values of -200 nT. The continued use of the CIMI model will increase knowledge of the interactions and processes of the inner magnetosphere and lead to a better understanding of extreme solar storm events for the future advancement of space weather physics.
Time-varying Concurrent Risk of Extreme Droughts and Heatwaves in California
NASA Astrophysics Data System (ADS)
Sarhadi, A.; Diffenbaugh, N. S.; Ausin, M. C.
2016-12-01
Anthropogenic global warming has changed the nature and the risk of extreme climate phenomena such as droughts and heatwaves. The concurrence of these climatic extremes may intensify undesirable consequences for human health and have destructive effects on water resources. The present study assesses the risk of concurrent extreme droughts and heatwaves under the dynamic nonstationary conditions arising from climate change in California. To do so, a generalized, fully Bayesian, time-varying multivariate risk framework is proposed that evolves through time in a dynamic human-induced environment. In this methodology, an extreme Bayesian dynamic (Gumbel) copula is developed to model the time-varying dependence structure between the two climate extremes. The time-varying extreme marginals are first modeled using a Generalized Extreme Value (GEV) distribution. Bayesian Markov Chain Monte Carlo (MCMC) inference is used to estimate the parameters of the nonstationary marginals and the copula via Gibbs sampling. The modeled marginals and copula are then used to develop a fully Bayesian, time-varying joint return period concept for the estimation of concurrent risk. We argue that climate change has increased the chance of concurrent droughts and heatwaves over recent decades in California. We also demonstrate that a time-varying multivariate perspective should be incorporated to assess the realistic concurrent risk of these extremes for water resources planning and management in a changing climate in this area. The proposed methodology is general and can be applied to other compound climate extremes influenced by climate change.
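A minimal, static sketch of the joint-return-period idea (without the time-varying Bayesian MCMC machinery): a Gumbel copula couples two marginal quantiles, and stronger dependence shortens the concurrent return period. All parameter values are illustrative.

```python
# Sketch: Gumbel copula linking two marginal quantiles (e.g. drought
# severity and heatwave intensity) and the "AND" joint return period.
# theta is illustrative, not the study's posterior estimate.
import math

def gumbel_copula(u, v, theta):
    """Gumbel copula CDF C(u, v); theta >= 1 controls tail dependence."""
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-s ** (1.0 / theta))

def joint_and_return_period(u, v, theta):
    """Return period of both margins exceeding their quantiles u and v."""
    p_and = 1.0 - u - v + gumbel_copula(u, v, theta)
    return 1.0 / p_and

u = v = 0.99  # a 100-yr event at each margin
print(joint_and_return_period(u, v, theta=1.0))  # independence: ~10000 yr
print(joint_and_return_period(u, v, theta=2.0))  # dependence shortens it
```

Under independence (theta = 1) the concurrent 100-yr/100-yr event has a roughly 10,000-yr return period; with tail dependence it becomes far more frequent, which is why ignoring dependence understates compound risk.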
Association of physical examination with pulmonary artery catheter parameters in acute lung injury.
Grissom, Colin K; Morris, Alan H; Lanken, Paul N; Ancukiewicz, Marek; Orme, James F; Schoenfeld, David A; Thompson, B Taylor
2009-10-01
To correlate physical examination findings, central venous pressure, fluid output, and central venous oxygen saturation with pulmonary artery catheter parameters. Retrospective study. Data from the multicenter Fluid and Catheter Treatment Trial of the National Institutes of Health Acute Respiratory Distress Syndrome Network. Five hundred thirteen patients with acute lung injury randomized to treatment with a pulmonary artery catheter. Correlation of physical examination findings (capillary refill time >2 secs, knee mottling, or cool extremities), central venous pressure, fluid output, and central venous oxygen saturation with parameters from a pulmonary artery catheter. We determined association of baseline physical examination findings and on-study parameters of central venous pressure and central venous oxygen saturation with cardiac index <2.5 L/min/m2 and mixed venous oxygen saturation <60%. We determined correlation of baseline central venous oxygen saturation and mixed venous oxygen saturation and predictive value of a low central venous oxygen saturation for a low mixed venous oxygen saturation. Prevalence of cardiac index <2.5 and mixed venous oxygen saturation <60% was 8.1% and 15.5%, respectively. Baseline presence of all three physical examination findings had low sensitivity (12% and 8%), high specificity (98% and 99%), low positive predictive value (40% and 56%), but high negative predictive value (93% and 86%) for cardiac index <2.5 and mixed venous oxygen saturation <60%, respectively. Central venous oxygen saturation <70% predicted a mixed venous oxygen saturation <60% with a sensitivity of 84%, a specificity of 70%, a positive predictive value of 31%, and a negative predictive value of 96%. Low cardiac index correlated with cool extremities, high central venous pressure, and low 24-hr fluid output; and low mixed venous oxygen saturation correlated with knee mottling and high central venous pressure, but these correlations were not found to be clinically useful.
In this subset of patients with acute lung injury, there is a high prior probability that cardiac index and mixed venous oxygen saturation are normal and physical examination findings of ineffective circulation are not useful for predicting low cardiac index or mixed venous oxygen saturation. Central venous oxygen saturation <70% does not accurately predict mixed venous oxygen saturation <60%, but a central venous oxygen saturation ≥70% may be useful to exclude mixed venous oxygen saturation <60%.
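The predictive-value arithmetic reported above follows from a standard 2x2 contingency table; a minimal sketch, with hypothetical counts chosen only to reproduce the low-prevalence pattern (modest sensitivity and positive predictive value, but high negative predictive value):

```python
# Sketch of sensitivity, specificity, PPV and NPV from a 2x2 table.
# Counts are hypothetical, not the trial's data.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # P(test+ | condition present)
        "specificity": tn / (tn + fp),  # P(test- | condition absent)
        "ppv": tp / (tp + fp),          # P(condition | test+)
        "npv": tn / (tn + fn),          # P(no condition | test-)
    }

# A rare condition (prevalence 40/500 = 8%) yields a high NPV even when
# sensitivity is modest -- the pattern reported for the exam findings.
m = diagnostic_metrics(tp=10, fp=15, fn=30, tn=445)
print(m)
```

Because predictive values depend on prevalence while sensitivity and specificity do not, a test can rule a rare condition out (high NPV) far better than it can rule it in.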
Why anthropic reasoning cannot predict Lambda.
Starkman, Glenn D; Trotta, Roberto
2006-11-17
We revisit anthropic arguments purporting to explain the measured value of the cosmological constant. We argue that different ways of assigning probabilities to candidate universes lead to totally different anthropic predictions. As an explicit example, we show that weighting different universes by the total number of possible observations leads to an extremely small probability for observing a value of Lambda equal to or greater than what we now measure. We conclude that anthropic reasoning within the framework of probability as frequency is ill-defined and that in the absence of a fundamental motivation for selecting one weighting scheme over another the anthropic principle cannot be used to explain the value of Lambda, nor, likely, any other physical parameters.
UCODE, a computer code for universal inverse modeling
Poeter, E.P.; Hill, M.C.
1999-01-01
This article presents the US Geological Survey computer program UCODE, which was developed in collaboration with the US Army Corps of Engineers Waterways Experiment Station and the International Ground Water Modeling Center of the Colorado School of Mines. UCODE performs inverse modeling, posed as a parameter-estimation problem, using nonlinear regression. Any application model or set of models can be used; the only requirement is that they have numerical (ASCII or text only) input and output files and that the numbers in these files have sufficient significant digits. Application models can include preprocessors and postprocessors as well as models related to the processes of interest (physical, chemical and so on), making UCODE extremely powerful for model calibration. Estimated parameters can be defined flexibly with user-specified functions. Observations to be matched in the regression can be any quantity for which a simulated equivalent value can be produced, thus simulated equivalent values are calculated using values that appear in the application model output files and can be manipulated with additive and multiplicative functions, if necessary. Prior, or direct, information on estimated parameters also can be included in the regression. The nonlinear regression problem is solved by minimizing a weighted least-squares objective function with respect to the parameter values using a modified Gauss-Newton method. Sensitivities needed for the method are calculated approximately by forward or central differences and problems and solutions related to this approximation are discussed. Statistics are calculated and printed for use in (1) diagnosing inadequate data or identifying parameters that probably cannot be estimated with the available data, (2) evaluating estimated parameter values, (3) evaluating the model representation of the actual processes and (4) quantifying the uncertainty of model simulated values. 
UCODE is intended for use on any computer operating system: it consists of algorithms programmed in Perl, a freeware language designed for text manipulation, and in Fortran90, which efficiently performs numerical calculations.
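A minimal sketch of the modified Gauss-Newton scheme UCODE describes, namely weighted least squares with sensitivities approximated by forward differences, applied here to a toy exponential model rather than an application model with text-file I/O:

```python
# Sketch: weighted Gauss-Newton with forward-difference sensitivities.
# The model, starting values and data are illustrative, not a UCODE run.
import numpy as np

def gauss_newton(model, p0, t, obs, w, steps=25, h=1e-6):
    p = np.asarray(p0, dtype=float)
    W = np.diag(w)
    for _ in range(steps):
        r = obs - model(p, t)              # residuals
        J = np.empty((t.size, p.size))     # sensitivity (Jacobian) matrix
        for j in range(p.size):
            dp = p.copy()
            dp[j] += h
            J[:, j] = (model(dp, t) - model(p, t)) / h  # forward difference
        # Normal equations of the weighted least-squares step.
        step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        p = p + step
    return p

model = lambda p, t: p[0] * np.exp(-p[1] * t)   # toy "application model"
t = np.linspace(0.0, 5.0, 30)
obs = model(np.array([2.0, 0.7]), t)            # noise-free observations
est = gauss_newton(model, [1.8, 0.65], t, obs, w=np.ones_like(t))
print(est)
```

The finite-difference approximation is what lets the scheme treat the model as a black box, which is the source of both UCODE's generality and the approximation issues the article discusses.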
Skomudek, Aleksandra; Gilowska, Iwona; Jasiński, Ryszard; Rożek-Piechura, Krystyna
2017-01-01
The elderly are particularly vulnerable to degenerative diseases, such as those of the circulatory, respiratory, and vascular systems. The objective of this study was therefore to evaluate the distribution of temperature and the dynamics of venous blood flow in the lower limbs (LLs), and to assess the interdependence of these parameters in terms of somatic components, in males and females participating in activities at the University of the Third Age. The study included 60 females (mean age 67.4 years) and 40 males (mean age 67.5 years). A body composition assessment was performed using the bioimpedance technique (Tanita BC-418MA). The following parameters were examined: fat%, fat mass, fat-free mass, and total body water. The minimal, maximal, and mean temperature values and their distributions were examined using a VarioCAM Head infrared thermographic camera. Measurements of the venous refilling time and the work of the LL venous pump were made using a Rheo Dopplex II PPG. In males, the mean temperature of the right LL was 30.58 and that of the left LL was 30.28 (P-value 0.805769). In females, the mean temperature of the right LL was 29.58 and that of the left LL was 29.52 (P-value 0.864773). In males, right limb blood flow was 34.17 and left limb blood flow was 34.67 (P-value 0.359137). In females, right limb blood flow was 26.89 and left limb blood flow was 26.09 (P-value 0.796348). The results showed that the temperature distribution and the dynamics of blood flow do not differ significantly between the right and left extremities in either males or females. However, significant temperature differences were found between the gender groups: significantly higher temperature values in both the right and left extremities were recorded in males than in females.
Modified Denavit-Hartenberg parameters for better location of joint axis systems in robot arms
NASA Technical Reports Server (NTRS)
Barker, L. K.
1986-01-01
The Denavit-Hartenberg parameters define the relative location of successive joint axis systems in a robot arm. A recent justifiable criticism is that one of these parameters becomes extremely large when two successive joints have near-parallel rotational axes. Geometrically, this parameter then locates a joint axis system at an excessive distance from the robot arm and, computationally, leads to an ill-conditioned transformation matrix. In this paper, a simple modification (which results from constraining a transverse vector between successive joint rotational axes to be normal to one of the rotational axes, instead of both) overcomes this criticism and favorably locates the joint axis system. An example is given for near-parallel rotational axes of the elbow and shoulder joints in a robot arm. The regular and modified parameters are extracted by an algebraic method with simulated measurement data. Unlike the modified parameters, extracted values of the regular parameters are very sensitive to measurement accuracy.
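The standard Denavit-Hartenberg transform the paper modifies can be written down directly; a sketch with illustrative parameter values (not the paper's elbow/shoulder example):

```python
# Sketch: the standard Denavit-Hartenberg homogeneous transform between
# successive joint frames. For near-parallel rotational axes (alpha -> 0),
# one of these parameters becomes extremely large and ill-conditioned to
# estimate, which is the criticism the modified parameters address.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform from frame i-1 to frame i (standard DH)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

T = dh_transform(theta=np.pi / 4, d=0.1, a=0.3, alpha=np.pi / 2)
print(np.round(T, 3))
```

The rotation block is a proper rotation and the last row is always (0, 0, 0, 1); the sensitivity problem lies not in evaluating this matrix but in extracting its parameters from measurements when successive axes are nearly parallel.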
NASA Astrophysics Data System (ADS)
Li, Zhanling; Li, Zhanjie; Li, Chengcheng
2014-05-01
Probability modeling of hydrological extremes is one of the major research areas in hydrological science. Most studies of this kind in China concern basins in the humid and semi-humid south and east, where probability modeling of high flow extremes is well established. For the inland river basins, which occupy about 35% of the country's area, such studies remain scarce, partly owing to limited data availability and relatively low mean annual flows. The objective of this study is to carry out probability modeling of high flow extremes in the upper reach of the Heihe River basin, the second largest inland river basin in China, using the peaks-over-threshold (POT) method and the Generalized Pareto Distribution (GPD); the selection of the threshold and the inherent assumptions of the POT series are elaborated in detail. For comparison, other widely used probability distributions, including the generalized extreme value (GEV), Lognormal, Log-logistic, and Gamma, are employed as well. Maximum likelihood estimation is used for parameter estimation. Daily flow data at Yingluoxia station from 1978 to 2008 are used. Results show that, synthesizing the approaches of the mean excess plot, the stability of model parameters, the return level plot, and the inherent independence assumption of the POT series, an optimum threshold of 340 m³/s is determined for high flow extremes in the Yingluoxia watershed. The resulting POT series is shown to be stationary and independent based on the Mann-Kendall test, the Pettitt test, and an autocorrelation test. In terms of the Kolmogorov-Smirnov test, the Anderson-Darling test, and several graphical diagnostics such as quantile and cumulative density function plots, the GPD provides the best fit to high flow extremes in the study area. The estimated high flows for long return periods demonstrate that, as the return period increases, the return level estimates become more uncertain.
The frequency of high flow extremes exhibits a very slight but not significant decreasing trend from 1978 to 2008, while the intensity of such flow extremes is comparatively increasing especially for the higher return levels.
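The POT/GPD workflow described above can be sketched as follows, using synthetic daily flows rather than the Yingluoxia record; the threshold matches the study's 340 m³/s, but everything else is illustrative.

```python
# Sketch: peaks-over-threshold fit of a Generalized Pareto Distribution
# and the resulting T-year return level. Flows are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
daily_flow = rng.gamma(shape=2.0, scale=60.0, size=31 * 365)  # m^3/s, 31 yr

u = 340.0                                # threshold (the study's optimum)
excess = daily_flow[daily_flow > u] - u
# Fit GPD to the excesses with location fixed at zero.
xi, _, sigma = stats.genpareto.fit(excess, floc=0.0)

lam = excess.size / 31.0                 # mean number of exceedances per year

def return_level(T):
    """Flow exceeded on average once every T years (POT/GPD formula)."""
    return u + (sigma / xi) * ((lam * T) ** xi - 1.0)

print(return_level(50))
```

In practice, independence of the exceedances (declustering) and the stability of xi and sigma across candidate thresholds would be checked before accepting u, which is what the mean excess and parameter-stability plots in the study are for.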
A Survey of Uncontrolled Satellite reentry and Impact Prediction
1993-09-23
NORAD produces "element sets" which are mean values of the orbital elements that have been obtained by removing the periodic orbital variations in a... Final Element Set -- a listing of the final orbit parameters. The eccentricity and mean motion data from the listing were used in the investigation... yielded altitude and orbital elements as a function of time. Computer run results for these simulations were extremely long and therefore the decision was
Emergence and space-time structure of lump solution to the (2+1)-dimensional generalized KP equation
NASA Astrophysics Data System (ADS)
Tan, Wei; Dai, Houping; Dai, Zhengde; Zhong, Wenyong
2017-11-01
A periodic breather-wave solution is obtained using the homoclinic test approach and Hirota's bilinear method with a small perturbation parameter u0 for the (2+1)-dimensional generalized Kadomtsev-Petviashvili equation. Based on the periodic breather-wave, a lump solution emerges as a limiting behaviour. Finally, three different forms of the space-time structure of the lump solution are investigated and discussed using extreme value theory.
Linear mode stability of the Kerr-Newman black hole and its quasinormal modes.
Dias, Óscar J C; Godazgar, Mahdi; Santos, Jorge E
2015-04-17
We provide strong evidence that, up to 99.999% of extremality, Kerr-Newman black holes (KNBHs) are linear mode stable within Einstein-Maxwell theory. We derive and solve, numerically, a coupled system of two partial differential equations for two gauge invariant fields that describe the most general linear perturbations of a KNBH. We determine the quasinormal mode (QNM) spectrum of the KNBH as a function of its three parameters and find no unstable modes. In addition, we find that the lowest radial overtone QNMs that are connected continuously to the gravitational ℓ=m=2 Schwarzschild QNM dominate the spectrum for all values of the parameter space (m is the azimuthal number of the wave function and ℓ measures the number of nodes along the polar direction). Furthermore, the (lowest radial overtone) QNMs with ℓ=m approach Reω=mΩH(ext) and Imω=0 at extremality; this is a universal property for any field of arbitrary spin |s|≤2 propagating on a KNBH background (ω is the wave frequency and ΩH(ext) the black hole angular velocity at extremality). We compare our results with available perturbative results in the small charge or small rotation regimes and find good agreement.
Manual editing of automatically recorded data in an anesthesia information management system.
Wax, David B; Beilin, Yaakov; Hossain, Sabera; Lin, Hung-Mo; Reich, David L
2008-11-01
Anesthesia information management systems allow automatic recording of physiologic and anesthetic data, but clinicians may manually invalidate or edit these automatically recorded values. The authors investigated the prevalence of such data modification in an academic medical center. The authors queried their anesthesia information management system database of anesthetics performed in 2006 and tabulated the counts of data points for automatically recorded physiologic and anesthetic parameters as well as the subset of those data that were manually invalidated by clinicians (both with and without alternate values manually appended). Patient, practitioner, data source, and timing characteristics of recorded values were also extracted to determine their associations with editing of various parameters in the anesthesia information management system record. A total of 29,491 cases were analyzed, 19% of which had one or more data points manually invalidated. Among 58 attending anesthesiologists, each invalidated data in a median of 7% of their cases when working as a sole practitioner. A minority of invalidated values were manually appended with alternate values. Pulse rate, blood pressure, and pulse oximetry were the most commonly invalidated parameters. Data invalidation usually resulted in a decrease in parameter variance. Factors independently associated with invalidation included extreme physiologic values, American Society of Anesthesiologists physical status classification, emergency status, timing (phase of the procedure/anesthetic), presence of an intraarterial catheter, resident or certified registered nurse anesthetist involvement, and procedure duration. Editing of physiologic data automatically recorded in an anesthesia information management system is a common practice and results in decreased variability of intraoperative data. Further investigation may clarify the reasons for and consequences of this behavior.
A Single Mode Study of a Quasi-Geostrophic Convection-Driven Dynamo Model
NASA Astrophysics Data System (ADS)
Plumley, M.; Calkins, M. A.; Julien, K. A.; Tobias, S.
2017-12-01
Planetary magnetic fields are thought to be the product of hydromagnetic dynamo action. For Earth, this process occurs within the convecting, turbulent and rapidly rotating outer core, where the dynamics are characterized by low Rossby, low magnetic Prandtl and high Rayleigh numbers. Progress in studying dynamos has been limited by current computing capabilities and the difficulties in replicating the extreme values that define this setting. Asymptotic models that embrace these extreme parameter values and enforce the dominant balance of geostrophy provide an option for the study of convective flows with actual relevance to geophysics. The quasi-geostrophic dynamo model (QGDM) is a multiscale, fully-nonlinear Cartesian dynamo model that is valid in the asymptotic limit of low Rossby number. We investigate the QGDM using a simplified class of solutions that consist of a single horizontal wavenumber which enforces a horizontal structure on the solutions. This single mode study is used to explore multiscale time stepping techniques and analyze the influence of the magnetic field on convection.
Optimal regionalization of extreme value distributions for flood estimation
NASA Astrophysics Data System (ADS)
Asadi, Peiman; Engelke, Sebastian; Davison, Anthony C.
2018-01-01
Regionalization methods have long been used to estimate high return levels of river discharges at ungauged locations on a river network. In these methods, discharge measurements from a homogeneous group of similar, gauged, stations are used to estimate high quantiles at a target location that has no observations. The similarity of this group to the ungauged location is measured in terms of a hydrological distance measuring differences in physical and meteorological catchment attributes. We develop a statistical method for estimation of high return levels based on regionalizing the parameters of a generalized extreme value distribution. The group of stations is chosen by optimizing over the attribute weights of the hydrological distance, ensuring similarity and in-group homogeneity. Our method is applied to discharge data from the Rhine basin in Switzerland, and its performance at ungauged locations is compared to that of other regionalization methods. For gauged locations we show how our approach improves the estimation uncertainty for long return periods by combining local measurements with those from the chosen group.
Modelling probabilities of heavy precipitation by regional approaches
NASA Astrophysics Data System (ADS)
Gaal, L.; Kysely, J.
2009-09-01
Extreme precipitation events are associated with large negative consequences for human society, mainly because they may trigger floods and landslides. The recent series of flash floods in central Europe (affecting several isolated areas) on June 24-28, 2009 - the worst in several decades in the Czech Republic in terms of the number of persons killed and the extent of damage to buildings and infrastructure - is an example. Estimates of growth curves and design values (corresponding e.g. to 50-yr and 100-yr return periods) of precipitation amounts, together with their uncertainty, are important in hydrological modelling and other applications. The interest in high quantiles of precipitation distributions is also related to possible climate change effects, as climate model simulations tend to project increased severity of precipitation extremes in a warmer climate. The present study compares - in terms of Monte Carlo simulation experiments - several methods for modelling probabilities of precipitation extremes that make use of ‘regional approaches’: the estimation of distributions of extremes takes into account data in a ‘region’ (‘pooling group’), in which one may assume that the distributions at individual sites are identical apart from a site-specific scaling factor (this condition is referred to as ‘regional homogeneity’). In other words, all data in a region - often weighted in some way - are taken into account when estimating the probability distribution of extremes at a given site. The advantage is that sampling variations in the estimates of model parameters and high quantiles are to a large extent reduced compared to single-site analysis. We focus on the ‘region-of-influence’ (ROI) method, which is based on the identification of unique pooling groups (forming the database for the estimation) for each site under study. The similarity of sites is evaluated in terms of a set of site attributes related to the distributions of extremes.
The issue of the size of the region is linked with a built-in test on regional homogeneity of data. Once a pooling group is delineated, weights based on a dissimilarity measure are assigned to individual sites involved in a pooling group, and all (weighted) data are employed in the estimation of model parameters and high quantiles at a given location. The ROI method is compared with the Hosking-Wallis (HW) regional frequency analysis, which is based on delineating fixed regions (instead of flexible pooling groups) and assigning unit weights to all sites in a region. The comparison of the performance of the individual regional models makes use of data on annual maxima of 1-day precipitation amounts at 209 stations covering the Czech Republic, with altitudes ranging from 150 to 1490 m a.s.l. We conclude that the ROI methodology is superior to the HW analysis, particularly for very high quantiles (100-yr return values). Another advantage of the ROI approach is that subjective decisions - unavoidable when fixed regions in the HW analysis are formed - may efficiently be suppressed, and almost all settings of the ROI method may be justified by results of the simulation experiments. The differences between (any) regional method and single-site analysis are very pronounced and suggest that the at-site estimation is highly unreliable. The ROI method is then applied to estimate high quantiles of precipitation amounts at individual sites. The estimates and their uncertainty are compared with those from a single-site analysis. We focus on the eastern part of the Czech Republic, i.e. an area with complex orography and a particularly pronounced role of Mediterranean cyclones in producing precipitation extremes. The design values are compared with precipitation amounts recorded during the recent heavy precipitation events, including the one associated with the flash flood on June 24, 2009. 
We also show that the ROI methodology may easily be transferred to the analysis of precipitation extremes in climate model outputs. It efficiently reduces (random) variations in the estimates of parameters of the extreme value distributions in individual gridboxes that result from large spatial variability of heavy precipitation, and represents a straightforward tool for ‘weighting’ data from neighbouring gridboxes within the estimation procedure. The study is supported by the Grant Agency of AS CR under project B300420801.
Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E
2007-09-01
Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not previously been studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait, in the spirit of the sequential sampling theory of Cannings and Thompson [Cannings and Thompson [1977] Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, when the ascertainment threshold was higher than or close to the true value of the highest genotypic mean, bias was found in the estimation of this parameter, in addition to the allele frequencies. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci, and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci. Copyright (c) 2007 Wiley-Liss, Inc.
Origami structures for tunable thermal expansion
NASA Astrophysics Data System (ADS)
Boatti, Elisa; Bertoldi, Katia
Materials with engineered thermal expansion, capable of achieving targeted and extreme area/volume changes in response to variations in temperature, are important for a number of aerospace, optical, energy, and microelectronic applications. While most of the proposed structures with a tunable coefficient of thermal expansion consist of bi-material 2D or 3D lattices, here we propose a periodic metastructure based on a bilayer Miura-ori origami fold. We combine experiments and simulations to demonstrate that by tuning the geometrical and mechanical parameters an extremely broad range of thermal expansion coefficients can be obtained, spanning both negative and positive values. Additionally, the thermal properties along different directions can be adjusted independently. Unlike all previously reported systems, the proposed structure is non-porous.
NASA Astrophysics Data System (ADS)
Hwang, Seonhong; Kim, Seunghyeon; Son, Jongsang; Kim, Youngho
2012-02-01
Manual wheelchair users are at high risk of pain and injuries to the upper extremities due to the mechanical inefficiency of the wheelchair propulsion motion. The kinetics of the upper extremities during manual wheelchair propulsion under various conditions therefore need to be investigated. We developed and calibrated a wheelchair dynamometer for measuring kinetic parameters during propulsion. We used the dynamometer to investigate and compare the propulsion torque and power of experienced and novice users under four different conditions. Experienced wheelchair users generated lower torques with more power than novice users and reacted alertly and sensitively to changing conditions. We expect that these methods and results may help to quantitatively evaluate the mechanical efficiency of manual wheelchair propulsion.
Absolute measurement of undulator radiation in the extreme ultraviolet
NASA Astrophysics Data System (ADS)
Maezawa, H.; Mitani, S.; Suzuki, Y.; Kanamori, H.; Tamamushi, S.; Mikuni, A.; Kitamura, H.; Sasaki, T.
1983-04-01
The spectral brightness of undulator radiation emitted by the model PMU-1 incorporated in the SOR-RING, the dedicated synchrotron radiation source in Tokyo, has been studied in the extreme ultraviolet region from 21.6 to 72.9 eV as a function of the electron energy γ, the field parameter K, and the angle of observation θ, on an absolute scale. A series of measurements covering the first and second harmonic components of the undulator radiation was compared with the fundamental formula λ_n = (λ_0 / 2nγ²)(1 + K²/2 + γ²θ²), and the effects of finite emittance were studied. The brightness at the first peak was smaller than the theoretical value, while an enhanced second harmonic component was observed.
NASA Astrophysics Data System (ADS)
Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Ribatet, M.; Stübi, R.; Weihs, P.; Holawe, F.; Peter, T.; Davison, A. C.
2009-04-01
Over the last few decades negative trends in stratospheric ozone have been studied because of the direct link between decreasing stratospheric ozone and increasing surface UV-radiation. Recently a discussion on ozone recovery has begun. Long-term measurements of total ozone extending back earlier than 1958 are limited and only available from a few stations in the northern hemisphere. The world's longest total ozone record is available from Arosa, Switzerland (Staehelin et al., 1998a,b). At this site total ozone measurements have been made from late 1926 through the present day. Within this study (Rieder et al., 2009) new tools from extreme value theory (e.g. Coles, 2001; Ribatet, 2007) are applied to select mathematically well-defined thresholds for extreme low and extreme high total ozone. A heavy-tail focused approach is used by fitting the Generalized Pareto Distribution (GPD) to the Arosa time series. Asymptotic arguments (Pickands, 1975) justify the use of the GPD for modeling exceedances over a sufficiently high (or below a sufficiently low) threshold (Coles, 2001). More precisely, the GPD is the limiting distribution of normalized excesses over a threshold, as the threshold approaches the endpoint of the distribution. In practice, GPD parameters are fitted to exceedances by maximum likelihood or other methods, such as probability weighted moments. A preliminary step consists of defining an appropriate threshold for which the asymptotic GPD approximation holds. Suitable tools for threshold selection, such as the MRL-plot (mean residual life plot) and TC-plot (stability plot) from the POT package (Ribatet, 2007), are presented. The frequency distribution of extremes in low (termed ELOs) and high (termed EHOs) total ozone and their influence on the long-term changes in total ozone are analyzed. Further it is shown that from the GPD model the distribution of so-called ozone mini holes (e.g. 
Bojkov and Balis, 2001) can be precisely estimated and that the "extremes concept" provides new information on the data distribution and variability within the Arosa record as well as on the influence of ELOs and EHOs on the long-term trends of the ozone time series. References: Bojkov, R. D., and Balis, D.S.: Characteristics of episodes with extremely low ozone values in the northern middle latitudes 1975-2000, Ann. Geophys., 19, 797-807, 2001. Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Pickands, J.: Statistical inference using extreme order statistics, Ann. Stat., 3, 1, 119-131, 1975. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Stübi, R., Weihs, P., Holawe, F., and M. Ribatet: From ozone mini holes and mini highs towards extreme value theory: New insights from extreme events and non stationarity, submitted to J. Geophys. Res., 2009. Staehelin, J., Kegel, R., and Harris, N. R.: Trend analysis of the homogenized total ozone series of Arosa (Switzerland), 1929-1996, J. Geophys. Res., 103(D7), 8389-8400, doi:10.1029/97JD03650, 1998a. Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998b.
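The peaks-over-threshold procedure described above can be sketched in a few lines: pick a high threshold, fit a GPD to the excesses by maximum likelihood, and convert the fit into return levels. This is a minimal illustration on synthetic data, not the Arosa record, and the threshold here is simply a fixed quantile rather than one chosen via MRL/TC plots.

```python
# Minimal peaks-over-threshold sketch with a GPD fit (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
total_ozone = rng.normal(330.0, 35.0, 5000)  # synthetic daily values (DU), illustrative only

threshold = np.quantile(total_ozone, 0.95)   # a high threshold; real work would use MRL/TC plots
excesses = total_ozone[total_ozone > threshold] - threshold

# Fit the GPD to the excesses by maximum likelihood; loc is fixed at 0
# because excesses start at the threshold.
shape, loc, scale = stats.genpareto.fit(excesses, floc=0.0)

# Return level: the value exceeded on average once per N observations,
# combining the exceedance rate with the fitted GPD quantile.
rate = excesses.size / total_ozone.size
N = 10000
ret_level = threshold + stats.genpareto.ppf(1.0 - 1.0 / (N * rate), shape, 0.0, scale)
```

The same machinery applies to extreme lows (ELOs) after negating the series, since the GPD models exceedances in one direction only.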
NASA Astrophysics Data System (ADS)
Mannattil, Manu; Pandey, Ambrish; Verma, Mahendra K.; Chakraborty, Sagar
2017-12-01
Constructing simpler models, either stochastic or deterministic, for exploring the phenomenon of flow reversals in fluid systems is in vogue across disciplines. Using direct numerical simulations and nonlinear time series analysis, we illustrate that the basic nature of flow reversals in convecting fluids can depend on the dimensionless parameters describing the system. Specifically, we find evidence of low-dimensional behavior in flow reversals occurring at zero Prandtl number, whereas we fail to find such signatures for reversals at infinite Prandtl number. Thus, even in a single system, as one varies the system parameters, one can encounter reversals that are fundamentally different in nature. Consequently, we conclude that a single general low-dimensional deterministic model cannot faithfully characterize flow reversals for every set of parameter values.
A delay differential model of ENSO variability: Extreme values and stability analysis
NASA Astrophysics Data System (ADS)
Zaliapin, I.; Ghil, M.
2009-04-01
We consider a delay differential equation (DDE) model for El Niño-Southern Oscillation (ENSO) variability [Ghil et al. (2008), Nonlin. Proc. Geophys., 15, 417-433]. The model combines two key mechanisms that participate in ENSO dynamics: delayed negative feedback and seasonal forcing. Toy models of this type were shown to capture major features of the ENSO phenomenon [Jin et al., Science (1994); Tziperman et al., Science (1994)]; they provide a convenient paradigm for explaining interannual ENSO variability and shed new light on its dynamical properties. So far, though, DDE model studies of ENSO have been limited to linear stability analysis of steady-state solutions, which are not typical in forced systems, case studies of particular trajectories, or one-dimensional scenarios of transition to chaos, varying a single parameter while the others are kept fixed. In this work we take several steps toward a comprehensive analysis of DDE models relevant for ENSO phenomenology and illustrate the complexity of phase-parameter space structure for even such a simple model of climate dynamics. We formulate an initial value problem for our model and prove an existence, uniqueness, and continuous dependence theorem. We then use this theoretical result to perform detailed numerical stability analyses of the model in the three-dimensional space of its physically relevant parameters: strength of seasonal forcing b, atmosphere-ocean coupling κ, and propagation period τ of oceanic waves across the Tropical Pacific. Two regimes of variability, stable and unstable, are reported; they are separated by a sharp neutral curve in the (b,τ) plane at constant κ. The detailed structure of the neutral curve becomes very irregular and possibly fractal, while individual trajectories within the unstable region become highly complex and possibly chaotic, as the atmosphere-ocean coupling κ increases. 
In the unstable regime, spontaneous transitions occur in the mean temperature (i.e., thermocline depth), period, and extreme annual values, for purely periodic, seasonal forcing. The model reproduces the Devil's bleachers characterizing other ENSO models, such as nonlinear, coupled systems of partial differential equations; some features of this behavior have been documented in general circulation models, as well as in observations. We analyze the values of annual extremes and their location within the annual cycle and report the phase-locking phenomenon, which is connected to the occurrence of El Niño events during the boreal (Northern Hemisphere) winter. We report the existence of multiple solutions and study their basins of attraction in the space of initial conditions. We also present a model-based justification for the observed quasi-biennial oscillation in Tropical Pacific SSTs. We expect similar behavior in much more detailed and realistic models, where it is harder to describe its causes as completely. The basic mechanisms used in our model (delayed feedback and forcing) may be relevant to other natural systems in which internal instabilities interact with external forcing and give rise to extreme events.
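A delayed-feedback oscillator of the general kind described above can be integrated numerically by keeping a history buffer for the delayed term. The sketch below uses a schematic right-hand side dT/dt = -tanh(κ·T(t-τ)) + b·cos(2πt) with a forward Euler step; the functional form, parameter values, and integration scheme are illustrative assumptions, not those of Ghil et al. (2008).

```python
# Sketch of a seasonally forced delay differential equation (DDE) with
# delayed negative feedback, integrated via forward Euler and a history buffer.
import numpy as np

def integrate_dde(kappa=2.0, tau=0.6, b=1.0, dt=0.01, t_end=50.0):
    """Integrate dT/dt = -tanh(kappa*T(t - tau)) + b*cos(2*pi*t)."""
    n_delay = int(round(tau / dt))         # number of steps spanning the delay
    n_steps = int(round(t_end / dt))
    T = np.zeros(n_steps + n_delay)
    T[:n_delay] = 0.1                      # constant initial history on [-tau, 0]
    for i in range(n_delay, n_steps + n_delay - 1):
        t = (i - n_delay) * dt
        dT = -np.tanh(kappa * T[i - n_delay]) + b * np.cos(2.0 * np.pi * t)
        T[i + 1] = T[i] + dt * dT          # forward Euler step
    return T[n_delay:]

traj = integrate_dde()
```

Sweeping (b, κ, τ) and recording, e.g., the annual extremes of `traj` is the kind of phase-parameter exploration the abstract describes.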
Complex networks: Effect of subtle changes in nature of randomness
NASA Astrophysics Data System (ADS)
Goswami, Sanchari; Biswas, Soham; Sen, Parongama
2011-03-01
In two different classes of network models, namely the Watts-Strogatz type and the Euclidean type, subtle changes have been introduced in the randomness. In the Watts-Strogatz type network, rewiring has been done in different ways; although the qualitative results remain the same, finite differences in the exponents are observed. In the Euclidean type networks, where at least one finite phase transition occurs, two models differing in a similar way have been considered. The results show a possible shift in one of the phase transition points but no change in the values of the exponents. The Watts-Strogatz and Euclidean type models are equivalent for extreme values of the parameters; we compare their behaviour for intermediate values.
Extreme air-sea surface turbulent fluxes in mid latitudes - estimation, origins and mechanisms
NASA Astrophysics Data System (ADS)
Gulev, Sergey; Natalia, Tilinina
2014-05-01
Extreme turbulent heat fluxes in the North Atlantic and North Pacific mid latitudes were estimated from the modern-era and first-generation reanalyses (NCEP-DOE, ERA-Interim, MERRA, NCEP-CFSR, JRA-25) for the period from 1979 onwards. We used direct surface turbulent flux output as well as reanalysis state variables from which fluxes were computed using the COARE-3 bulk algorithm. To estimate extreme flux values we analyzed the surface flux probability density distribution, which was approximated by a Modified Fisher-Tippett (MFT) distribution. In all reanalyses extreme turbulent heat fluxes amount to 1500-2000 W/m2 (for the 99th percentile) and can exceed 2000 W/m2 for higher percentiles in the western boundary current extension (WBCE) regions. Different reanalyses show significantly different shapes of the MFT distribution, implying considerable differences in the estimates of extreme fluxes. The highest extreme turbulent latent heat fluxes are diagnosed in the NCEP-DOE, ERA-Interim and NCEP-CFSR reanalyses, with the smallest in MERRA. These differences do not necessarily reflect differences in mean values. Analysis shows that differences in the statistical properties of the state variables are the major source of differences in the shape of the flux PDF and in the estimates of extreme fluxes, while the contribution of the computational schemes used in different reanalyses is minor. The strongest differences in the characteristics of the probability distributions of surface fluxes and of extreme surface flux values between reanalyses are found in the WBCE regions and high latitudes. Next, we analyzed the mechanisms responsible for forming surface turbulent fluxes and their potential role in changes of the midlatitudinal heat balance. 
Midlatitudinal cyclones were considered the major mechanism responsible for extreme turbulent fluxes, which typically occur during cold-air outbreaks in the rear parts of cyclones, when atmospheric conditions provide locally high winds and air-sea temperature gradients. For this purpose we linked characteristics of cyclone activity over the midlatitudinal oceans with the extreme surface turbulent heat fluxes. Cyclone tracks and parameters of the cyclone life cycle (deepening rates, propagation velocities, lifetime and clustering) were derived from the same reanalyses using a state-of-the-art numerical tracking algorithm. The main questions addressed in this study are: (i) through which mechanisms are extreme surface fluxes associated with cyclone activity, and (ii) which types of cyclones are responsible for forming extreme turbulent fluxes? Our analysis shows that extreme surface fluxes are typically associated not with cyclones themselves but rather with cyclone-anticyclone interaction zones. This implies that North Atlantic and North Pacific series of intense cyclones do not result in anomalous surface fluxes. Rather, extreme fluxes are most frequently associated with blocking situations, particularly with the intensification of the Siberian and North American anticyclones providing cold-air outbreaks over WBCE regions.
NASA Astrophysics Data System (ADS)
Wilks, Daniel S.
1993-10-01
The performance of eight three-parameter probability distributions for representing annual extreme and partial duration precipitation data at stations in the northeastern and southeastern United States is investigated. Particular attention is paid to fidelity in the right tail, through use of a bootstrap procedure simulating extrapolation beyond the data. It is found that the beta-κ distribution best describes the extreme right tail of annual extreme series, while the beta-P distribution is best for the partial duration data. The conventionally employed two-parameter Gumbel distribution is found to substantially underestimate probabilities associated with the larger precipitation amounts for both annual extreme and partial duration data. Fitting the distributions using left-censored data did not result in improved fits to the right tail.
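The Gumbel-underestimation point above can be illustrated by fitting both a two-parameter Gumbel and a three-parameter GEV to the same annual-maximum sample and comparing a far-tail quantile. This is a sketch on synthetic data with a heavy upper tail, not the station records of the study; note that scipy parameterizes the GEV with c = -ξ, so a negative c means a heavy upper tail.

```python
# Compare right-tail quantiles from Gumbel (2-parameter) and GEV (3-parameter)
# fits to a synthetic heavy-tailed annual-maximum series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic "annual maxima" drawn from a GEV with a heavy upper tail
# (scipy convention: c = -xi, so c < 0 is the heavy-tailed case).
annual_max = stats.genextreme.rvs(c=-0.2, loc=50.0, scale=15.0,
                                  size=200, random_state=rng)

gev_params = stats.genextreme.fit(annual_max)   # (c, loc, scale) by MLE
gum_params = stats.gumbel_r.fit(annual_max)     # (loc, scale) by MLE

p = 0.99                                        # ~100-year quantile
q_gev = stats.genextreme.ppf(p, *gev_params)
q_gum = stats.gumbel_r.ppf(p, *gum_params)
```

When the generating tail is genuinely heavy, `q_gum` tends to fall below `q_gev`, which is the systematic underestimation the abstract reports.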
Quantifying variability in fast and slow solar wind: From turbulence to extremes
NASA Astrophysics Data System (ADS)
Tindale, E.; Chapman, S. C.; Moloney, N.; Watkins, N. W.
2017-12-01
Fast and slow solar wind exhibit variability across a wide range of spatiotemporal scales, with evolving turbulence producing fluctuations on sub-hour timescales and the irregular solar cycle modulating the system over many years. Here, we apply the data quantile-quantile (DQQ) method [Tindale and Chapman 2016, 2017] to over 20 years of Wind data, to study the time evolution of the statistical distribution of plasma parameters in fast and slow solar wind. This model-independent method allows us to simultaneously explore the evolution of fluctuations across all scales. We find a two-part functional form for the statistical distributions of the interplanetary magnetic field (IMF) magnitude and its components, with each region of the distribution evolving separately over the solar cycle. Up to a value of 8 nT, turbulent fluctuations dominate the distribution of the IMF, generating the approximately lognormal shape found by Burlaga [2001]. The mean of this core-turbulence region tracks solar cycle activity, while its variance remains constant, independent of the fast or slow state of the solar wind. However, when we test the lognormality of this core-turbulence component over time, we find the model provides a poor description of the data at solar maximum, where sharp peaks in the distribution dominate over the lognormal shape. At IMF values higher than 8 nT, we find a separate, extremal distribution component, whose moments are sensitive to solar cycle phase, the peak activity of the cycle and the solar wind state. We further investigate these `extremal' values using burst analysis, where a burst is defined as a continuous period of exceedance over a predefined threshold. This form of extreme value statistics allows us to study the stochastic process underlying the time series, potentially supporting a probabilistic forecast of high-energy events. Tindale, E., and S.C. 
Chapman (2017), submitted. Burlaga, L.F. (2001), J. Geophys. Res., 106(A8).
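The two-part description above — a lognormal "core turbulence" component below a fixed break and a separate extremal component above it — can be sketched by splitting a sample at the break and fitting each piece. The data below are synthetic, the 8 nT break is taken from the abstract, and the GPD used for the upper part is an illustrative choice; the DQQ method itself is not implemented here.

```python
# Two-component fit for a synthetic IMF-magnitude sample: lognormal core
# below 8 nT, generalized Pareto tail above it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
imf = rng.lognormal(mean=1.4, sigma=0.5, size=50000)    # synthetic |B| in nT

break_nT = 8.0
core = imf[imf <= break_nT]
tail = imf[imf > break_nT] - break_nT                   # excesses over the break

# Lognormal core: fit in log space.
mu, sigma = np.log(core).mean(), np.log(core).std()

# Extremal component: GPD fit to excesses (loc fixed at 0).
shape, _, scale = stats.genpareto.fit(tail, floc=0.0)
```

Tracking (mu, sigma) and (shape, scale) in sliding time windows would mimic the solar-cycle evolution of the two components discussed in the abstract.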
Jastrzembski, Tiffany S.; Charness, Neil
2009-01-01
The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies. PMID:18194048
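Keystroke-level predictions in the Model Human Processor tradition sum per-operator times along a task's operator sequence. The sketch below shows the bookkeeping only; the operator durations are hypothetical placeholders, not the weighted older-adult parameter estimates reported in the paper.

```python
# Keystroke-level-style task time prediction: sum assumed per-operator times.
# All durations below are illustrative, NOT the paper's estimated parameters.
OPERATOR_TIME = {
    "keystroke": 0.28,   # press one key/button (hypothetical, seconds)
    "pointing": 1.10,    # move hand/thumb to a target (hypothetical)
    "mental": 1.35,      # mental preparation step (hypothetical)
}

def predict_task_time(operators):
    """Total predicted time for a sequence of KLM-style operators."""
    return sum(OPERATOR_TIME[op] for op in operators)

# Example: dialing a 3-digit sequence on a phone keypad:
# one mental preparation, one pointing move, three keystrokes.
task = ["mental", "pointing", "keystroke", "keystroke", "keystroke"]
total = predict_task_time(task)
```

Comparing such per-operator sums against measured keystroke-level data is the aggregate-grain model fit (R = 0.99) the abstract describes.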
Durrieu, Gilles; Pham, Quang-Khoai; Foltête, Anne-Sophie; Maxime, Valérie; Grama, Ion; Tilly, Véronique Le; Duval, Hélène; Tricot, Jean-Marie; Naceur, Chiraz Ben; Sire, Olivier
2016-07-01
Water quality can be evaluated using biomarkers such as tissular enzymatic activities of endemic species. Measurement of bivalve mollusc activity at high frequency (e.g., valvometry) over a long time period is another way to record animal behavior and to evaluate perturbations of water quality in real time. As pollution affects the activity of oysters, we consider the valve opening and closing velocities to monitor water quality. We propose to model the huge volume of velocity data collected in the framework of valvometry using a new nonparametric extreme values statistical model. The objective is to estimate the tail probabilities and the extreme quantiles of the distribution of valve closing velocity. The tail of the distribution function of valve closing velocity is modeled by a Pareto distribution with parameter θt,τ, beyond a threshold τ, according to the time t of the experiment. Our modeling approach reveals the dependence between the specific activity of two enzymatic biomarkers (glutathione-S-transferase and acetylcholinesterase) and the continuous recording of oyster valve velocity, proving the suitability of this tool for water quality assessment. Thus, valvometry allows real-time in situ analysis of bivalve behavior and appears to be an effective early-warning tool in ecological risk assessment and marine environment monitoring.
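Estimating tail probabilities and extreme quantiles of a positive-valued signal such as a valve closing velocity can be sketched with a Pareto-type fit beyond a threshold. The data here are synthetic and the fit is a plain stationary GPD; the paper's actual model is nonparametric with a time-varying tail parameter, which this sketch does not reproduce.

```python
# Tail probability and extreme quantile estimation for a positive signal
# via a generalized Pareto fit beyond a threshold (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
velocity = rng.lognormal(mean=0.0, sigma=0.8, size=20000)  # synthetic velocities

tau = np.quantile(velocity, 0.95)            # threshold
exc = velocity[velocity > tau] - tau
shape, _, scale = stats.genpareto.fit(exc, floc=0.0)
zeta = exc.size / velocity.size              # empirical exceedance rate

def tail_prob(x):
    """Estimated P(V > x) for x > tau, via the fitted GPD tail."""
    return zeta * stats.genpareto.sf(x - tau, shape, 0.0, scale)

# Extreme quantile: the level exceeded with probability 0.001.
q999 = tau + stats.genpareto.ppf(1.0 - 0.001 / zeta, shape, 0.0, scale)
```

In an online setting, refitting over a sliding time window would give the time-dependent tail estimates needed for an early-warning use.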
Can quantile mapping improve precipitation extremes from regional climate models?
NASA Astrophysics Data System (ADS)
Tani, Satyanarayana; Gobiet, Andreas
2015-04-01
The ability of quantile mapping to accurately bias-correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias-corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied to hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods. The split-sample approach mimics the application to future climate scenarios. A cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with lower biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
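The standard empirical quantile mapping that the study extends can be sketched as: locate each model value's quantile in the historical model distribution, then read off the observed value at that same quantile. This is a minimal version on synthetic data; it deliberately omits the parametric tail handling of the QMβ variants, and it shows the instability risk at the highest quantiles that motivates them.

```python
# Basic empirical quantile mapping (QM) of model precipitation against
# observations (synthetic data; no parametric tail treatment).
import numpy as np

def quantile_map(model_hist, obs_hist, model_values):
    """Map model values through the historical model->obs quantile relation."""
    # Non-exceedance probability of each value within the model climate...
    probs = np.searchsorted(np.sort(model_hist), model_values) / len(model_hist)
    probs = np.clip(probs, 0.0, 1.0)
    # ...then the observed value at the same quantile.
    return np.quantile(obs_hist, probs)

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 4.0, 3000)      # "observed" daily precipitation
mod = rng.gamma(2.0, 3.0, 3000)      # biased model: systematically too dry
corrected = quantile_map(mod, obs, mod)
```

Because the mapping is capped at the highest observed quantile, genuinely new extremes in a future simulation are extrapolated poorly, which is exactly the weakness the abstract's cross-validation on new extremes probes.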
NASA Astrophysics Data System (ADS)
Dimitrova, S.; Mustafa, F. R.; Stoilova, I.; Babayev, E. S.; Kazimov, E. A.
2009-02-01
This collaborative study is based on the analysis and comparison of results of coordinated experimental investigations conducted in Bulgaria and Azerbaijan for revealing a possible influence of solar activity changes and related geomagnetic activity variations on the human cardio-vascular state. Arterial blood pressure and heart rate of 86 healthy volunteers were measured on working days during a period of comparatively high solar and geomagnetic activity (2799 measurements in autumn 2001 and spring 2002) in Sofia. Daily experimental investigations of parameters of cardio-vascular health state were performed in Azerbaijan with a permanent group of examined persons. Heart rate and electrocardiograms were digitally registered (in total 1532 records) for seven functionally healthy persons on working days and Saturdays, in the Laboratory of Heliobiology at the Medical Center INAM in Baku, from 15.07.2006 to 13.11.2007. The digital recordings obtained were subjected to medical, statistical and spectral analyses. Special attention was paid to effects of solar extreme events, particularly those of November 2001 and December 2006. The statistical method of the analysis of variance (ANOVA) and post hoc analysis were applied to check the significance of the influence of geomagnetic activity on the cardio-vascular parameters under consideration. Results revealed statistically significant increases in the group's mean systolic and diastolic blood pressure with increasing geomagnetic activity. Arterial blood pressure values started increasing two days prior to geomagnetic storms and kept their high values up to two days after the storms. The heart rate reaction was ambiguous and not significant for the healthy persons examined (in both groups) under conditions of changing geomagnetic activity. 
It is concluded that heart rate in healthy persons at middle latitudes can be considered a more stable physiological parameter, less sensitive to environmental changes, while the dynamics of arterial blood pressure reveal a compensatory reaction of the human organism for adaptation.
Berthele, H; Sella, O; Lavarde, M; Mielcarek, C; Pense-Lheritier, A-M; Pirnay, S
2014-02-01
Ethanol, pH and water activity are three well-known parameters that can influence the preservation of cosmetic products. With the new constraints regarding antimicrobial effectiveness and the restricted use of preservatives, a D-optimal design was set up to evaluate the influence of these three parameters on microbiological conservation. To monitor the effectiveness of the different combinations of these parameters, a challenge test in compliance with the International Standard ISO 11930:2012 was implemented. The formulations established in our study could support wide variations of ethanol concentration, pH values and glycerin concentration without noticeable effects on the stability of the products. Under the conditions of the study, setting the value of a single parameter, at the tested concentrations, could not guarantee microbiological conservation. However, a high concentration of ethanol associated with an extreme pH could inhibit bacterial growth from the first day (D0). Moreover, it appears that despite an aw above 0.6 (even 0.8) and without any preservatives incorporated in the formulas, it was possible to guarantee the microbiological stability of the cosmetic product by maintaining the right combination of the selected parameters. Analysis of the values obtained during the experiments suggests a correlation between the aw and the aforementioned parameters. An application of this relationship could be to define the aw of cosmetic products by using the formula, thus avoiding the evaluation of this parameter with a measuring device. © 2013 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Shim, Je-Myung; Kwon, Hae-Yeon; Kim, Ha-Roo; Kim, Bo-In; Jung, Ju-Hyeon
2013-12-01
[Purpose] The aim of this study was to assess the effect of Nordic pole walking on the electromyographic activities of upper extremity and lower extremity muscles. [Subjects and Methods] The subjects were randomly divided into two groups: a group walking without Nordic poles (n=13) and a group walking with Nordic poles (n=13). The EMG data were collected while the subjects walked on a treadmill for 30 minutes, measuring from one heel strike to the next. [Results] Both the average and maximum values of upper extremity muscle activity were higher in the group that used Nordic poles than in the group that did not, and the differences were statistically significant. The average muscle activity of the latissimus dorsi increased, but not significantly, although the increase in its maximum value was statistically significant. The average and maximum values of lower extremity muscle activity did not differ substantially between the groups, and the differences were not statistically significant. [Conclusion] The use of Nordic poles increased muscle activity of the upper extremity compared with regular walking but did not affect the lower extremity.
Application of short-data methods on extreme surge levels
NASA Astrophysics Data System (ADS)
Feng, X.
2014-12-01
Tropical cyclone-induced storm surges are among the most destructive natural hazards that impact the United States. Unfortunately for academic research, the available time series for extreme surge analysis are very short. The limited data introduce uncertainty and affect the accuracy of statistical analyses of extreme surge levels. This study deals with techniques applicable to data sets shorter than 20 years, including simulation modelling and methods based on the parameters of the parent distribution. The verified water levels from water gauges spread along the Southwest and Southeast Florida coast, as well as the Florida Keys, are used in this study. Methods to calculate extreme storm surges are described and reviewed, including 'classical' methods based on the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD), and approaches designed specifically to deal with short data sets. Incorporating the influence of global warming, the statistical analysis reveals enhanced extreme surge magnitudes and frequencies during warm years, while reduced levels of extreme surge activity are observed in the same study domain during cold years. Furthermore, a non-stationary GEV distribution is applied to predict extreme surge levels under warming sea surface temperatures. The non-stationary GEV distribution indicates that with 1 °C of warming in sea surface temperature from the baseline climate, the 100-year return surge level in Southwest and Southeast Florida will increase by up to 40 centimeters. The considered statistical approaches for extreme surge estimation based on short data sets will be valuable to coastal stakeholders, including urban planners, emergency managers, and the hurricane and storm surge forecasting and warning system.
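A non-stationary GEV of the kind mentioned above is commonly built by letting the location parameter depend on a covariate (here, a sea surface temperature anomaly) and maximizing the likelihood directly. The sketch below does this on synthetic surge maxima; the linear-in-SST location, the coefficient values, and the optimizer choice are all illustrative assumptions, not the study's model.

```python
# Non-stationary GEV fit: location mu(t) = mu0 + mu1 * SST(t), by direct
# maximum likelihood on synthetic annual surge maxima.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)
sst = np.linspace(-1.0, 1.0, 60)                         # covariate (deg C anomaly)
surge = stats.genextreme.rvs(c=-0.1, loc=100.0 + 20.0 * sst, scale=15.0,
                             size=60, random_state=rng)  # synthetic maxima (cm)

def nll(params):
    """Negative log-likelihood with a covariate-dependent location."""
    mu0, mu1, log_scale, c = params
    return -stats.genextreme.logpdf(surge, c, loc=mu0 + mu1 * sst,
                                    scale=np.exp(log_scale)).sum()

res = optimize.minimize(nll, x0=[100.0, 0.0, np.log(15.0), -0.1],
                        method="Nelder-Mead")
mu0, mu1, log_scale, c = res.x
```

Return levels for a warmer climate then follow by evaluating the fitted GEV at the shifted location mu0 + mu1 * (SST + 1).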
NASA Astrophysics Data System (ADS)
Bargaoui, Zoubeida Kebaili; Bardossy, Andràs
2015-10-01
The paper aims to develop research on estimating the spatial variability of heavy rainfall events using spatial copula analysis. To demonstrate the methodology, short-time-resolution rainfall time series from the Stuttgart region are analyzed. They consist of rainfall observations on a continuous 30-min time scale, recorded by a network of 17 rain gauges for the period July 1989-July 2004. The analysis is performed by aggregating the observations from 30 min up to 24 h. Two parametric bivariate extreme-value copula models, the Hüsler-Reiss model and the Gumbel model, are investigated; both involve a single parameter to be estimated. Model fitting is thus performed for every pair of stations at a given time resolution. A rainfall threshold value representing a fixed rainfall quantile is adopted for model inference. Generalized maximum pseudo-likelihood estimation with censoring is adopted, by analogy with methods of univariate estimation that combine historical and paleoflood information with systematic data; only pairs of observations greater than the threshold are treated as systematic data. Using the estimated copula parameter, a synthetic copula field is randomly generated and helps evaluate model adequacy, which is assessed using a Kolmogorov-Smirnov distance test. To assess dependence or independence in the upper tail, the extremal coefficient, which characterises the tail of the joint bivariate distribution, is adopted. The extremal coefficient is reported as a function of the interdistance between stations; if it is less than 1.7, stations are interpreted as dependent in the extremes. 
The analysis of the fitted extremal coefficients with respect to station interdistance highlights two regimes with different dependence structures: a short-spatial-extent regime linked to short duration intervals (from 30 min to 6 h), with an extent of about 8 km, and a large-spatial-extent regime related to longer rainfall intervals (from 12 h to 24 h), with an extent of 34 to 38 km.
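For the Gumbel (logistic) extreme-value copula with parameter θ ≥ 1, the pairwise extremal coefficient has the closed form 2^(1/θ). A minimal sketch of classifying station pairs against the 1.7 threshold used above, with made-up fitted parameters for three hypothetical pairs (the distances and θ values are invented for illustration):

```python
import numpy as np

def extremal_coefficient_gumbel(theta):
    """Pairwise extremal coefficient of the Gumbel (logistic) EV copula.

    theta >= 1; theta = 1 gives independence (coefficient 2),
    theta -> infinity gives complete dependence (coefficient 1)."""
    return 2.0 ** (1.0 / np.asarray(theta, dtype=float))

# Hypothetical fitted copula parameters for three station pairs at 30 min
# resolution, keyed by inter-station distance in km (illustrative values):
pairs = {4.2: 2.8, 7.9: 1.9, 21.5: 1.05}

for dist_km, theta in sorted(pairs.items()):
    coef = extremal_coefficient_gumbel(theta)
    dependent = coef < 1.7          # threshold used in the study
    print(f"{dist_km:5.1f} km  theta={theta:.2f}  "
          f"extremal coef={coef:.2f}  dependent={dependent}")
```

With these illustrative values, the near pairs come out tail-dependent and the distant pair does not, mirroring the short-extent regime reported above.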
Significant events in low-level flow conditions hazardous to aircraft
NASA Technical Reports Server (NTRS)
Alexander, M. B.; Camp, D. W.
1983-01-01
Atmospheric parameters recorded during high surface winds are analyzed to determine the magnitude, frequency, duration, and simultaneity of occurrence of low-level flow conditions known to be hazardous to the ascent and descent of conventional aircraft and the space shuttle. Graphic and tabular presentations of mean and extreme values and simultaneous occurrences of turbulence (gustiness and a gust factor), wind shear (speed and direction), and vertical motion (updrafts and downdrafts), along with associated temperature inversions, are included as functions of tower height, layer, and/or distance for six 5 s intervals (one interval every 100 s) of parameters sampled simultaneously at a rate of 10 speeds, directions, and temperatures per second during an approximately 10 min period.
NASA Astrophysics Data System (ADS)
Cook, L. M.; Samaras, C.; McGinnis, S. A.
2017-12-01
Intensity-duration-frequency (IDF) curves are a common input to urban drainage design and are used to represent extreme rainfall in a region. As rainfall patterns shift into a non-stationary regime as a result of climate change, these curves will need to be updated with future projections of extreme precipitation. Many regions have begun to update these curves to reflect the trends from downscaled climate models; however, few studies have compared the methods for doing so, or the uncertainty that results from the selection of the native grid scale and temporal resolution of the climate model. This study examines the variability in updated IDF curves for Pittsburgh using four different methods for adjusting gridded regional climate model (RCM) outputs into station-scale precipitation extremes: (1) a simple change factor applied to observed return levels, (2) a naïve adjustment of stationary and non-stationary Generalized Extreme Value (GEV) distribution parameters, (3) a transfer function of the GEV parameters from the annual maximum series, and (4) kernel density distribution mapping bias correction of the RCM time series. Return level estimates (rainfall intensities) and confidence intervals from these methods for the 1-hour to 48-hour durations are tested for sensitivity to the underlying spatial and temporal resolution of the climate ensemble from the NA-CORDEX project, as well as the future time period for updating. The first goal is to determine whether uncertainty is highest for: (i) the downscaling method, (ii) the climate model resolution, (iii) the climate model simulation, (iv) the GEV parameters, or (v) the future time period examined. Initial results for the 6-hour, 10-year return level adjusted with the simple change factor method, using four climate model simulations of two different spatial resolutions, show that uncertainty is highest in the estimation of the GEV parameters. 
The second goal is to determine if complex downscaling methods and high-resolution climate models are necessary for updating, or if simpler methods and lower resolution climate models will suffice. The final results can be used to inform the most appropriate method and climate model resolutions to use for updating IDF curves for urban drainage design.
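The simple change factor method (method 1 above) can be sketched in a few lines: the observed return level is scaled by the ratio of the climate model's future to historical return levels. The numbers below are invented for illustration, not values from the study.

```python
def change_factor_update(observed_level, model_hist_level, model_future_level):
    """Update an observed return level with a multiplicative change factor
    derived from a climate model's historical and future runs."""
    return observed_level * (model_future_level / model_hist_level)

# Hypothetical 10-year, 6-hour return levels (mm/h) for one station:
obs = 12.0            # from the observed record
rcm_hist = 10.5       # RCM, historical period
rcm_future = 12.6     # RCM, future period

updated = change_factor_update(obs, rcm_hist, rcm_future)
print(round(updated, 2))  # -> 14.4
```

Taking the ratio cancels much of the model's mean bias, which is why this is the simplest of the four adjustment methods compared above.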
VizieR Online Data Catalog: 2nd and 3rd parameters of HB of globular clusters (Gratton+, 2010)
NASA Astrophysics Data System (ADS)
Gratton, R. G.; Carretta, E.; Bragaglia, A.; Lucatello, S.; D'Orazi, V.
2010-05-01
The second parameter (the first being metallicity) defining the distribution of stars on the horizontal branch (HB) of globular clusters (GCs) has long been one of the major open issues in our understanding of the evolution of normal stars. Large photometric and spectroscopic databases are now available: they include large and homogeneous sets of colour-magnitude diagrams, cluster ages, and homogeneous data about chemical compositions from our FLAMES survey. We use these databases to re-examine this issue. Methods. We use the photometric data to derive median and extreme (i.e., the values including 90% of the distribution) colours and magnitudes of stars along the HB for about a hundred GCs. We transform these into median and extreme masses of stars on the HB, using the models developed by the Pisa group, and taking into account evolutionary effects. We compare these masses with those expected at the tip of the red giant branch (RGB) to derive the total mass lost by the stars. (11 data files).
Guiavarc'h, Yann P; van Loey, Ann M; Hendrickx, Marc E
2005-02-01
The possibilities and limitations of single- and multicomponent time-temperature integrators (TTIs) for evaluating the impact of thermal processes on a target food attribute whose z_target value differs from the z_TTI value(s) of the TTI are far from sufficiently documented. In this study, several thousand time-temperature profiles were generated by heat transfer simulations based on a wide range of product and process thermal parameters, considering a z_target value of 10 °C and a reference temperature of 121.1 °C, both currently used to assess the safety of food sterilization processes. These simulations included 15 different target process values F(121.1 °C, z = 10 °C) in the range of 3 to 60 min. Integrating the time-temperature profiles with z_TTI values of 5.5 to 20.5 °C in steps of 1 °C allowed generation of a large database containing, for each combination of product and process parameters, the correction factor to apply to the process value F_TTI derived from a single- or multicomponent TTI in order to obtain the target process value F(121.1 °C, z = 10 °C). The tabulated and graphical results clearly demonstrated that multicomponent TTIs with z-values close to 10 °C can be an extremely efficient approach when a single-component TTI with a z-value of 10 °C is not available. In particular, a two-component TTI with z1 and z2 values above and below the z_target value (10 °C in this study), respectively, would be the best option for the development of a TTI to assess the safety of sterilized foods. Whatever process and product parameters are used, such a TTI allows proper evaluation of the process value F(121.1 °C, z = 10 °C).
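The process value underlying this abstract is conventionally computed as F = ∫ 10^((T − T_ref)/z) dt over the time-temperature profile. A minimal sketch with a rectangle-rule integration; the 12-reading profiles are made-up examples:

```python
import numpy as np

def process_value(temps_c, dt_min, z_c=10.0, t_ref_c=121.1):
    """Process (lethality) value F = integral of 10**((T - Tref)/z) dt,
    evaluated with a simple rectangle rule over a time-temperature profile."""
    temps_c = np.asarray(temps_c, dtype=float)
    return float(np.sum(10.0 ** ((temps_c - t_ref_c) / z_c)) * dt_min)

# A profile held exactly at the reference temperature accumulates
# 1 min of F per minute of process time:
flat = [121.1] * 12            # 12 readings, 0.25 min apart -> 3 min total
f_target = process_value(flat, dt_min=0.25)
print(round(f_target, 3))      # -> 3.0

# Away from Tref, integrators with different z-values disagree,
# which is exactly why the correction factors above are needed:
hot = [118.0] * 12
f_z10 = process_value(hot, dt_min=0.25, z_c=10.0)
f_z15 = process_value(hot, dt_min=0.25, z_c=15.0)
```

Below T_ref a larger z-value overstates the accumulated lethality relative to z = 10 °C, so f_z15 > f_z10 here.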
Herrero-Medrano, J M; Mathur, P K; ten Napel, J; Rashidi, H; Alexandri, P; Knol, E F; Mulder, H A
2015-04-01
Robustness is an important issue in the pig production industry. Since pigs from international breeding organizations have to withstand a variety of environmental challenges, selection of pigs with the inherent ability to sustain their productivity in diverse environments may be an economically feasible approach in the livestock industry. The objective of this study was to estimate genetic parameters and breeding values across different levels of environmental challenge load. The challenge load (CL) was estimated as the reduction in reproductive performance during different weeks of a year using 925,711 farrowing records from farms distributed worldwide. A wide range of levels of challenge, from favorable to unfavorable environments, was observed among farms with high CL values being associated with confirmed situations of unfavorable environment. Genetic parameters and breeding values were estimated in high- and low-challenge environments using a bivariate analysis, as well as across increasing levels of challenge with a random regression model using Legendre polynomials. Although heritability estimates of number of pigs born alive were slightly higher in environments with extreme CL than in those with intermediate levels of CL, the heritabilities of number of piglet losses increased progressively as CL increased. Genetic correlations among environments with different levels of CL suggest that selection in environments with extremes of low or high CL would result in low response to selection. Therefore, selection programs of breeding organizations that are commonly conducted under favorable environments could have low response to selection in commercial farms that have unfavorable environmental conditions. Sows that had experienced high levels of challenge at least once during their productive life were ranked according to their EBV. 
The selection of pigs using EBV ignoring environmental challenges or on the basis of records from only favorable environments resulted in a sharp decline in productivity as the level of challenge increased. In contrast, selection using the random regression approach resulted in limited change in productivity with increasing levels of challenge. Hence, we demonstrate that the use of a quantitative measure of environmental CL and a random regression approach can be comprehensively combined for genetic selection of pigs with enhanced ability to maintain high productivity in harsh environments.
The comparison study among several data transformations in autoregressive modeling
NASA Astrophysics Data System (ADS)
Setiyowati, Susi; Waluyo, Ramdhani Try
2015-12-01
In finance, the adjusted close prices of stocks are used to observe the performance of a company. Extreme prices, which may increase or decrease drastically, are often of particular concern since they can lead to bankruptcy. As a preventive action, investors have to forecast future stock prices comprehensively. For that purpose, time series analysis is one of the statistical methods that can be applied, for both stationary and non-stationary processes. Since the variability of stock prices tends to be large and extreme values are usually present, data transformation is necessary so that time series models, e.g. the autoregressive model, can be applied appropriately. One popular data transformation in finance is the return model, in addition to the log-ratio and other Tukey ladder-of-powers transformations. In this paper these transformations are applied to stationary AR models and non-stationary ARCH and GARCH models through simulations with varying parameters. As a result, this work presents a suggestion table that shows the behavior of the transformations under various conditions of parameters and models. It is confirmed that which transformation works better depends on the type of data distribution. In addition, the parameter conditions also have a significant influence.
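A minimal sketch of the two transformations named above, log returns and the Tukey ladder of powers, on a made-up price series:

```python
import numpy as np

def log_returns(prices):
    """Log-return transformation r_t = ln(P_t / P_{t-1}), the usual way to
    turn a non-stationary price series into an approximately stationary one."""
    prices = np.asarray(prices, dtype=float)
    return np.diff(np.log(prices))

def tukey_ladder(x, lam):
    """Tukey ladder-of-powers transformation x**lam (lam = 0 gives the log)."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else x ** lam

# Hypothetical adjusted-close series:
close = np.array([100.0, 102.0, 99.0, 104.0, 103.0])
r = log_returns(close)
sqrt_close = tukey_ladder(close, 0.5)   # lam = 0.5 is one rung of the ladder
```

A useful property of log returns is that they telescope: the sum of the returns equals the log of the overall price ratio, which makes multi-period aggregation trivial.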
NASA Astrophysics Data System (ADS)
Yin, Shui-qing; Wang, Zhonglei; Zhu, Zhengyuan; Zou, Xu-kai; Wang, Wen-ting
2018-07-01
Extreme precipitation can cause flooding and may result in great economic losses and deaths. The return level is a commonly used measure of extreme precipitation events and is required for hydrological engineering designs, including those of sewerage systems, dams, reservoirs and bridges. In this paper, we propose a two-step method to estimate the return level and its uncertainty for a study region. In the first step, we use the generalized extreme value distribution, the L-moment method and the stationary bootstrap to estimate the return level and its uncertainty at each site with observations. In the second step, a spatial model incorporating the heterogeneous measurement errors and covariates is trained to estimate return levels at sites with no observations and to improve the estimates at sites with limited information. The proposed method is applied to the daily rainfall data from 273 weather stations in the Haihe river basin of North China. We compare the proposed method with two alternatives: the first is based on the ordinary kriging method without measurement error, and the second smooths the estimated location and scale parameters of the generalized extreme value distribution by universal kriging. Results show that the proposed method outperforms its counterparts. We also propose a novel approach to assess the two-step method by comparing it with the at-site estimation method over a series of reduced observation lengths. Estimates of the 2-, 5-, 10-, 20-, 50- and 100-year return level maps and the corresponding uncertainties are provided for the Haihe river basin, and a comparison with those released by the Hydrology Bureau of the Ministry of Water Resources of China is made.
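The first step above, fitting the GEV by the L-moment method and converting it to return levels, can be sketched with Hosking's well-known approximations. The 60-year synthetic record and all parameter values are illustrative; k below is Hosking's shape (k = -xi in the other common GEV convention).

```python
import numpy as np
from math import gamma, log
from scipy.stats import genextreme

def sample_l_moments(x):
    """First three sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2          # l3/l2 is the L-skewness tau3

def gev_lmom_fit(x):
    """Hosking's approximate L-moment estimators for the GEV parameters."""
    l1, l2, tau3 = sample_l_moments(x)
    c = 2.0 / (3.0 + tau3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c ** 2
    alpha = l2 * k / ((1 - 2.0 ** -k) * gamma(1 + k))
    loc = l1 - alpha * (1 - gamma(1 + k)) / k
    return loc, alpha, k

def return_level(T, loc, alpha, k):
    """T-year return level: the quantile with non-exceedance prob 1 - 1/T."""
    y = -log(1.0 - 1.0 / T)
    return loc + alpha / k * (1.0 - y ** k)

rng = np.random.default_rng(0)
# Hypothetical 60-year record of annual maximum daily rainfall (mm);
# scipy's shape c coincides with Hosking's k:
annmax = genextreme.rvs(c=0.1, loc=50, scale=12, size=60, random_state=rng)
loc, alpha, k = gev_lmom_fit(annmax)
levels = {T: return_level(T, loc, alpha, k) for T in (2, 10, 100)}
```

L-moment estimators are popular in regional frequency analysis precisely because, as the abstract's setting suggests, they are stable for the short records typical of rain-gauge networks.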
Assessment of Wind Parameter Sensitivity on Extreme and Fatigue Wind Turbine Loads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, Amy N; Sethuraman, Latha; Jonkman, Jason
Wind turbines are designed using a set of simulations to ascertain the structural loads that the turbine could encounter. While mean hub-height wind speed is considered to vary, other wind parameters such as turbulence spectra, shear, veer, spatial coherence, and component correlation are fixed or conditional values that, in reality, could have different characteristics at different sites and have a significant effect on the resulting loads. This paper therefore seeks to assess the sensitivity of the resulting ultimate and fatigue loads on the turbine to different wind parameters during normal operational conditions. Eighteen different wind parameters are screened using an Elementary Effects approach with radial points. As expected, the results show a high sensitivity of the loads to the turbulence standard deviation in the primary wind direction, but the sensitivity to wind shear is often much greater. To a lesser extent, other wind parameters that drive loads include the coherence in the primary wind direction and veer.
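A minimal sketch of Elementary Effects screening with radial points, run on a toy load function whose sensitivities are known by construction. The function and its inputs are invented for illustration; they are not the paper's turbine model.

```python
import numpy as np

rng = np.random.default_rng(1)

def elementary_effects_radial(f, n_params, n_points=50, delta=0.1):
    """Radial Elementary Effects screening: from each random base point,
    perturb one input at a time and record |f(x + delta*e_i) - f(x)| / delta.
    Returns mu_star, the mean absolute elementary effect per input."""
    ee = np.zeros((n_points, n_params))
    for t in range(n_points):
        base = rng.uniform(0.0, 1.0 - delta, n_params)
        f0 = f(base)
        for i in range(n_params):
            x = base.copy()
            x[i] += delta
            ee[t, i] = abs(f(x) - f0) / delta
    return ee.mean(axis=0)

# Hypothetical load response: dominated by input 0 (say, turbulence std.
# dev.), a weaker nonlinear contribution from input 1 (say, shear), and
# no contribution at all from input 2:
def toy_load(x):
    return 10.0 * x[0] + 3.0 * x[1] ** 2 + 0.0 * x[2]

mu_star = elementary_effects_radial(toy_load, n_params=3)
ranking = np.argsort(mu_star)[::-1]     # most to least influential input
```

The screening correctly ranks input 0 first and flags input 2 as inert, which is the whole point of the method: cheap triage of many parameters before expensive load simulations.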
A spatial and seasonal description of return-levels for the Berlin-Brandenburg region (Germany)
NASA Astrophysics Data System (ADS)
Fischer, Madlen; Rust, Henning W.; Ulbrich, Uwe
2016-04-01
Extreme precipitation events have a strong impact on the environment, society and the economy. Besides direct effects, e.g. damage due to hail, extreme precipitation can cause flood events, mudslides and increased erosion, which in turn lead to serious damage. Typically, return levels derived from annual maxima of daily precipitation sums are used for the design of hydraulic structures or for risk assessment in insurance companies. Seasonally or monthly resolved return levels are rarely considered, although they provide additional information: the higher temporal resolution can be beneficial for risk management, e.g. for the agriculture or tourism sectors. In addition, annual return levels derived from monthly maxima offer lower uncertainties, since a larger data basis is used for estimation. Here, the generalized extreme value distribution (GEV) is used to calculate monthly resolved return levels for 323 stations in the Berlin-Brandenburg region (Germany). Instead of estimating the parameters of the GEV for each month separately, the seasonal variation is captured by harmonic functions. This natural approach is particularly suitable for an efficient characterization of the seasonal variation of extreme precipitation. In a first step, a statistical model is developed for each station separately to estimate the monthly return levels. Besides seasonal smoothness, smoothness in space is also exploited here. We use functions of longitude, latitude and altitude to describe the spatial variation of the GEV parameters in a second step. Thus, uncertainty is reduced at gauges with short time series, and estimates for ungauged sites can be obtained in a meaningful way.
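One way to read "seasonal variation captured by harmonic functions" is to let each GEV parameter follow a first-order harmonic in the month index. The sketch below, with invented coefficients (not fitted values from the study), turns such a parameterization into monthly 100-year return levels:

```python
import numpy as np

def seasonal_gev_params(month, mu=(30.0, -8.0, 3.0),
                        sigma=(8.0, -2.0, 1.0), xi=0.08):
    """GEV parameters varying over months via a first-order harmonic:
    theta(m) = a0 + a1*cos(2*pi*m/12) + a2*sin(2*pi*m/12).
    Coefficient values here are purely illustrative."""
    w = 2.0 * np.pi * month / 12.0
    mu_m = mu[0] + mu[1] * np.cos(w) + mu[2] * np.sin(w)
    sg_m = sigma[0] + sigma[1] * np.cos(w) + sigma[2] * np.sin(w)
    return mu_m, sg_m, xi

def gev_return_level(T, mu, sigma, xi):
    """T-year return level of a GEV with shape xi != 0."""
    y = -np.log(1.0 - 1.0 / T)
    return mu + sigma / xi * (y ** -xi - 1.0)

months = np.arange(1, 13)
levels = [gev_return_level(100, *seasonal_gev_params(m)) for m in months]
# With a1 < 0 the cosine trough falls in mid-year, so the illustrative
# 100-year daily-precipitation return levels peak in late spring/summer.
```

Fitting three harmonic coefficients per parameter instead of twelve separate monthly values is what makes the seasonal model both smooth and parsimonious.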
Extreme values in the Chinese and American stock markets based on detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Cao, Guangxi; Zhang, Minjia
2015-10-01
This paper presents a comparative analysis of extreme values in the Chinese and American stock markets based on the detrended fluctuation analysis (DFA) algorithm, using daily data of the Shanghai Composite Index and the Dow Jones Industrial Average. The empirical results indicate that the multifractal detrended fluctuation analysis (MF-DFA) method is more objective than the traditional percentile method. The range of extreme values of the Dow Jones Industrial Average is smaller than that of the Shanghai Composite Index, and the extreme values of the Dow Jones Industrial Average show stronger temporal clustering. The extreme values of both the Chinese and American stock markets are concentrated in 2008, consistent with the financial crisis of that year. Moreover, we investigate whether extreme events affect the cross-correlation between the Chinese and American stock markets using the multifractal detrended cross-correlation analysis algorithm. The results show that extreme events have no discernible effect on the cross-correlation between the two markets.
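A minimal implementation of (monofractal) DFA-1, the building block of the MF-DFA method used above: integrate the centered series into a profile, detrend it linearly in windows of size s, and read the scaling exponent alpha off a log-log fit of the fluctuation function. For white noise, alpha should come out near 0.5.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis (DFA-1): returns the scaling exponent
    alpha from a log-log fit of the fluctuation F(s) against window size s."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        ms_resid = []
        for seg in range(n_seg):
            window = profile[seg * s:(seg + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, window, 1)          # local linear trend
            ms_resid.append(np.mean((window - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(ms_resid)))    # fluctuation F(s)
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(7)
scales = [16, 32, 64, 128, 256]
alpha_noise = dfa(rng.standard_normal(8192), scales)   # ~0.5 for white noise
```

Values of alpha above 0.5 indicate persistent long-range correlations; applied to return magnitudes, this is what makes DFA a natural tool for detecting the temporal clustering of extremes discussed above.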
NASA Technical Reports Server (NTRS)
Klemas, V. (Principal Investigator); Wethe, C.
1975-01-01
The author has identified the following significant results. Results of the analysis of data collected during the summer of 1974 demonstrate that the ERTS Data Collection Platform (DCP) is quite responsive to changing water parameters and that this information can be successfully transmitted under all weather conditions. The monitoring of on-site probe outputs reveals a rapid response to changing water temperature, salinity, and turbidity conditions on incoming tides as the tidal salt wedge passes the probe location. The changes in water properties were corroborated by simultaneously sampling the water for subsequent laboratory analysis. Fluctuations observed in the values of salinity, conductivity, temperature and water depth over short time intervals were extremely small. Due to the nature of the probe, 10% to 20% fluctuations were observed in the turbidity values. The use of the average of the values observed during an overpass provided acceptable results. Good quality data was obtained from the satellite on each overpass regardless of weather conditions. Continued use of the DCP will help provide an indication of the accuracy of the probes and transmission system during long term use.
Hays, Ron D; Spritzer, Karen L; Amtmann, Dagmar; Lai, Jin-Shei; Dewitt, Esi Morgan; Rothrock, Nan; Dewalt, Darren A; Riley, William T; Fries, James F; Krishnan, Eswar
2013-11-01
To create upper-extremity and mobility subdomain scores from the Patient-Reported Outcomes Measurement Information System (PROMIS) physical functioning adult item bank. Expert reviews were used to identify upper-extremity and mobility items from the PROMIS item bank. Psychometric analyses were conducted to assess empirical support for scoring upper-extremity and mobility subdomains. Data were collected from the U.S. general population and multiple disease groups via self-administered surveys. The sample (N=21,773) included 21,133 English-speaking adults who participated in the PROMIS wave 1 data collection and 640 Spanish-speaking Latino adults recruited separately. Not applicable. We used English- and Spanish-language data and existing PROMIS item parameters for the physical functioning item bank to estimate upper-extremity and mobility scores. In addition, we fit graded response models to calibrate the upper-extremity items and mobility items separately, compare separate to combined calibrations, and produce subdomain scores. After eliminating items because of local dependency, 16 items remained to assess upper extremity and 17 items to assess mobility. The estimated correlation between upper extremity and mobility was .59 using existing PROMIS physical functioning item parameters (r=.60 using parameters calibrated separately for upper-extremity and mobility items). Upper-extremity and mobility subdomains shared about 35% of the variance in common, and produced comparable scores whether calibrated separately or together. The identification of the subset of items tapping these 2 aspects of physical functioning and scored using the existing PROMIS parameters provides the option of scoring these subdomains in addition to the overall physical functioning score. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Otero, L. J.; Ortiz-Royero, J. C.; Ruiz-Merchan, J. K.; Higgins, A. E.; Henriquez, S. A.
2015-05-01
On Friday, 7 March 2009, a 200 m-long section of the tourist pier in Puerto Colombia collapsed under the impact of the waves generated by a cold front in the area. The aim of this study is to determine the contribution and importance of cold fronts and storms to extreme waves in different areas of the Colombian Caribbean, in order to establish the degree of threat posed by flood processes to these coastal populations and the actions to which coastal engineering constructions should be subject. In the design of maritime constructions, the most important parameter is the wave height; therefore, it is necessary to know the design wave height that a coastal engineering structure should resist. This wave height varies according to the return period considered. Using Gumbel's extreme value methodology, the significant wave height values for the study area were calculated. The methodology was evaluated using re-analysis data from the spectral NOAA Wavewatch III (WW3) model for 15 points along the 1600 km of the Colombian Caribbean coast (continental and insular) over the last 15 years. The results demonstrated that extreme waves caused by tropical cyclones and cold fronts have different effects along the Colombian Caribbean coast. Storms and hurricanes are of greater importance in the Guajira Peninsula (Alta Guajira). In the central area formed by Baja Guajira, Santa Marta, Barranquilla, and Cartagena, the strong influence of cold fronts on extreme waves is evident. In the southern region of the Colombian Caribbean coast, from the Gulf of Morrosquillo to the Gulf of Urabá, extreme waves are lower than in the previous regions but are dominated mainly by the passage of cold fronts. Extreme waves in the San Andrés and Providencia insular region present a different dynamic from that in the continental area due to its geographic location. 
The wave heights in the extreme regime are similar in magnitude to those found in Alta Guajira, but the extreme waves associated with the passage of cold fronts in this region have lower return periods than the extreme waves associated with hurricane season. These results are of great importance when evaluating the threat of extreme waves in the coastal and port infrastructure, for purposes of the design of new constructions, and in the coastal flood processes due to run-up because, according to the site of interest in the coast, the forces that shape extreme waves are not the same.
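The Gumbel extreme-value step of the study, turning a record of annual maximum significant wave heights into design heights for chosen return periods, can be sketched as follows. The 15-year record is synthetic and the parameter values are invented.

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(3)

# Hypothetical 15-year record of annual maximum significant wave height (m):
annual_max_hs = gumbel_r.rvs(loc=3.5, scale=0.6, size=15, random_state=rng)

loc, scale = gumbel_r.fit(annual_max_hs)

def design_height(T):
    """Design wave height for a T-year return period: the Gumbel quantile
    with non-exceedance probability 1 - 1/T."""
    return gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)

h50, h100 = design_height(50), design_height(100)
```

Because the Gumbel quantile grows roughly linearly in ln(T), doubling the return period adds only a modest increment to the design height, which is why the choice of return period, not the fit, often dominates design decisions.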
Statistical methods for the analysis of climate extremes
NASA Astrophysics Data System (ADS)
Naveau, Philippe; Nogaj, Marta; Ammann, Caspar; Yiou, Pascal; Cooley, Daniel; Jomelli, Vincent
2005-08-01
Currently there is increasing research activity in the area of climate extremes because they represent a key manifestation of non-linear systems and have an enormous impact on economic and social human activities. Our understanding of the mean behavior of climate and its 'normal' variability has improved significantly during the last decades. In comparison, climate extreme events have been hard to study and even harder to predict because they are, by definition, rare and obey different statistical laws than averages. In this context, the motivation for this paper is twofold. Firstly, we recall the basic principles of Extreme Value Theory, which is used on a regular basis in finance and hydrology but still does not enjoy the same success in climate studies. More precisely, the theoretical distributions of maxima and large peaks are recalled. The parameters of such distributions are estimated with the maximum likelihood estimation procedure, which offers the flexibility to take explanatory variables into account in the analysis. Secondly, we detail three case studies to show that this theory can provide a solid statistical foundation, especially when assessing the uncertainty associated with extreme events in a wide range of applications linked to the study of our climate. To cite this article: P. Naveau et al., C. R. Geoscience 337 (2005).
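Alongside the distribution of maxima, the paper recalls the distribution of large peaks; a minimal peaks-over-threshold sketch with a generalized Pareto fit follows. The synthetic daily rainfall, the 98th-percentile threshold choice, and all numbers are illustrative.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(11)

# Hypothetical 40 years of daily rainfall (mm); an exponential bulk is
# enough to sketch the peaks-over-threshold workflow:
years = 40
daily = rng.exponential(scale=6.0, size=365 * years)

u = np.quantile(daily, 0.98)                 # threshold: 98th percentile
exceed = daily[daily > u] - u                # excesses over the threshold
xi, _, beta = genpareto.fit(exceed, floc=0)  # fit a GPD to the excesses
lam = len(exceed) / years                    # exceedances per year

def pot_return_level(T):
    """T-year return level from the GPD fit: u + (beta/xi)*((lam*T)**xi - 1),
    with the xi -> 0 (exponential) limit handled separately."""
    if abs(xi) < 1e-6:
        return u + beta * np.log(lam * T)
    return u + beta / xi * ((lam * T) ** xi - 1.0)

z20, z100 = pot_return_level(20), pot_return_level(100)
```

Relative to annual maxima, the threshold approach uses many more observations per year, which is one reason the paper recommends it when records are short.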
Janssen, Insa; Lang, Gernot; Navarro-Ramirez, Rodrigo; Jada, Ajit; Berlin, Connor; Hilis, Aaron; Zubkov, Micaella; Gandevia, Lena; Härtl, Roger
2017-11-01
Recently, a novel mobile intraoperative fan-beam computed tomography (CT) system was introduced, allowing for real-time navigation and immediate intraoperative evaluation of neural decompression in spine surgery. This study sought to investigate whether intraoperatively assessed neural decompression during minimally invasive spine surgery (MISS) has predictive value for clinical and radiographic outcome. A retrospective study of patients undergoing intraoperative CT (iCT)-guided extreme lateral interbody fusion or transforaminal lumbar interbody fusion was conducted. 1) Preoperative, 2) intraoperative (after cage implantation), 3) postoperative, and 4) follow-up radiographic and clinical parameters obtained from radiography or CT were quantified. Thirty-four patients (41 spinal segments) were analyzed. iCT-based navigation was successfully accomplished in all patients. Radiographic parameters showed significant improvement from preoperative to intraoperative values after cage implantation in both MISS procedures (extreme lateral interbody fusion/transforaminal lumbar interbody fusion) (P ≤ 0.05). Radiologic parameters for both MISS fusion procedures did not differ significantly from the radiographic measures assessed at follow-up (P > 0.05). Radiologic outcome values did not decrease from the intraoperative assessment (after cage implantation) to the latest follow-up. Intraoperative fan-beam CT is capable of assessing neural decompression intraoperatively with high accuracy, allowing for precise prediction of radiologic outcome and the earliest possible feedback during MISS fusion procedures. These findings are highly valuable for routine practice and for future investigations toward finding a threshold for neural decompression that translates into clinical improvement. If sufficient neural decompression has been confirmed with iCT imaging, additional postoperative and/or follow-up imaging studies might no longer be required if patients remain asymptomatic. 
Copyright © 2017 Elsevier Inc. All rights reserved.
Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Cheong, R. Y.; Gabda, D.
2017-09-01
Analysis of flood trends is vital since flooding threatens human life in terms of finances, the environment and security. Annual maximum river flow data from Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research has shown that the MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference approach that estimates parameters from the posterior distribution given by Bayes' theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced by plain Monte Carlo methods. This approach also accounts for more of the uncertainty in parameter estimation, and thus yields better predictions of maximum river flow in Sabah.
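A minimal random-walk Metropolis-Hastings sampler for the three GEV parameters, with flat priors so the posterior is proportional to the likelihood. The synthetic flow data, step sizes, and chain length are illustrative choices, not the study's settings.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(5)

# Hypothetical annual maximum river flows (m^3/s) drawn from a GEV:
true_mu, true_sigma, true_xi = 120.0, 25.0, 0.1
flows = genextreme.rvs(c=-true_xi, loc=true_mu, scale=true_sigma,
                       size=200, random_state=rng)

def log_post(theta):
    """Log-posterior with flat priors: just the GEV log-likelihood
    (scipy's shape c equals -xi; invalid regions return -inf)."""
    mu, log_sigma, xi = theta
    ll = genextreme.logpdf(flows, c=-xi, loc=mu,
                           scale=np.exp(log_sigma)).sum()
    return ll if np.isfinite(ll) else -np.inf

# Random-walk Metropolis-Hastings over (mu, log sigma, xi):
theta = np.array([flows.mean(), np.log(flows.std()), 0.0])
lp = log_post(theta)
step = np.array([2.0, 0.05, 0.03])        # proposal standard deviations
samples = []
for it in range(6000):
    prop = theta + step * rng.standard_normal(3)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
        theta, lp = prop, lp_prop
    if it >= 1000:                             # discard burn-in
        samples.append(theta.copy())
samples = np.asarray(samples)
mu_post = samples[:, 0].mean()
```

Unlike a point MLE, the retained chain gives a full posterior, so quantities such as return levels inherit credible intervals directly, which is the extra uncertainty accounting the abstract refers to.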
Valuing happiness is associated with bipolar disorder.
Ford, Brett Q; Mauss, Iris B; Gruber, June
2015-04-01
Although people who experience happiness tend to have better psychological health, people who value happiness to an extreme tend to have worse psychological health, including more depression. We propose that the extreme valuing of happiness may be a general risk factor for mood disturbances, both depressive and manic. To test this hypothesis, we examined the relationship between the extreme valuing of happiness and risk for, diagnosis of, and illness course for bipolar disorder (BD). Supporting our hypothesis, the extreme valuing of happiness was associated with a measure of increased risk for developing BD (Studies 1 and 2), increased likelihood of past diagnosis of BD (Studies 2 and 3), and worse prospective illness course in BD (Study 3), even when controlling for current mood symptoms (Studies 1-3). These findings indicate that the extreme valuing of happiness is associated with and even predicts BD. Taken together with previous evidence, these findings suggest that the extreme valuing of happiness is a general risk factor for mood disturbances. More broadly, what emotions people strive to feel may play a critical role in psychological health. (c) 2015 APA, all rights reserved.
Valuing happiness is associated with bipolar disorder
Ford, Brett Q.; Mauss, Iris B.; Gruber, June
2015-01-01
While people who experience happiness tend to have better psychological health, people who value happiness to an extreme tend to have worse psychological health, including more depression. We propose that the extreme valuing of happiness may be a general risk factor for mood disturbances, both depressive and manic. To test this hypothesis, we examined the relationship between the extreme valuing of happiness and risk for, diagnosis of, and illness course for Bipolar Disorder (BD). Supporting our hypothesis, the extreme valuing of happiness was associated with a measure of increased risk for developing BD (Studies 1–2), increased likelihood of past diagnosis of BD (Studies 2–3), and worse prospective illness course in BD (Study 3), even when controlling for current mood symptoms (Studies 1–3). These findings indicate that the extreme valuing of happiness is associated with and even predicts BD. Taken together with previous evidence, these findings suggest that the extreme valuing of happiness is a general risk factor for mood disturbances. More broadly, what emotions people strive to feel may play a critical role in psychological health. PMID:25603134
NASA Astrophysics Data System (ADS)
Sousa, S. G.; Santos, N. C.; Mortier, A.; Tsantaki, M.; Adibekyan, V.; Delgado Mena, E.; Israelian, G.; Rojas-Ayala, B.; Neves, V.
2015-04-01
Aims: In this work we derive new precise and homogeneous parameters for 37 stars with planets. For this purpose, we analyze high-resolution spectra obtained by the NARVAL spectrograph for a sample composed of bright planet host stars in the northern hemisphere. The new parameters are included in the SWEET-Cat online catalogue. Methods: To ensure that the catalogue is homogeneous, we use our standard spectroscopic analysis procedure, ARES+MOOG, to derive effective temperatures, surface gravities, and metallicities. These spectroscopic stellar parameters are then used as input to compute the stellar mass and radius, which are fundamental for the derivation of the planetary mass and radius. Results: We show that the spectroscopic parameters, masses, and radii are generally in good agreement with the values available in online databases of exoplanets. There are some exceptions, especially for the evolved stars. These are analyzed in detail, focusing on the effect of the stellar mass on the derived planetary mass. Conclusions: We conclude that stellar mass estimations for giant stars should be handled with extreme caution when using them to compute planetary masses. We report examples within this sample where the differences in planetary mass can be as high as 100% in the most extreme cases. Based on observations obtained at the Telescope Bernard Lyot (USR5026) operated by the Observatoire Midi-Pyrénées and the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France (Run ID L131N11 - OPTICON_2013A_027).
Meteorological risks are drivers of environmental innovation in agro-ecosystem management
NASA Astrophysics Data System (ADS)
Gobin, Anne; Van de Vyver, Hans; Vanwindekens, Frédéric; Planchon, Viviane; Verspecht, Ann; Frutos de Cachorro, Julia; Buysse, Jeroen
2016-04-01
Extreme weather events such as droughts, heat waves and rain storms are projected to increase both in frequency and magnitude with climate change. The research hypothesis of the MERINOVA project is that meteorological risks act as drivers of environmental innovation in agro-ecosystem management; this hypothesis is tested using a chain-of-risks approach. The project comprises five major parts that reflect the chain of risks: the hazard, its impact on different agro-ecosystems, vulnerability, risk management and risk communication. Generalized Extreme Value (GEV) theory was used to model annual maxima of meteorological variables based on a location, scale and shape parameter that determine the center of the distribution, the spread about the location parameter and the upper-tail decay, respectively. Spatial interpolation of GEV-derived return levels has yielded maps of temperature extremes, precipitation deficits and wet periods. The degree of temporal overlap between extreme weather conditions and sensitive periods in the agro-ecosystem was determined using a bio-physically based modelling framework that couples phenological models, a soil water balance, crop growth and environmental models. 20-year return values for frost, heat stress, drought, waterlogging and field access during different crop stages were related to arable yields. The spatial extent of vulnerability is mapped from different layers of spatial information that include, inter alia, meteorology, soil-landscapes, crop cover and management. The level of vulnerability and resilience of an agro-ecosystem is also determined by risk management. The types of agricultural risk and their relative importance differ across sectors and farm types, as elucidated by questionnaires and focus groups. Risk types are distinguished according to production, market, institutional, financial and liability risks. A portfolio of potential strategies was identified at farm, market and policy level.
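The GEV modelling step described above can be sketched in a few lines. This is a minimal illustration, not the project's code: it uses synthetic annual maxima in place of station records, and scipy's `genextreme`, whose shape parameter `c` is the negative of the usual ξ convention.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical annual maxima of a meteorological variable; real inputs
# would be station records.
annual_max = stats.genextreme.rvs(c=-0.1, loc=50, scale=12, size=60,
                                  random_state=rng)

# Fit the three GEV parameters (shape, location, scale) by maximum likelihood.
c, loc, scale = stats.genextreme.fit(annual_max)

def return_level(T):
    """T-year return level: quantile with non-exceedance probability 1 - 1/T."""
    return stats.genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)

print(return_level(20))  # 20-year return value
```

Spatial interpolation of `return_level(T)` evaluated at each station would then yield the return-level maps mentioned above.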
In conclusion, MERINOVA provides a robust and flexible framework, demonstrating its performance across Belgian agro-ecosystems and ensuring its relevance to policy makers and practitioners. A strong expert and end-user network is established to help disseminate and exploit project results to meet user needs.
NASA Technical Reports Server (NTRS)
Merchant, D. H.
1976-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
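As a hedged sketch of the idea (not the report's actual procedure), the design limit load can be taken as a quantile of a Type I (Gumbel) extreme value distribution describing the mission-maximum load. The location and scale parameters below are assumed for illustration; in practice they would come from time- or frequency-domain load simulations.

```python
import math

# Assumed Gumbel parameters of the mission-maximum load distribution.
mu, beta = 100.0, 8.0  # location and scale (arbitrary load units)

def design_limit_load(p):
    """Gumbel quantile: load not exceeded with probability p in a mission."""
    return mu - beta * math.log(-math.log(p))

print(design_limit_load(0.99))  # a 99th-percentile design limit load
```

Higher design probabilities give monotonically larger limit loads, which is the trade-off between structural weight and acceptable exceedance risk.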
Nearly extremal apparent horizons in simulations of merging black holes
NASA Astrophysics Data System (ADS)
Lovelace, Geoffrey; Scheel, Mark; Owen, Robert; Giesler, Matthew; Katebi, Reza; Szilagyi, Bela; Chu, Tony; Demos, Nicholas; Hemberger, Daniel; Kidder, Lawrence; Pfeiffer, Harald; Afshari, Nousha; SXS Collaboration
2015-04-01
The spin S of a Kerr black hole is bounded by the surface area A of its apparent horizon: 8πS ≤ A. We present recent results (arXiv:1411.7297) on the extremality of apparent horizons for merging, rapidly rotating black holes with equal masses and equal spins aligned with the orbital angular momentum. Measuring the area and (using approximate Killing vectors) the spin on the individual and common apparent horizons, we find that the inequality 8πS < A is satisfied but is very close to equality on the common apparent horizon at the instant it first appears, even for initial spins as large as S/M² = 0.994. We compute the smallest value e0 that Booth and Fairhurst's extremality parameter can take for any scaling of the horizon's null normal vectors, concluding that the common horizons are at least moderately close to extremal just after they appear. We construct binary-black-hole initial data with marginally trapped surfaces with 8πS > A and e0 > 1, but these surfaces are always surrounded by apparent horizons with 8πS < A and e0 < 1.
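For an isolated Kerr black hole the bound 8πS ≤ A can be checked in closed form, since A = 8πM²(1 + √(1 − χ²)) and S = χM² for dimensionless spin χ = S/M². The sketch below illustrates only this inequality, not the numerical-relativity diagnostics used in the paper.

```python
import math

def kerr_area(M, chi):
    """Horizon area of a Kerr black hole, mass M and dimensionless spin chi
    (geometric units G = c = 1)."""
    return 8 * math.pi * M**2 * (1 + math.sqrt(1 - chi**2))

def extremality_ratio(chi):
    """8*pi*S / A for Kerr, with S = chi * M**2; independent of M."""
    return chi / (1 + math.sqrt(1 - chi**2))

# At chi = 0.994, the largest initial spin quoted above, the ratio is
# close to, but still below, 1.
print(extremality_ratio(0.994))
```

The ratio reaches 1 only at χ = 1, i.e. exact extremality, consistent with 8πS < A holding strictly for sub-extremal holes.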
NASA Astrophysics Data System (ADS)
Pavlov, Volodymyr S.; Bezsmernyi, Yurii O.; Zlepko, Sergey M.; Bezsmertna, Halyna V.
2017-08-01
The given paper analyzes principles of interaction and analysis of the reflected optical radiation from biotissue in the process of assessment of regional hemodynamics state in patients with local hypertensive- ischemic pain syndrome of amputation stumps of lower extremities, applying the method of photoplethysmography. The purpose is the evaluation of Laser photoplethysmography (LPPG) diagnostic value in examination of patients with chronic ischemia of lower extremities. Photonic device is developed to determine the level of the peripheral blood circulation, which determines the basic parameters of peripheral blood circulation and saturation level. Device consists of two sensors: infrared sensor, which contains the infrared laser radiation source and photodetector, and red sensor, which contains the red radiation source and photodetector. LPPG method allows to determined pulsatility of blood flow in different areas of the foot and lower leg, the degree of compensation and conservation perspectives limb. Surgical treatment of local hypertensive -ischemic pain syndrome of amputation stumps of lower extremities by means of semiclosed fasciotomy in combination with revasculating osteotrepanation enabled to improve considerably regional hemodynamics in the tissues of the stump and decrease pain and hypostatic disorders.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eckert-Gallup, Aubrey C.; Sallaberry, Cédric J.; Dallman, Ann R.
Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. These environmental contours are characterized by combinations of significant wave height (Hs) and either energy period (Te) or peak period (Tp) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (I-FORM) is a standard design practice for generating environmental contours. This paper develops enhanced methodologies for data analysis prior to the application of the I-FORM, including the use of principal component analysis (PCA) to create an uncorrelated representation of the variables under consideration, as well as new distribution and parameter fitting techniques. As a result, these modifications better represent the measured data and, therefore, should contribute to the development of more realistic representations of environmental contours of extreme sea states for determining design loads for marine structures.
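The PCA step, rotating correlated (Hs, Te) data onto uncorrelated components before distribution fitting and I-FORM, can be sketched as follows. The sea-state sample is synthetic; this is an illustration of the idea, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical correlated sea-state data: significant wave height Hs (m)
# and energy period Te (s); real inputs would be hindcasts or buoy records.
hs = rng.gamma(shape=2.0, scale=1.5, size=1000)
te = 4.0 + 1.2 * hs + rng.normal(0, 0.5, size=1000)
X = np.column_stack([hs, te])

# PCA: rotate onto the eigenvectors of the covariance matrix so that the
# components are uncorrelated before marginal distributions are fitted.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
components = Xc @ eigvecs

# The rotated components have (sample) correlation of essentially zero.
corr = np.corrcoef(components, rowvar=False)[0, 1]
print(corr)
```

Marginal and conditional distributions are then fitted to the uncorrelated components, and the resulting contour is rotated back into (Hs, Te) space.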
Zhang, Jiangshe; Ding, Weifu
2017-01-01
With the development of the economy and society all over the world, most metropolitan cities are experiencing elevated concentrations of ground-level air pollutants. It is urgent for local environmental and health agencies to predict and evaluate the concentrations of air pollutants. Feed-forward artificial neural networks have been widely used in the prediction of air pollutant concentrations. However, they have some drawbacks, such as a low convergence rate and convergence to local minima. The extreme learning machine for single-hidden-layer feed-forward neural networks tends to provide good generalization performance at an extremely fast learning speed. The major sources of air pollutants in Hong Kong are mobile, stationary, and trans-boundary sources. We propose predicting the concentration of air pollutants using trained extreme learning machines based on data for eight air quality parameters from two monitoring stations in Hong Kong, Sham Shui Po and Tap Mun, over six years. The experimental results show that our proposed algorithm performs better on the Hong Kong data both quantitatively and qualitatively. In particular, our algorithm shows better predictive ability, with increased R² and decreased root-mean-square error values. PMID:28125034
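The core of an extreme learning machine is small enough to sketch: the hidden-layer weights are drawn at random and never trained, and only the output weights are solved analytically by least squares, which is why training is so fast. The toy data below stand in for the pollutant time series; this is an illustration, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_train(X, y, n_hidden=50):
    """Single-hidden-layer ELM: random input weights, analytic output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression stand-in for pollutant-concentration data.
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
W, b, beta = elm_train(X, y)
rmse = float(np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
print(rmse)
```

Because only a linear system is solved, there is no iterative gradient descent and hence no local-minimum problem in the output weights.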
Extreme-value dependence: An application to exchange rate markets
NASA Astrophysics Data System (ADS)
Fernandez, Viviana
2007-04-01
Extreme value theory (EVT) focuses on modeling the tail behavior of a loss distribution using only extreme values rather than the whole data set. For a sample of 10 countries with dirty/free float regimes, we investigate whether paired currencies exhibit a pattern of asymptotic dependence, that is, whether an extremely large appreciation or depreciation in the nominal exchange rate of one country might transmit to another. In general, after controlling for volatility clustering and inertia in returns, we do not find evidence of extreme-value dependence between paired exchange rates. However, for asymptotically independent paired returns, we find that tail dependency of exchange rates is stronger under large appreciations than under large depreciations.
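A simple empirical measure of the tail dependence studied here is the conditional probability that one return exceeds a high quantile given that its paired return does. The sketch below uses synthetic heavy-tailed returns in place of filtered exchange-rate data.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical paired currency returns; real inputs would be exchange-rate
# log-returns after filtering volatility clustering (e.g. GARCH residuals).
x = rng.standard_t(df=4, size=5000)
y = 0.6 * x + 0.8 * rng.standard_t(df=4, size=5000)

def upper_tail_dep(x, y, q=0.95):
    """Empirical P(y above its q-quantile | x above its q-quantile)."""
    xt, yt = np.quantile(x, q), np.quantile(y, q)
    return float(np.mean(y[x > xt] > yt))

print(upper_tail_dep(x, y))
```

Under exact independence this conditional probability would be about 1 − q (here 0.05); values that stay well above that as q → 1 suggest asymptotic dependence, while values decaying toward 1 − q suggest asymptotic independence.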
NASA Astrophysics Data System (ADS)
Piecuch, C. G.; Huybers, P. J.; Tingley, M.
2015-12-01
Tide gauge records of mean sea level are some of the most valuable instrumental time series of oceanic variability and change. Yet these time series sometimes have short record lengths and intermittently missing values. Such issues can limit the utility of the data, for example, precluding rigorous analyses of return periods of extreme mean sea level events and whether they are unprecedented. With a view to filling gaps in the tide gauge mean sea level time series, we describe a hierarchical Bayesian modeling approach. The model, which is predicated on the notion of conditional probabilities, comprises three levels: a process level, which casts mean sea level as a field with spatiotemporal covariance; a data level, which represents tide gauge observations as noisy, biased versions of the true process; and a prior level, which gives prior functional forms to model parameters. Using Bayes' rule, this technique gives estimates of the posterior probability of the process and the parameters given the observations. To demonstrate the approach, we apply it to 2,967 station-years of annual mean sea level observations over 1856-2013 from 70 tide gauges along the United States East Coast from Florida to Maine (i.e., 26.8% record completeness). The model overcomes the data paucity by sharing information across space and time. The result is an ensemble of realizations, each member of which is a possible history of sea level changes at these locations over this period, which is consistent with and equally likely given the tide gauge data and underlying model assumptions. Using the ensemble of histories furnished by the Bayesian model, we identify extreme events of mean sea level change in the tide gauge time series. 
Specifically, we use the model to address the particular hypothesis (with rigorous uncertainty quantification) that a recently reported interannual sea level rise during 2008-2010 was unprecedented in the instrumental record along the northeast coast of North America, and that it had a return period of 850 years. Preliminary analysis suggests that this event was likely unprecedented on the coast of Maine in the last century.
Hurricane Risk Variability along the Gulf of Mexico Coastline
Trepanier, Jill C.; Ellis, Kelsey N.; Tucker, Clay S.
2015-01-01
Hurricane risk characteristics are examined across the U.S. Gulf of Mexico coastline using a hexagonal tessellation. Using an extreme value model, parameters are estimated representing the rate λ (frequency), the scale σ (range), and the shape ξ (intensity) of the extreme wind distribution. These latent parameters and the 30-year return level are visualized across the grid. The greatest 30-year return levels are located toward the center of the Gulf of Mexico and, for inland locations, along the borders of Louisiana, Mississippi, and Alabama. Using a geographically weighted regression model, the relationship of these parameters to sea surface temperature (SST) is examined to assess sensitivity to change. It is shown that as SSTs increase near the coast, the frequency of hurricanes in these grids decreases significantly. This reinforces the importance of SST in areas of likely tropical cyclogenesis, along with SSTs along the lifespan of the storm, in determining the number of hurricanes near the coast, rather than simply local SST. The range of hurricane wind speeds experienced near Florida is shown to increase with increasing SSTs (not statistically significant), suggesting that increased temperatures may allow hurricanes to maintain their strength as they pass over the Florida peninsula. The modifiable areal unit problem is assessed using multiple grid sizes. Moran's I and the local G statistic are calculated to examine spatial autocorrelation in the parameters. This research opens up future questions regarding rapid intensification and decay close to the coast and the relationship to changing SSTs. PMID:25767885
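Given the rate λ, scale σ, and shape ξ, a return level can be computed in closed form. The sketch below uses the standard peaks-over-threshold return-level formula with illustrative parameter values, not those estimated in the study.

```python
# Return level from a peaks-over-threshold extreme value model with
# occurrence rate lam (events/yr), scale sigma, and shape xi.
# Parameter values are illustrative only.
u, lam, sigma, xi = 33.0, 0.5, 10.0, 0.1  # threshold (m/s), rate, scale, shape

def return_level(T):
    """Wind speed exceeded on average once every T years (xi != 0)."""
    return u + (sigma / xi) * ((lam * T) ** xi - 1.0)

print(return_level(30))  # 30-year return level
```

Mapping `return_level(30)` for the (λ, σ, ξ) triple estimated in each hexagonal grid cell reproduces the kind of return-level surface described above.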
NASA Astrophysics Data System (ADS)
Pakula, Anna; Tomczewski, Slawomir; Skalski, Andrzej; Biało, Dionizy; Salbut, Leszek
2010-05-01
This paper presents a novel application of low coherence interferometry (LCI) to the measurement of characteristic parameters such as circular pitch, foot diameter, and head diameter in extremely small cogged wheels (wheel diameter below 3 mm and module m = 0.15) produced from metal and ceramics. The most interesting issues concerning small-diameter cogged wheels arise during their production: the characteristic parameters of the wheel depend strongly on the manufacturing process, and when inspecting small-diameter wheels, the shrinkage during casting varies with slight changes in the fabrication process. The paper describes an LCI Twyman-Green interferometric setup with a pigtailed high-power light-emitting diode for cogged wheel measurement. Due to its relatively large field of view, the whole wheel can be examined in one measurement, without the need for numerical stitching. A dedicated binarization algorithm was developed and successfully applied for measuring the characteristic parameters of small cogged wheels. Finally, the head and foot diameters of two cogged wheels measured with the proposed LCI setup are presented and compared with results obtained by a commercial optical profiler. The results of examining the injection moulds used to fabricate the measured cogged wheels are also presented, and the shrinkage of the cogged wheels is calculated from the obtained results. The proposed method is suitable for complex measurements of small-diameter, low-module cogged wheels, especially as there are no measurement standards for such objects.
Climate Impacts on Extreme Energy Consumption of Different Types of Buildings
Li, Mingcai; Shi, Jun; Guo, Jun; Cao, Jingfu; Niu, Jide; Xiong, Mingming
2015-01-01
Exploring changes in building energy consumption and its relationship with climate can provide a basis for energy saving and carbon emission reduction. Heating and cooling energy consumption of different types of buildings in Tianjin city during 1981-2010 was simulated using TRNSYS software. Daily and hourly extreme energy consumption was identified using percentile methods, and the climate impact on extreme energy consumption was analyzed. The results showed that the number of days of extreme heating consumption decreased markedly over the recent 30 years for the residential and large venue buildings, whereas days of extreme cooling consumption increased for the large venue building. No significant variations were found in the days of extreme energy consumption for the commercial building, although extreme heating energy consumption showed a decreasing trend. Daily extreme energy consumption for the large venue building had no relationship with climate parameters, whereas extreme energy consumption for the commercial and residential buildings was related to various climate parameters. Further multiple regression analysis suggested that heating energy consumption for the commercial building was affected by maximum temperature, dry bulb temperature, solar radiation and minimum temperature, which together explain 71.5% of the variation in daily extreme heating energy consumption. The daily extreme cooling energy consumption for the commercial building was related only to the wet bulb temperature (R² = 0.382). The daily extreme heating energy consumption for the residential building was affected by four climate parameters, with the dry bulb temperature having the main impact. The impact of climate on hourly extreme heating energy consumption lagged by 1-3 hours in all three types of buildings, but no delay was found in the impact of climate on hourly extreme cooling energy consumption for the selected buildings. PMID:25923205
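The multiple regression step can be sketched with ordinary least squares. The climate drivers below are synthetic stand-ins with an assumed linear relationship; only the set of regressors (maximum, dry bulb, and minimum temperature plus solar radiation) mirrors the study.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical daily climate drivers and extreme heating consumption.
n = 365
tmax = rng.normal(10, 8, n)
tdry = tmax - rng.uniform(2, 5, n)
solar = rng.uniform(5, 25, n)
tmin = tdry - rng.uniform(3, 8, n)
heating = (50 - 1.2 * tmax - 0.8 * tdry - 0.3 * solar - 0.5 * tmin
           + rng.normal(0, 3, n))

# Ordinary least squares with an intercept; R^2 is the explained variance.
X = np.column_stack([np.ones(n), tmax, tdry, solar, tmin])
coef, *_ = np.linalg.lstsq(X, heating, rcond=None)
resid = heating - X @ coef
r2 = float(1 - resid.var() / heating.var())
print(r2)
```

The R² computed this way is directly comparable to the 71.5% explained variance reported for the commercial building's heating consumption.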
Friedel, M.J.
2008-01-01
A regularized joint inverse procedure is presented and used to estimate the magnitude of extreme rainfall events in ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. Since streamflow measurements reflect temporal and spatial rainfall information, peak-flow discharge is hypothesized to represent a similarity measure suitable for regionalization. To test this hypothesis, peak-flow discharge values determined from streamflow recurrence information (10-year, 25-year, and 100-year) collected outside the study basins are used to develop regional (country-wide) regression equations. Peak-flow discharge derived from these equations, together with preferred spatial parameter relations as soft prior information, is used to constrain the simultaneous calibration of 20 tributary basin models. The nonlinear range of uncertainty in estimated parameter values (1 curve number and 3 recurrent rainfall amounts for each model) is determined using an inverse calibration-constrained Monte Carlo approach. Cumulative probability distributions for rainfall amounts indicate differences among basins for a given return period and an increase in magnitude and range among basins with increasing return interval. The estimated median rainfall amounts for all return periods were reasonable but larger (3.2-26%) than rainfall estimates computed using the traditional frequency-duration approach and individual rain gauge data. The observed 25-year recurrence rainfall amount at La Hachadura in the Paz River basin during Hurricane Mitch (1998) is similar in value to, but slightly below, the estimated rainfall confidence limits. The similarity between the joint inverse and traditionally computed rainfall events, however, suggests that this discrepancy may be due to gauge under-catch and not model bias. © Springer Science+Business Media B.V. 2007.
Shope, James B.; Storlazzi, Curt; Erikson, Li; Hegermiller, Christie
2016-01-01
Waves are the dominant influence on coastal morphology and ecosystem structure of tropical Pacific islands. Wave heights, periods, and directions for the 21st century were projected using near-surface wind fields from four atmosphere-ocean coupled global climate models (GCM) under representative concentration pathways (RCP) 4.5 and 8.5. GCM-derived wind fields forced the global WAVEWATCH-III wave model to generate hourly time-series of bulk wave parameters around 25 islands in the mid to western tropical Pacific Ocean for historical (1976–2005), mid-, and end-of-century time periods. Extreme significant wave heights decreased (~10.0%) throughout the 21st century under both climate scenarios compared to historical wave conditions and the higher radiative forcing 8.5 scenario displayed a greater and more widespread decrease in extreme significant wave heights compared to the lower forcing 4.5 scenario. An exception was for the end-of-century June–August season. Offshore of islands in the central equatorial Pacific, extreme significant wave heights displayed the largest changes from historical values. The frequency of extreme events during December–February decreased under RCP 8.5, whereas the frequency increased under RCP 4.5. Mean wave directions often rotated more than 30° clockwise at several locations during June–August, which could indicate a weakening of the trade winds’ influence on extreme wave directions and increasing dominance of Southern Ocean swell or eastern shift of storm tracks. The projected changes in extreme wave heights, directions of extreme events, and frequencies at which extreme events occur will likely result in changes to the morphology and sustainability of island nations.
The spatial return level of aggregated hourly extreme rainfall in Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Shaffie, Mardhiyyah; Eli, Annazirin; Wan Zin, Wan Zawiah; Jemain, Abdul Aziz
2015-07-01
This paper aims to ascertain the spatial pattern of extreme rainfall distribution in Peninsular Malaysia at several short time intervals, i.e., on an hourly basis. This research is motivated by historical records of extreme rainfall in Peninsular Malaysia, in which many hydrological disasters in this region occur within a short time period. The hourly durations considered are 1, 2, 3, 6, 12, and 24 h. Many previous hydrological studies dealt with daily rainfall data; this study therefore enables a comparison between daily and hourly rainfall analyses, so as to identify the impact of extreme rainfall at shorter time scales. Return levels based on the time aggregates considered are also computed. Parameter estimation using the L-moment method was conducted for four probability distributions, namely, the generalized extreme value (GEV), generalized logistic (GLO), generalized Pareto (GPA), and Pearson type III (PE3) distributions. Aided by the L-moment diagram test and the mean square error (MSE) test, the GLO distribution was found to be the most appropriate to represent the extreme rainfall data. For the return periods considered (10, 50, and 100 years), the spatial patterns revealed that the rainfall distribution across the peninsula differs between 1- and 24-h extreme rainfalls. The outcomes of this study provide additional information regarding patterns of extreme rainfall in Malaysia that may not be detected when considering only a larger time scale such as daily; thus, appropriate measures for shorter time scales of extreme rainfall can be planned. The implementation of such measures would help the authorities reduce the impact of any disastrous natural event.
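The first two sample L-moments, the building blocks of the L-moment estimation used here, can be computed from probability-weighted moments as sketched below. The rainfall sample is synthetic; fitting GLO, GEV, GPA, or PE3 parameters from these quantities follows the standard L-moment relations for each distribution.

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments via probability-weighted moments b0, b1."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()
    # Unbiased estimator of b1 = E[X * F(X)].
    b1 = np.sum((np.arange(1, n + 1) - 1) / (n - 1) * x) / n
    l1 = b0           # L-location (the mean)
    l2 = 2 * b1 - b0  # L-scale
    return l1, l2

# Hypothetical hourly extreme rainfall maxima (mm).
rng = np.random.default_rng(4)
data = rng.gumbel(loc=30, scale=10, size=200)
l1, l2 = sample_l_moments(data)
print(l1, l2)
```

Higher-order L-moment ratios (L-skewness, L-kurtosis) computed the same way are what the L-moment diagram test compares against the theoretical curves of the candidate distributions.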
Dimitrijevic, I M; Kocic, M N; Lazovic, M P; Mancic, D D; Marinkovic, O K; Zlatanovic, D S
2016-08-01
Lumbosacral radiculopathy is a pathological process that refers to the dysfunction of one or more spinal nerve roots in the lumbosacral region of the spine. Some studies have shown that infrared thermography can estimate the severity of the clinical manifestation of unilateral lumbosacral radiculopathy. This study aimed to examine the correlation of the regional thermal deficit of the affected lower extremity with pain intensity, mobility of the lumbar spine, and functional status in patients with unilateral lumbosacral radiculopathy. This cross-sectional study was conducted at the Clinic for Physical Medicine and Rehabilitation of the Clinical Center Niš, Serbia. A total of 69 patients with unilateral lumbosacral radiculopathy of discogenic origin were recruited, with the following clinical parameters evaluated: (1) pain intensity by using a visual analogue scale, separately at rest and during active movement; (2) mobility of the lumbar spine by the Schober test and the fingertip-to-floor test; and (3) functional status by the Oswestry Disability Index. Temperature differences between the symmetrical regions of the lower extremities were detected by infrared thermography. A quantitative analysis of thermograms determined the regions of interest with maximum thermal deficit. Correlation of maximum thermal deficit with each tested parameter was then determined. A significant and strong positive correlation was found between the regional thermal deficit and pain intensity at rest, as well as pain during active movements (rVAS-rest = 0.887, rVAS-activity = 0.890; P < 0.001). The regional thermal deficit significantly and strongly correlated with the Oswestry Disability Index score and limited mobility of the lumbar spine (P < 0.001). In patients with unilateral lumbosacral radiculopathy, the values of regional thermal deficit of the affected lower extremity are correlated with pain intensity, mobility of the lumbar spine, and functional status of the patient.
Statistical analysis of annual total ozone extremes for the period 1964-1988
NASA Technical Reports Server (NTRS)
Krzyscin, Janusz W.
1994-01-01
Annual extremes of total column ozone (in the period 1964-1988) from a network of 29 Dobson stations have been examined using extreme value analysis. The extremes have been calculated as the highest deviation of daily mean total ozone from its long-term monthly mean, normalized by the monthly standard deviations. The extremes have been selected from the direct-Sun total ozone observations only. Extremes resulting from abrupt changes in ozone (day-to-day changes greater than 20 percent) have not been considered. The ordered extremes (maxima in ascending order, minima in descending order) have been fitted to one of the three forms of the Fisher-Tippett extreme value distribution by the nonlinear least-squares method (Levenberg-Marquardt method). We have found that the ordered extremes from a majority of Dobson stations lie close to the Fisher-Tippett type III distribution. Extreme value analysis of the composite annual extremes (combined from averages of the annual extremes selected at individual stations) has shown that the composite maxima are fitted by the Fisher-Tippett type III distribution and the composite minima by the Fisher-Tippett type I. The difference between the Fisher-Tippett types of the composite extremes seems to be related to the downward ozone trend. Extreme value prognoses for the period 1964-2014 (derived from data taken at all analyzed stations, the North American stations, and the European stations) have revealed that the prognostic extremes are close to the largest annual extremes in the period 1964-1988, with only small regional differences in the prognoses.
Savic, Radovan; Ondrasek, Gabrijel; Blagojevic, Bosko; Bubalo Kovacic, Marina; Zemunac, Rados
2017-12-29
Waters are among the most vulnerable environmental resources, exposed to the impact of various point and non-point pollutants from rural and urban activities. Systematic, long-term monitoring of hydro-resources is therefore of crucial importance for sustainable water management, although such practice is lacking across many (agro-)hydro-ecosystems. In the present study, for the first time, the spatial distribution (covering almost 9000 ha) and temporal variation (2006-2013) of selected quality parameters were characterized in the drainage watercourses Tatarnica and Subic, whose catchment comprises rural and suburban areas close to the city of Novi Sad, Republic of Serbia. Based on the majority of observed parameters, both watercourses belonged to water quality classes I and II, although extreme values of certain parameters (e.g., suspended solids, total phosphorus, ammonium) occasionally degraded both watercourses to classes IV and V. The synthetic pollution index (a combined effect of all considered parameters) showed a higher degree of water pollution in the watercourse Subic (on average 2.00) than in Tatarnica (on average 0.72). Also, cluster analysis for the watercourse Tatarnica detected two groups of parameters (mostly related to nutrients and organic matter), indicating more complex impacts on water quality during the observed period, for whose elucidation the established water quality monitoring program would be of great importance.
Alley, William M.
1984-01-01
Several two- to six-parameter regional water balance models are examined by using 50-year records of monthly streamflow at 10 sites in New Jersey. These models include variants of the Thornthwaite-Mather model, the Palmer model, and the more recent Thomas abcd model. Prediction errors are relatively similar among the models. However, simulated values of state variables such as soil moisture storage differ substantially among the models, and fitted parameter values for different models sometimes indicated an entirely different type of basin response to precipitation. Some problems in parameter identification are noted, including difficulties in identifying an appropriate time lag factor for the Thornthwaite-Mather-type model for basins with little groundwater storage, very high correlations between upper and lower storages in the Palmer-type model, and large sensitivity of parameter a of the abcd model to bias in estimates of precipitation and potential evapotranspiration. Modifications to the threshold concept of the Thornthwaite-Mather model were statistically valid for the six stations in northern New Jersey. The abcd model resulted in a simulated seasonal cycle of groundwater levels similar to fluctuations observed in nearby wells but with greater persistence. These results suggest that extreme caution should be used in attaching physical significance to model parameters and in using the state variables of the models in indices of drought and basin productivity.
Improving the performance of extreme learning machine for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Li, Jiaojiao; Du, Qian; Li, Wei; Li, Yunsong
2015-05-01
Extreme learning machine (ELM) and kernel ELM (KELM) can offer performance comparable to the standard powerful classifier, the support vector machine (SVM), but at much lower computational cost owing to an extremely simple training step. However, their performance may be sensitive to several parameters, such as the number of hidden neurons. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a relationship can be easily estimated with two small training sets and extended to large training sets so as to greatly reduce computational cost. Other parameters, such as the steepness parameter in the sigmoidal activation function and the regularization parameter in the KELM, are also investigated. The experimental results show that classification performance is sensitive to these parameters; fortunately, simple selection rules still yield near-optimal performance.
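The "extremely simple training step" mentioned above is what distinguishes the ELM: the hidden layer is random and fixed, so training reduces to a single linear solve for the output weights. A minimal regression sketch, with the sigmoid steepness and regularization parameters exposed as in the abstract (the toy data and all parameter values are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def elm_train(X, y, n_hidden, steepness=1.0, reg=1e-3, seed=0):
    """Basic ELM: random fixed hidden layer + ridge-regularized output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                 # random biases
    H = 1.0 / (1.0 + np.exp(-steepness * (X @ W + b)))  # sigmoidal hidden activations
    # The only "training": regularized least squares for the output weights
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta, steepness=1.0):
    H = 1.0 / (1.0 + np.exp(-steepness * (X @ W + b)))
    return H @ beta

# Toy regression problem standing in for per-pixel classification scores
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
W, b, beta = elm_train(X, y, n_hidden=50)
pred = elm_predict(X, W, b, beta)
```

The paper's proposal amounts to choosing `n_hidden` from a linear function of the training set size, calibrated on two small subsets; here it is simply fixed.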
NASA Astrophysics Data System (ADS)
Guimarães Nobre, Gabriela; Arnbjerg-Nielsen, Karsten; Rosbjerg, Dan; Madsen, Henrik
2016-04-01
Traditionally, flood risk assessment studies have been carried out from a univariate frequency analysis perspective. However, statistical dependence between hydrological variables, such as extreme rainfall and extreme sea surge, is plausible, since both variables are to some extent driven by common meteorological conditions. To overcome this limitation, multivariate statistical techniques can combine different sources of flooding in the investigation. The aim of this study was to apply a range of statistical methodologies for analyzing combined extreme hydrological variables that can lead to coastal and urban flooding. The study area is the Elwood Catchment, a highly urbanized catchment in the city of Port Phillip, Melbourne, Australia. The first part of the investigation dealt with the marginal extreme value distributions. Two approaches to extracting extreme value series were applied (Annual Maximum and Partial Duration Series), and different probability distribution functions were fitted to the observed samples. Results obtained using the Generalized Pareto distribution demonstrate the ability of the Pareto family to model the extreme events. Advancing into multivariate extreme value analysis, an investigation of the asymptotic properties of extremal dependence was first carried out. As a weak positive asymptotic dependence between the bivariate extreme pairs was found, the conditional method proposed by Heffernan and Tawn (2004) was chosen. This approach is suitable for modelling bivariate extreme values that are relatively unlikely to occur together. The results show that the probability of an extreme sea surge occurring during a one-hour extreme precipitation event (or vice versa) can be twice as great as would be estimated assuming independent events. Therefore, presuming independence between these two variables would result in severe underestimation of the flooding risk in the study area.
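The marginal Partial Duration Series step above can be sketched as follows: keep exceedances over a high threshold and fit a Generalized Pareto distribution to them. The synthetic "rainfall" series and the 98th-percentile threshold are illustrative assumptions; the study's actual data and threshold choices differ.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
# Synthetic hourly rainfall intensities (mm/h); stand-in for the observed record
rain = rng.exponential(scale=2.0, size=20000)

# Partial Duration Series: exceedances over a high threshold u
u = np.quantile(rain, 0.98)
excess = rain[rain > u] - u

# Fit the Generalized Pareto to the exceedances (location fixed at 0)
shape, _, scale = genpareto.fit(excess, floc=0)

# Tail quantile: the level exceeded once per 100 threshold exceedances, on average
x100 = u + genpareto.ppf(1 - 1 / 100, shape, scale=scale)
```

For exponential data the fitted shape should be near zero (exponential tail); a clearly positive shape would indicate the heavy-tailed Pareto behaviour relevant for flood risk.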
NASA Astrophysics Data System (ADS)
Barlow, Nathaniel S.; Weinstein, Steven J.; Faber, Joshua A.
2017-07-01
An accurate closed-form expression is provided to predict the bending angle of light as a function of impact parameter for equatorial orbits around Kerr black holes of arbitrary spin. This expression is constructed by assuring that the weak- and strong-deflection limits are explicitly satisfied while maintaining accuracy at intermediate values of impact parameter via the method of asymptotic approximants (Barlow et al 2017 Q. J. Mech. Appl. Math. 70 21-48). To this end, the strong deflection limit for a prograde orbit around an extremal black hole is examined, and the full non-vanishing asymptotic behavior is determined. The derived approximant may be an attractive alternative to computationally expensive elliptical integrals used in black hole simulations.
Gravitational waves from plunges into Gargantua
NASA Astrophysics Data System (ADS)
Compère, Geoffrey; Fransen, Kwinten; Hertog, Thomas; Long, Jiang
2018-05-01
We analytically compute time domain gravitational waveforms produced in the final stages of extreme mass ratio inspirals of non-spinning compact objects into supermassive nearly extremal Kerr black holes. Conformal symmetry relates all corotating equatorial orbits in the geodesic approximation to circular orbits through complex conformal transformations. We use this to obtain the time domain Teukolsky perturbations for generic equatorial corotating plunges in closed form. The resulting gravitational waveforms consist of an intermediate polynomial ringdown phase in which the decay rate depends on the impact parameters, followed by an exponential quasi-normal mode decay. The waveform amplitude exhibits critical behavior when the orbital angular momentum tends to a minimal value determined by the innermost stable circular orbit. We show that either near-critical or large angular momentum leads to a significant extension of the LISA observable volume of gravitational wave sources of this kind.
Vernon, Ian; Liu, Junli; Goldstein, Michael; Rowe, James; Topping, Jen; Lindsey, Keith
2018-01-02
Many mathematical models have now been employed across every area of systems biology. These models increasingly involve large numbers of unknown parameters, have complex structure that can result in substantial evaluation time relative to the needs of the analysis, and need to be compared to observed data of various forms. The correct analysis of such models usually requires a global parameter search, over a high-dimensional parameter space, that incorporates and respects the most important sources of uncertainty. This can be an extremely difficult task, but it is essential for any meaningful inference or prediction to be made about any biological system, and it hence represents a fundamental challenge for the whole of systems biology. Bayesian statistical methodology for the uncertainty analysis of complex models is introduced, designed to address the high-dimensional global parameter search problem. Bayesian emulators that mimic the systems biology model, but are extremely fast to evaluate, are embedded within an iterative history match: an efficient method to search high-dimensional spaces within a more formal statistical setting, while incorporating major sources of uncertainty. The approach is demonstrated via application to a model of hormonal crosstalk in Arabidopsis root development, which has 32 rate parameters, for which we identify the sets of rate parameter values that lead to acceptable matches between model output and observed trend data. The multiple insights into the model's structure that this analysis provides are discussed. The methodology is applied to a second related model, and the biological consequences of the resulting comparison, including the evaluation of gene functions, are described. Bayesian uncertainty analysis for complex models using both emulators and history matching is shown to be a powerful technique that can greatly aid the study of a large class of systems biology models.
It both provides insight into model behaviour and identifies the sets of rate parameters of interest.
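The core of history matching is an implausibility measure: parameter settings whose (emulated) output lies too many standard deviations from the data, given all variances, are ruled out. A minimal one-parameter sketch, in which the toy model, observed value, variances, and the conventional cutoff of 3 are all invented for illustration (the paper's models have 32 rate parameters and use emulators instead of direct model evaluations):

```python
import numpy as np

def implausibility(pred_mean, pred_var, z, obs_var, model_disc_var):
    """History-matching implausibility: standardized distance of prediction from data."""
    return np.abs(z - pred_mean) / np.sqrt(pred_var + obs_var + model_disc_var)

# Hypothetical one-parameter "systems biology" model with rate parameter k
def model(k):
    return 10 * k / (1 + k)            # saturating output versus rate parameter

z = 6.0                                 # hypothetical observed value
ks = np.linspace(0.01, 10, 1000)        # global search over the rate parameter
# pred_var = 0 here because we run the model itself rather than an emulator
I = implausibility(model(ks), 0.0, z, obs_var=0.25, model_disc_var=0.25)

non_implausible = ks[I < 3.0]           # parameter values not yet ruled out
```

In the full method this cut is applied iteratively ("waves"), refocusing the emulators on the surviving region each time.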
NASA Astrophysics Data System (ADS)
Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Peter, T.; Ribatet, M.; Davison, A. C.; Stübi, R.; Weihs, P.; Holawe, F.
2010-10-01
In this study the frequency of days with extreme low (termed ELOs) and extreme high (termed EHOs) total ozone values and their influence on mean values and trends are analyzed for the world's longest total ozone record (Arosa, Switzerland). The results show (i) an increase in ELOs and (ii) a decrease in EHOs during the last decades and (iii) that the overall trend during the 1970s and 1980s in total ozone is strongly dominated by changes in these extreme events. After removing the extremes, the time series shows a strongly reduced trend (reduction by a factor of 2.5 for trend in annual mean). Excursions in the frequency of extreme events reveal "fingerprints" of dynamical factors such as ENSO or NAO, and chemical factors, such as cold Arctic vortex ozone losses, as well as major volcanic eruptions of the 20th century (Gunung Agung, El Chichón, Mt. Pinatubo). Furthermore, atmospheric loading of ozone depleting substances leads to a continuous modification of column ozone in the Northern Hemisphere also with respect to extreme values (partly again in connection with polar vortex contributions). Application of extreme value theory allows the identification of many more such "fingerprints" than conventional time series analysis of annual and seasonal mean values. The analysis shows in particular the strong influence of dynamics, revealing that even moderate ENSO and NAO events have a discernible effect on total ozone. Overall the approach to extremal modelling provides new information on time series properties, variability, trends and the influence of dynamics and chemistry, complementing earlier analyses focusing only on monthly (or annual) mean values.
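The counting of extreme-ozone days described above can be sketched simply: flag days beyond fixed percentiles of the record and tally them per year. The 5th/95th-percentile definition of ELOs/EHOs and the synthetic trended series below are assumptions of this sketch, not the study's exact definitions; the point is that a downward trend shows up as rising ELO and falling EHO counts.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic daily total-ozone series (DU), 30 years with a weak downward trend
years = np.repeat(np.arange(1970, 2000), 365)
ozone = 330 + 25 * rng.standard_normal(years.size) - 0.4 * (years - 1970)

# Define ELOs/EHOs as days below the 5th / above the 95th percentile of the record
lo, hi = np.percentile(ozone, [5, 95])
elo_per_year = np.array([(ozone[years == y] < lo).sum() for y in np.unique(years)])
eho_per_year = np.array([(ozone[years == y] > hi).sum() for y in np.unique(years)])
```

Removing the flagged days before trend estimation then isolates how much of the overall trend is carried by the extremes, as the Arosa analysis reports.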
Impacts of climate change on surface water quality in relation to drinking water production.
Delpla, I; Jung, A-V; Baures, E; Clement, M; Thomas, O
2009-11-01
Besides climate change impacts on water availability and hydrological risks, the consequences for water quality are only beginning to be studied. This review proposes a synthesis of the most recent interdisciplinary literature on the topic. After a short presentation of the main factors (warming and the consequences of extreme events) explaining climate change effects on water quality, the focus is on two main points. First, the impacts on the quality of water resources (rivers and lakes), through modified parameter values (physico-chemical parameters, micropollutants and biological parameters), are considered. Then, the expected impacts on drinking water production and on the quality of supplied water are discussed. The main conclusion which can be drawn is that a degradation trend in drinking water quality in the context of climate change leads to an increase in at-risk situations with potential health impacts.
Sudakov, S K; Nazarova, G A; Alekseeva, E V; Bashkatova, V G
2013-07-01
We compared individual anxiety assessed by three standard tests, the open-field test, the elevated plus-maze test, and the Vogel conflict drinking test, in the same animals. No significant correlations between the main anxiety parameters were found across these three experimental models. Groups of high- and low-anxiety rats were formed on the basis of a single parameter, with subsequent selection of the two extreme groups (10%). It was found that none of the tests alone could be used for reliable estimation of individual anxiety in rats. The individual anxiety level could be determined with a high degree of confidence only in high-anxiety and low-anxiety rats whose behavioral parameters were above or below the mean values in all tests used. Therefore, several tests should be used to evaluate individual anxiety or sensitivity to emotional stress.
NASA Astrophysics Data System (ADS)
Galliano, Frédéric
2018-05-01
This article presents a new dust spectral energy distribution (SED) model, named HerBIE, aimed at eliminating the noise-induced correlations and large scatter obtained when performing least-squares fits. The originality of this code is to apply the hierarchical Bayesian approach to full dust models, including realistic optical properties, stochastic heating, and the mixing of physical conditions in the observed regions. We test the performances of our model by applying it to synthetic observations. We explore the impact on the recovered parameters of several effects: signal-to-noise ratio, SED shape, sample size, the presence of intrinsic correlations, the wavelength coverage, and the use of different SED model components. We show that this method is very efficient: the recovered parameters are consistently distributed around their true values. We do not find any clear bias, even for the most degenerate parameters, or with extreme signal-to-noise ratios.
Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution
NASA Astrophysics Data System (ADS)
Rajulapati, C. R.; Mujumdar, P. P.
2017-12-01
Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
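The scaling relationship described above can be sketched under a simple-scaling assumption: the threshold and the GPD scale parameter for a shorter duration D are obtained from the daily values via a factor (D/24)^η, with the shape parameter held invariant across durations. All parameter values and the exponent below are hypothetical, and the sketch omits the Bayesian uncertainty quantification and the quantile-based threshold modification that the study actually develops.

```python
import numpy as np

# Hypothetical GPD parameters fitted at the daily (24 h) duration
scale_24 = 8.0    # GPD scale (mm)
shape = 0.1       # GPD shape (assumed duration-invariant here)
u_24 = 20.0       # threshold (mm)
eta = 0.45        # assumed scaling exponent

def downscale(D_hours):
    """Disaggregate daily GPD threshold/scale to duration D via simple scaling."""
    f = (D_hours / 24.0) ** eta
    return {"threshold": u_24 * f, "scale": scale_24 * f, "shape": shape}

params_1h = downscale(1)   # varying threshold and scale for the 1 h duration
```

Uncertainty in η propagates multiplicatively into both the threshold and the scale, which is why the study finds it has a considerable effect on short-duration return levels.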
Task parameters affecting ergonomic demands and productivity of HVAC duct installation.
Mitropoulos, Panagiotis; Hussain, Sanaa; Guarascio-Howard, Linda; Memarian, Babak
2014-01-01
Mechanical installation workers experience work-related musculoskeletal disorders (WMSDs) at high rates. The objectives of this study were to (1) quantify the ergonomic demands during HVAC installation, (2) identify the tasks and task parameters that generated extreme ergonomic demands, and (3) propose improvements to reduce WMSDs among mechanical workers. The study focused on the installation of rectangular ductwork components using ladders and analyzed five operations by two mechanical contractors. Using continuous time observational assessment, the videotaped operations were analyzed along two dimensions: (1) the production tasks and their durations, and (2) the ergonomic demands on four body regions (neck, arms/shoulders, back, and knees). The analysis identified tasks with a low proportion of productive time and a high proportion of extreme postures, and task parameters that generated extreme postures. Duct alignment was the task with the highest proportion of extreme postures. The position of the ladder (angle and distance from the duct) was a task parameter that strongly influenced extreme postures of the back, neck and shoulders. Other contributing factors included the difficulty of reaching hand tools when working on the ladder, the congestion of components in the ceiling, and the space between the duct and the ceiling. The identified tasks and factors provide directions for improvement.
NASA Astrophysics Data System (ADS)
Beranová, Romana; Kyselý, Jan; Hanel, Martin
2018-04-01
The study compares characteristics of observed sub-daily precipitation extremes in the Czech Republic with those simulated by Hadley Centre Regional Model version 3 (HadRM3) and Rossby Centre Regional Atmospheric Model version 4 (RCA4) regional climate models (RCMs) driven by reanalyses and examines diurnal cycles of hourly precipitation and their dependence on intensity and surface temperature. The observed warm-season (May-September) maxima of short-duration (1, 2 and 3 h) amounts show one diurnal peak in the afternoon, which is simulated reasonably well by RCA4, although the peak occurs too early in the model. HadRM3 provides an unrealistic diurnal cycle with a nighttime peak and an afternoon minimum coinciding with the observed maximum for all three ensemble members, which suggests that convection is not captured realistically. Distorted relationships of the diurnal cycles of hourly precipitation to daily maximum temperature in HadRM3 further evidence that underlying physical mechanisms are misrepresented in this RCM. Goodness-of-fit tests indicate that generalised extreme value distribution is an applicable model for both observed and RCM-simulated precipitation maxima. However, the RCMs are not able to capture the range of the shape parameter estimates of distributions of short-duration precipitation maxima realistically, leading to either too many (nearly all for HadRM3) or too few (RCA4) grid boxes in which the shape parameter corresponds to a heavy tail. This means that the distributions of maxima of sub-daily amounts are distorted in the RCM-simulated data and do not match reality well. Therefore, projected changes of sub-daily precipitation extremes in climate change scenarios based on RCMs not resolving convection need to be interpreted with caution.
NASA Astrophysics Data System (ADS)
Setty, V.; Sharma, A.
2013-12-01
Characterization of extreme space weather conditions is essential for potential mitigation strategies. The non-equilibrium nature of the magnetosphere complicates such efforts, and new techniques are required to understand its extreme event distribution. The heavy-tailed distribution in such systems can be modeled using the stable distribution, whose stability parameter is a measure of scaling in the cumulative distribution and is related to the Hurst exponent. This exponent can be readily measured in stationary time series using several techniques, and detrended fluctuation analysis (DFA) is widely used in the presence of non-stationarities. However, DFA has severe limitations in cases with nonlinear and atypical trends. We propose a new technique that uses nonlinear dynamical predictions as a measure of trends and estimates the Hurst exponents. Furthermore, such a measure provides a new way to characterize predictability, as perfectly detrended data have no long-term memory, akin to Gaussian noise. Ab initio calculation of weekly Hurst exponents using the auroral electrojet index AL over a span of a few decades shows that these exponents are time varying, and so is the fractal structure of the series. Time series with time-varying Hurst exponents are modeled well by multifractional Brownian motion, and it is shown that DFA estimates a single time-averaged Hurst exponent for such data. Our results show that, using the time-varying Hurst exponent structure, we can (a) estimate the stability parameter, a measure of scaling in the heavy tails; (b) define and identify epochs when the magnetosphere switches between regimes with and without extreme events; and (c) study the dependence of the Hurst exponents on solar activity.
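For reference, the baseline DFA procedure that the abstract critiques (and that the proposed method replaces with prediction-based detrending) can be sketched in a few lines: integrate the series, detrend it piecewise, and read the exponent off the log-log slope of the fluctuation function. The white-noise example is illustrative; the study applies this kind of analysis to the AL index.

```python
import numpy as np

def dfa_hurst(x, scales):
    """Estimate the DFA-1 exponent (~Hurst exponent for stationary noise)."""
    y = np.cumsum(x - x.mean())                  # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        rms = []
        for seg in segs:                          # remove a linear trend per segment
            coef = np.polyfit(t, seg, 1)
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        F.append(np.mean(rms))
    # Slope of log F(s) versus log s is the scaling exponent
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(4)
white = rng.standard_normal(10000)
h = dfa_hurst(white, scales=[16, 32, 64, 128, 256])   # expect ~0.5 for white noise
```

Because only linear (or low-order polynomial) trends are removed per segment, nonlinear or atypical trends leak into F(s), which is exactly the limitation motivating the prediction-based detrending proposed above.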
Complex multifractal nature in Mycobacterium tuberculosis genome
Mandal, Saurav; Roychowdhury, Tanmoy; Chirom, Keilash; Bhattacharya, Alok; Brojen Singh, R. K.
2017-01-01
The multifractal and long-range correlation (C(r)) properties of strings such as nucleotide sequences can be useful parameters for identifying underlying patterns and variations. In this study, C(r) and the multifractal singularity function f(α) have been used to study variations in the genomes of the pathogenic bacterium Mycobacterium tuberculosis. Genomic sequences of M. tuberculosis isolates displayed significant variations in C(r) and f(α), reflecting inherent differences in sequences among isolates. M. tuberculosis isolates can be categorised into different subgroups based on drug sensitivity: DS (drug-sensitive), MDR (multi-drug-resistant) and XDR (extremely drug-resistant) isolates. C(r) follows significantly different scaling rules in the different subgroups, but all isolates follow a one-parameter scaling law. The richness in complexity of each subgroup can be quantified by the multifractal parameters, which display a pattern in which XDR isolates have the highest values and drug-sensitive isolates the lowest. Therefore, C(r) and multifractal functions can be useful parameters for the analysis of genomic sequences. PMID:28440326
Rainfall extremes from TRMM data and the Metastatistical Extreme Value Distribution
NASA Astrophysics Data System (ADS)
Zorzetto, Enrico; Marani, Marco
2017-04-01
A reliable quantification of the probability of weather extremes occurrence is essential for designing resilient water infrastructures and hazard mitigation measures. However, it is increasingly clear that the presence of inter-annual climatic fluctuations determines a substantial long-term variability in the frequency of occurrence of extreme events. This circumstance questions the foundation of the traditional extreme value theory, hinged on stationary Poisson processes or on asymptotic assumptions to derive the Generalized Extreme Value (GEV) distribution. We illustrate here, with application to daily rainfall, a new approach to extreme value analysis, the Metastatistical Extreme Value Distribution (MEVD). The MEVD relaxes the above assumptions and is based on the whole distribution of daily rainfall events, thus allowing optimal use of all available observations. Using a global dataset of rain gauge observations, we show that the MEVD significantly outperforms the Generalized Extreme Value distribution, particularly for long average recurrence intervals and when small samples are available. The latter property suggests MEVD to be particularly suited for applications to satellite rainfall estimates, which only cover two decades, thus making extreme value estimation extremely challenging. Here we apply MEVD to the TRMM TMPA 3B42 product, an 18-year dataset of remotely-sensed daily rainfall providing a quasi-global coverage. Our analyses yield a global scale mapping of daily rainfall extremes and of their distributional tail properties, bridging the existing large gaps in ground-based networks. Finally, we illustrate how our global-scale analysis can provide insight into how properties of local rainfall regimes affect tail estimation uncertainty when using the GEV or MEVD approach. We find a dependence of the estimation uncertainty, for both the GEV- and MEV-based approaches, on the average annual number and on the inter-annual variability of rainy days. 
In particular, estimation uncertainty decreases 1) as the mean annual number of wet days increases, and 2) as the variability in the number of rainy days, expressed by its coefficient of variation, decreases. We tentatively explain this behavior in terms of the assumptions underlying the two approaches.
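In one common formulation, the MEVD described above fits an ordinary distribution (e.g., Weibull) to each year's wet-day amounts and then averages the yearly annual-maximum distributions F_j(x)^n_j over the record, using every observation rather than only the annual maxima. The sketch below uses synthetic data with assumed Weibull parameters; it is an illustration of the construction, not the study's TRMM analysis.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(5)
years = 20
# Synthetic record: wet-day counts n_j and daily rainfall amounts (mm) per year
n_wet = rng.integers(80, 140, size=years)
samples = [weibull_min.rvs(0.8, scale=10, size=n, random_state=rng) for n in n_wet]

# Fit a Weibull distribution to each year's wet days (location fixed at 0)
fits = [weibull_min.fit(s, floc=0) for s in samples]

def mev_cdf(x):
    """MEVD: average of the yearly annual-maximum CDFs F_j(x)**n_j."""
    return np.mean([weibull_min.cdf(x, c, scale=sc) ** n
                    for (c, _, sc), n in zip(fits, n_wet)])

# Annual-maximum non-exceedance probability at 100 mm
p100 = mev_cdf(100.0)
```

Because every wet day informs the yearly fits, short records such as the 18-year TRMM dataset constrain the tail far better than 18 annual maxima would under a GEV fit.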
Exact solutions of a two parameter flux model and cryobiological applications.
Benson, James D; Chicone, Carmen C; Critser, John K
2005-06-01
Solute-solvent transmembrane flux models are used throughout biological sciences with applications in plant biology, cryobiology (transplantation and transfusion medicine), as well as circulatory and kidney physiology. Using a standard two parameter differential equation model of solute and solvent transmembrane flux described by Jacobs [The simultaneous measurement of cell permeability to water and to dissolved substances, J. Cell. Comp. Physiol. 2 (1932) 427-444], we determine the functions that describe the intracellular water volume and moles of intracellular solute for every time t and every set of initial conditions. Here, we provide several novel biophysical applications of this theory to important biological problems. These include using this result to calculate the value of cell volume excursion maxima and minima along with the time at which they occur, a novel result that is of significant relevance to the addition and removal of permeating solutes during cryopreservation. We also present a methodology that produces extremely accurate sum of squares estimates when fitting data for cellular permeability parameter values. Finally, we show that this theory allows a significant increase in both accuracy and speed of finite element methods for multicellular volume simulations, which has critical clinical biophysical applications in cryosurgical approaches to cancer treatment.
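The two-parameter (Jacobs) model above couples water flux, driven by the osmotic gradient, with permeating-solute flux, driven by its own concentration gradient. A nondimensionalized numerical sketch is given below; the paper derives exact solutions, whereas this sketch simply integrates the ODEs, and all parameter values (the permeability ratio b, external concentrations, and non-permeating solute content) are assumed for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Nondimensionalized two-parameter flux model (illustrative sketch).
# w: cell water volume, s: moles of intracellular permeating solute (both scaled).
b = 2.0            # ratio of solute to water permeability (assumed)
M_s, M_n = 1.0, 0.5  # extracellular permeating / non-permeating concentrations (assumed)
n = 0.5            # intracellular non-permeating solute, scaled moles (assumed)

def rhs(t, y):
    w, s = y
    dw = (s + n) / w - (M_s + M_n)   # water follows the total osmotic gradient
    ds = b * (M_s - s / w)           # permeating solute follows its own gradient
    return [dw, ds]

# Abrupt addition of permeating solute to a cell at osmotic equilibrium
sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], rtol=1e-8)
w_end, s_end = sol.y[:, -1]          # long-time state: back at equilibrium
w_min = sol.y[0].min()               # shrink-swell volume excursion minimum
```

The transient minimum of w is the volume excursion whose closed-form location and value the paper computes, a quantity directly relevant to cryoprotectant addition and removal protocols.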
Dimova, Violeta; Oertel, Bruno G; Lötsch, Jörn
2017-01-01
Skin sensitivity to sensory stimuli varies among different body areas. A standardized clinical quantitative sensory testing (QST) battery, established for the diagnosis of neuropathic pain, was used to assess whether the magnitude of differences between test sites reaches clinical significance. Ten different sensory QST measures derived from thermal and mechanical stimuli were obtained from 21 healthy volunteers (10 men) and used to create somatosensory profiles bilaterally on the dorsum of the hands (the standard area for the assessment of normative values for the upper extremities, as proposed by the German Research Network on Neuropathic Pain) and bilaterally on the volar forearms as a neighboring nonstandard area. The parameters obtained were statistically compared between test sites. Three of the 10 QST parameters differed significantly with respect to the "body area," that is, warmth detection, thermal sensory limen, and mechanical pain thresholds. After z-transformation and interpretation according to the QST battery's standard instructions, 22 abnormal values were obtained at the hand. Applying the same procedure to parameters assessed at the nonstandard forearm site, that is, z-transforming them against the reference values for the hand, 24 values emerged as abnormal, which was not significantly different from the hand (P=0.4185). Sensory differences between neighboring body areas are statistically significant, reproducing prior knowledge. This has to be considered in scientific assessments where a small variation of the tested body areas may not be an option. However, the magnitude of these differences was below the difference in sensory parameters judged as abnormal, indicating robustness of the QST instrument against protocol deviations in the test area when using the method of comparison with a 95% confidence interval of a reference dataset.
Applied extreme-value statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kinnison, R.R.
1983-05-01
The statistical theory of extreme values is a well established part of theoretical statistics. Unfortunately, it is seldom part of applied statistics and is infrequently a part of statistical curricula except in advanced studies programs. This has resulted in the impression that it is difficult to understand and not of practical value. In recent environmental and pollution literature, several short articles have appeared with the purpose of documenting all that is necessary for the practical application of extreme value theory to field problems (for example, Roberts, 1979). These articles are so concise that only a statistician can recognize all the subtleties and assumptions necessary for the correct use of the material presented. The intent of this text is to expand upon several recent articles, and to provide the necessary statistical background so that the non-statistician scientist can recognize an extreme value problem when it occurs in his work, be confident in handling simple extreme value problems himself, and know when the problem is statistically beyond his capabilities and requires consultation.
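As a concrete example of the kind of applied calculation the text advocates, here is a minimal sketch, entirely our own, of a method-of-moments Gumbel (EV Type I) fit to annual maxima and the resulting T-year return level. The sample values are invented for illustration.

```python
import math

def gumbel_fit(maxima):
    """Method-of-moments fit of a Gumbel (EV Type I) distribution to block maxima."""
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi          # scale parameter
    mu = mean - 0.5772156649 * beta                # location (Euler-Mascheroni constant)
    return mu, beta

def return_level(mu, beta, T):
    """Level exceeded on average once every T blocks (e.g. years)."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Invented annual-maximum sample, purely for illustration:
annual_maxima = [41.2, 37.5, 52.8, 44.1, 39.9, 61.3, 47.0, 43.5, 55.2, 40.8]
mu, beta = gumbel_fit(annual_maxima)
r100 = return_level(mu, beta, 100)     # 100-year return level
```

Maximum-likelihood fitting is usually preferred in practice; the moment fit is shown because it makes the mechanics transparent.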
Meteorological risks are drivers of environmental innovation in agro-ecosystem management
NASA Astrophysics Data System (ADS)
Gobin, Anne; Van de Vijver, Hans; Vanwindekens, Frédéric; de Frutos Cachorro, Julia; Verspecht, Ann; Planchon, Viviane; Buyse, Jeroen
2017-04-01
Agricultural crop production is to a great extent determined by weather conditions. The research hypothesis is that meteorological risks act as drivers of environmental innovation in agro-ecosystem management. The methodology comprised five major parts: the hazard, its impact on different agro-ecosystems, vulnerability, risk management and risk communication. Generalized Extreme Value (GEV) theory was used to model annual maxima of meteorological variables based on location, scale and shape parameters that determine, respectively, the center of the distribution, the spread around the location, and the decay of the upper tail. Spatial interpolation of GEV-derived return levels resulted in spatial temperature extremes, precipitation deficits and wet periods. The temporal overlap between extreme weather conditions and sensitive periods in the agro-ecosystem was realised using a bio-physically based modelling framework that couples phenology, a soil water balance and crop growth. 20-year return values for drought and waterlogging during different crop stages were related to arable yields. The method helped quantify agricultural production risks and rate both weather- and crop-based agricultural insurance. The spatial extent of vulnerability was delineated on different layers of geo-information, including meteorology, soil-landscapes, crop cover and management. Vulnerability of agro-ecosystems was mapped based on rules set by experts' knowledge and implemented by Fuzzy Inference System modelling and Geographical Information System tools. The approach was applied for cropland vulnerability to heavy rain and grassland vulnerability to drought. The level of vulnerability and resilience of an agro-ecosystem was also determined by risk management, which differed across sectors and farm types. A calibrated agro-economic model demonstrated a marked influence of climate-adapted land allocation and crop management on individual utility. 
The "chain of risk" approach allowed for investigating the hypothesis that meteorological risks act as drivers for agricultural innovation. Risk types were quantified in terms of probability and distribution, and further distinguished according to production type. Examples of strategies and options were provided at field, farm and policy level using different modelling methods.
Extremophiles: from abyssal to terrestrial ecosystems and possibly beyond
NASA Astrophysics Data System (ADS)
Canganella, Francesco; Wiegel, Juergen
2011-04-01
The anthropocentric term "extremophile" was introduced more than 30 years ago to describe any organism capable of living and growing under extreme conditions—i.e., conditions particularly hostile to humans and to the majority of known microorganisms as far as temperature, pH, and salinity are concerned. With the further development of studies on microbial ecology and taxonomy, more "extreme" environments were found and more extremophiles were described. Today, many different extremophiles have been isolated from habitats characterized by hydrostatic pressure, aridity, radiation, elevated temperatures, extreme pH values, high salt concentrations, and high solvent/metal concentrations, and it is well documented that these microorganisms are capable of thriving under extreme conditions better than any other organism living on Earth. Extremophiles have also been investigated as far as the search for life on other planets is concerned, and even to evaluate the hypothesis that life on Earth came originally from space. Extremophiles are interesting for basic and applied sciences. Particularly fascinating are their structural and physiological features allowing them to withstand extremely selective environmental conditions. These properties are often due to specific biomolecules (DNA, lipids, enzymes, osmolytes, etc.) that have been studied for years as novel sources for biotechnological applications. In some cases (DNA polymerase, thermostable enzymes), the search was successful and the final application was achieved, but further exploitations are certainly yet to come.
NASA Astrophysics Data System (ADS)
Balasis, Georgios; Potirakis, Stelios M.; Papadimitriou, Constantinos; Zitis, Pavlos I.; Eftaxias, Konstantinos
2015-04-01
The field of study of complex systems considers that the dynamics of complex systems are founded on universal principles that may be used to describe a great variety of scientific and technological approaches of different types of natural, artificial, and social systems. We apply concepts of nonextensive statistical physics to time-series data of observable manifestations of the underlying complex processes leading to different extreme events, in order to support the suggestion that a dynamical analogy characterizes the generation of a single magnetic storm, solar flare, earthquake (in terms of pre-seismic electromagnetic signals), epileptic seizure, and economic crisis. The analysis reveals that all the above-mentioned extreme events can be analyzed within a similar mathematical framework. More precisely, we show that the populations of magnitudes of fluctuations included in all the above-mentioned pulse-like time series follow the traditional Gutenberg-Richter law as well as a nonextensive model for earthquake dynamics, with similar nonextensive q-parameter values. Moreover, based on a multidisciplinary statistical analysis we show that the extreme events are characterized by crucial common symptoms, namely: (i) high organization, high compressibility, low complexity, high information content; (ii) strong persistency; and (iii) the existence of a clear preferred direction of the emerging activities. These symptoms clearly discriminate the appearance of the extreme events under study from the corresponding background noise.
A nonstationary analysis for the Northern Adriatic extreme sea levels
NASA Astrophysics Data System (ADS)
Masina, Marinella; Lamberti, Alberto
2013-09-01
The historical data from the Trieste, Venice, Porto Corsini, and Rimini tide gauges have been used to investigate the spatial and temporal changes in extreme high water levels in the Northern Adriatic. A detailed analysis of annual mean sea level evolution at the three longest operating stations shows coherent behavior on both regional and global scales. A slight increase in the magnitude of extreme water elevations, after the removal of the regularized annual mean sea level necessary to eliminate the effect of local subsidence and sea level rise, is found at the Venice and Porto Corsini stations. It seems to be mainly associated with a wind regime change that occurred in the 1990s, due to an intensification of Bora wind events after their decrease in frequency and intensity during the second half of the 20th century. The extreme values, adjusted for the annual mean sea level trend, are modeled using a time-dependent GEV distribution. The inclusion of seasonality in the GEV parameters considerably improves the data fitting. The interannual fluctuations of the detrended monthly maxima exhibit a significant correlation with the variability of the large-scale atmospheric circulation represented by the North Atlantic Oscillation and Arctic Oscillation indices. The different coast exposure to the Bora and Sirocco winds and their seasonal character explains the various seasonal patterns of extreme sea levels observed at the tide gauges considered in the present analysis.
Ajtić, J; Brattich, E; Sarvan, D; Djurdjevic, V; Hernández-Ceballos, M A
2018-05-01
Relationships between the beryllium-7 activity concentrations in surface air and meteorological parameters (temperature, atmospheric pressure, and precipitation), teleconnection indices (Arctic Oscillation, North Atlantic Oscillation, and Scandinavian pattern) and number of sunspots are investigated using two multivariate statistical techniques: hierarchical cluster and factor analysis. The beryllium-7 surface measurements over 1995-2011, at four sampling sites located in the Scandinavian Peninsula, are obtained from the Radioactivity Environmental Monitoring Database. In all sites, the statistical analyses show that the beryllium-7 concentrations are strongly linked to temperature. Although the beryllium-7 surface concentration exhibits the well-characterised spring/summer maximum, our study shows that extremely high beryllium-7 concentrations, defined as values exceeding the 90th percentile of the data record for each site, also occur over the October-March period. Two types of autumn/winter extremes are distinguished: type-1 when the number of extremes in a given month is less than three, and type-2 when at least three extremes occur in a month. Factor analysis performed for these autumn/winter events shows a weaker effect of temperature and a stronger impact of the transport and production signal on the beryllium-7 concentrations. Further, the majority of the type-2 extremes are associated with a very high monthly Scandinavian teleconnection index. The type-2 extremes that occurred in January, February and March are also linked to sudden stratospheric warmings of the Arctic vortex. Our results indicate that the Scandinavian teleconnection index might be a good indicator of the meteorological conditions facilitating extremely high beryllium-7 surface concentrations over Scandinavia during autumn and winter.
Statistical analysis of multivariate atmospheric variables. [cloud cover
NASA Technical Reports Server (NTRS)
Tubbs, J. D.
1979-01-01
Topics covered include: (1) estimation in discrete multivariate distributions; (2) a procedure to predict cloud cover frequencies in the bivariate case; (3) a program to compute conditional bivariate normal parameters; (4) the transformation of nonnormal multivariate data to near-normal; (5) a test of fit for the extreme value distribution based upon the generalized minimum chi-square; (6) a test of fit for continuous distributions based upon the generalized minimum chi-square; (7) the effect of correlated observations on confidence sets based upon chi-square statistics; and (8) generation of random variates from specified distributions.
Spatiotemporal variability of extreme temperature frequency and amplitude in China
NASA Astrophysics Data System (ADS)
Zhang, Yuanjie; Gao, Zhiqiu; Pan, Zaitao; Li, Dan; Huang, Xinhui
2017-03-01
Temperature extremes in China are examined based on daily maximum and minimum temperatures from station observations and multiple global climate models. The magnitude and frequency of extremes are expressed in terms of return values and periods, respectively, estimated by the fitted Generalized Extreme Value (GEV) distribution of annual extreme temperatures. The observations suggest that changes in temperature extremes considerably exceed changes in the respective climatological means during the past five decades, with greater amplitude of increases in cold extremes than in warm extremes. The frequency of warm (cold) extremes increases (decreases) over most areas, with an increasingly faster rate as the extremity level rises. Changes in warm extremes are more dependent on the varying shape of GEV distribution than the location shift, whereas changes in cold extremes are more closely associated with the location shift. The models simulate the overall pattern of temperature extremes during 1961-1981 reasonably well in China, but they show a smaller asymmetry between changes in warm and cold extremes primarily due to their underestimation of increases in cold extremes especially over southern China. Projections from a high emission scenario show the multi-model median change in warm and cold extremes by 2040 relative to 1971 will be 2.6 °C and 2.8 °C, respectively, with the strongest changes in cold extremes shifting southward. By 2040, warm extremes at the 1971 20-year return values would occur about every three years, while the 1971 cold extremes would occur once in > 500 years.
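The abstract's statement that a baseline 20-year warm extreme could recur "about every three years" after warming amounts to re-evaluating an old return level under a shifted GEV location parameter. The sketch below illustrates the mechanics with invented parameters (mu0, sigma, xi and the +2.6 degree shift are chosen for illustration, not taken from the paper's fits).

```python
import math

def gev_cdf(z, mu, sigma, xi):
    """CDF of a GEV distribution with location mu, scale sigma, shape xi."""
    if abs(xi) < 1e-9:                       # Gumbel limit
        return math.exp(-math.exp(-(z - mu) / sigma))
    t = 1.0 + xi * (z - mu) / sigma
    if t <= 0.0:                             # outside the support
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

def gev_quantile(p, mu, sigma, xi):
    """Value with non-exceedance probability p (the 1/(1-p)-year return level)."""
    y = -math.log(p)
    if abs(xi) < 1e-9:
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

def return_period(z, mu, sigma, xi):
    """Average recurrence interval of annual maxima exceeding z."""
    return 1.0 / (1.0 - gev_cdf(z, mu, sigma, xi))

# Illustrative (not the paper's) parameters for an annual-maximum temperature:
mu0, sigma, xi = 36.0, 1.5, -0.1
z20 = gev_quantile(1.0 - 1.0 / 20.0, mu0, sigma, xi)   # baseline 20-yr return value
T_new = return_period(z20, mu0 + 2.6, sigma, xi)       # after a +2.6 deg location shift
```

With these made-up numbers the old 20-year value recurs roughly every three years after the shift, which is the qualitative effect the abstract describes.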
Determination of the Characteristic Values and Variation Ratio for Sensitive Soils
NASA Astrophysics Data System (ADS)
Milutinovici, Emilia; Mihailescu, Daniel
2017-12-01
In 2008, Romania adopted Eurocode 7, part II, regarding geotechnical investigations, as SR EN1997-2/2008. However, a previous standard already existed in Romania: by using mathematical statistics in the determination of calculation values, the requirements of Eurocode can be taken into consideration. The setting of characteristic and calculation values of geotechnical parameters was finally codified in Romania at the end of 2010 in standard NP122-2010, "Norm regarding determination of the characteristic and calculation values of the geotechnical parameters". This standard allows the use of data already known from the analysed area in setting the calculation values of geotechnical parameters. Although this possibility exists, it is not easily applied in Romania, because there is no centralized system for the information coming from geotechnical studies performed for various objectives of private or national interest. Every company performing geotechnical studies tries to organize its own database, but unfortunately none of them draws on a shared, centralized dataset. When determining calculation values, an important role is played by the variation ratio of the characteristic values of a geotechnical parameter. The mentioned Norm gives recommendations that could be taken into account regarding the limits of the variation ratio, but these values apply only to normally consolidated soils of Quaternary age with a content of organic material < 5%. All of the difficult soils are excluded from the Norm, even though they exist and affect construction foundations on more than half of Romania's surface. A type of difficult soil, extremely widespread on Romanian territory, is contractile soil (with high swelling and shrinkage, very sensitive to seasonal moisture variations). This type of material covers and influences construction foundations in over one third of Romania's territory. 
This work is intended as a step toward determining the limits of the variation ratios for the contractile soils category, for the geotechnical parameters most used in Romanian engineering practice, namely the index of consistency and the cohesion.
Identifying and Clarifying Organizational Values.
ERIC Educational Resources Information Center
Seevers, Brenda S.
2000-01-01
Of the 14 organizational values ranked by a majority of 146 New Mexico Cooperative Extension educators as extremely valued, 9 were extremely evident in organizational policies and procedures. A values audit such as this forms an important initial step in strategic planning. (SK)
Atanasov, Nenad; Poposka, Anastasika; Samardziski, Milan; Kamnar, Viktor
2014-01-01
Radiographic examination of extremities in surgical lengthening and/or correction of deformities is of crucial importance for the assessment of new bone formation. The purpose of this study is to confirm the diagnostic value of radiography in the precise detection of bone parameters in various lengthening or correction stages in patients treated by limb-lengthening and deformity correction. Fifty patients were treated by the Ilizarov method of limb lengthening or deformity correction at the University Orthopaedic Surgery Clinic in Skopje, and analysed over the period from 2006 to 2012. The patients were divided into two groups. The first group consisted of 27 patients with limb-lengthening because of congenital shortening. The second group consisted of 23 patients treated for acquired limb deformities. The results in both groups were recorded in three stages of new bone formation and were based on the appearance of 3 radiographic parameters at the distraction/compression site. The differences in the presence of all radiographic bone parameters between the stages of new bone formation were statistically significant in both groups, especially the presence of the cortical margin in the first group (Cochran Q=34.43, df=2, p=0.00000). The comparative analysis between the two groups showed a statistically significant difference in the presence of initial bone elements and cystic formations only in the first stage. The near absence of statistically significant differences between the two groups of patients with regard to the 3 radiographic parameters across the 3 stages of new bone formation indicates a minor influence of the etiopathogenetic background on new bone formation in patients treated by gradual lengthening or correction of limb deformities.
NASA Astrophysics Data System (ADS)
Veneziano, D.; Langousis, A.; Lepore, C.
2009-12-01
The annual maximum of the average rainfall intensity in a period of duration d, Iyear(d), is typically assumed to have generalized extreme value (GEV) distribution. The shape parameter k of that distribution is especially difficult to estimate from either at-site or regional data, making it important to constrain k using theoretical arguments. In the context of multifractal representations of rainfall, we observe that standard theoretical estimates of k from extreme value (EV) and extreme excess (EE) theories do not apply, while estimates from large deviation (LD) theory hold only for very small d. We then propose a new theoretical estimator based on fitting GEV models to the numerically calculated distribution of Iyear(d). A standard result from EV and EE theories is that k depends on the tail behavior of the average rainfall in d, I(d). This result holds if Iyear(d) is the maximum of a sufficiently large number n of variables, all distributed like I(d); therefore its applicability hinges on whether n = 1yr/d is large enough and the tail of I(d) is sufficiently well known. One typically assumes that at least for small d the former condition is met, but poor knowledge of the upper tail of I(d) remains an obstacle for all d. In fact, in the case of multifractal rainfall, the first condition is also not met because, irrespective of d, 1yr/d is too small (Veneziano et al., 2009, WRR, in press). Applying large deviation (LD) theory to this multifractal case, we find that, as d → 0, Iyear(d) approaches a GEV distribution whose shape parameter kLD depends on a region of the distribution of I(d) well below the upper tail, is always positive (in the EV2 range), is much larger than the value predicted by EV and EE theories, and can be readily found from the scaling properties of I(d). The scaling properties of rainfall can be inferred also from short records, but the limitation remains that the result holds only as d → 0, not for finite d. 
Therefore, for different reasons, none of the above asymptotic theories applies to Iyear(d). In practice, one is interested in the distribution of Iyear(d) over a finite range of averaging durations d and return periods T. Using multifractal representations of rainfall, we have numerically calculated the distribution of Iyear(d) and found that, although not GEV, the distribution can be accurately approximated by a GEV model. The best-fitting parameter k depends on d, but is insensitive to the scaling properties of rainfall and the range of return periods T used for fitting. We have obtained a default expression for k(d) and compared it with estimates from historical rainfall records. The theoretical function tracks well the empirical dependence on d, although it generally overestimates the empirical k values, possibly due to deviations of rainfall from perfect scaling. This issue is under investigation.
Applications of Extreme Value Theory in Public Health.
Thomas, Maud; Lemaitre, Magali; Wilson, Mark L; Viboud, Cécile; Yordanov, Youri; Wackernagel, Hans; Carrat, Fabrice
2016-01-01
We present how Extreme Value Theory (EVT) can be used in public health to predict future extreme events. We applied EVT to weekly rates of Pneumonia and Influenza (P&I) deaths over 1979-2011. We further explored the daily number of emergency department visits in a network of 37 hospitals over 2004-2014. Maxima of grouped consecutive observations were fitted to a generalized extreme value distribution. The distribution was used to estimate the probability of extreme values in specified time periods. An annual P&I death rate of 12 per 100,000 (the highest maximum observed) should be exceeded once over the next 30 years, and each year there should be a 3% risk that the P&I death rate will exceed this value. Over the past 10 years, the observed maximum increase in the daily number of visits from the same weekday between two consecutive weeks was 1133. We estimated the probability of exceeding a daily increase of 1000 in any given month at 0.37%. The EVT method can be applied to various topics in epidemiology, thus contributing to public health planning for extreme events.
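The link the abstract draws between a 3% annual exceedance risk and "exceeded once over the next 30 years" is a simple consequence of treating years as independent trials. A minimal sketch (our own arithmetic, not the paper's code):

```python
def exceedance_risk(p_annual, n_years):
    """Expected number of exceedances and probability of at least one,
    assuming independent years with a fixed annual exceedance probability."""
    expected = n_years * p_annual
    p_at_least_one = 1.0 - (1.0 - p_annual) ** n_years
    return expected, p_at_least_one

# With the abstract's 3% annual risk over a 30-year horizon:
expected, p_any = exceedance_risk(0.03, 30)
```

A 3% annual risk gives an expected count of about 0.9 exceedances in 30 years (hence "once"), while the probability of seeing at least one is roughly 60%, a distinction worth keeping in mind when communicating such estimates.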
Environmental hazard assessment of cheese manufacturing effluent treated for hydrogen production.
Karadima, Constantina; Theodoropoulos, Chris; Iliopoulou-Georgudaki, Joan
2009-09-01
Toxicity of effluents after treatment in an anaerobic fermentation system for hydrogen production is evaluated with three biotests: the zebrafish Danio rerio embryo test, the Thamnotoxkit F and the Daphtoxkit F(TM) magna. Samples were classified from "very" to "extremely toxic". Average toxicity values for zebrafish were 1.55% (24 h) and 0.75% (48 h), for Thamnocephalus 0.69% (24 h), and for Daphnia 2.51% (24 h) and 1.82% (48 h). Statistical analysis between physicochemical parameters and LC50 values revealed that PO4(3-), SO4(2-), NH3-N and NO3(-) make the major contribution to toxicity. Based on these results, this treatment is considered an environmentally ineffective way of managing the specific wastes.
Are historical values of ionospheric parameters from ionosondes overestimated?
NASA Astrophysics Data System (ADS)
Laštovička, J.; Koucká Knížová, P.; Kouba, D.
2012-04-01
Ionogram-scaled values from pre-digital ionosonde times were derived from ionograms under the assumption of vertical reflection of the ordinary mode of sounding radio waves. Classical ionosondes were unable to distinguish between vertical and oblique reflections and, in the case of the Es-layer, between ordinary and extraordinary mode reflections, due to the mirror-like reflections. However, modern digisondes clearly identify oblique or extraordinary mode reflections. Evaluating the Pruhonice digisonde ionograms in the "classical" and in the "correct" way, we found for seven summers (2004-2010) that among strong foEs (> 6 MHz) only 10% of foEs values were correct and 90% were artificially enhanced, on average by 1 MHz and in extreme cases by more than 3 MHz (some oblique reflections). 34% of all reflections were oblique. With other ionospheric parameters like foF2 or foE the problem is less severe, because a non-mirror reflection delays the extraordinary mode with respect to the ordinary mode so that the two are separated on ionograms, and oblique reflections are less frequent than with the patchy Es layer. At high latitudes another problem is caused by the z-mode, which is sometimes difficult to distinguish from the ordinary mode.
Light Curves of Lucy Targets: Leucus and Polymele
NASA Astrophysics Data System (ADS)
Buie, Marc W.; Zangari, Amanda M.; Marchi, Simone; Levison, Harold F.; Mottola, Stefano
2018-06-01
We present new observations from 2016 of two Jupiter Trojan asteroids that are targets for the Lucy Discovery mission. The extremely long rotation period of (11351) Leucus is confirmed and refined to a secure value of 445.732 ± 0.021 hr with photometric parameters of H_r = 11.046 ± 0.003 and G_r = 0.58 ± 0.02 in the SDSS r′ filter. This leads to a geometric albedo of p_V = 4.7%. The amplitude of the light curve was measured to be 0.61 mag, unchanged from the value of one-fourth of a revolution earlier, suggesting a low obliquity. The first light-curve observations for (15094) Polymele are also presented. This object is revealed to have a much shorter rotation period of 5.8607 ± 0.0005 hr with a very low amplitude of 0.09 mag. Its photometric parameters are H_r = 11.691 ± 0.002 and G_r = 0.22 ± 0.02. These values lead to a refined geometric albedo of p_V = 7.3%. This object is either nearly spherical or was being viewed nearly pole-on in 2016. Further observations are required to fully determine the spin pole orientation and convex-hull shapes.
Kodejska, Milos; Mokry, Pavel; Linhart, Vaclav; Vaclavik, Jan; Sluka, Tomas
2012-12-01
An adaptive system for the suppression of vibration transmission using a single piezoelectric actuator shunted by a negative capacitance circuit is presented. It is known that by using a negative-capacitance shunt, the spring constant of a piezoelectric actuator can be controlled to extreme values of zero or infinity. Because the value of spring constant controls a force transmitted through an elastic element, it is possible to achieve a reduction of transmissibility of vibrations through the use of a piezoelectric actuator by reducing its effective spring constant. Narrow frequency range and broad frequency range vibration isolation systems are analyzed, modeled, and experimentally investigated. The problem of high sensitivity of the vibration control system to varying operational conditions is resolved by applying an adaptive control to the circuit parameters of the negative capacitor. A control law that is based on the estimation of the value of the effective spring constant of a shunted piezoelectric actuator is presented. An adaptive system which achieves a self-adjustment of the negative capacitor parameters is presented. It is shown that such an arrangement allows the design of a simple electronic system which offers a great vibration isolation efficiency under variable vibration conditions.
NASA Astrophysics Data System (ADS)
Smith, N.; Sandal, G. M.; Leon, G. R.; Kjærgaard, A.
2017-08-01
Land-based extreme environments (e.g. polar expeditions, Antarctic research stations, confinement chambers) have often been used as analog settings for spaceflight. These settings share similarities with the conditions experienced during space missions, including confinement, isolation and limited possibilities for evacuation. To determine the utility of analog settings for understanding human spaceflight, researchers have examined the extent to which the individual characteristics (e.g., personality) of people operating in extreme environments can be generalized across contexts (Sandal, 2000) [1]. Building on previous work, and utilising new and pre-existing data, the present study examined the extent to which personal value motives could be generalized across extreme environments. Four populations were assessed: mountaineers (N = 59), military personnel (N = 25), Antarctic over-winterers (N = 21) and Mars simulation participants (N = 12). All participants completed the Portrait Values Questionnaire (PVQ; Schwartz [2]), capturing information on 10 personal values. Rank scores suggest that all groups identified Self-direction, Stimulation, Universalism and Benevolence as important values and acknowledged Power and Tradition as being low priorities. Results from difference testing suggest the extreme environment groups were most comparable on Self-direction, Stimulation, Benevolence, Tradition and Security. There were significant between-group differences on five of the ten values. Overall, findings pinpointed specific values that may be important for functioning in challenging environments. However, the differences that emerged on certain values highlight the importance of considering the specific population when comparing results across extreme settings. We recommend that further research examine the impact of personal value motives on indicators of adjustment, group working, and performance. 
Information from such studies could then be used to aid selection and training processes for personnel operating in extreme settings, and in space.
NASA Astrophysics Data System (ADS)
Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto
2016-04-01
Estimation of extreme rainfall from data constitutes one of the most important issues in statistical hydrology, as it is associated with the design of hydraulic structures and flood water management. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a generalized Pareto (GP) distribution model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data, graphical methods where one studies the dependence of GP distribution parameters (or related metrics) on the threshold level u, and Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GP distribution model is applicable. In this work, we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 daily rainfall records from the NOAA-NCDC open-access database, with more than 110 years of data. We find that non-parametric methods that are intended to locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while methods that are based on asymptotic properties of the upper distribution tail lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and is especially the case in rainfall applications, where the shape parameter of the GP distribution is low, i.e. on the order of 0.1-0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on pre-asymptotic properties of the GP distribution. 
For daily rainfall, we find that GP threshold estimates range between 2 and 12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the empirical records, as well as variations in their size, constitute the two most important factors that may significantly affect the accuracy of the obtained results. Acknowledgments: The research project was implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General Secretariat for Research and Technology), and co-financed by the European Social Fund (ESF) and the Greek State. The work conducted by Roberto Deidda was funded under the Sardinian Regional Law 7/2007 (funding call 2013).
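One of the graphical threshold-selection tools alluded to above is the mean residual life (mean excess) plot, which is approximately linear in u when a GP model holds, with slope determined by the shape parameter. The sketch below is a generic illustration of that diagnostic, not the authors' procedure; the synthetic Exp(1) sample corresponds to a GP tail with shape xi = 0, for which the mean excess is constant.

```python
import math
import random

def mean_excess(data, u):
    """Mean residual life: average excess over threshold u among exceedances.
    For a GP tail it is linear in u; a flat plot indicates shape xi ~ 0."""
    exc = [x - u for x in data if x > u]
    return sum(exc) / len(exc) if exc else float("nan")

# Synthetic Exp(1) sample (a GP tail with xi = 0): mean excess should stay near 1
# regardless of the threshold, by the memoryless property.
random.seed(42)
sample = [-math.log(1.0 - random.random()) for _ in range(20000)]
me_low, me_high = mean_excess(sample, 0.5), mean_excess(sample, 1.5)
```

In practice one plots the mean excess over a grid of thresholds and looks for the lowest u beyond which the curve is roughly linear.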
Extreme event statistics in a drifting Markov chain
NASA Astrophysics Data System (ADS)
Kindermann, Farina; Hohmann, Michael; Lausch, Tobias; Mayer, Daniel; Schmidt, Felix; Widera, Artur
2017-07-01
We analyze extreme event statistics of experimentally realized Markov chains with various drifts. Our Markov chains are individual trajectories of a single atom diffusing in a one-dimensional periodic potential. Based on more than 500 individual atomic traces we verify the applicability of the Sparre Andersen theorem to our system despite the presence of a drift. We present a detailed analysis of four different rare-event statistics for our system: the distributions of extreme values, of record values, of extreme value occurrence in the chain, and of the number of records in the chain. We observe that, for our data, the shape of the extreme event distributions is dominated by the underlying exponential distance distribution extracted from the atomic traces. Furthermore, we find that even small drifts influence the statistics of extreme events and record values, which is supported by numerical simulations, and we identify cases in which the drift can be determined without information about the underlying random variable distributions. Our results facilitate the use of extreme event statistics as a signal for small drifts in correlated trajectories.
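The distribution-free prediction of the Sparre Andersen theorem can be checked with a quick simulation (a sketch with synthetic random walks, not the experimental atom traces): for any continuous, symmetric jump distribution and zero drift, the probability that the walk stays positive for its first n steps is C(2n, n)/4^n, independent of the jump law.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)

def stay_positive_fraction(n_steps, n_walks, sampler):
    """Monte Carlo estimate of P(S_1 > 0, ..., S_n > 0) for the walk S_k."""
    jumps = sampler((n_walks, n_steps))
    s = np.cumsum(jumps, axis=1)
    return np.mean(np.all(s > 0, axis=1))

n = 4
exact = comb(2 * n, n) / 4**n  # Sparre Andersen: distribution-free value
gauss = stay_positive_fraction(n, 200_000, rng.standard_normal)
cauchy = stay_positive_fraction(n, 200_000, rng.standard_cauchy)
print(f"exact {exact:.4f}  gaussian {gauss:.4f}  cauchy {cauchy:.4f}")
```

Both the Gaussian and the heavy-tailed Cauchy walks reproduce the same survival probability; a drift, as in the experiment, would break this universality.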
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-06-19
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of the single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor
2018-01-01
Accurate prediction of water quality parameters (WQPs) is an important task in the management of water resources. Artificial neural networks (ANNs) are frequently applied for dissolved oxygen (DO) prediction, but often only their interpolation performance is checked. The aims of this research were, besides interpolation, to determine the extrapolation performance of an ANN model developed for the prediction of DO content in the Danube River, and to assess the relationship between the significance of the inputs and the prediction error in the presence of values outside the range of the training data. The applied ANN is a polynomial neural network (PNN), which performs embedded selection of the most important inputs during learning and provides a model in the form of linear and non-linear polynomial functions that can then be used for a detailed analysis of input significance. The available dataset of 1912 monitoring records for 17 water quality parameters was split into a "regular" subset containing normally distributed, low-variability data and an "extreme" subset containing monitoring records with outlier values. The results revealed that the non-linear PNN model has good interpolation performance (R^2 = 0.82) but is not robust in extrapolation (R^2 = 0.63). The analysis of the extrapolation results showed that the prediction errors are correlated with the significance of the inputs: out-of-training-range values of inputs with low importance do not significantly affect PNN model performance, but their influence can be biased by the presence of multi-outlier monitoring records. Subsequently, linear PNN models were successfully applied to study the effect of water quality parameters on DO content. 
It was observed that DO level is mostly affected by temperature, pH, biological oxygen demand (BOD) and phosphorus concentration, while in extreme conditions the importance of alkalinity and bicarbonates rises over pH and BOD. Copyright © 2017 Elsevier B.V. All rights reserved.
Accelerated testing of an optimized closing system for automotive fuel tank
NASA Astrophysics Data System (ADS)
Gligor, A.; Ilie, S.; Nicolae, V.; Mitran, G.
2015-11-01
In view of the legal requirements currently in force and of new regulatory requirements that will become mandatory in the near future regarding the testing of automotive fuel tanks, it became necessary to develop a new testing methodology that allows the behaviour of the closing system of an automotive fuel tank to be estimated over a long period of time (10-15 years). Accelerated tests were therefore designed and conducted under extreme assembly and testing conditions (high initial tightening torques, extreme values of temperature and pressure). This paper presents two durability tests performed on an optimized fuel-tank closing system: (i) exposure to cyclically varying temperature and (ii) continuous exposure to elevated temperature. The tests used main components of the closing system manufactured in two material variants, both based on polyoxymethylene, a material that provides high mechanical stiffness and strength over a wide temperature range together with increased resistance to the action of chemical agents and fuels. The test sample comprised a total of 16 optimized locking systems, 8 of each of the 2 material versions. During the experiments, various parameters were determined: the initial tightening torque, the tightening torque at different time points during the measurements, the residual tightening torque, defects occurring in the system components (fissures, cracks, ruptures), and the sealing condition of the system at the beginning and at the end of the test. From the data obtained, time-evolution diagrams of the considered parameter (the residual tightening torque of the system consisting of locking nut and threaded ring) were plotted for different temperature conditions, making it possible to carry out pertinent assessments of the choice between the two types of materials. 
Conducting these tests and interpreting the obtained results gives a clear picture of the capacity of the fuel-tank closing system to fulfil its functional requirements after exposure to testing-parameter values significantly above those that may occur throughout the entire service life of the vehicle. The main advantage of the proposed accelerated testing method is that it simulates, in a limited time, all the situations that may be encountered over a much longer period, namely the service life of the vehicle.
Fan, Longling; Yao, Jing; Yang, Chun; Xu, Di; Tang, Dalin
2018-01-01
Modeling ventricle active contraction based on in vivo data is extremely challenging because of complex ventricle geometry, dynamic heart motion and active contraction where the reference geometry (zero-stress geometry) changes constantly. A new modeling approach using different diastole and systole zero-load geometries was introduced to handle the changing zero-load geometries for more accurate stress/strain calculations. Echo image data were acquired from 5 patients with infarction (Infarct Group) and 10 without (Non-Infarcted Group). Echo-based computational two-layer left ventricle models using one zero-load geometry (1G) and two zero-load geometries (2G) were constructed. Material parameter values in Mooney-Rivlin models were adjusted to match echo volume data. Effective Young’s moduli (YM) were calculated for easy comparison. For the diastole phase, the begin-filling (BF) mean YM value in the fiber direction (YMf) was 738% higher than its end-diastole (ED) value (645.39 kPa vs. 76.97 kPa, p=3.38E-06). For the systole phase, end-systole (ES) YMf was 903% higher than its begin-ejection (BE) value (1025.10 kPa vs. 102.11 kPa, p=6.10E-05). Comparing systolic and diastolic material properties, ES YMf was 59% higher than its BF value (1025.10 kPa vs. 645.39 kPa, p=0.0002). BE mean stress value was 514% higher than its ED value (299.69 kPa vs. 48.81 kPa, p=3.39E-06), while BE mean strain value was 31.5% higher than its ED value (0.9417 vs. 0.7162, p=0.004). Similarly, ES mean stress value was 562% higher than its BF value (19.74 kPa vs. 2.98 kPa, p=6.22E-05), and ES mean strain value was 264% higher than its BF value (0.1985 vs. 0.0546, p=3.42E-06). 2G models improved over 1G model limitations and may provide better material parameter estimation and stress/strain calculations. PMID:29399004
Risk assessment of precipitation extremes in northern Xinjiang, China
NASA Astrophysics Data System (ADS)
Yang, Jun; Pei, Ying; Zhang, Yanwei; Ge, Quansheng
2018-05-01
This study used daily precipitation records gathered at 37 meteorological stations in northern Xinjiang, China, from 1961 to 2010. We fitted extreme value theory models, the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD), to precipitation extremes with different return periods in order to estimate the risks of precipitation extremes and to diagnose aridity-humidity environmental variation and the corresponding spatial patterns in northern Xinjiang. Spatiotemporal patterns of daily maximum precipitation showed that the aridity-humidity conditions of northern Xinjiang are well represented by the return periods of the precipitation data, and indices of daily maximum precipitation were effective in the prediction of floods in the study area. By analyzing projections of daily maximum precipitation for return periods of 2, 5, 10, 30, 50, and 100 years, we conclude that the flood risk will gradually increase in northern Xinjiang. In a comparative analysis of extreme precipitation models, the GEV model yielded the best fit and was superior in reproducing extreme precipitation, while the GPD results reflect annual precipitation; for most of the estimated sites, the 2- and 5-year precipitation levels from the GPD were slightly greater than those from the GEV. The study found that extreme precipitation exceeding a certain limiting value will cause a flood disaster; predicting future extreme precipitation may therefore aid flood-disaster warnings. A suitable policy for effective water resource management is thus urgently required.
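Return-level estimation of the kind used here can be sketched with scipy on synthetic annual maxima (not the Xinjiang station data); note that scipy's shape parameter c is the negative of the usual GEV shape ξ.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
# 50 years of synthetic annual-maximum daily precipitation (mm).
# scipy convention: c = -xi (so c = -0.1 is a heavy-ish upper tail).
annual_max = genextreme.rvs(-0.1, loc=30.0, scale=8.0, size=50, random_state=rng)

shape, loc, scale = genextreme.fit(annual_max)

# T-year return level = (1 - 1/T) quantile of the annual-maximum law.
periods = (2, 5, 10, 30, 50, 100)
levels = [genextreme.isf(1.0 / T, shape, loc, scale) for T in periods]
for T, level in zip(periods, levels):
    print(f"{T:4d}-yr return level: {level:6.1f} mm")
```

The return levels grow monotonically with the return period; how fast they grow is governed almost entirely by the fitted shape parameter, which is why GEV/GPD model choice matters for flood risk.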
The FP4026 Research Database on the fundamental period of RC infilled frame structures.
Asteris, Panagiotis G
2016-12-01
The fundamental period of vibration appears to be one of the most critical parameters for the seismic design of buildings because it strongly affects the destructive impact of the seismic forces. In this article, an important set of research data, entitled the FP4026 Research Database (Fundamental Period - 4026 cases of infilled frames) and based on a detailed and in-depth analytical study of the fundamental period of reinforced concrete structures, is presented. In particular, the analytically determined values of the fundamental period are presented, taking into account the majority of the parameters involved. This database can be extremely valuable for the development of new code proposals for the estimation of the fundamental period of reinforced concrete structures fully or partially infilled with masonry walls.
Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F
2018-01-01
Mathematical models simulating several representative engineering problems (atomic dry friction, moving-front problems, and problems of elastic and solid mechanics) are presented in the form of sets of non-linear differential equations, coupled or uncoupled. For the different parameter values that influence the solution, each problem is solved numerically by the network method, which provides all the variables of the problem. Although the models are extremely sensitive to these parameters, no linearization assumptions are made about the variables. The design of the models, which run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or with experimental data published in the scientific literature to show the reliability of the models.
Neutron scattering reveals the dynamic basis of protein adaptation to extreme temperature.
Tehei, Moeava; Madern, Dominique; Franzetti, Bruno; Zaccai, Giuseppe
2005-12-09
To explore protein adaptation to extremely high temperatures, two parameters related to macromolecular dynamics, the mean square atomic fluctuation and the structural resilience, expressed as a mean force constant, were measured by neutron scattering for hyperthermophilic malate dehydrogenase from Methanococcus jannaschii and a mesophilic homologue, lactate dehydrogenase from Oryctolagus cuniculus (rabbit) muscle. The root mean square fluctuations, defining flexibility, were found to be similar for both enzymes (1.5 Å) at their optimal activity temperatures. Resilience values, defining structural rigidity, are higher by an order of magnitude for the high-temperature-adapted protein (0.15 N/m for O. cuniculus lactate dehydrogenase and 1.5 N/m for M. jannaschii malate dehydrogenase). Thermoadaptation appears to have been achieved by evolution through selection of appropriate structural rigidity, preserving the specific protein structure while allowing the conformational flexibility required for activity.
Solar Extreme UV radiation and quark nugget dark matter model
NASA Astrophysics Data System (ADS)
Zhitnitsky, Ariel
2017-10-01
We advocate the idea that the surprising emission of extreme ultraviolet (EUV) radiation and soft X-rays from the Sun is powered externally by incident dark matter (DM) particles. The energy and the spectral shape of this otherwise unexpected solar irradiation are estimated within the quark nugget dark matter model. This model was originally invented as a natural explanation of the observed ratio Ωdark ~ Ωvisible, i.e. the fact that the DM and visible matter densities assume values of the same order of magnitude. This generic consequence of the model results from the common origin of both types of matter, which are formed during the same QCD transition and are both proportional to the same fundamental dimensional parameter ΛQCD. We also present arguments suggesting that the transient brightenings known as "nanoflares" in the Sun may be related to the annihilation events which inevitably occur in the solar atmosphere within this dark matter scenario.
NASA Astrophysics Data System (ADS)
Septiadi, Deni; S, Yarianto Sugeng B.; Sriyana; Anzhar, Kurnia; Suntoko, Hadi
2018-03-01
The potential sources of meteorological phenomena in the Nuclear Power Plant (NPP) area of interest are identified, and the extreme values of the possible resulting hazards associated with such phenomena are evaluated to derive the appropriate design bases for the NPP. The appropriate design bases shall be determined according to the applicable regulations of the Nuclear Energy Regulatory Agency (Bapeten), which presently do not indicate quantitative criteria for determining the design bases for meteorological hazards. These meteorological investigations are also carried out to evaluate the regional and site-specific meteorological parameters which affect the transport and dispersion of radioactive effluents in the environment of the region around the NPP site. The meteorological hazards are to be monitored and assessed periodically over the lifetime of the plant to ensure that consistency with the design assumptions is maintained throughout the full lifetime of the facility.
On the statistical properties of viral misinformation in online social media
NASA Astrophysics Data System (ADS)
Bessi, Alessandro
2017-03-01
The massive diffusion of online social media allows for the rapid and uncontrolled spreading of conspiracy theories, hoaxes, unsubstantiated claims, and false news. Such an impressive amount of misinformation can influence policy preferences and encourage behaviors strongly divergent from recommended practices. In this paper, we study the statistical properties of viral misinformation in online social media. By means of methods belonging to Extreme Value Theory, we show that the number of extremely viral posts over time follows a homogeneous Poisson process, and that the interarrival times between such posts are independent and identically distributed, following an exponential distribution. Moreover, we characterize the uncertainty around the rate parameter of the Poisson process through Bayesian methods. Finally, we are able to derive the predictive posterior probability distribution of the number of posts exceeding a certain threshold of shares over a finite interval of time.
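The homogeneous Poisson structure and the conjugate Bayesian update for its rate can be sketched as follows (synthetic arrival times, not the actual social-media data; the Gamma prior hyperparameters are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic arrival times of "extremely viral" posts: a homogeneous
# Poisson process with rate 0.2 events/day over a 500-day window.
true_rate, horizon = 0.2, 500.0
n_events = rng.poisson(true_rate * horizon)
arrivals = np.sort(rng.uniform(0.0, horizon, size=n_events))
gaps = np.diff(arrivals)  # interarrival times: iid exponential in theory

# Conjugate Bayesian update: Gamma(a0, b0) prior on the rate gives a
# Gamma(a0 + n, b0 + observed time) posterior.
a0, b0 = 1.0, 1.0
a_post, b_post = a0 + n_events, b0 + horizon
post_mean = a_post / b_post
print(f"events: {n_events}, posterior mean rate: {post_mean:.3f}/day")
print(f"mean interarrival: {gaps.mean():.1f} d vs 1/rate = {1/true_rate:.1f} d")
```

The full posterior, not just its mean, is available in closed form, which is what allows a predictive distribution for the number of threshold-exceeding posts in a future window.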
The application of the statistical theory of extreme values to gust-load problems
NASA Technical Reports Server (NTRS)
Press, Harry
1950-01-01
An analysis is presented which indicates that the statistical theory of extreme values is applicable to the problems of predicting the frequency of encountering the larger gust loads and gust velocities for both specific test conditions as well as commercial transport operations. The extreme-value theory provides an analytic form for the distributions of maximum values of gust load and velocity. Methods of fitting the distribution are given along with a method of estimating the reliability of the predictions. The theory of extreme values is applied to available load data from commercial transport operations. The results indicate that the estimates of the frequency of encountering the larger loads are more consistent with the data and more reliable than those obtained in previous analyses. (author)
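A fit of the kind described can be sketched with the Gumbel distribution, the classical extreme-value form for maxima of light-tailed quantities such as Gaussian gust loads (synthetic per-flight maxima in normalized units, not the NACA transport data).

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(4)
# Per-flight maximum gust load: the max over many samples of a Gaussian
# load process is approximately Gumbel distributed.
flight_maxima = rng.standard_normal((2000, 500)).max(axis=1)

loc, scale = gumbel_r.fit(flight_maxima)
# Predicted frequency of encountering a load above a large level:
p_exceed = gumbel_r.sf(4.0, loc, scale)
print(f"loc={loc:.2f}  scale={scale:.2f}  P(flight max > 4.0)={p_exceed:.4f}")
```

The fitted Gumbel provides the analytic form for the distribution of maxima that the report exploits, so the frequency of the rarest loads is read from the fitted tail rather than counted directly from sparse data.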
NASA Astrophysics Data System (ADS)
Rieder, Harald E.; Staehelin, Johannes; Maeder, Jörg A.; Peter, Thomas; Ribatet, Mathieu; Davison, Anthony C.; Stübi, Rene; Weihs, Philipp; Holawe, Franz
2010-05-01
In this study tools from extreme value theory (e.g. Coles, 2001; Ribatet, 2007) are applied for the first time in the field of stratospheric ozone research, as statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values) of total ozone data do not adequately address the internal data structure concerning extremes. The study illustrates that tools based on extreme value theory are appropriate for identifying ozone extremes and describing the tails of the world's longest total ozone record (Arosa, Switzerland - for details see Staehelin et al., 1998a,b) (Rieder et al., 2010a). A daily moving threshold was implemented to account for the seasonal cycle in total ozone. The frequency of days with extreme low (termed ELOs) and extreme high (termed EHOs) total ozone, and their influence on mean values and trends, is analyzed for the Arosa total ozone time series. The results show (a) an increase in ELOs and (b) a decrease in EHOs during the last decades, and (c) that the overall trend during the 1970s and 1980s in total ozone is strongly dominated by changes in these extreme events. After removing the extremes, the time series shows a strongly reduced trend (reduction by a factor of 2.5 for the trend in the annual mean). Furthermore, it is shown that the fitted model represents the tails of the total ozone data set with very high accuracy over the entire range (including absolute monthly minima and maxima). The frequency distribution of ozone mini-holes (using constant thresholds) can also be calculated with high accuracy. Analyzing the tails instead of a small fraction of days below constant thresholds provides deeper insight into time series properties. Excursions in the frequency of extreme events reveal "fingerprints" of dynamical factors, such as ENSO or NAO, and chemical factors, such as cold Arctic vortex ozone losses, as well as major volcanic eruptions of the 20th century (e.g. Gunung Agung, El Chichón, Mt. Pinatubo). Furthermore, atmospheric loading of ozone-depleting substances has led to a continuous modification of column ozone in the northern hemisphere, also with respect to extreme values (partly again in connection with polar vortex contributions). It is shown that application of extreme value theory allows the identification of many more such fingerprints than conventional time series analysis of annual and seasonal mean values. In particular, the analysis shows the strong influence of dynamics, revealing that even moderate ENSO and NAO events have a discernible effect on total ozone (Rieder et al., 2010b). Overall the presented new extremes concept provides new information on time series properties, variability, trends and the influence of dynamics and chemistry, complementing earlier analyses focusing only on monthly (or annual) mean values. References: Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.D. (2010): Extreme events in total ozone over Arosa - Part I: Application of extreme value theory, to be submitted to ACPD. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.D. (2010): Extreme events in total ozone over Arosa - Part II: Fingerprints of atmospheric dynamics and chemistry and effects on mean values and long-term changes, to be submitted to ACPD. Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998a. Staehelin, J., Kegel, R., and Harris, N. 
R.: Trend analysis of the homogenized total ozone series of Arosa (Switzerland), 1929-1996, J. Geophys. Res., 103(D7), 8389-8400, doi:10.1029/97JD03650, 1998b.
Lahanas, M; Baltas, D; Giannouli, S; Milickovic, N; Zamboglou, N
2000-05-01
We have studied the accuracy of statistical parameters of dose distributions in brachytherapy using actual clinical implants. These include the mean, minimum and maximum dose values and the variance of the dose distribution inside the PTV (planning target volume) and on the surface of the PTV. These properties have been studied as a function of the number of uniformly distributed sampling points. These parameters, or variants of them, are used directly or indirectly in optimization procedures or for the description of the dose distribution. The accurate determination of these parameters depends on the sampling point distribution from which they have been obtained. Some optimization methods ignore catheters and critical structures surrounded by the PTV, or alternatively consider as surface dose points only those on the contour lines of the PTV. D(min) and D(max) are extreme dose values which lie either on the PTV surface or within the PTV, and they must be avoided for specification and optimization purposes in brachytherapy. Using D(mean) and the variance of D, which we have shown to be stable parameters, achieves a more reliable description of the dose distribution on the PTV surface and within the PTV volume than do D(min) and D(max). Generation of dose points on the real surface of the PTV is obligatory, and consideration of the catheter volumes results in a realistic description of anatomical dose distributions.
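The instability of D(min)/D(max) versus the stability of D(mean) under changes in the number of sampling points can be illustrated with a toy Monte Carlo; the lognormal is an illustrative stand-in for an implant dose distribution (heavy upper tail near the sources), not clinical data.

```python
import numpy as np

rng = np.random.default_rng(5)

def dose_stats(n_points):
    """Mean, minimum and maximum 'dose' over n uniformly drawn points."""
    # Lognormal stand-in for a brachytherapy dose distribution
    # (illustrative assumption, not measured implant data).
    d = rng.lognormal(mean=0.0, sigma=0.8, size=n_points)
    return d.mean(), d.min(), d.max()

results = {n: dose_stats(n) for n in (100, 1_000, 10_000)}
for n, (mean, dmin, dmax) in results.items():
    print(f"n={n:6d}  D_mean={mean:5.2f}  D_min={dmin:6.3f}  D_max={dmax:6.2f}")
```

The sample mean settles quickly, while the sample minimum keeps shrinking and the sample maximum keeps growing as more points are added, which is exactly why the extremes make poor specification parameters.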
NASA Technical Reports Server (NTRS)
Chao, Luen-Yuan; Shetty, Dinesh K.
1992-01-01
Statistical analysis and correlation between pore-size distribution and fracture-strength distribution using the theory of extreme-value statistics is presented for a sintered silicon nitride. The pore-size distribution on a polished surface of this material was characterized using an automatic optical image analyzer. The distribution measured on the two-dimensional plane surface was transformed to a population (volume) distribution using the Schwartz-Saltykov diameter method. The population pore-size distribution and the distribution of the pore size at the fracture origin were correlated by extreme-value statistics. The fracture-strength distribution was then predicted from the extreme-value pore-size distribution, using a linear elastic fracture mechanics model of an annular crack around a pore together with the fracture toughness of the ceramic. The predicted strength distribution was in good agreement with strength measurements in bending. In particular, the extreme-value statistics analysis explained the nonlinear trend in the linearized Weibull plot of measured strengths without postulating a lower-bound strength.
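A toy version of this largest-flaw argument can be sketched as follows; the material constants, geometry factor and pore population below are illustrative assumptions, not the paper's measured silicon nitride data.

```python
import numpy as np

rng = np.random.default_rng(9)
# Largest-flaw strength model: each specimen's strength is set by its
# largest pore via a fracture-mechanics relation.
K_IC = 6.0      # MPa*sqrt(m), typical order of magnitude for Si3N4
Y = 1.3         # assumed geometry factor for an annular crack at a pore
n_pores = 500   # flaws sampled per specimen volume

# Pore radii (m): lognormal population; each specimen "sees" the extreme.
radii = rng.lognormal(mean=np.log(5e-6), sigma=0.4, size=(2000, n_pores))
a_max = radii.max(axis=1)                        # largest pore per specimen
strength = K_IC / (Y * np.sqrt(np.pi * a_max))   # MPa
print(f"mean strength {strength.mean():.0f} MPa, "
      f"range {strength.min():.0f}-{strength.max():.0f} MPa")
```

Because only the largest pore per specimen matters, the strength distribution inherits an extreme-value form from the pore population, which is what produces the curved (non-Weibull-linear) trend without any lower-bound strength.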
NASA Astrophysics Data System (ADS)
Endreny, Theodore A.; Pashiardis, Stelios
2007-02-01
Robust and accurate estimates of rainfall frequencies are difficult to make with short, arid-climate rainfall records; here, new regional and global methods were used to supplement such a constrained 15-34 yr record in Cyprus. The impact of supplementing the rainfall frequency analysis with the regional and global approaches was measured with relative bias and root mean square error (RMSE) values. The analysis considered 42 stations with 8 time intervals (5-360 min) in four regions delineated by proximity to the sea and by elevation. Regional statistical algorithms found that the sites passed discordancy tests of the coefficient of variation, skewness and kurtosis, while heterogeneity tests revealed the regions to be homogeneous to mildly heterogeneous. Rainfall depths were simulated 500 times in the regional analysis method, and goodness-of-fit tests then identified the best candidate distribution as the general extreme value (GEV) Type II. In the regional analysis, the method of L-moments was used to estimate the location, shape, and scale parameters. In the global analysis, the distribution was a priori prescribed as GEV Type II, the shape parameter was a priori set to 0.15, and a time-interval term was constructed so that one set of parameters serves all time intervals. Relative RMSE values were approximately equal at 10% for the regional and global methods when regions were compared, but when time intervals were compared the global-method RMSE showed a parabolic trend with time interval. Relative bias values were also approximately equal for both methods when regions were compared, but again a parabolic time-interval trend was found for the global method. The global method's relative RMSE and bias trend with time interval, which may be caused by fitting a single scale value for all time intervals.
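Sample L-moments of the kind used in the regional parameter estimation can be computed from probability-weighted moments; this is a sketch on synthetic Gumbel data, not the Cyprus records.

```python
import numpy as np

def sample_lmoments(x):
    """First two sample L-moments and L-skewness, via the standard
    unbiased probability-weighted-moment estimators b0, b1, b2."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2 = b0, 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2  # mean, L-scale, L-skewness

rng = np.random.default_rng(6)
sample = rng.gumbel(loc=20.0, scale=5.0, size=5000)
l1, l2, t3 = sample_lmoments(sample)
print(f"l1={l1:.2f}  l2={l2:.2f}  t3={t3:.3f}")
```

For a Gumbel with location 20 and scale 5, theory gives l1 = 20 + 0.5772·5 ≈ 22.89, l2 = 5·ln 2 ≈ 3.47 and t3 ≈ 0.170; matching sample L-moment ratios like t3 against theoretical curves is how the regional method selects and fits a candidate distribution.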
Extreme values and the level-crossing problem: An application to the Feller process
NASA Astrophysics Data System (ADS)
Masoliver, Jaume
2014-04-01
We review the question of the extreme values attained by a random process. We relate it to level crossings of one boundary (first-passage problems) as well as of two boundaries (escape problems). The extremes studied are the maximum, the minimum, the maximum absolute value, and the range or span. We then specialize to diffusion processes and present detailed results for the Wiener and Feller processes.
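For the Wiener case the maximum has a closed form via the reflection principle, which a short simulation confirms; this is a sketch of that special case only (the Feller-process results require its specific transition law).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
# Reflection principle: P(max over [0,T] of W > a) = 2 * P(W_T > a).
T, a, n_steps, n_paths = 1.0, 1.0, 500, 10_000
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(T / n_steps)
paths = np.cumsum(dW, axis=1)            # discretized Wiener paths
p_mc = np.mean(paths.max(axis=1) > a)
p_exact = 2.0 * norm.sf(a / np.sqrt(T))
print(f"simulated {p_mc:.4f} vs reflection principle {p_exact:.4f}")
```

The small residual gap between the two numbers comes from monitoring the path on a discrete grid, which slightly underestimates the continuous-time maximum.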
Extremely low and high flows of Polish rivers in the two decades 1986-2005 [Skrajnie niskie i wysokie przepływy rzek Polski w dwudziestoleciu 1986-2005]
NASA Astrophysics Data System (ADS)
Sobolewski, Wojciech
2008-01-01
The objective of this study was to determine the parameters of extremely high and extremely low flows of selected Polish rivers in the two decades 1986-2005. These parameters were used to derive river-basin characteristics and to produce a series of maps. Subsequently, the spatial diversity of the extremely high and extremely low flows of particular rivers was analyzed on the basis of these maps. The analysis shows that the character of extreme high-flow events (in particular their size and course) changes from south to north, indicating a strong connection between the hypsometric parameters of the catchment area and infiltration. The situation is different for the extremely low flows: the spatial diversity of their properties shows no such clear tendency. In southern and central Poland a change from SW to NE was observed; the northern basins, however, do not follow this rule and form a separate group. Such a distribution of characteristics is probably associated with a stronger influence of environmental factors other than catchment hypsometry.
On the Use of the Beta Distribution in Probabilistic Resource Assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olea, Ricardo A., E-mail: olea@usgs.gov
2011-12-15
The triangular distribution is a popular choice when it comes to modeling bounded continuous random variables. Its wide acceptance derives mostly from its simple analytic properties and the ease with which modelers can specify its three parameters through the extremes and the mode. On the negative side, hardly any real process follows a triangular distribution, which from the outset puts at a disadvantage any model employing triangular distributions. At a time when numerical techniques such as the Monte Carlo method are displacing analytic approaches in stochastic resource assessments, easy specification remains the most attractive characteristic of the triangular distribution. The beta distribution is another continuous distribution defined within a finite interval offering wider flexibility in style of variation, thus allowing consideration of models in which the random variables closely follow the observed or expected styles of variation. Despite its more complex definition, generation of values following a beta distribution is as straightforward as generating values following a triangular distribution, leaving the selection of parameters as the main impediment to practically considering beta distributions. This contribution intends to promote the acceptance of the beta distribution by explaining its properties and offering several suggestions to facilitate the specification of its two shape parameters. In general, given the same distributional parameters, use of the beta distributions in stochastic modeling may yield significantly different results, yet better estimates, than the triangular distribution.
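One widely used recipe for specifying the two shape parameters from exactly the quantities modelers already supply for a triangular distribution (the extremes and the mode) is the PERT construction. A sketch; the lam = 4 weighting of the mode is the PERT convention, not something taken from this paper:

```python
from scipy import stats

def beta_pert(a, m, b, lam=4.0):
    """Beta distribution on [a, b] with mode m, via the PERT recipe:
    alpha = 1 + lam*(m-a)/(b-a), beta = 1 + lam*(b-m)/(b-a)."""
    alpha = 1.0 + lam * (m - a) / (b - a)
    beta = 1.0 + lam * (b - m) / (b - a)
    return stats.beta(alpha, beta, loc=a, scale=b - a)
```

With lam = 4 the mean has the familiar closed form (a + 4m + b) / 6, which makes the specification easy to sanity-check against expert estimates.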
Cheng, Xiaofei; Zhang, Kai; Sun, Xiaojiang; Zhao, Changqing; Li, Hua; Zhao, Jie
2017-07-01
The objective was to analyze the compensatory effect of the pelvis and lower extremities on sagittal spinal malalignment in patients with pelvic incidence (PI) and lumbar lordosis (LL) mismatch. A series of parameters including PI, LL, PI-LL, thoracic kyphosis (TK), pelvic tilt (PT), sacral slope (SS), knee flexion angle (KFA), tibial obliquity angle (TOA), femoral obliquity angle (FOA), femur pelvis angle (FPA) and pelvic shift (PS) were measured. Patients with PI-LL mismatch were divided into pelvic retroversion group and pelvic retroposition group based on their PT and PS, and then the parameters were compared within the two groups and with the control group. All variables were significantly different when comparing the pelvic retroversion and retroposition group with the control group except for PI, FOA and PS in the pelvic retroversion group. The pelvic retroposition group had significantly greater value of PI-LL, PI, PT, KFA, FOA and PS and contribution ratio of FOA and PS, and smaller value of LL, TK and FPA and contribution ratio of PT, TOA and FPA compared with the pelvic retroversion group. Patients with lesser PI-LL mismatch rely more on hip extension to increase pelvic retroversion while those with greater PI-LL mismatch tend to add extra femoral obliquity. When compensating for larger PI-LL mismatch, the importance of hip extension is decreased and the effect of the knee and ankle joint becomes more important by providing greater femoral incline and relatively lesser ankle dorsiflexion respectively. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wen, Xian-Huan; Gómez-Hernández, J. Jaime
1998-03-01
The macrodispersion of an inert solute in a 2-D heterogeneous porous medium is estimated numerically in a series of fields of varying heterogeneity. Four different random function (RF) models are used to model log-transmissivity (ln T) spatial variability, and for each of these models, ln T variance is varied from 0.1 to 2.0. The four RF models share the same univariate Gaussian histogram and the same isotropic covariance, but differ from one another in terms of the spatial connectivity patterns at extreme transmissivity values. More specifically, model A is a multivariate Gaussian model for which, by definition, extreme values (both high and low) are spatially uncorrelated. The other three models are non-multi-Gaussian: model B with high connectivity of high extreme values, model C with high connectivity of low extreme values, and model D with high connectivities of both high and low extreme values. Residence time distributions (RTDs) and macrodispersivities (longitudinal and transverse) are computed on ln T fields corresponding to the different RF models, for two different flow directions and at several scales. They are compared with each other, as well as with predicted values based on first-order analytical results. Numerically derived RTDs and macrodispersivities for the multi-Gaussian model are in good agreement with analytically derived values using first-order theories for log-transmissivity variance up to 2.0. The results from the non-multi-Gaussian models differ from each other and deviate largely from the multi-Gaussian results even when ln T variance is small. RTDs in non-multi-Gaussian realizations with high connectivity at high extreme values display earlier breakthrough than in multi-Gaussian realizations, whereas later breakthrough and longer tails are observed for RTDs from non-multi-Gaussian realizations with high connectivity at low extreme values.
Longitudinal macrodispersivities in the non-multi-Gaussian realizations are, in general, larger than in the multi-Gaussian ones, while transverse macrodispersivities in the non-multi-Gaussian realizations can be larger or smaller than in the multi-Gaussian ones depending on the type of connectivity at extreme values. Comparing the numerical results for different flow directions, it is confirmed that macrodispersivities in multi-Gaussian realizations with isotropic spatial correlation are not flow direction-dependent. Macrodispersivities in the non-multi-Gaussian realizations, however, are flow direction-dependent although the covariance of ln T is isotropic (the same for all four models). It is important to account for high connectivities at extreme transmissivity values, a likely situation in some geological formations. Some of the discrepancies between first-order-based analytical results and field-scale tracer test data may be due to the existence of highly connected paths of extreme conductivity values.
NASA Astrophysics Data System (ADS)
Li, Chuang; Min, Fuhong; Jin, Qiusen; Ma, Hanyuan
2017-12-01
An active charge-controlled memristive Chua's circuit is implemented, and its basic properties are analyzed. Firstly, with the system trajectory starting from an equilibrium point, the dynamic behavior of multiple coexisting attractors depending on the memristor initial value and the system parameter is studied, which shows the coexisting behaviors of point, period, chaos, and quasi-period. Secondly, with the system motion starting from a non-equilibrium point, the dynamics of extreme multistability in a wide initial value domain are easily confirmed by new analytical methods. Furthermore, the simulation results indicate that some strange chaotic attractors like multi-wing type and multi-scroll type are observed when the observed signals are extended from voltage and current to power and energy, respectively. In particular, when different initial conditions are taken, the coexisting strange chaotic attractors between the power and energy signals are exhibited. Finally, the chaotic sequences of the new system are used for encrypting color images to protect image information security. The encryption performance is analyzed by statistical histograms, correlation, key space and key sensitivity. Simulation results show that the new memristive chaotic system has high security in color image encryption.
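The dependence of trajectories on initial values can be illustrated with the classic (non-memristive) Chua circuit, used here as a hedged stand-in since the paper's charge-controlled memristive equations are not reproduced in the abstract. The classic circuit is odd-symmetric, so mirrored initial values produce exactly mirrored trajectories, which is a simple deterministic check on the integrator:

```python
import numpy as np

ALPHA, BETA = 15.6, 28.0          # canonical double-scroll parameters
M0, M1 = -8.0 / 7.0, -5.0 / 7.0   # piecewise-linear diode slopes

def f_nl(x):
    # odd piecewise-linear Chua nonlinearity
    return M1 * x + 0.5 * (M0 - M1) * (abs(x + 1) - abs(x - 1))

def chua(state):
    x, y, z = state
    return np.array([ALPHA * (y - x - f_nl(x)), x - y + z, -BETA * y])

def integrate(ic, dt=0.002, n=20000):
    """Fixed-step RK4 integration of the classic Chua circuit."""
    s = np.array(ic, dtype=float)
    out = np.empty((n, 3))
    for i in range(n):
        k1 = chua(s)
        k2 = chua(s + dt / 2 * k1)
        k3 = chua(s + dt / 2 * k2)
        k4 = chua(s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out[i] = s
    return out
```

Because every term in the vector field is odd, starting from -ic reproduces the negative of the trajectory from ic, the simplest instance of the symmetric coexisting attractors the abstract describes.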
Reward and uncertainty in exploration programs
NASA Technical Reports Server (NTRS)
Kaufman, G. M.; Bradley, P. G.
1971-01-01
A set of variables which are crucial to the economic outcome of petroleum exploration are discussed. These are treated as random variables; the values they assume indicate the number of successes that occur in a drilling program and determine, for a particular discovery, the unit production cost and net economic return if that reservoir is developed. In specifying the joint probability law for those variables, extreme and probably unrealistic assumptions are made. In particular, the different random variables are assumed to be independently distributed. Using postulated probability functions and specified parameters, values are generated for selected random variables, such as reservoir size. From this set of values the economic magnitudes of interest, net return and unit production cost are computed. This constitutes a single trial, and the procedure is repeated many times. The resulting histograms approximate the probability density functions of the variables which describe the economic outcomes of an exploratory drilling program.
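The trial-and-repeat procedure described above is a plain Monte Carlo simulation. A minimal sketch, keeping the paper's deliberately strong independence assumption; all numerical values (well counts, success probability, price, costs, lognormal size parameters) are illustrative, not taken from the paper:

```python
import numpy as np

def exploration_trial(rng, n_wells=20, p_success=0.15,
                      price=3.0, dev_cost=40.0, well_cost=1.0):
    """One trial: independent draws of the number of successes and of
    lognormal reservoir sizes; returns (net_return, unit_cost)."""
    successes = rng.binomial(n_wells, p_success)
    if successes == 0:
        return -n_wells * well_cost, np.inf   # dry-hole costs only
    sizes = rng.lognormal(mean=2.0, sigma=1.0, size=successes)
    production = sizes.sum()
    cost = n_wells * well_cost + successes * dev_cost
    return production * price - cost, cost / production

rng = np.random.default_rng(42)
trials = [exploration_trial(rng) for _ in range(10000)]
net = np.array([t[0] for t in trials])
# the histogram approximates the probability density of net return
hist, edges = np.histogram(net, bins=50)
```

Repeating the trial many times and histogramming, as in the last two lines, is exactly the "single trial, repeated many times" procedure of the abstract.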
Refining value-at-risk estimates using a Bayesian Markov-switching GJR-GARCH copula-EVT model.
Sampid, Marius Galabe; Hasim, Haslifah M; Dai, Hongsheng
2018-01-01
In this paper, we propose a model for forecasting Value-at-Risk (VaR) using a Bayesian Markov-switching GJR-GARCH(1,1) model with skewed Student's-t innovation, copula functions and extreme value theory. A Bayesian Markov-switching GJR-GARCH(1,1) model that identifies non-constant volatility over time and allows the GARCH parameters to vary over time following a Markov process, is combined with copula functions and EVT to formulate the Bayesian Markov-switching GJR-GARCH(1,1) copula-EVT VaR model, which is then used to forecast the level of risk on financial asset returns. We further propose a new method for threshold selection in EVT analysis, which we term the hybrid method. Empirical and back-testing results show that the proposed VaR models capture VaR reasonably well in periods of calm and in periods of crisis.
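The EVT tail step underlying such VaR models can be sketched in its static peaks-over-threshold form; the Markov-switching GJR-GARCH, copula, and Bayesian layers of the proposed model are omitted, and the threshold here is a simple quantile rather than the paper's hybrid selection method:

```python
import numpy as np
from scipy import stats

def evt_var(losses, q=0.99, u_quantile=0.95):
    """Peaks-over-threshold VaR: fit a GPD to exceedances over a high
    threshold u and invert the standard tail formula
    VaR_q = u + (sigma/xi) * (((n/Nu) * (1 - q))**(-xi) - 1)."""
    losses = np.asarray(losses, dtype=float)
    n = len(losses)
    u = np.quantile(losses, u_quantile)
    exc = losses[losses > u] - u
    xi, _, sigma = stats.genpareto.fit(exc, floc=0.0)
    return u + sigma / xi * ((n / len(exc) * (1 - q)) ** (-xi) - 1)
```

For q above the threshold quantile, the formula always returns a value beyond the threshold itself, which is a useful sanity check.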
Trilateral interlaboratory with SSL (WLEDi) luminaire
NASA Astrophysics Data System (ADS)
Burini Junior, E. C.; Santos, E. R.; Assaf, L. O.
2018-03-01
The IEE/USP laboratory and two others, all belonging to RBLE (Brazilian Network of Test Laboratories), participated in a trilateral comparison performed through independent measurements, without interaction among the participants. The results of electric and photometric measurements carried out on samples of Solid State Lighting (SSL) Inorganic White Light Emitting Diode (WLEDi) luminaires by three accredited laboratories were considered in order to point out mutual deviations and to verify the confidence in a bilateral comparison. The first analysis revealed a maximum deviation of 4.2 % between the luminous intensity attributed by one laboratory and the arithmetic mean value from the three laboratories. The largest standard uncertainty value, 1.9 %, was estimated for the Total Harmonic Distortion of electric current (THDi), and the lowest, 0.4 %, for the luminous flux. The largest deviation for a single parameter's results was 7.2 %, at maximum luminous intensity, and the smallest was 1.7 %, for luminous flux.
Wang, Pei-Yong; Long, Fei-Xiao; Fu, Lan-Ying; Li, Yue; Ding, Hai-Shu; Qu, An-Lian; Zhou, Xiao-Ping
2010-02-01
Using continuous dual-wavelength near-infrared technology to detect variations in the concentration of oxygenated hemoglobin in muscle, together with wireless real-time collection of the sports heart rate, we devised and implemented an experimental scheme measuring real-time muscle tissue oxygenation and instantaneous heart rate simultaneously during the 100 m run. The experiment shows that the concentration of oxygenated hemoglobin in the muscle tissue continues decreasing after the end of the 100 m run, and the time interval between the moment when the concentration of oxygenated hemoglobin attains its minimum value and the moment when the athletes finish the 100 m run is (6.65 +/- 1.10) s; the heart rate, by contrast, continues increasing after the end of the 100 m run, and the time interval between the moment when the heart rate attains its maximum value and the moment when the athletes finish the 100 m run is (8.00 +/- 1.57) s. The results show that dual-wavelength near-infrared tissue oxygenation detection technology and real-time sports heart rate collection equipment can accurately measure tissue oxygenation and heart rate during extreme-intensity sport, and reveal the process of muscle oxygen transport and consumption and its dynamic relationship with the heart rate during extreme-intensity sport.
Jurkojć, Jacek; Wodarski, Piotr; Michnik, Robert A; Bieniek, Andrzej; Gzik, Marek; Granek, Arkadiusz
2017-01-01
Indexing methods are very popular for determining the degree of disability associated with motor dysfunctions. Currently, indexing methods dedicated to the upper limbs are not very popular, probably due to difficulties in their interpretation. This work presents the calculation algorithm of the new SDDI index, and an attempt is made to determine the level of physical dysfunction, along with a description of its kind, based on the interpretation of the calculated SDDI and PULMI indices. 23 healthy people (10 women and 13 men), who constituted a reference group, and a group of 3 people with mobility impairments participated in the tests. To examine the possibilities of utilizing the SDDI index, the participants had to repetitively perform two selected rehabilitation movements of the upper extremities. During the tests, kinematic values were recorded using the inertial motion analysis system MVN BIOMECH. The results of the test were collected as waveforms of 9 anatomical angles in 4 joints of the upper extremities. Then, SDDI and PULMI indices were calculated for each person with mobility impairments. Next, an analysis was performed to check which abnormalities in upper extremity motion can influence the value of both indices, and an interpretation of those indices was given. Joint analysis of the two indices provides information on whether the patient has correctly performed the set sequence of movement and enables the determination of possible irregularities in the performance of a given movement.
A Fiducial Approach to Extremes and Multiple Comparisons
ERIC Educational Resources Information Center
Wandler, Damian V.
2010-01-01
Generalized fiducial inference is a powerful tool for many difficult problems. Based on an extension of R. A. Fisher's work, we used generalized fiducial inference for two extreme value problems and a multiple comparison procedure. The first extreme value problem is dealing with the generalized Pareto distribution. The generalized Pareto…
Gunawardhana, Luminda Niroshana; Al-Rawas, Ghazi A; Kazama, So; Al-Najar, Khalid A
2015-10-01
The objective of this study is to investigate how the magnitude and occurrence of extreme precipitation events are affected by climate change and to predict the subsequent impacts on the wadi flow regime in the Al-Khod catchment area, Muscat, Oman. The tank model, a lumped-parameter rainfall-runoff model, was used to simulate the wadi flow. Precipitation extremes and their potential future changes were predicted using six-member ensembles of general circulation models (GCMs) from the Coupled Model Intercomparison Project Phase 5 (CMIP5). Yearly maxima of the daily precipitation and wadi flow for varying return periods were compared for observed and projected data by fitting the generalized extreme value (GEV) distribution function. Flow duration curves (FDC) were developed and compared for the observed and projected wadi flows. The results indicate that extreme precipitation events consistently increase by the middle of the twenty-first century for all return periods (49-52%), but changes may become more profound by the end of the twenty-first century (81-101%). Consequently, the relative change in extreme wadi flow is more than twofold for all of the return periods in the late twenty-first century compared to the relative changes that occur in the mid-century period. Precipitation analysis further suggests that greater than 50% of the precipitation may be associated with extreme events in the future. The FDC analysis reveals that changes in low-to-moderate flows (Q60-Q90) may not be statistically significant, whereas increases in high flows (Q5) are statistically robust (20 and 25% for the mid- and late-century periods, respectively).
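The return-period comparison described above rests on fitting a GEV to yearly maxima and reading off quantiles. A minimal sketch using SciPy's maximum-likelihood fit on synthetic annual maxima (note that SciPy's shape parameter c is the negative of the xi convention common in hydrology):

```python
import numpy as np
from scipy import stats

def gev_return_levels(annual_maxima, periods=(10, 50, 100)):
    """Fit a GEV to annual maxima by maximum likelihood and return the
    T-year return level, i.e. the (1 - 1/T) quantile of the fit."""
    c, loc, scale = stats.genextreme.fit(annual_maxima)
    return {T: stats.genextreme.isf(1.0 / T, c, loc, scale)
            for T in periods}
```

Return levels must grow with the return period, which makes a simple consistency check on the fitted curve.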
NASA Astrophysics Data System (ADS)
Sippel, S.; Otto, F. E. L.; Forkel, M.; Allen, M. R.; Guillod, B. P.; Heimann, M.; Reichstein, M.; Seneviratne, S. I.; Kirsten, T.; Mahecha, M. D.
2015-12-01
Understanding, quantifying and attributing the impacts of climatic extreme events and variability is crucial for societal adaptation in a changing climate. However, climate model simulations generated for this purpose typically exhibit pronounced biases in their output that hinder any straightforward assessment of impacts. To overcome this issue, various bias correction strategies are routinely used to alleviate climate model deficiencies; most of these strategies have been criticized for physical inconsistency and non-preservation of the multivariate correlation structure. We assess how biases and their correction affect the quantification and attribution of simulated extremes and variability in i) climatological variables and ii) impacts on ecosystem functioning as simulated by a terrestrial biosphere model. Our study demonstrates that assessments of simulated climatic extreme events and impacts in the terrestrial biosphere are highly sensitive to bias correction schemes, with major implications for the detection and attribution of these events. We introduce a novel ensemble-based resampling scheme based on a large regional climate model ensemble generated by the distributed weather@home setup [1], which fully preserves the physical consistency and multivariate correlation structure of the model output. We use extreme value statistics to show that this procedure considerably improves the representation of climatic extremes and variability. Subsequently, biosphere-atmosphere carbon fluxes are simulated using a terrestrial ecosystem model (LPJ-GSI) to further demonstrate the sensitivity of ecosystem impacts to the methodology of bias correcting climate model output. We find that uncertainties arising from bias correction schemes are comparable in magnitude to model structural and parameter uncertainties. 
The present study consists of a first attempt to alleviate climate model biases in a physically consistent way and demonstrates that this yields improved simulations of climate extremes and associated impacts. [1] http://www.climateprediction.net/weatherathome/
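For context, the bias correction family being criticized above can be sketched by its simplest member, univariate empirical quantile mapping; applied variable by variable, it corrects marginal distributions but, as the abstract notes, does not by itself preserve physical consistency or multivariate correlations:

```python
import numpy as np

def quantile_map(model, obs):
    """Empirical quantile mapping: map each model value through the
    model's empirical distribution onto the observed quantile function.
    Univariate -- correlations between variables are not preserved."""
    return np.interp(model, np.sort(model), np.sort(obs))
```

On the calibration sample itself, the corrected values reproduce the observed distribution exactly, which is the defining property of the method.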
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-01-01
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Differing from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202
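The KELM building block that the proposed QWMK-ELM extends can be sketched in its basic single-kernel form: the output weights have the closed-form solution beta = (I/C + K)^-1 T. The QPSO weighting of multiple kernels is omitted here, and the Gaussian kernel and all hyperparameters are illustrative:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row-sample matrices A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Minimal single-kernel extreme learning machine classifier:
    beta = (I/C + K)^-1 T, prediction = K(x, X) @ beta."""
    def __init__(self, gamma=1.0, C=100.0):
        self.gamma, self.C = gamma, C

    def fit(self, X, y):
        self.X = X
        T = np.eye(int(y.max()) + 1)[y]          # one-hot targets
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
        return self

    def predict_scores(self, Xnew):
        return rbf_kernel(Xnew, self.X, self.gamma) @ self.beta
```

Predicted labels are the argmax over the score columns; on well-separated data the closed-form solve classifies the training set almost perfectly.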
NASA Astrophysics Data System (ADS)
Zaliapin, I.; Ghil, M.; Thompson, S.
2007-12-01
We consider a Delay Differential Equation (DDE) model for El Niño-Southern Oscillation (ENSO) variability. The model combines two key mechanisms that participate in the ENSO dynamics: delayed negative feedback and seasonal forcing. Descriptive and metric stability analyses of the model are performed in a complete 3D space of its physically relevant parameters. The existence of two regimes --- stable and unstable --- is reported. The domains of the regimes are separated by a sharp neutral curve in the parameter space. The detailed structure of the neutral curve becomes very complicated (possibly fractal), and individual trajectories within the unstable region become highly complex (possibly chaotic) as the atmosphere-ocean coupling increases. In the unstable regime, spontaneous transitions in the mean "temperature" (i.e., thermocline depth), period, and extreme annual values occur, for purely periodic, seasonal forcing. This indicates (via the continuous dependence theorem) the existence of numerous unstable solutions responsible for the complex dynamics of the system. In the stable regime, only periodic solutions are found. Our results illustrate the role of the distinct parameters of ENSO variability, such as strength of seasonal forcing vs. atmosphere-ocean coupling and propagation period of oceanic waves across the Tropical Pacific. The model reproduces, among other phenomena, the Devil's bleachers (caused by period locking) documented in other ENSO models, such as nonlinear PDEs and GCMs, as well as in certain observations. We expect such behavior in much more detailed and realistic models, where it is harder to describe its causes as completely.
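A DDE combining delayed negative feedback with seasonal forcing, of the form used in such delayed-oscillator ENSO studies, can be integrated with a simple history buffer. The equation shape and the parameter values below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def enso_dde(kappa=10.0, tau=0.65, b=1.0, t_max=40.0, dt=0.001):
    """Forward-Euler integration of a delayed-feedback model
        dh/dt = -tanh(kappa * h(t - tau)) + b * cos(2*pi*t),
    i.e. delayed negative feedback (strength kappa, delay tau)
    plus periodic seasonal forcing of amplitude b."""
    n = int(t_max / dt)
    lag = int(tau / dt)
    h = np.zeros(n + 1)                  # zero history on [-tau, 0]
    for i in range(n):
        h_del = h[i - lag] if i >= lag else 0.0
        h[i + 1] = h[i] + dt * (-np.tanh(kappa * h_del)
                                + b * np.cos(2 * np.pi * i * dt))
    return h
```

Because the feedback term is bounded by 1 and opposes excursions of h, the solution stays bounded; varying kappa, tau and b then explores the stable/unstable regimes the abstract describes.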
NASA Astrophysics Data System (ADS)
Nursamsiah; Nugroho Sugianto, Denny; Suprijanto, Jusup; Munasik; Yulianto, Bambang
2018-02-01
The information on extreme wave height return levels is required for maritime planning and management. A recommended method for analyzing extreme waves is the Generalized Pareto Distribution (GPD). Seasonal variation is often considered in extreme wave models. This research aims to identify the best GPD model by considering the seasonal variation of extreme waves. Using the 95th percentile as the threshold for extreme significant wave height, seasonal and non-seasonal GPD models were fitted. The Kolmogorov-Smirnov test was applied to assess the goodness of fit of the GPD models. The return values from the seasonal and non-seasonal GPD models were compared using the definition of the return value as the criterion. The Kolmogorov-Smirnov test results show that the GPD fits the data very well for both the seasonal and non-seasonal models. The seasonal return value gives better information about the wave height characteristics.
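The threshold-exceedance fitting and Kolmogorov-Smirnov check described above can be sketched for a single (non-seasonal) series; the data here are synthetic stand-ins for significant wave heights, and the K-S p-value is optimistic because the GPD parameters are estimated from the same sample:

```python
import numpy as np
from scipy import stats

def fit_gpd_with_ks(hs, threshold_quantile=0.95):
    """Fit a GPD to exceedances of significant wave height over the
    chosen percentile threshold, then run a Kolmogorov-Smirnov
    goodness-of-fit test against the fitted distribution."""
    hs = np.asarray(hs, dtype=float)
    u = np.quantile(hs, threshold_quantile)
    exc = hs[hs > u] - u
    c, loc, scale = stats.genpareto.fit(exc, floc=0.0)
    ks = stats.kstest(exc, 'genpareto', args=(c, loc, scale))
    return u, (c, scale), ks.pvalue
```

A seasonal variant would simply apply the same function to each season's subset of the record and compare the resulting return values.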
Rheology and fluid mechanics of a hyper-concentrated biomass suspension
NASA Astrophysics Data System (ADS)
Botto, Lorenzo; Xu, Xiao
2013-11-01
The production of bioethanol from biomass material originating from energy crops requires mixing of highly concentrated suspensions, which are composed of millimetre-sized lignocellulosic fibers. In these applications, the solid concentration is typically extremely high. Owing to the large particle porosity, for a solid mass concentration slightly larger than 10%, the dispersed solid phase can fill the available space almost completely. To extract input parameters for simulations, we have carried out rheological measurements of a lignocellulosic suspension of Miscanthus, a fast-growing plant, for particle concentrations close to maximum random packing. We find that in this regime the rheometric curves exhibit features similar to those observed in model ``gravitational suspensions,'' including viscoplastic behaviour, strong shear-banding, non-continuum effects, and a marked influence of the particle weight. In the talk, these aspects will be examined in some detail, and differences between Miscanthus and corn stover, currently the most industrially relevant biomass substrate, briefly discussed. We will also comment on values of the Reynolds and Oldroyd numbers found in biofuel applications, and the flow patterns expected for these parameter values.
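Viscoplastic rheometric curves of the kind described above are commonly summarized with the Herschel-Bulkley model, tau = tau0 + K * gdot**n (yield stress plus power-law flow). A fitting sketch on synthetic data; the model choice and all numbers are illustrative, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gdot, tau0, K, n):
    """Shear stress tau [Pa] vs shear rate gdot [1/s]:
    yield stress tau0 plus power-law term K * gdot**n."""
    return tau0 + K * gdot ** n

# synthetic flow curve with 1% multiplicative measurement noise
gdot = np.logspace(-2, 2, 40)
tau_true = herschel_bulkley(gdot, 50.0, 8.0, 0.4)
rng = np.random.default_rng(1)
tau_meas = tau_true * (1 + 0.01 * rng.normal(size=gdot.size))

popt, pcov = curve_fit(herschel_bulkley, gdot, tau_meas,
                       p0=(10.0, 1.0, 0.5))
```

With low noise and a reasonable starting guess, the fit recovers the yield stress, consistency and flow index closely, which is the sense in which such measurements "extract input parameters for simulations".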
Possible future changes in extreme events over Northern Eurasia
NASA Astrophysics Data System (ADS)
Monier, Erwan; Sokolov, Andrei; Scott, Jeffery
2013-04-01
In this study, we investigate possible future climate change over Northern Eurasia and its impact on extreme events. Northern Eurasia is a major player in the global carbon budget because of boreal forests and peatlands. Circumpolar boreal forests alone contain more than five times the amount of carbon of temperate forests and almost double the amount of carbon of the world's tropical forests. Furthermore, severe permafrost degradation associated with climate change could result in peatlands releasing large amounts of carbon dioxide and methane. Meanwhile, changes in the frequency and magnitude of extreme events, such as extreme precipitation, heat waves or frost days are likely to have substantial impacts on Northern Eurasia ecosystems. For this reason, it is very important to quantify the possible climate change over Northern Eurasia under different emissions scenarios, while accounting for the uncertainty in the climate response and changes in extreme events. For several decades, the Massachusetts Institute of Technology (MIT) Joint Program on the Science and Policy of Global Change has been investigating uncertainty in climate change using the MIT Integrated Global System Model (IGSM) framework, an integrated assessment model that couples an earth system model of intermediate complexity (with a 2D zonal-mean atmosphere) to a human activity model. In this study, regional change is investigated using the MIT IGSM-CAM framework that links the IGSM to the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM). New modules were developed and implemented in CAM to allow climate parameters to be changed to match those of the IGSM. The simulations presented in this paper were carried out for two emission scenarios, a "business as usual" scenario and a 660 ppm of CO2-equivalent stabilization, which are similar to, respectively, the Representative Concentration Pathways RCP8.5 and RCP4.5 scenarios. 
Values of climate sensitivity and net aerosol forcing used in the simulations within the IGSM-CAM framework provide a good approximation for the median, and the lower and upper bound of 90% probability distribution of 21st century climate change. Five member ensembles were carried out for each choice of parameters using different initial conditions. With these simulations, we investigate the role of emissions scenarios (climate policies), the global climate response (climate sensitivity) and natural variability (initial conditions) on the uncertainty in future climate changes over Northern Eurasia. A particular emphasis is made on future changes in extreme events, including frost days, extreme summer temperature and extreme summer and winter precipitation.
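The extreme-event quantities emphasized above (frost days, extreme temperature, extreme precipitation) are typically computed from daily model output as simple climate indices; a sketch in the style of the standard ETCCDI definitions, applied per year of daily data:

```python
import numpy as np

def frost_days(tmin_daily):
    """FD: number of days with daily minimum temperature below 0 degC."""
    return int(np.sum(np.asarray(tmin_daily) < 0.0))

def txx(tmax_daily):
    """TXx: maximum of the daily maximum temperature [degC]."""
    return float(np.max(tmax_daily))

def rx1day(precip_daily):
    """Rx1day: maximum one-day precipitation total [mm]."""
    return float(np.max(precip_daily))
```

Applied to each ensemble member and each year, such indices give the distributions whose changes across scenarios and climate sensitivities the study compares.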
Soeiro, Bruno T; Boen, Thaís R; Wagner, Roger; Lima-Pallone, Juliana A
2009-01-01
The aim of the present work was to determine parameters of the corn and wheat flour matrix, such as protein, lipid, moisture, ash and carbohydrates, folic acid and iron contents. Three principal components explained 91% of the total variance. Wheat flours were characterized by high protein and moisture content. On the other hand, the corn flours had the greater carbohydrates, lipids and folic acid levels. The concentrations of folic acid were lower than the issued value for wheat flours. Nevertheless, corn flours presented extremely high values. The iron concentration was higher than that recommended in Brazilian legislation. Poor homogenization of folic acid and iron was observed in enriched flours. This study could be useful to help the governmental authorities in the enriched food programs evaluation.
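The "three principal components explained 91% of the total variance" statement corresponds to the explained-variance ratios of a PCA on the standardized composition matrix; a generic sketch via the SVD (the data here are random placeholders, not the flour measurements):

```python
import numpy as np

def pca_explained_variance(X):
    """Explained-variance ratios of all principal components, from the
    SVD of the column-standardized data matrix X (samples x variables)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    _, s, _ = np.linalg.svd(Xs, full_matrices=False)
    return s ** 2 / np.sum(s ** 2)
```

Summing the first k ratios gives the cumulative variance explained by k components, the figure quoted in the abstract.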
Calculating broad neutron resonances in a cut-off Woods-Saxon potential
NASA Astrophysics Data System (ADS)
Baran, Á.; Noszály, Cs.; Salamon, P.; Vertse, T.
2015-07-01
In a cut-off Woods-Saxon (CWS) potential with realistic depth, S-matrix poles lying far from the imaginary wave-number axis form a sequence in which the distances between consecutive resonances are inversely proportional to the cut-off radius, an unphysical parameter. Other poles lying closer to the imaginary wave-number axis may have trajectories with irregular shapes as the depth of the potential increases. Poles lying close together repel each other, and their repulsion is responsible for the changes in direction of the corresponding trajectories. The repulsion may cause certain resonances to become antibound, and later resonances again, when they collide on the imaginary axis. The interaction is extremely sensitive to the cut-off radius value, which is an apparent handicap of the CWS potential.
A Simple Model of Global Aerosol Indirect Effects
NASA Technical Reports Server (NTRS)
Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter
2013-01-01
Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of AIEs that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to preindustrial cloud condensation nuclei concentration, preindustrial accumulation mode radius, width of the accumulation mode, size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of present-day AIEs ranging from -5 W/sq m to -0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to understanding of the dependence of AIE uncertainty on uncertainty in parameter values.
NASA Astrophysics Data System (ADS)
Li, Xingmin; Lu, Ling; Yang, Wenfeng; Cheng, Guodong
2012-07-01
Estimating surface evapotranspiration is extremely important for the study of water resources in arid regions. Data from the National Oceanic and Atmospheric Administration's Advanced Very High Resolution Radiometer (NOAA/AVHRR), meteorological observations and data obtained from the Watershed Allied Telemetry Experimental Research (WATER) project in 2008 are applied to the evaporative fraction model to estimate evapotranspiration over the Heihe River Basin. The calculation method for the parameters used in the model and the evapotranspiration estimation results are analyzed and evaluated. The results observed within the oasis and the banks of the river suggest that more evapotranspiration occurs in the inland river basin in the arid region from May to September. Evapotranspiration values for the oasis, where the land surface types and vegetations are highly variable, are relatively small and heterogeneous. In the Gobi desert and other deserts with little vegetation, evapotranspiration remains at its lowest level during this period. These results reinforce the conclusion that rational utilization of water resources in the oasis is essential to manage the water resources in the inland river basin. In the remote sensing-based evapotranspiration model, the accuracy of the parameter estimate directly affects the accuracy of the evapotranspiration results; more accurate parameter values yield more precise values for evapotranspiration. However, when using the evaporative fraction to estimate regional evapotranspiration, better calculation results can be achieved only if evaporative fraction is constant in the daytime.
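The final step of an evaporative-fraction model, converting the available energy and a (daytime-constant) evaporative fraction into an evapotranspiration depth, is a one-line energy-balance calculation; a sketch, with the crude simplification that the daily mean fluxes apply to the whole day:

```python
def daily_et_mm(ef, rn, g, lambda_v=2.45e6):
    """Daily evapotranspiration [mm/day] from the evaporative fraction.
    ef: evaporative fraction [-] (assumed constant, as the abstract
        requires for good daily estimates)
    rn, g: daily mean net radiation and soil heat flux [W m-2]
    lambda_v: latent heat of vaporization [J kg-1]"""
    le = ef * (rn - g)                 # latent heat flux, W m-2
    return le * 86400.0 / lambda_v     # kg m-2 per day == mm/day
```

For example, EF = 0.5 with Rn - G = 200 W m-2 gives LE = 100 W m-2, i.e. about 3.5 mm of water per day (1 kg of water per m2 is 1 mm of depth).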
ZERODUR: bending strength data for etched surfaces
NASA Astrophysics Data System (ADS)
Hartmann, Peter; Leys, Antoine; Carré, Antoine; Kerz, Franca; Westerhoff, Thomas
2014-07-01
In a continuous effort since 2007, a considerable amount of new data and information has been gathered on the bending strength of the extremely low thermal expansion glass ceramic ZERODUR®. By fitting a three-parameter Weibull distribution to the data, it could be shown that for homogeneously ground surfaces minimum breakage stresses exist that lie much higher than the previously applied design limits. In order to achieve even higher allowable stress values, diamond-grain-ground surfaces have been acid etched, a procedure widely accepted as a strength-increasing measure. If surfaces are etched by removing layers of thickness comparable to the maximum micro-crack depth of the preceding grinding process, they also show statistical distributions compatible with a three-parameter Weibull distribution. SCHOTT has performed additional measurement series with etch solutions of variable composition, testing the applicability of this distribution and the possibility of achieving a further increase of the minimum breakage stress. For long-term loading applications, strength change with time and environmental media is important. The parameter needed for prediction calculations, which combines these influences, is the stress corrosion constant. Results from the past differ significantly from each other. On the basis of new investigations, better information will be provided for choosing the best value for the given application conditions.
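As an illustration of the statistical machinery involved, here is a minimal sketch (not SCHOTT's actual analysis) of fitting a three-parameter Weibull distribution to synthetic strength data with scipy; the location parameter plays the role of the minimum breakage stress.

```python
import numpy as np
from scipy.stats import weibull_min

# Synthetic bending-strength sample (MPa); values are hypothetical,
# not ZERODUR measurements. The location parameter acts as the
# minimum breakage stress below which no failures occur.
data = weibull_min.rvs(2.5, loc=50.0, scale=30.0, size=500, random_state=42)

# Fit all three parameters (shape, location, scale) by maximum
# likelihood, with rough starting values to help the optimiser.
shape, loc, scale = weibull_min.fit(data, 2.0, loc=45.0, scale=25.0)

# The fitted location is an estimate of the minimum breakage stress.
print(f"estimated minimum breakage stress: {loc:.1f} MPa")
```

In practice the fit would of course be applied to measured fracture stresses, and the fitted location compared against design limits.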
Quintela-del-Río, Alejandro; Francisco-Fernández, Mario
2011-02-01
The study of extreme values and prediction of ozone data is an important topic of research when dealing with environmental problems. Classical extreme value theory is usually used in air-pollution studies. It consists of fitting a parametric generalised extreme value (GEV) distribution to a data set of extreme values, and using the estimated distribution to compute return levels and other quantities of interest. Here, we propose to estimate these values using nonparametric functional data methods. Functional data analysis is a relatively new statistical methodology that generally deals with data consisting of curves or multi-dimensional variables. In this paper, we use this technique, jointly with nonparametric curve estimation, to provide alternatives to the usual parametric statistical tools. The nonparametric estimators are applied to real samples of maximum ozone values obtained from several monitoring stations belonging to the Automatic Urban and Rural Network (AURN) in the UK. The results show that the nonparametric estimators work satisfactorily, outperforming classical parametric estimators. Functional data analysis is also used to predict stratospheric ozone concentrations. We show an application using the data set of mean monthly ozone concentrations in Arosa, Switzerland, and the results are compared with those obtained by classical time series (ARIMA) analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
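For contrast with the nonparametric approach, the classical parametric recipe the abstract refers to (GEV fit plus return levels) can be sketched in a few lines; the data here are synthetic stand-ins for a station's annual ozone maxima, not AURN records.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic annual-maximum ozone values (hypothetical units), standing
# in for one monitoring station's observational record.
annual_max = genextreme.rvs(c=-0.1, loc=80.0, scale=10.0, size=60,
                            random_state=0)

# Fit the GEV by maximum likelihood (scipy's c is minus the usual
# shape parameter xi).
c, loc, scale = genextreme.fit(annual_max)

# m-year return level: the level exceeded on average once every m
# years, i.e. the (1 - 1/m) quantile of the fitted GEV.
m = 10
return_level = genextreme.ppf(1 - 1 / m, c, loc, scale)
```

The nonparametric estimators proposed in the paper replace this parametric quantile with a kernel-based one, avoiding the GEV assumption.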
Exploring the free-energy landscape of a short peptide using an average force
NASA Astrophysics Data System (ADS)
Chipot, Christophe; Hénin, Jérôme
2005-12-01
The reversible folding of deca-alanine is chosen as a test case for characterizing a method that uses an adaptive biasing force (ABF) to escape from the minima and overcome the barriers of the free-energy landscape. This approach relies on the continuous estimation of a biasing force that yields a Hamiltonian in which no average force is exerted along the ordering parameter ξ. Optimizing the parameters that control how the ABF is applied, the method is shown to be extremely effective when a nonequivocal ordering parameter can be defined to explore the folding pathway of the peptide. Starting from a β-turn motif and restraining ξ to a region of the conformational space that extends from the α-helical state to an ensemble of extended structures, the ABF scheme is successful in folding the peptide chain into a compact α helix. Sampling of this conformation is, however, marginal when the range of ξ values embraces arrangements of greater compactness, hence demonstrating the inherent limitations of free-energy methods when ambiguous ordering parameters are utilized.
Tanchev, S; Pandurski, F; Georgiev, A; Gesheva, Iu; Platikanov, V; Dinov, P
2004-01-01
We report our clinical opinion on recombinant activated factor VII (NovoSeven, Novo Nordisk, Copenhagen, Denmark) administration in gynecology patients with massive haemorrhage. Three women with gynecological diseases and severe bleeding received NovoSeven as an intravenous bolus. Blood loss and laboratory changes in hematology and haemostasis parameters were monitored. The bleeding ceased in all cases. A decrease in the values of Hb, Er and PTT was noted. The use of recombinant factor VIIa in gynecology patients with severe bleeding is effective and safe enough, and could be an alternative to extreme surgical procedures.
Endogenous population growth may imply chaos.
Prskawetz, A; Feichtinger, G
1995-01-01
The authors consider a discrete-time neoclassical growth model with an endogenous rate of population growth. The resulting one-dimensional map for the capital intensity has a tilted z-shape. Using the theory of nonlinear dynamical systems, they obtain numerical results on the qualitative behavior of time paths for changing parameter values. Besides stable and periodic solutions, erratic time paths may result. In particular, myopic and far-sighted economies--assumed to be characterized by low and high savings rates, respectively--are characterized by stable per capita capital stocks, while solutions with chaotic windows exist between these two extremes.
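The qualitative behavior described, stable orbits for some parameter values and erratic ones for others, can be reproduced with any unimodal one-dimensional map. The sketch below uses the logistic map as a generic stand-in for the paper's tilted z-shaped map for capital intensity; the parameter values are illustrative only.

```python
import numpy as np

def iterate_map(f, x0, n_transient=500, n_keep=100):
    """Iterate a one-dimensional map and return the post-transient orbit."""
    x = x0
    for _ in range(n_transient):
        x = f(x)
    orbit = []
    for _ in range(n_keep):
        x = f(x)
        orbit.append(x)
    return np.array(orbit)

# Logistic map x -> r*x*(1-x) as a stand-in for the capital-intensity map:
# r = 2.8 gives a stable fixed point, r = 3.9 gives an erratic orbit.
stable = iterate_map(lambda x: 2.8 * x * (1 - x), 0.3)
chaotic = iterate_map(lambda x: 3.9 * x * (1 - x), 0.3)
```

Scanning such orbits over a range of the control parameter is exactly how the "chaotic windows" between the two stable extremes are found numerically.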
Chaos for cardiac arrhythmias through a one-dimensional modulation equation for alternans
Dai, Shu; Schaeffer, David G.
2010-01-01
Instabilities in cardiac dynamics have been widely investigated in recent years. One facet of this work has studied chaotic behavior, especially possible correlations with fatal arrhythmias. Previously, chaotic behavior was observed in various models, specifically in the breakup of spiral and scroll waves. In this paper we study cardiac dynamics and find spatiotemporal chaotic behavior through the Echebarria–Karma modulation equation for alternans in one dimension. Although extreme parameter values are required to produce chaos in this model, it seems mathematically significant that chaos may occur by a mechanism different from previous observations. PMID:20590327
Measuring the absolute carrier-envelope phase of many-cycle laser fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tzallas, P.; Skantzakis, E.; Charalambidis, D.
2010-12-15
The carrier-envelope phase (CEP) of high-peak-power, many-cycle laser fields becomes a crucial parameter when such fields are used, in conjunction with polarization gating techniques, in isolated attosecond (asec) pulse generation. However, its measurement has not been achieved so far. We demonstrate a physical process sensitive to the CEP value of such fields and describe a method for its online shot-to-shot monitoring. This work paves the way for the exploitation of energetic isolated asec pulses in studies of nonlinear extreme ultraviolet (XUV) processes and XUV-pump-XUV-probe experiments with asec resolutions.
Investigating NARCCAP Precipitation Extremes via Bivariate Extreme Value Theory (Invited)
NASA Astrophysics Data System (ADS)
Weller, G. B.; Cooley, D. S.; Sain, S. R.; Bukovsky, M. S.; Mearns, L. O.
2013-12-01
We introduce methodology from statistical extreme value theory to examine the ability of reanalysis-driven regional climate models to simulate past daily precipitation extremes. Going beyond a comparison of summary statistics such as 20-year return values, we study whether the most extreme precipitation events produced by climate model simulations exhibit correspondence to the most extreme events seen in observational records. The extent of this correspondence is formulated via the statistical concept of tail dependence. We examine several case studies of extreme precipitation events simulated by the six models of the North American Regional Climate Change Assessment Program (NARCCAP) driven by NCEP reanalysis. It is found that the NARCCAP models generally reproduce daily winter precipitation extremes along the Pacific coast quite well; in contrast, simulation of past daily summer precipitation extremes in a central US region is poor. Some differences in the strength of extremal correspondence are seen in the central region between models which employ spectral nudging and those which do not. We demonstrate how these techniques may be used to draw a link between extreme precipitation events and large-scale atmospheric drivers, as well as to downscale extreme precipitation simulated by a future run of a regional climate model. Specifically, we examine potential future changes in the nature of extreme precipitation along the Pacific coast produced by the pineapple express (PE) phenomenon. A link between extreme precipitation events and a "PE Index" derived from North Pacific sea-surface pressure fields is found. This link is used to study PE-influenced extreme precipitation produced by a future-scenario climate model run.
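The tail-dependence idea can be made concrete with a simple empirical estimator of the measure chi(q) = P(Y exceeds its q-quantile | X exceeds its q-quantile). This is a generic illustration on synthetic data, not the NARCCAP analysis.

```python
import numpy as np

def chi_hat(x, y, q=0.95):
    """Empirical tail-dependence estimate:
    chi(q) = P(Y > F_Y^{-1}(q) | X > F_X^{-1}(q))."""
    x, y = np.asarray(x), np.asarray(y)
    exceed_x = x > np.quantile(x, q)
    exceed_y = y > np.quantile(y, q)
    return (exceed_x & exceed_y).sum() / exceed_x.sum()

rng = np.random.default_rng(1)
n = 20000
# Fully dependent pair vs. independent pair, standing in for observed
# and model-simulated daily precipitation at matched locations.
z = rng.exponential(size=n)
chi_dependent = chi_hat(z, z)          # perfect extremal correspondence
chi_independent = chi_hat(rng.exponential(size=n),
                          rng.exponential(size=n))
```

Values of chi near 1 indicate that the model's largest events co-occur with the observed ones; for independent series chi(q) collapses toward 1 - q.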
NASA Astrophysics Data System (ADS)
Torrungrueng, Danai; Johnson, Joel T.; Chou, Hsi-Tseng
2002-03-01
The novel spectral acceleration (NSA) algorithm has been shown to produce an O(Ntot) efficient iterative method of moments for the computation of radiation/scattering from both one-dimensional (1-D) and two-dimensional large-scale quasi-planar structures, where Ntot is the total number of unknowns to be solved. This method accelerates the matrix-vector multiplication in an iterative method of moments solution and divides contributions between points into "strong" (exact matrix elements) and "weak" (NSA algorithm) regions. The NSA method is based on a spectral representation of the electromagnetic Green's function and appropriate contour deformation, resulting in a fast multipole-like formulation in which contributions from large numbers of points to a single point are evaluated simultaneously. In the standard NSA algorithm the NSA parameters are derived on the basis of the assumption that the outermost possible saddle point, φs,max, along the real axis in the complex angular domain is small. For given height variations of quasi-planar structures, this assumption can be satisfied by adjusting the size of the strong region Ls. However, for quasi-planar structures with large height variations, the adjusted size of the strong region is typically large, resulting in significant increases in computational time for the computation of the strong-region contribution and degrading the overall efficiency of the NSA algorithm. In addition, for the case of extremely large scale structures, studies based on the physical optics approximation and a flat-surface assumption show that the given NSA parameters in the standard NSA algorithm may yield inaccurate results. In this paper, analytical formulas associated with the NSA parameters for an arbitrary value of φs,max are presented, resulting in more flexibility in selecting Ls to compromise between the computation of the contributions of the strong and weak regions.
In addition, a "multilevel" algorithm, decomposing 1-D extremely large scale quasi-planar structures into more than one weak region and appropriately choosing the NSA parameters for each weak region, is incorporated into the original NSA method to improve its accuracy.
Barnikol, Wolfgang K R; Pötzschke, Harald
2012-01-01
The basis for the new procedure is the simultaneous transcutaneous measurement of the peri-ulceral oxygen partial pressure (tcPO2), using a minimum of 4 electrodes placed as close to the wound margin as possible; additionally, as a challenge, the patient inhales pure oxygen for approximately 15 minutes. In order to evaluate the measurement data and to characterise the wounds, two new oxygen parameters were defined: (1) the oxygen characteristic (K-PO2) and (2) the oxygen inhomogeneity (I-PO2) of a chronic wound. The first of these is the arithmetic mean of the two lowest tcPO2 measurement values, and the second is the variation coefficient of the four measurement values. Using the K-PO2 parameter, a grading of wound hypoxia can be obtained. To begin with, the physiologically regulated (and still compensated) hypoxia, with K-PO2 values between 35 and 40 mmHg, is distinguished from the pathological decompensated hypoxia, with K-PO2 values between 0 and 35 mmHg; the first of these still stimulates self-healing (within the limits of the oxygen balance). The decompensated hypoxia can be (arbitrarily) divided into "simple" hypoxia (Grade I), intense hypoxia (Grade II) and extreme hypoxia (Grade III), with the possibility of intermediate grades (I/II and II/III). Measurements were carried out using the new procedure on the skin of the right inner ankle of 21 healthy volunteers of various ages, and in 17 CVI (chronic venous insufficiency) wounds. Sixteen of the 17 CVI wounds (i.e., 94%) were found to be pathologically hypoxic, a state which was not found in any of the healthy volunteers. The oxygen inhomogeneity (I-PO2) of the individual chronic wounds increased exponentially as a function of the hypoxia grading (K-PO2), with a 10-fold increase with extreme hypoxia, in contrast to a constant value of approximately 14% in the healthy volunteers.
This pronounced oxygen inhomogeneity explains inhomogeneous wound healing, resulting in so-called mosaic wounds. The hypoxia grades found across the chronic wounds were seen to be evenly distributed, with values ranging from 0 to 40 mmHg, and were therefore extremely inhomogeneous. In terms of oxygenation, chronic wounds are therefore inhomogeneous in two respects: (1) within the wound itself (intra-individual wound inhomogeneity) and (2) between different wounds (inter-individual wound inhomogeneity). Due to the extreme oxygen inhomogeneity, single measurements are not diagnostically useful. In healthy individuals the oxygen inhalation challenge (see above) results in synchronised tcPO2 oscillations at minute rhythms, which are not seen in CVI wounds. These oscillations can be interpreted as a sign of a functioning arterial vasomotor system. The new procedure is suitable for the routine characterisation of chronic wounds in terms of their oxygen status and, correspondingly, their metabolically determining (and limiting) potential for healing and regeneration. The oxygen characteristic K-PO2 can furthermore be used as a warning of impending ulceration, since the oxygen provision worsens over time prior to the demise of the ulcerated tissue, thus making controlled prophylaxis possible.
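The two wound parameters are defined simply enough to compute directly from the four electrode readings. The sketch below follows the definitions given above (mean of the two lowest values; coefficient of variation, here the population standard deviation over the mean); the readings are hypothetical.

```python
import numpy as np

def wound_oxygen_parameters(tcpo2):
    """Compute the two wound-oxygen parameters from >= 4 peri-ulceral
    tcPO2 readings (mmHg): K-PO2 is the arithmetic mean of the two
    lowest values; I-PO2 is the variation coefficient in percent
    (population standard deviation divided by the mean)."""
    v = np.sort(np.asarray(tcpo2, dtype=float))
    k_po2 = v[:2].mean()
    i_po2 = 100.0 * v.std() / v.mean()
    return k_po2, i_po2

# Hypothetical four-electrode readings for one wound.
k_po2, i_po2 = wound_oxygen_parameters([12.0, 18.0, 40.0, 55.0])
```

For these illustrative readings K-PO2 falls in the decompensated range (below 35 mmHg) with a large inhomogeneity, consistent with the pattern the authors report for CVI wounds.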
Climate and its change over the Tibetan Plateau and its Surroundings in 1963-2015
NASA Astrophysics Data System (ADS)
Ding, J.; Cuo, L.
2017-12-01
The Tibetan Plateau and its surroundings (TPS, 23°-43°N, 73°-106°E) lie in the southwest of China and include the Tibet Autonomous Region, Qinghai Province, southern Xinjiang Uygur Autonomous Region, part of Gansu Province, western Sichuan Province, and northern Yunnan Province. The region is of strategic importance for water resources because it is the headwater region of ten large rivers that support more than 1.6 billion people. In this study, we use daily maximum and minimum temperature, precipitation and wind speed for 1963-2015, obtained from the Climate Data Center of the China Meteorological Administration and the Qinghai Meteorological Bureau, to investigate extreme climate conditions and their changes over the TPS. The extreme events are selected based on annual extreme values and percentiles. The annual extreme value approach produces one value each year for all variables, which enables us to examine the magnitude of extreme events, whereas the percentile approach selects extreme values by setting the 95th percentile as the threshold for maximum temperature, precipitation and wind speed, and the 5th percentile for minimum temperature. The percentile approach enables us to investigate not only the magnitude but also the frequency of the extreme events. In addition, Mann-Kendall trend and abrupt-change (mutation) analyses were applied to analyze the changes in mean and extreme conditions. The results will help us understand more about the extreme events during the past five decades on the TPS and will provide valuable information for the upcoming IPCC reports on climate change.
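The two event-selection approaches described above can be sketched side by side; the data here are synthetic stand-ins for one station's record, not CMA observations.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical daily maximum temperatures (deg C) for one station over
# 53 years (365-day years for simplicity).
n_years = 53
tmax = rng.normal(10.0, 8.0, size=n_years * 365)

# Annual-extreme approach: one value per year, capturing magnitude only.
annual_max = tmax.reshape(n_years, 365).max(axis=1)

# Percentile approach: threshold at the 95th percentile of all days;
# the exceedances give both magnitude and frequency information.
thr = np.percentile(tmax, 95)
exceedances = tmax[tmax > thr]
freq_per_year = exceedances.size / n_years
```

By construction roughly 5% of days exceed the percentile threshold, so frequency changes over time, not the long-run count, are what the trend tests then examine.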
Diverse strategies for ion regulation in fish collected from the ion-poor, acidic Rio Negro.
Gonzalez, R J; Wilson, R W; Wood, C M; Patrick, M L; Val, A L
2002-01-01
We measured unidirectional ion fluxes of fish collected directly from the Rio Negro, an extremely dilute, acidic blackwater tributary of the Amazon. Kinetic analysis of Na⁺ uptake revealed that most species had fairly similar Jmax values, ranging from 1,150 to 1,750 nmol g⁻¹ h⁻¹, while Km values varied to a greater extent. Three species had Km values <33 micromol L⁻¹, while the rest had Km values ≥110 micromol L⁻¹. Because of the extremely low Na⁺ concentration of Rio Negro water, the differences in Km values yield very different rates of Na⁺ uptake. However, regardless of the rate of Na⁺ uptake, measurements of Na⁺ efflux show that Na⁺ balance was maintained at very low Na⁺ levels (<50 micromol L⁻¹) by most species. Unlike other species with high Km values, the catfish Corydoras julii maintained high rates of Na⁺ uptake in dilute waters by having a Jmax value at least 100% higher than the other species. Corydoras julii also demonstrated the ability to modulate kinetic parameters in response to changes in water chemistry. After 2 wk in 2 mmol L⁻¹ NaCl, Jmax fell >50%, and Km dropped about 70%. The unusual acclimatory drop in Km may represent a mechanism to ensure high rates of Na⁺ uptake on return to dilute water. As well as being tolerant of extremely dilute waters, Rio Negro fish generally were fairly tolerant of low pH. Still, there were significant differences in sensitivity to pH among the species on the basis of the degree of stimulation of Na⁺ efflux at low pH. There were also differences in the sensitivity of Na⁺ uptake to low pH, and two species maintained significant rates of uptake even at pH 3.5. When fish were exposed to low pH in Rio Negro water instead of deionized water (with the same concentrations of major ions), the effects of low pH were reduced.
This suggests that high concentrations of dissolved organic molecules in the water, which give it its dark tea color, may interact with the branchial epithelium in some protective manner.
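The kinetic contrast described above follows directly from the Michaelis-Menten form J = Jmax·[Na⁺]/(Km + [Na⁺]). A quick numerical sketch (parameter values illustrative, chosen within the ranges quoted in the abstract) shows why Km dominates at Rio Negro concentrations:

```python
def na_uptake(na_conc, j_max, k_m):
    """Michaelis-Menten uptake rate (nmol g^-1 h^-1) at a given external
    Na+ concentration (umol L^-1): J = Jmax * [Na] / (Km + [Na])."""
    return j_max * na_conc / (k_m + na_conc)

# Rio Negro-like water: ~25 umol L^-1 Na+. Same Jmax, different Km.
low_km_rate = na_uptake(25.0, j_max=1500.0, k_m=30.0)    # high-affinity
high_km_rate = na_uptake(25.0, j_max=1500.0, k_m=120.0)  # low-affinity
```

At 25 µmol L⁻¹ the high-affinity species takes up Na⁺ more than twice as fast, which is why a low-affinity species like Corydoras julii must compensate with a much larger Jmax.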
Epidemiologic Evaluation of Measurement Data in the Presence of Detection Limits
Lubin, Jay H.; Colt, Joanne S.; Camann, David; Davis, Scott; Cerhan, James R.; Severson, Richard K.; Bernstein, Leslie; Hartge, Patricia
2004-01-01
Quantitative measurements of environmental factors greatly improve the quality of epidemiologic studies but can pose challenges because of the presence of upper or lower detection limits or interfering compounds, which do not allow for precise measured values. We consider the regression of an environmental measurement (dependent variable) on several covariates (independent variables). Various strategies are commonly employed to impute values for interval-measured data, including assignment of one-half the detection limit to nondetected values or of “fill-in” values randomly selected from an appropriate distribution. On the basis of a limited simulation study, we found that the former approach can be biased unless the percentage of measurements below detection limits is small (5–10%). The fill-in approach generally produces unbiased parameter estimates but may produce biased variance estimates and thereby distort inference when 30% or more of the data are below detection limits. Truncated data methods (e.g., Tobit regression) and multiple imputation offer two unbiased approaches for analyzing measurement data with detection limits. If interest resides solely on regression parameters, then Tobit regression can be used. If individualized values for measurements below detection limits are needed for additional analysis, such as relative risk regression or graphical display, then multiple imputation produces unbiased estimates and nominal confidence intervals unless the proportion of missing data is extreme. We illustrate various approaches using measurements of pesticide residues in carpet dust in control subjects from a case–control study of non-Hodgkin lymphoma. PMID:15579415
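The bias comparison the authors describe can be illustrated with a tiny simulation. This is a schematic sketch, not the authors' simulation study: the "fill-in" step below cheats by resampling the true below-limit values, purely to stand in for draws from an appropriately fitted distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical lognormal exposure measurements.
true = rng.lognormal(mean=0.0, sigma=1.0, size=50000)
dl = np.quantile(true, 0.30)  # detection limit censoring 30% of values

# Substitution strategy: nondetects set to one-half the detection limit.
substituted = np.where(true < dl, dl / 2, true)

# Fill-in strategy: nondetects drawn from the distribution below the DL
# (here resampled from the censored values themselves, for illustration).
below = true[true < dl]
filled = np.where(true < dl, rng.choice(below, size=true.size), true)

bias_substitution = substituted.mean() - true.mean()
bias_fill_in = filled.mean() - true.mean()
```

With 30% nondetects the half-DL substitution shifts the estimated mean noticeably, while the fill-in estimate stays close to the truth, matching the qualitative conclusion of the abstract.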
Nattee, Cholwich; Khamsemanan, Nirattaya; Lawtrakul, Luckhana; Toochinda, Pisanu; Hannongbua, Supa
2017-01-01
Malaria is still one of the most serious diseases in tropical regions. This is due in part to the high resistance of the parasites, Plasmodium, the cause of the disease, against available drugs. New potent compounds with high clinical utility are urgently needed. In this work, we created a novel model using a regression tree to study structure-activity relationships and predict the inhibition constant, Ki, of three different antimalarial analogues (Trimethoprim, Pyrimethamine, and Cycloguanil) based on their molecular descriptors. To the best of our knowledge, this work is the first attempt to study the structure-activity relationships of all three analogues combined. The most relevant descriptors and appropriate parameters of the regression tree are harvested using extremely randomized trees. These descriptors are water-accessible surface area, log of the aqueous solubility, total hydrophobic van der Waals surface area, and molecular refractivity. Out of all possible combinations of these selected parameters and descriptors, the tree with the strongest coefficient of determination is selected as our prediction model. Predicted Ki values from the proposed model show a strong coefficient of determination, R² = 0.996, with experimental Ki values. From the structure of the regression tree, compounds with a high accessible surface area of all hydrophobic atoms (ASA_H) and low aqueous solubility (log S) generally possess low Ki values. Our prediction model can also be utilized as a screening test for new antimalarial drug compounds, which may reduce the time and expense of new drug development. New compounds with high predicted Ki should be excluded from further drug development. It is also our inference that a threshold of ASA_H greater than 575.80 and log S less than or equal to -4.36 is a sufficient condition for a new compound to possess a low Ki. Copyright © 2016 Elsevier Inc. All rights reserved.
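A regression tree chooses descriptor thresholds by minimising the squared error within the resulting child nodes. The toy sketch below shows a single CART-style split recovering a step placed, for illustration only, at the ASA_H cutoff quoted above; it is a hypothetical reconstruction, not the authors' model.

```python
import numpy as np

def best_split(x, y):
    """Find the threshold on one descriptor that minimises the within-node
    sum of squared errors (the CART regression criterion)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_thr, best_sse = None, np.inf
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + \
              ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_thr, best_sse = (xs[i - 1] + xs[i]) / 2, sse
    return best_thr, best_sse

# Hypothetical descriptor (standing in for ASA_H) with a step-like
# relation to log Ki around the quoted 575.80 cutoff.
rng = np.random.default_rng(5)
asa_h = rng.uniform(400.0, 700.0, size=200)
log_ki = np.where(asa_h > 575.8, -2.0, 1.0) + rng.normal(0, 0.1, size=200)

threshold, _ = best_split(asa_h, log_ki)
```

Extremely randomized trees, used in the paper for descriptor selection, repeat this idea with randomized thresholds across an ensemble.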
NASA Astrophysics Data System (ADS)
Fairbairn, Malcolm; Markkanen, Tommi; Rodriguez Roman, David
2018-04-01
We consider the effect of the Gibbons-Hawking radiation on the inflaton in the situation where it is coupled to a large number of spectator fields. We argue that this will lead to two important effects: a thermal contribution to the potential and a gradual change in parameters in the Lagrangian, which results from thermodynamic and energy conservation arguments. We present a scenario of hilltop inflation where the field starts trapped at the origin before slowly experiencing a phase transition, during which the field extremely slowly moves towards its zero-temperature expectation value. We show that it is possible to obtain enough e-folds of expansion as well as the correct spectrum of perturbations without hugely fine-tuned parameters in the potential (albeit with many spectator fields). We also comment on how initial conditions for inflation can arise naturally in this situation.
Bastien, Olivier; Maréchal, Eric
2008-08-07
Confidence in pairwise alignments of biological sequences, obtained by various methods such as Blast or Smith-Waterman, is critical for automatic analyses of genomic data. Two statistical models have been proposed. In the asymptotic limit of long sequences, the Karlin-Altschul model is based on the computation of a P-value, assuming that the number of high-scoring matching regions above a threshold is Poisson distributed. Alternatively, the Lipman-Pearson model is based on the computation of a Z-value from a random score distribution obtained by a Monte-Carlo simulation. Z-values allow the deduction of an upper bound of the P-value (1/Z-value²) following the TULIP theorem. Simulated Z-value distributions are known to fit a Gumbel law. This remarkable property had not been demonstrated and had no obvious biological support. We built a model of evolution of sequences based on aging, as meant in Reliability Theory, using the fact that the amount of information shared between an initial sequence and the sequences in its lineage (i.e., mutual information in Information Theory) is a decreasing function of time. This quantity is simply measured by a sequence alignment score. In systems aging, the failure rate is related to the system's longevity. The system can be a machine with structured components, or a living entity or population. "Reliability" refers to the ability to operate properly according to a standard. Here, the "reliability" of a sequence refers to the ability to conserve a sufficient functional level at the folded and maturated protein level (positive selection pressure). Homologous sequences were considered as systems 1) having a high redundancy of information, reflected by the magnitude of their alignment scores, and 2) whose components are the amino acids, which can independently be damaged by random DNA mutations.
From these assumptions, we deduced that the information shared at each amino acid position evolves at a constant rate, corresponding to the information hazard rate, and that pairwise sequence alignment scores should follow a Gumbel distribution, whose parameters find some theoretical rationale. In particular, one parameter corresponds to the information hazard rate. The extreme value distribution of alignment scores, assessed from high-scoring segment pairs following the Karlin-Altschul model, can also be deduced from the Reliability Theory applied to molecular sequences. It reflects the redundancy of information between homologous sequences, under functional conservative pressure. This model also provides a link between concepts of biological sequence analysis and of systems biology.
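The Lipman-Pearson Z-value and the TULIP bound mentioned above are simple to state numerically. The sketch below uses synthetic Gumbel-distributed "shuffled-sequence" scores, standing in for an actual Monte-Carlo alignment run.

```python
import numpy as np

def z_value(score, random_scores):
    """Lipman-Pearson Z-value: the number of standard deviations the true
    alignment score lies above the Monte-Carlo random-score distribution."""
    random_scores = np.asarray(random_scores)
    return (score - random_scores.mean()) / random_scores.std()

rng = np.random.default_rng(2)
# Hypothetical scores of alignments against shuffled sequences; the
# Gumbel form mirrors the distribution discussed in the abstract.
random_scores = rng.gumbel(loc=30.0, scale=5.0, size=1000)

z = z_value(80.0, random_scores)
# TULIP theorem: the P-value is bounded above by 1/Z^2.
p_upper_bound = 1.0 / z**2
```

A Z-value above 7, as here, bounds the P-value below about 0.02 without any distributional assumption beyond finite variance.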
NASA Astrophysics Data System (ADS)
Zhang, Yin; Xia, Jun; She, Dunxian
2018-01-01
In recent decades, extreme precipitation events have become a research hotspot worldwide. Based on 12 extreme precipitation indices, the spatiotemporal variation and statistical characteristics of precipitation extremes in the middle reaches of the Yellow River Basin (MRYRB) during 1960-2013 were investigated. The results showed that the values of most extreme precipitation indices (except consecutive dry days (CDD)) increased from the northwest to the southeast of the MRYRB, reflecting that the southeast was the wettest region in the study area. Temporally, the precipitation extremes presented a drying trend with less frequent precipitation events. The generalized extreme value (GEV) distribution was selected to fit the time series of all indices, and the quantile values under the 50-year return period showed a spatial pattern similar to the corresponding extreme precipitation indices during 1960-2013, indicating a higher risk of extreme precipitation in the southeast of the MRYRB. Furthermore, the changes in the probability distribution functions of the indices between the periods 1960-1986 and 1987-2013 revealed a drying tendency in our study area. Both the El Niño-Southern Oscillation (ENSO) and the Pacific Decadal Oscillation (PDO) were shown to have a strong influence on precipitation extremes in the MRYRB. The results of this study help characterize how local precipitation extremes change, which will aid in preventing the natural hazards they cause.
Extreme value analysis in biometrics.
Hüsler, Jürg
2009-04-01
We review some approaches of extreme value analysis in the context of biometrical applications. The classical extreme value analysis is based on iid random variables. Two different general methods are applied, which will be discussed together with biometrical examples. Different estimation, testing, goodness-of-fit procedures for applications are discussed. Furthermore, some non-classical situations are considered where the data are possibly dependent, where a non-stationary behavior is observed in the data or where the observations are not univariate. A few open problems are also stated.
NASA Astrophysics Data System (ADS)
Witt, Annette; Ehlers, Frithjof; Luther, Stefan
2017-09-01
We have analyzed symbol sequences of heart beat annotations obtained from 24-h electrocardiogram recordings of 184 post-infarction patients (from the Cardiac Arrhythmia Suppression Trial database, CAST). In the symbol sequences, each heart beat was coded as an arrhythmic or as a normal beat. The symbol sequences were analyzed with a model-based approach that relies on a two-parameter peaks-over-threshold (POT) model, interpreting each premature ventricular contraction (PVC) as an extreme event. For the POT model, we explored (i) the Shannon entropy, which was estimated in terms of the Lempel-Ziv complexity, (ii) the shape parameter of the Weibull distribution that best fits the PVC return times, and (iii) the strength of long-range correlations quantified by detrended fluctuation analysis (DFA) over the two-dimensional parameter space. We have found that, in the frame of our model, the Lempel-Ziv complexity is functionally related to the shape parameter of the Weibull distribution. Thus, two complementary measures (entropy and strength of long-range correlations) are sufficient to characterize realizations of the two-parameter model. For the CAST data, we have found evidence for an intermediate strength of long-range correlations in the PVC timings, which is correlated with the age of the patient: younger post-infarction patients exhibit stronger long-range correlations than older patients. The normalized Shannon entropy has values in the range 0.5
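The Weibull fit of inter-event return times mentioned above can be sketched as follows (not the authors' code; the return-time data are synthetic and the true shape value of 0.8 is an assumed illustration — a shape below 1 indicates clustering, while a shape of 1 recovers the memoryless exponential case):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic inter-event return times (in beats), a stand-in for PVC return times
return_times = stats.weibull_min.rvs(c=0.8, scale=50, size=500, random_state=rng)

# Fit a two-parameter Weibull by fixing the location at zero (floc=0)
k, _, lam = stats.weibull_min.fit(return_times, floc=0)
print(f"shape k = {k:.2f}, scale = {lam:.1f}")
```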
Skin hydration, microrelief and greasiness of normal skin in Antarctica.
Tsankov, N; Mateev, D; Darlenski, R
2018-03-01
The skin is the primary defence of the human body against external factors of physical, chemical, mechanical and biologic origin. Climatic factors, including low temperature and solar radiation, affect the skin. The effect of climatic conditions in Antarctica on healthy skin has not been previously addressed. The aim of this study was to evaluate the changes in skin hydration, greasiness and microrelief due to the extreme climatic environmental factors during the stay of the members of the Bulgarian Antarctic expedition. Fifty-nine Caucasian healthy subjects, 42 men and 17 women with mean age 50.9 years (27-68), were enrolled. The study was performed in five consecutive years from 2011 to 2016 at the Bulgarian Antarctic base camp at Livingston Island. The study protocol consisted of two parts: study A, with a duration of 15 days and measurement of skin physiology parameters on a daily basis, and study B, with five measurements at baseline and at days 14, 30, 45 and 50 upon arrival in Antarctica. We measured three biophysical parameters related to skin physiology at cheek skin with an impedance measuring device. No statistically significant difference was found between parameters at the different measurement points. There is a variation in skin hydration reaching its lowest point at day 11 and then returning to values similar to baseline. Initially, an increase in skin greasiness was observed, with a sharp drop at day 11 and final values at day 15 resembling those at baseline. An increase, although not statistically significant, in skin roughness was observed in the first 15 days of the study. Study B showed no statistically significant differences between values of the three parameters. Our study presents the first results on the effect of the Antarctic climate on human skin physiology. © 2017 European Academy of Dermatology and Venereology.
Zák, J; Kapitola, J; Povýsil, C
2003-01-01
The authors address the question of whether bone histological structure (described by the histomorphometric parameters trabecular bone volume and trabecular thickness) can be inferred from bone density, ash weight, or even the weight of the animal (rat). Both tibias from each of 30 intact male rats, 90 days old, were processed. The left tibia was used to determine histomorphometric parameters from undecalcified bone tissue samples by automatic image analysis. The right tibia was used to determine bone density using Archimedes' principle. Values of bone density, ash weight, ash weight related to bone volume, and animal weight were correlated with the histomorphometric parameters (trabecular bone volume, trabecular thickness) using Pearson's correlation test. One might presume a relation between data describing bone mass at the histological level (trabecular bone of the tibia) and data describing the mass of the whole bone or even of the animal, but no statistically significant correlation was found. The present results might be explained by variations in trabecular density within the tibial marrow. Because of the higher trabecular bone density in the metaphyseal and epiphyseal regions, histomorphometric analysis of trabecular bone is preferentially done in these areas. It is possible that this irregularity of trabecular density in the tibia is the source of deviations that influenced the correlation results. The values of bone density, ash weight, and animal weight do not determine trabecular bone volume, and vice versa: static histomorphometric parameters of trabecular bone do not reflect bone density, ash weight, or animal weight.
Sea Extremes: Integrated impact assessment in coastal climate adaptation
NASA Astrophysics Data System (ADS)
Sorensen, Carlo; Knudsen, Per; Broge, Niels; Molgaard, Mads; Andersen, Ole
2016-04-01
We investigate the effects of sea level rise and a change in precipitation pattern on coastal flooding hazards. Historic and present in situ and satellite data of water and groundwater levels, precipitation, vertical ground motion, geology, and geotechnical soil properties are combined with flood protection measures, topography, and infrastructure to provide a more complete picture of the water-related impact of climate change at an exposed coastal location. Results show that future sea extremes evaluated from extreme value statistics may, indeed, have a large impact. The integrated effects of future storm surges and other geo- and hydro-parameters need to be considered, however, in order to provide for the best protection and mitigation efforts. Based on the results, we present and discuss a simple conceptual model setup that can, for example, be used for 'translation' of regional sea level rise evidence and projections into concrete impact measures. This may be used by potentially affected stakeholders, often working in different sectors and across levels of governance, in a common appraisal of the challenges ahead. The model may also enter dynamic tools to evaluate local impact as sea level research advances and projections for the future are updated.
NASA Astrophysics Data System (ADS)
Shrestha, K.; Chou, M.; Graf, D.; Yang, H. D.; Lorenz, B.; Chu, C. W.
2017-05-01
Weak antilocalization (WAL) effects in Bi2Te3 single crystals have been investigated at high and low bulk charge-carrier concentrations. At low charge-carrier density the WAL curves scale with the normal component of the magnetic field, demonstrating the dominance of topological surface states in magnetoconductivity. At high charge-carrier density the WAL curves scale with neither the applied field nor its normal component, implying a mixture of bulk and surface conduction. WAL due to topological surface states shows no dependence on the nature (electrons or holes) of the bulk charge carriers. The observations of an extremely large nonsaturating magnetoresistance and ultrahigh mobility in the samples with lower carrier density further support the presence of surface states. The physical parameters characterizing the WAL effects are calculated using the Hikami-Larkin-Nagaoka formula. At high charge-carrier concentrations, there is a greater number of conduction channels and a decrease in the phase coherence length compared to low charge-carrier concentrations. The extremely large magnetoresistance and high mobility of topological insulators have great technological value and can be exploited in magnetoelectric sensors and memory devices.
Design guidelines for wind-resistant structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDonald, J.R.; Mehta, K.C.; Minor, J.E.
1975-06-01
The purpose of this document is to prescribe criteria and to provide guidance for professional personnel who are involved in the design and evaluation of buildings and structures to resist tornadoes and extreme winds at the Oak Ridge, Tennessee, Portsmouth, Ohio, and Paducah, Kentucky, Plant Sites. The scope of the document covers loads due to extreme winds and tornadoes. Other loading conditions such as dead, live, or earthquake loads shall be considered as prescribed by the Union Carbide Corporation. In Section II the method for determining the maximum design windspeed for any specified level of risk is described. The straight wind and tornado parameters are then deduced from the value of maximum design windspeed. The three types of tornado and extreme wind loads (aerodynamic, atmospheric pressure change and missiles) are treated in Sections III, IV, and V, respectively. Appropriate load combinations are defined in Section VI. The final section contains several examples showing how the design guidelines are used to determine appropriate design wind pressures. A description of the computer program used to predict missile accelerations, velocities and trajectories is contained in Appendix A. Additional design examples are provided in Appendix B.
Extreme events in total ozone over Arosa - Part 1: Application of extreme value theory
NASA Astrophysics Data System (ADS)
Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Peter, T.; Ribatet, M.; Davison, A. C.; Stübi, R.; Weihs, P.; Holawe, F.
2010-10-01
In this study ideas from extreme value theory are for the first time applied in the field of stratospheric ozone research, because statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values) of total ozone data do not adequately address the structure of the extremes. We show that statistical extreme value methods are appropriate to identify ozone extremes and to describe the tails of the Arosa (Switzerland) total ozone time series. In order to accommodate the seasonal cycle in total ozone, a daily moving threshold was determined and used, with tools from extreme value theory, to analyse the frequency of days with extreme low (termed ELOs) and high (termed EHOs) total ozone at Arosa. The analysis shows that the Generalized Pareto Distribution (GPD) provides an appropriate model for the frequency distribution of total ozone above or below a mathematically well-defined threshold, thus providing a statistical description of ELOs and EHOs. The results show an increase in ELOs and a decrease in EHOs during the last decades. The fitted model represents the tails of the total ozone data set with high accuracy over the entire range (including absolute monthly minima and maxima), and enables a precise computation of the frequency distribution of ozone mini-holes (using constant thresholds). Analyzing the tails instead of a small fraction of days below constant thresholds provides deeper insight into the time series properties. Fingerprints of dynamical (e.g. ENSO, NAO) and chemical features (e.g. strong polar vortex ozone loss), and major volcanic eruptions, can be identified in the observed frequency of extreme events throughout the time series. Overall the new approach to analysis of extremes provides more information on time series properties and variability than previous approaches that use only monthly averages and/or mini-holes and mini-highs.
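The GPD threshold-exceedance approach described above can be sketched as follows. This is a minimal illustration only: synthetic Gaussian pseudo-data stand in for the Arosa total ozone series, and the constant 95% threshold is an assumption (the study itself uses a daily moving threshold to accommodate the seasonal cycle).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic daily total-ozone values (Dobson units), a stand-in for the Arosa series
ozone = rng.normal(330, 30, size=20000)

# Peaks over threshold: keep exceedances above a high quantile and fit a GPD to them
threshold = np.quantile(ozone, 0.95)
exceedances = ozone[ozone > threshold] - threshold
xi, _, sigma = stats.genpareto.fit(exceedances, floc=0)

print(f"threshold = {threshold:.1f} DU, shape xi = {xi:.2f}, scale sigma = {sigma:.1f}")
```

Extreme lows (ELOs) are handled the same way after negating the series, so that values below a low threshold become exceedances above a high one.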
Dimitrova, Ralits; Nenova, Elena; Uzunov, Blagoy; Shishiniova, Maria; Stoyneva, Maya
2014-09-03
Vaya (Ramsar site, protected area and Natura 2000 site) is the biggest natural lake in Bulgaria and the shallowest Black Sea coastal lake, which during the last decades has undergone significant changes and was included as critically endangered in the Red List of Bulgarian Wetlands. Our studies were conducted during the summer and autumn months of three years (2004-2006). The paper presents results on phytoplankton abundance (numbers, biomass and carbon content) in combination with indices of species diversity, evenness and dominance. Phytoplankton abundance was extremely high (average values of 1135 × 10⁶ cells/L for cell numbers and 46 mg/L for biomass) and increased at the end of the study period (2005-2006), when a decrease in species diversity and an increase in dominance index values were detected. The carbon content of the phytoplankton averaged 9.7 mg/L and also increased from 2004 to 2006. Cyanoprokaryota dominated the total phytoplankton carbon content, the cell numbers (88%-97.8%), and the biomass (62%-87.9%). All data on phytoplankton abundance and structural parameters in Vaya confirm the hypertrophic status of the lake and reflect the general negative trend in its development.
NASA Astrophysics Data System (ADS)
Panziera, Luca; Gabella, Marco; Zanini, Stefano; Hering, Alessandro; Germann, Urs; Berne, Alexis
2016-06-01
This paper presents a regional extreme rainfall analysis based on 10 years of radar data for the 159 regions adopted for official natural hazard warnings in Switzerland. Moreover, a nowcasting tool aimed at issuing heavy precipitation regional alerts is introduced. The two topics are closely related, since the extreme rainfall analysis provides the thresholds used by the nowcasting system for the alerts. Warm- and cold-season monthly maxima of several statistical quantities describing regional rainfall are fitted to a generalized extreme value distribution in order to derive the precipitation amounts corresponding to sub-annual return periods for durations of 1, 3, 6, 12, 24 and 48 h. It is shown that regional return levels exhibit a large spatial variability in Switzerland, and that their spatial distribution strongly depends on the duration of the aggregation period: for accumulations of 3 h and shorter, the largest return levels are found over the northerly alpine slopes, whereas for longer durations the southern Alps exhibit the largest values. The inner alpine chain shows the lowest values, in agreement with previous rainfall climatologies. The nowcasting system presented here aims to issue heavy rainfall alerts for a large variety of end users interested in different precipitation characteristics and regions, such as small urban areas, remote alpine catchments or administrative districts. The alerts are issued not only if the rainfall measured in the immediate past or forecast in the near future exceeds predefined thresholds, but also as soon as the sum of past and forecast precipitation is larger than the threshold values. This precipitation total, in fact, has primary importance in applications for which antecedent rainfall is as important as the predicted rainfall, such as urban flood early-warning systems. 
The rainfall fields, the statistical quantity representing regional rainfall and the frequency of alerts issued in case of continuous threshold exceedance are some of the configurable parameters of the tool. The analysis of the urban flood which occurred in the city of Schaffhausen in May 2013 suggests that this alert tool might have complementary skill with respect to radar-based thunderstorm nowcasting systems for storms which do not show a clear convective signature.
Meteorological risks as drivers of innovation for agroecosystem management
NASA Astrophysics Data System (ADS)
Gobin, Anne; Van de Vyver, Hans; Zamani, Sepideh; Curnel, Yannick; Planchon, Viviane; Verspecht, Ann; Van Huylenbroeck, Guido
2015-04-01
Devastating weather-related events recorded in recent years have captured the interest of the general public in Belgium. The MERINOVA project research hypothesis is that meteorological risks act as drivers of environmental innovation in agro-ecosystem management, which is being tested using a "chain of risk" approach. The major objectives are to (1) assess the probability of extreme meteorological events by means of probability density functions; (2) analyse the impact of extreme events on agro-ecosystems using process-based bio-physical modelling methods; (3) identify the most vulnerable agro-ecosystems using fuzzy multi-criteria and spatial analysis; (4) uncover innovative risk management and adaptation options using actor-network theory and economic modelling; and (5) communicate to research, policy and practitioner communities using web-based techniques. Generalized extreme value (GEV) theory was used to model annual rainfall maxima based on location, scale and shape parameters that determine the centre of the distribution, the spread around it, and the upper tail decay, respectively. Likewise, the distributions of consecutive rainy days, rainfall deficits and extreme 24-hour rainfall were modelled. Spatial interpolation of GEV-derived return levels resulted in maps of extreme precipitation, precipitation deficits and wet periods. The degree of temporal overlap between extreme weather conditions and sensitive periods in the agro-ecosystem was determined using a bio-physically based modelling framework that couples phenological models, a soil water balance, crop growth and environmental models. 20-year return values were derived for frost, heat stress, drought, waterlogging and field access during different sensitive stages for different arable crops. 
Extreme yield values were detected from detrended long-term arable yields, and relationships were found with soil moisture conditions, heat stress and other meteorological variables during the season. A methodology for identifying agro-ecosystem vulnerability was developed using spatially explicit information and was tested for arable crop production in Belgium. The different components of vulnerability for a region include spatial information on meteorology, soil available water content, soil erosion, the degree of waterlogging, crop share and the diversity of potato varieties. The level of vulnerability and resilience of an agro-ecosystem is also determined by risk management. The types of agricultural risk and their relative importance differ across sectors and farm types. Risk types are further distinguished according to production, market, institutional, financial and liability risks. Strategies are often combined in the risk management strategy of a farmer and include reduction and prevention, mitigation, coping and impact reduction. Based on an extensive literature review, a portfolio of potential strategies was identified at farm, market and policy level. Research hypotheses were tested using an online questionnaire on knowledge of agricultural risk, measuring the general risk aversion of farmers and their risk management strategies. The "chain of risk" approach adopted as a research methodology allows for investigating the hypothesis that meteorological risks act as drivers for agricultural innovation. Risks related to extreme weather events in Belgium are mainly caused by heat, frost, excess rainfall, drought and storms, and their impact is predominantly felt by arable, horticultural and extensive dairy farmers. Risk is quantified in terms of probability of occurrence, magnitude, frequency and extent of impact on several agro-ecosystem services. 
The spatial extent of vulnerability is mapped by integrating different layers of geo-information, while risk management is analysed using questionnaires and economic modelling methods. Future work will concentrate on the further development and testing of the modelling methodologies. https://merinova.vito.be The research is funded by the Belgian Science Policy Organisation (Belspo) under contract no. SD/RI/03A.
Isokinetic knee joint evaluation in track and field events.
Deli, Chariklia K; Paschalis, Vassilis; Theodorou, Anastasios A; Nikolaidis, Michalis G; Jamurtas, Athanasios Z; Koutedakis, Yiannis
2011-09-01
The purpose of this study was to evaluate maximal torque of the knee flexors and extensors, flexor/extensor ratios, and maximal torque differences between the 2 lower extremities in young track and field athletes. Forty male track and field athletes 13-17 years old and 20 male nonathletes of the same age participated in the study. Athletes were divided into 4 groups according to their age and event (12 runners and 10 jumpers 13-15 years old, 12 runners and 6 jumpers 16-17 years old) and nonathletes into 2 groups of the same age. Maximal torque evaluation of knee flexors and extensors was performed on an isokinetic dynamometer at 60°·s⁻¹. At the age of 16-17 years, jumpers exhibited higher strength values at extension than did runners and nonathletes, whereas at the age of 13-15 years, no significant differences were found between events. Younger athletes were weaker than older athletes at flexion. Runners and jumpers were stronger than nonathletes in all relative peak torque parameters. Nonathletes exhibited a higher flexor/extensor ratio compared with runners and jumpers. Strength imbalance in athletes was found between the 2 lower extremities in knee flexors and extensors and also at the flexor/extensor ratio of the same extremity. Young track and field athletes exhibit strength imbalances that could reduce their athletic performance, and specific strength training for the weak extremity may be needed.
Somatotype variables related to muscle torque and power in judoists.
Lewandowska, Joanna; Buśko, Krzysztof; Pastuszak, Anna; Boguszewska, Katarzyna
2011-12-01
The purpose of this study was to examine the relationship between somatotype, muscle torque and power output in judoists. Thirteen judoists (age 18.4±3.1 years, body height 178.6±8.2 cm, body mass 82.3±15.9 kg) volunteered to participate in this study. Somatotype was determined using the Heath-Carter method. Maximal muscle torques of elbow, shoulder, knee, hip and trunk flexors as well as extensors were measured under static conditions. Power outputs were measured in 5 maximal cycle ergometer exercise bouts, 10 s each, at increasing external loads equal to 2.5, 5.0, 7.5, 10.0 and 12.5% of body weight. Pearson's correlation coefficients were calculated between all parameters. The mean somatotype of the judoists was 3.5-5.9-1.8 (values for endomorphy, mesomorphy and ectomorphy, respectively). The sum of the muscle torques of ten muscle groups (TOTAL) was (mean±SD) 3702.2±862.9 N·m. The power output ranged from 393.2±79.4 to 1077.2±275.4 W. The values of the sum of muscle torque of the right and left upper extremities (SUE), the sum of muscle torque of the right and left lower extremities (SLE), the sum of muscle torque of the trunk (ST) and TOTAL were significantly correlated with the mesomorphic component (0.68, 0.80, 0.71 and 0.78, respectively). The ectomorphic component correlated significantly with the values of SUE, SLE, ST and TOTAL (-0.69, -0.81, -0.71 and -0.79, respectively). Power output was also strongly correlated with both mesomorphy (positively) and ectomorphy (negatively). The results indicated that the values of the mesomorphic and ectomorphic somatotype components influence muscle torque and power output; thus, body build could be an important factor affecting results in judo.
Estimation of muscle torque in various combat sports.
Pędzich, Wioletta; Mastalerz, Andrzej; Sadowski, Jerzy
2012-01-01
The purpose of the research was to compare muscle torque of elite combat groups. Twelve taekwondo WTF athletes, twelve taekwondo ITF athletes and nine boxers participated in the study. Measurements of muscle torques were done under static conditions on a special stand which belonged to the Department of Biomechanics. The sum of muscle torque of lower right and left extremities of relative values was significantly higher for taekwondo WTF athletes than for boxers (16%, p < 0.001 for right and 10%, p < 0.05 for left extremities) and taekwondo ITF (10%, p < 0.05 for right and 8% for left extremities). Taekwondo ITF athletes attained significantly higher absolute muscle torque values than boxers for elbow flexors (20%, p < 0.05 for right and 11% for left extremities) and extensors (14% for right and 18%, p < 0.05 for left extremities) and shoulder flexors (10% for right and 12%, p < 0.05 for left extremities) and extensors (11% for right and 1% for left extremities). Taekwondo WTF and taekwondo ITF athletes obtained significantly different relative values of muscle torque of the hip flexors (16%, p < 0.05) and extensors (11%, p < 0.05) of the right extremities.
NASA Astrophysics Data System (ADS)
Troselj, Josko; Sayama, Takahiro; Varlamov, Sergey M.; Sasaki, Toshiharu; Racault, Marie-Fanny; Takara, Kaoru; Miyazawa, Yasumasa; Kuroki, Ryusuke; Yamagata, Toshio; Yamashiki, Yosuke
2017-12-01
This study demonstrates the importance of accurate extreme discharge input in combined hydrological and oceanographic modeling through two extreme typhoon events. We investigated the effects of extreme freshwater outflow events from river mouths on the sea surface salinity (SSS) distribution in the coastal zone of north-eastern Japan. Previous studies have used observed discharge at the river mouth, as well as seasonally averaged inter-annual, annual, monthly or daily simulated data. Here, we reproduced the hourly peak discharge during two typhoon events for a targeted set of nine rivers and compared their impact on SSS in the coastal zone based on observed, climatological and simulated freshwater outflows, in conjunction with verification of the results using satellite remote-sensing data. We created a set of hourly simulated freshwater outflow data from nine first-class Japanese river basins flowing to the western Pacific Ocean for the two targeted typhoon events (Chataan and Roke) and used it with the integrated hydrological (CDRMV3.1.1) and oceanographic (JCOPE-T) models, to compare the case using climatological mean monthly discharges as freshwater input from rivers with the case using discharges simulated by our hydrological model. By using the CDRMV model optimized with the SCE-UA method, we successfully reproduced hindcasts for peak discharges of extreme typhoon events at the river mouths and could consider multiple river basin locations. Modeled SSS results were verified by comparison with the Chlorophyll-a distribution observed by satellite remote sensing. The projection of SSS in the coastal zone became more realistic than without including extreme freshwater outflow. These results suggest that our hydrological models, with model parameters calibrated to the Typhoon Roke and Chataan cases, can be successfully used to predict runoff from other extreme precipitation events with similar physical characteristics. 
Proper simulation of extreme typhoon events provides more realistic coastal SSS and may allow a different scenario analysis with various precipitation inputs for developing a nowcasting analysis in the future.
NASA Astrophysics Data System (ADS)
Vautard, Robert; Christidis, Nikolaos; Ciavarella, Andrew; Alvarez-Castro, Carmen; Bellprat, Omar; Christiansen, Bo; Colfescu, Ioana; Cowan, Tim; Doblas-Reyes, Francisco; Eden, Jonathan; Hauser, Mathias; Hegerl, Gabriele; Hempelmann, Nils; Klehmet, Katharina; Lott, Fraser; Nangini, Cathy; Orth, René; Radanovics, Sabine; Seneviratne, Sonia I.; van Oldenborgh, Geert Jan; Stott, Peter; Tett, Simon; Wilcox, Laura; Yiou, Pascal
2018-04-01
A detailed analysis is carried out to assess the skill of the HadGEM3-A global atmospheric model in simulating extreme temperatures, precipitation and storm surges in Europe, in view of their attribution to human influence. The analysis is performed on an ensemble of 15 atmospheric simulations forced with observed sea surface temperature over the 54-year period 1960-2013. These simulations, together with dual simulations without human influence in the forcing, are intended to be used in weather and climate event attribution. The analysis investigates the main processes leading to extreme events, including atmospheric circulation patterns, their links with temperature extremes, land-atmosphere and troposphere-stratosphere interactions. It also compares observed and simulated variability, trends and generalized extreme value theory parameters for temperature and precipitation. One of the most striking findings is the ability of the model to capture North-Atlantic atmospheric weather regimes as obtained from a cluster analysis of sea level pressure fields. The model also reproduces the main observed weather patterns responsible for temperature and precipitation extreme events. However, biases are found in many physical processes. Slightly excessive drying may be the cause of an overestimated summer interannual variability and too intense heat waves, especially in central/northern Europe. However, this does not seem to hinder proper simulation of summer temperature trends. Cold extremes appear well simulated, as do the underlying blocking frequency and stratosphere-troposphere interactions. Extreme precipitation amounts are overestimated and too variable. The atmospheric conditions leading to storm surges were also examined in the Baltic region. There, simulated weather conditions do not appear to produce strong enough storm surges, but winds were found to be in very good agreement with reanalyses. 
The performance in reproducing atmospheric weather patterns indicates that biases mainly originate from local and regional physical processes. This makes local bias adjustment meaningful for climate change attribution.
NASA Astrophysics Data System (ADS)
Darko, Deborah; Adjei, Kwaku A.; Appiah-Adjei, Emmanuel K.; Odai, Samuel N.; Obuobie, Emmanuel; Asmah, Ruby
2018-06-01
The extent to which statistically bias-adjusted outputs of two regional climate models alter the projected change signals for mean (and extreme) rainfall and temperature over the Volta Basin is evaluated. The outputs of two regional climate models from the Coordinated Regional Climate Downscaling Experiment for Africa (CORDEX-Africa) are bias adjusted using the quantile mapping technique. Annual maximum rainfall and temperature, with their 10- and 20-year return values for the present (1981-2010) and future (2051-2080) climates, are estimated using extreme value analyses. Moderate extremes are evaluated using extreme indices (viz. percentile-based, duration-based, and intensity-based). Bias adjustment of the original (bias-unadjusted) models improves the reproduction of mean rainfall and temperature for the present climate. However, the bias-adjusted models poorly reproduce the 10- and 20-year return values for rainfall and maximum temperature, whereas the extreme indices are reproduced satisfactorily for the present climate. Consequently, the projected changes in rainfall and temperature extremes are weak. Bias adjustment reduces the change signals for mean rainfall, while the mean temperature signals are instead magnified. With the exception of the duration-based extreme indices, the projected changes for the original mean climate and extremes are not conserved after bias adjustment.
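The quantile mapping technique mentioned above can be sketched in a few lines of Python. This is a minimal empirical (nearest-rank) version on synthetic data; the sample sizes, bias, and Gaussian climates are illustrative assumptions, not the study's data or method details.

```python
import random
import statistics
from bisect import bisect_left

def make_quantile_mapper(model_hist, obs_hist):
    """Empirical quantile mapping: find a model value's quantile in the
    model climate, then read off the observed value at that same quantile."""
    m, o = sorted(model_hist), sorted(obs_hist)
    def adjust(value):
        p = bisect_left(m, value) / (len(m) - 1)        # model non-exceedance prob.
        idx = min(round(p * (len(o) - 1)), len(o) - 1)  # nearest-rank obs quantile
        return o[idx]
    return adjust

random.seed(1)
obs = [random.gauss(20.0, 2.0) for _ in range(1000)]    # "observed" temperatures
model = [random.gauss(23.0, 3.0) for _ in range(1000)]  # warm-biased model output
adjust = make_quantile_mapper(model, obs)
adjusted = [adjust(v) for v in model]

print(round(statistics.mean(model) - statistics.mean(obs), 1))     # raw warm bias
print(round(statistics.mean(adjusted) - statistics.mean(obs), 2))  # near zero
```

After mapping, the adjusted values inherit the observed distribution, which is why (as the abstract notes) the adjustment can reshape extremes and change signals, not just means.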
Conditional probability of rainfall extremes across multiple durations
NASA Astrophysics Data System (ADS)
Le, Phuong Dong; Leonard, Michael; Westra, Seth
2017-04-01
The conditional probability that extreme rainfall will occur at one location given that it is occurring at another location is critical in engineering design and management circumstances, including the planning of evacuation routes and the siting of emergency infrastructure. A challenge with this conditional simulation is that in many situations the interest is not so much in the conditional distributions of rainfall of the same duration at two locations, but rather in the conditional distribution of flooding in two neighbouring catchments, which may be influenced by rainfall of different critical durations. To deal with this challenge, a model that can account for both the spatial and the duration dependence of extremes is required. The aim of this research is to develop a model that incorporates both spatial dependence and duration dependence into the dependence structure of extreme rainfall. To achieve this aim, this study is a first attempt at combining extreme rainfall of multiple durations within a spatial extreme model framework based on max-stable process theory. Max-stable processes provide a general framework for modelling multivariate extremes with spatial dependence, but only for extreme rainfall of a single duration. To achieve dependence across multiple timescales, this study proposes a new approach that adds elements representing the duration dependence of extremes to the covariance matrix of the max-stable model. To improve the efficiency of calculation, a re-parameterization proposed by Koutsoyiannis et al. (1998) is used to reduce the number of parameters to be estimated. This re-parameterization enables the GEV parameters to be represented as a function of timescale. A stepwise framework has been adopted to achieve the overall aims of this research. Firstly, the re-parameterization is used to define a new set of common parameters for the marginal distribution across multiple durations.
Secondly, spatial interpolation of the new parameter set is used to estimate marginal parameters across the full spatial domain. Finally, the spatial interpolation result is used as the initial condition to estimate dependence parameters via a likelihood function of the max-stable model for multiple durations. The Hawkesbury-Nepean catchment near Sydney, Australia, was selected as the case study for this research. This catchment has 25 sub-daily rain gauges, with a minimum record length of 24 years, over a region of 300 km × 300 km. The re-parameterization was applied at each station for durations from 1 hour to 24 hours and then evaluated by comparison with the at-site fitted GEV. The evaluation showed that the average R² across all stations is around 0.80, with a range from 0.26 to 1.0. The output of the re-parameterization was then used to construct the spatial surface based on covariates including longitude, latitude, and elevation. The dependence model showed good agreement between the empirical and theoretical extremal coefficients for multiple durations. For the overall model, a leave-one-out cross-validation across all stations showed that it works well for 20 out of 25 stations. The potential application of this model framework was illustrated through a conditional map of return period and return level across multiple durations, both of which are important for engineering design and management.
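The core idea of tying marginal parameters across durations can be illustrated with a much simpler stdlib-Python sketch based on simple scaling of annual-maximum depths (one exponent shared by all durations). This is not the Koutsoyiannis et al. (1998) re-parameterization itself; the exponent, sample sizes, and noise model are illustrative assumptions.

```python
import math
import random
import statistics

random.seed(42)
durations = [1, 2, 6, 12, 24]        # hours
n_true = 0.4                         # assumed scaling exponent (illustrative)

# synthetic annual-maximum depths obeying simple scaling: H(D) = H(1) * D**n
series = {d: [(10.0 + random.gauss(0.0, 1.5)) * d ** n_true for _ in range(30)]
          for d in durations}

# estimate the scaling exponent by log-log regression of mean depth on duration
xs = [math.log(d) for d in durations]
ys = [math.log(statistics.mean(series[d])) for d in durations]
xbar, ybar = statistics.mean(xs), statistics.mean(ys)
n_hat = (sum((u - xbar) * (v - ybar) for u, v in zip(xs, ys))
         / sum((u - xbar) ** 2 for u in xs))

# under simple scaling, H(D) / D**n collapses all durations onto one distribution,
# so a single marginal fit (e.g. a GEV) can serve every timescale at once
rescaled = [h / d ** n_hat for d in durations for h in series[d]]
print(round(n_hat, 2))   # close to the assumed exponent 0.4
```

The collapsed sample is what makes a common marginal parameter set across durations feasible in the stepwise framework described above.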
NASA Astrophysics Data System (ADS)
Gunardi; Setiawan, Ezra Putranda
2015-12-01
Indonesia is a country with a high risk of earthquakes, because of its position on the border of Earth's tectonic plates. An earthquake can cause a very high amount of damage, loss, and other economic impacts, so Indonesia needs a mechanism for transferring earthquake risk from the government or the (reinsurance) company, so that it can collect enough money for implementing rehabilitation and reconstruction programs. One such mechanism is issuing a catastrophe bond, 'act-of-God bond', or simply CAT bond. A catastrophe bond is issued by a special-purpose-vehicle (SPV) company and then sold to investors. The revenue from this transaction is joined with the money (premium) from the sponsor company and then invested in other products. If a catastrophe happens before the time of maturity, the cash flow from the SPV to the investors is discounted or stopped, and the cash flow is instead paid to the sponsor company to compensate for its losses from the catastrophe event. When only earthquakes are considered, the amount of the discounted cash flow can be determined based on the earthquake's magnitude. A case study with Indonesian earthquake magnitude data shows that the probability distribution of the maximum magnitude can be modelled by the generalized extreme value (GEV) distribution. In pricing this catastrophe bond, we assume a stochastic interest rate following the Cox-Ingersoll-Ross (CIR) model. We develop formulas for pricing three types of catastrophe bond, namely zero-coupon bonds, 'coupon only at risk' bonds, and 'principal and coupon at risk' bonds. The relationship between the price of the catastrophe bond and the CIR model's parameters, the GEV's parameters, the percentage of coupon, and the discounted cash flow rule is then explained via Monte Carlo simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gunardi; Setiawan, Ezra Putranda
Indonesia is a country with a high risk of earthquakes, because of its position on the border of Earth's tectonic plates. An earthquake can cause a very high amount of damage, loss, and other economic impacts, so Indonesia needs a mechanism for transferring earthquake risk from the government or the (reinsurance) company, so that it can collect enough money for implementing rehabilitation and reconstruction programs. One such mechanism is issuing a catastrophe bond, 'act-of-God bond', or simply CAT bond. A catastrophe bond is issued by a special-purpose-vehicle (SPV) company and then sold to investors. The revenue from this transaction is joined with the money (premium) from the sponsor company and then invested in other products. If a catastrophe happens before the time of maturity, the cash flow from the SPV to the investors is discounted or stopped, and the cash flow is instead paid to the sponsor company to compensate for its losses from the catastrophe event. When only earthquakes are considered, the amount of the discounted cash flow can be determined based on the earthquake's magnitude. A case study with Indonesian earthquake magnitude data shows that the probability distribution of the maximum magnitude can be modelled by the generalized extreme value (GEV) distribution. In pricing this catastrophe bond, we assume a stochastic interest rate following the Cox-Ingersoll-Ross (CIR) model. We develop formulas for pricing three types of catastrophe bond, namely zero-coupon bonds, 'coupon only at risk' bonds, and 'principal and coupon at risk' bonds. The relationship between the price of the catastrophe bond and the CIR model's parameters, the GEV's parameters, the percentage of coupon, and the discounted cash flow rule is then explained via Monte Carlo simulation.
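A minimal Monte Carlo sketch of the pricing scheme described above, for the zero-coupon case: annual-maximum magnitudes are drawn from a GEV distribution and discounting follows a simulated CIR short rate. All parameter values (GEV location/scale/shape, CIR parameters, trigger level) are illustrative assumptions, not the paper's fitted values, and the total-loss trigger is a simplification of the paper's discounted cash flow rules.

```python
import math
import random

random.seed(7)

def gev_sample(mu, sigma, xi):
    """Draw from a GEV distribution by inverting its CDF (xi != 0)."""
    u = random.random()
    return mu + sigma / xi * ((-math.log(u)) ** (-xi) - 1.0)

def cir_discount(r0, kappa, theta, sigma, T, steps=100):
    """Euler scheme for the CIR short rate; returns the stochastic
    discount factor exp(-integral of r dt) over [0, T]."""
    dt = T / steps
    r, integral = r0, 0.0
    for _ in range(steps):
        integral += r * dt
        r = abs(r + kappa * (theta - r) * dt
                + sigma * math.sqrt(max(r, 0.0)) * random.gauss(0.0, math.sqrt(dt)))
    return math.exp(-integral)

def cat_bond_price(T=3, trigger=7.5, n_sims=2000):
    """Zero-coupon CAT bond: the principal of 1 is wiped out if any
    annual-maximum magnitude over the bond's life exceeds the trigger."""
    total = 0.0
    for _ in range(n_sims):
        triggered = any(gev_sample(5.5, 0.6, 0.1) > trigger for _ in range(T))
        if not triggered:
            total += cir_discount(0.03, 0.5, 0.04, 0.1, T)
    return total / n_sims

price = cat_bond_price()
print(round(price, 2))  # below the riskless zero-coupon price, as expected
```

The gap between this price and a riskless zero-coupon bond is the premium investors demand for bearing the catastrophe risk.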
On the Use of the Beta Distribution in Probabilistic Resource Assessments
Olea, R.A.
2011-01-01
The triangular distribution is a popular choice when it comes to modeling bounded continuous random variables. Its wide acceptance derives mostly from its simple analytic properties and the ease with which modelers can specify its three parameters through the extremes and the mode. On the negative side, hardly any real process follows a triangular distribution, which from the outset puts at a disadvantage any model employing triangular distributions. At a time when numerical techniques such as the Monte Carlo method are displacing analytic approaches in stochastic resource assessments, easy specification remains the most attractive characteristic of the triangular distribution. The beta distribution is another continuous distribution defined within a finite interval, offering wider flexibility in style of variation and thus allowing consideration of models in which the random variables closely follow the observed or expected styles of variation. Despite its more complex definition, generating values that follow a beta distribution is as straightforward as generating values that follow a triangular distribution, leaving the selection of parameters as the main impediment to practical consideration of beta distributions. This contribution intends to promote the acceptance of the beta distribution by explaining its properties and offering several suggestions to facilitate the specification of its two shape parameters. In general, given the same distributional constraints, use of the beta distribution in stochastic modeling may yield significantly different results, yet better estimates, than the triangular distribution. © 2011 International Association for Mathematical Geology (outside the USA).
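The specification point above can be made concrete in stdlib Python: one way (an illustration, not one of the paper's suggestions) is to match the mode of a triangular specification by fixing one beta shape parameter and solving for the other from mode = (a-1)/(a+b-2). The bounds, mode, and chosen shape value are illustrative.

```python
import random
import statistics

random.seed(0)
lo, hi, mode = 0.0, 100.0, 30.0      # illustrative resource bounds and mode

# triangular: fully specified by the extremes and the mode
tri = [random.triangular(lo, hi, mode) for _ in range(20000)]

# beta on [lo, hi]: match the mode by fixing shape a and solving for b,
# since mode = (a - 1) / (a + b - 2) on the unit interval
a = 2.5                              # assumed shape; controls the spread
m = (mode - lo) / (hi - lo)
b = (a - 1.0) / m - a + 2.0
beta = [lo + (hi - lo) * random.betavariate(a, b) for _ in range(20000)]

print(round(statistics.mean(tri), 1))   # triangular mean = (lo+hi+mode)/3 ≈ 43.3
print(round(statistics.mean(beta), 1))  # beta mean = a/(a+b), scaled to [lo, hi]
```

Same bounds, same mode, yet noticeably different means and tails, which is the abstract's point about the two models yielding significantly different results.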
Statistical Methods for Quantifying the Variability of Solar Wind Transients of All Sizes
NASA Astrophysics Data System (ADS)
Tindale, E.; Chapman, S. C.
2016-12-01
The solar wind is inherently variable across a wide range of timescales, from small-scale turbulent fluctuations to the 11-year periodicity induced by the solar cycle. Each solar cycle is unique, and this change in overall cycle activity is coupled from the Sun to Earth via the solar wind, leading to long-term trends in space weather. Our work [Tindale & Chapman, 2016] applies novel statistical methods to solar wind transients of all sizes, to quantify the variability of the solar wind associated with the solar cycle. We use the same methods to link solar wind observations with those on the Sun and Earth. We use Wind data to construct quantile-quantile (QQ) plots comparing the statistical distributions of multiple commonly used solar wind-magnetosphere coupling parameters between the minima and maxima of solar cycles 23 and 24. We find that in each case the distribution is multicomponent, ranging from small fluctuations to extreme values, with the same functional form at all phases of the solar cycle. The change in PDF is captured by a simple change of variables, which is independent of the PDF model. Using this method we can quantify the quietness of the cycle 24 maximum, identify which variable drives the changing distribution of composite parameters such as ɛ, and show that the distribution of ɛ is less sensitive to changes in its extreme values than those of its constituents. After demonstrating the QQ method on solar wind data, we extend the analysis to include solar and magnetospheric data spanning the same time period, focusing on GOES X-ray flux and WDC AE index data. Finally, having studied the statistics of transients across the full distribution, we apply the same method to time series of extreme bursts in each variable. Using these statistical tools, we aim to track the solar cycle-driven variability from the Sun through the solar wind and into the Earth's magnetosphere. Tindale, E. and S.C. Chapman (2016), Geophys. Res. Lett., 43(11), doi:10.1002/2016GL068920.
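The QQ-plus-change-of-variables method above can be sketched on synthetic data: if two samples share the same functional form, their quantile-quantile points fall on a straight line whose slope and intercept give the change of variables. The exponential samples and the assumed transform below are illustrative, not Wind data.

```python
import random
import statistics

random.seed(3)
# two synthetic "cycle phase" samples with the same functional form,
# related by an assumed change of variables y = 2*x + 0.5
x = sorted(random.expovariate(1.0) for _ in range(5000))
y = sorted(2.0 * random.expovariate(1.0) + 0.5 for _ in range(5000))

# quantile-quantile pairs: values at the same probability level in each sample
n = len(x)
qq = [(x[int(p / 100 * (n - 1))], y[int(p / 100 * (n - 1))]) for p in range(1, 100)]

# a straight QQ line means the two PDFs differ only by a change of variables;
# recover its slope and intercept by least squares over the QQ points
xs = [u for u, _ in qq]
ys = [v for _, v in qq]
xbar, ybar = statistics.mean(xs), statistics.mean(ys)
slope = sum((u - xbar) * (v - ybar) for u, v in qq) / sum((u - xbar) ** 2 for u in xs)
intercept = ybar - slope * xbar
print(round(slope, 2), round(intercept, 2))  # close to the assumed 2 and 0.5
```

Because the fit uses only quantiles, it works for any shared functional form, which is the model-independence the abstract emphasizes.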
NASA Astrophysics Data System (ADS)
Trepanier, J. C.; Yuan, J.; Jagger, T. H.
2017-03-01
Tropical cyclones, with their nearshore high wind speeds and deep storm surges, frequently strike the United States Gulf of Mexico coastline, influencing millions of people and disrupting offshore economic activities. The combined risk of occurrence of tropical cyclone nearshore wind speeds and storm surges is assessed at 22 coastal cities throughout the United States Gulf of Mexico. The models used are extreme value copulas fitted with margins defined by the generalized Pareto distribution or combinations of Weibull, gamma, lognormal, or normal distributions. The statistical relationships between nearshore wind speed and storm surge are provided for each coastal city prior to the copula model runs using Spearman's rank correlations. The strongest significant relationship between nearshore wind speed and storm surge exists at Shell Beach, LA (ρ = 0.67), followed by South Padre Island, TX (ρ = 0.64). The extreme value Archimedean copula models for each city then provide return periods for specific nearshore wind speed and storm surge pairs. Of the 22 cities considered, Bay St. Louis, MS, has the shortest return period for a tropical cyclone with at least a 50 m s-1 nearshore wind speed and a 3 m surge (19.5 years, 17.1-23.5). The 90% confidence intervals are created by recalculating the return periods for a fixed set of wind speeds and surge levels using 100 samples of the model parameters. The results of this study can be utilized by policy managers and government officials concerned with coastal populations and economic activity in the Gulf of Mexico.
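As an illustration of how an extreme value Archimedean copula turns marginal exceedance probabilities into a joint return period, here is a Gumbel-copula sketch (the Gumbel-Hougaard family is the Archimedean extreme-value copula; the study's fitted margins and parameter values may differ, and the numbers below are illustrative):

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel (Archimedean, extreme-value) copula CDF; theta >= 1."""
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1.0 / theta)))

def joint_return_period(p_wind, p_surge, theta):
    """Return period (years) of wind AND surge both exceeding their marginal
    quantiles, for annual maxima (one event per year on average)."""
    u, v = 1.0 - p_wind, 1.0 - p_surge        # marginal non-exceedance probs.
    p_joint = 1.0 - u - v + gumbel_copula(u, v, theta)
    return 1.0 / p_joint

# illustrative: each margin is exceeded with 10% annual probability
print(round(joint_return_period(0.1, 0.1, theta=1.0), 1))  # independent margins
print(round(joint_return_period(0.1, 0.1, theta=2.0), 1))  # dependent margins
```

Dependence between wind and surge shortens the joint return period substantially relative to the independent case, which is why the rank correlations reported above matter for the risk estimates.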
NASA Astrophysics Data System (ADS)
Mortier, A.; Sousa, S. G.; Adibekyan, V. Zh.; Brandão, I. M.; Santos, N. C.
2014-12-01
Context. Precise stellar parameters (effective temperature, surface gravity, metallicity, stellar mass, and radius) are crucial for several reasons, amongst which are the precise characterization of orbiting exoplanets and the correct determination of galactic chemical evolution. The atmospheric parameters are extremely important because all the other stellar parameters depend on them. Using our standard equivalent-width method on high-resolution spectroscopy, good precision can be obtained for the derived effective temperature and metallicity. The surface gravity, however, is usually not well constrained with spectroscopy. Aims: We use two different samples of FGK dwarfs to study the effect of the stellar surface gravity on the precise spectroscopic determination of the other atmospheric parameters. Furthermore, we present a straightforward formula for correcting the spectroscopic surface gravities derived by our method and with our linelists. Methods: Our spectroscopic analysis is based on Kurucz models in local thermodynamic equilibrium, performed with the MOOG code to derive the atmospheric parameters. The surface gravity was either left free or fixed to a predetermined value. The latter is either obtained through a photometric transit light curve or derived using asteroseismology. Results: We find first that, despite some minor trends, the effective temperatures and metallicities of FGK dwarfs derived with the described method and linelists are, in most cases, affected only within the error bars by using different values for the surface gravity, even for very large differences in surface gravity, so they can be trusted. The temperatures derived with a fixed surface gravity remain compatible within 1 sigma with the accurate results of the infrared flux method (IRFM), as is the case for the unconstrained temperatures.
Secondly, we find that the spectroscopic surface gravity can easily be corrected to a more accurate value using a linear function with the effective temperature. Tables 1 and 2 are available in electronic form at http://www.aanda.org
The rate of planet formation and the solar system's small bodies
NASA Technical Reports Server (NTRS)
Safronov, Viktor S.
1991-01-01
The evolution of random velocities and the mass distribution of preplanetary bodies at the early stage of accumulation are currently under review. Arguments were presented for and against the view of an extremely rapid, runaway growth of the largest bodies at this stage, with parameter values of Θ ≳ 10^3. Difficulties are encountered when assuming such a large Θ: (1) bodies of the Jovian zone penetrate the asteroid zone too late, and do not have time to hinder the formation of a normal-sized planet in the asteroidal zone and thereby remove a significant portion of the mass of solid matter; and (2) Uranus and Neptune cannot eject bodies from the solar system into the cometary cloud. Therefore, values of Θ < 10^2 appear to be preferable.
NASA Astrophysics Data System (ADS)
Ma, Ning; Zhang, Yinsheng; Xu, Chong-Yu; Szilagyi, Jozsef
2015-08-01
Quantitative estimation of actual evapotranspiration (ETa) by in situ measurements and mathematical modeling is a fundamental task for a physical understanding of ETa as well as of the feedback mechanisms between the land and the ambient atmosphere. However, knowledge of ETa over the Tibetan Plateau (TP) has been greatly limited by the extremely sparse ground observation network in the region. Approaches for estimating ETa solely from routine meteorological variables are therefore important for investigating the spatiotemporal variations of ETa in this data-scarce region. Motivated by this need, the complementary relationship (CR) and Penman-Monteith approaches were evaluated against in situ measurements of ETa on a daily basis in an alpine steppe region of the TP. The former includes the Nonlinear Complementary Relationship (Nonlinear-CR) and the Complementary Relationship Areal Evapotranspiration (CRAE) models, while the latter involves the Katerji-Perrier and Todorovic models. Results indicate that the Nonlinear-CR, CRAE, and Katerji-Perrier models are all capable of efficiently simulating daily ETa, provided their parameter values are appropriately calibrated. The Katerji-Perrier model performed best since its site-specific parameters take the soil water status into account. The Nonlinear-CR model also performed well, with the advantage of not requiring the user to choose between a symmetric and an asymmetric CR. The CRAE model, even with a relatively low Nash-Sutcliffe efficiency (NSE) value, is also an acceptable approach in this data-scarce region as it does not need information on wind speed and ground surface conditions. In contrast, application of the Todorovic model was found to be inappropriate in the dry regions of the TP due to its significant overestimation of ETa, as it neglects the effect of water stress on the bulk surface resistance.
Sensitivity analysis of the parameter values demonstrated the relative importance of each parameter in the corresponding model. Overall, the Nonlinear-CR model is recommended in the absence of measured ETa for local calibration of the model parameter values.
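For readers unfamiliar with the complementary relationship, Bouchet's classical symmetric form, which the Nonlinear-CR model discussed above generalizes, can be written in two lines; the daily values are illustrative, not site data:

```python
def eta_symmetric_cr(etp, etw):
    """Bouchet's symmetric complementary relationship: ETa = 2*ETw - ETp,
    clipped at zero. (The Nonlinear-CR model generalizes this form.)"""
    return max(2.0 * etw - etp, 0.0)

# illustrative daily values (mm/day): potential ET and wet-environment ET
print(eta_symmetric_cr(etp=6.0, etw=4.0))   # drying environment: ETa < ETw
print(eta_symmetric_cr(etp=4.0, etw=4.0))   # saturated surface: ETa = ETp = ETw
```

The complementarity is visible in the function itself: as the surface dries and ETp rises, ETa falls by the same amount, with no soil-moisture input required.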
NASA Astrophysics Data System (ADS)
Laverick, M.; Lobel, A.; Merle, T.; Royer, P.; Martayan, C.; David, M.; Hensberge, H.; Thienpont, E.
2018-04-01
Context. Fundamental atomic parameters, such as oscillator strengths, play a key role in modelling and understanding the chemical composition of stars in the Universe. Despite the significant work underway to produce these parameters for many astrophysically important ions, uncertainties in these parameters remain large and can propagate throughout the entire field of astronomy. Aims: The Belgian repository of fundamental atomic data and stellar spectra (BRASS) aims to provide the largest systematic and homogeneous quality assessment of atomic data to date in terms of wavelength, atomic and stellar parameter coverage. To prepare for it, we first compiled multiple literature occurrences of many individual atomic transitions, from several atomic databases of astrophysical interest, and assessed their agreement. In a second step synthetic spectra will be compared against extremely high-quality observed spectra, for a large number of BAFGK spectral type stars, in order to critically evaluate the atomic data of a large number of important stellar lines. Methods: Several atomic repositories were searched and their data retrieved and formatted in a consistent manner. Data entries from all repositories were cross-matched against our initial BRASS atomic line list to find multiple occurrences of the same transition. Where possible we used a new non-parametric cross-match depending only on electronic configurations and total angular momentum values. We also checked for duplicate entries of the same physical transition, within each retrieved repository, using the non-parametric cross-match. Results: We report on the number of cross-matched transitions for each repository and compare their fundamental atomic parameters. We find differences in log(gf) values of up to 2 dex or more. We also find and report that 2% of our line list and Vienna atomic line database retrievals are composed of duplicate transitions. 
Finally we provide a number of examples of atomic spectral lines with different retrieved literature log(gf) values, and discuss the impact of these uncertain log(gf) values on quantitative spectroscopy. All cross-matched atomic data and duplicate transition pairs are available to download at http://brass.sdf.org
Causes of Glacier Melt Extremes in the Alps Since 1949
NASA Astrophysics Data System (ADS)
Thibert, E.; Dkengne Sielenou, P.; Vionnet, V.; Eckert, N.; Vincent, C.
2018-01-01
Recent record-breaking glacier melt values are attributable to peculiar extreme events and long-term warming trends that shift averages upward. Analyzing one of the world's longest mass balance series with extreme value statistics, we show that detrending melt anomalies makes it possible to disentangle these effects, leading to a fairer evaluation of the return period of melt extreme values such as 2003, and to characterize them by a more realistic bounded behavior. Using surface energy balance simulations, we show that three independent drivers control melt: global radiation, latent heat, and the amount of snow at the beginning of the melting season. Extremes are governed by large deviations in global radiation combined with sensible heat. Long-term trends are driven by the lengthening of melt duration due to earlier and longer-lasting melting of ice along with melt intensification caused by trends in long-wave irradiance and latent heat due to higher air moisture.
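The "bounded behavior" mentioned above corresponds to a negative shape parameter in a generalized extreme value (GEV) fit, which implies a finite upper endpoint for melt extremes. A sketch with illustrative, not fitted, parameter values:

```python
def gev_upper_bound(mu, sigma, xi):
    """Upper endpoint of a GEV(mu, sigma, xi) distribution: finite only for
    xi < 0, i.e. the 'bounded behavior' of detrended melt anomalies."""
    return mu - sigma / xi if xi < 0 else float("inf")

# illustrative melt-anomaly parameters (not the study's fitted values)
print(gev_upper_bound(mu=0.0, sigma=1.0, xi=-0.2))  # finite upper bound
print(gev_upper_bound(mu=0.0, sigma=1.0, xi=0.1))   # unbounded upper tail
```

Detrending matters here: without it, the warming trend inflates recent anomalies and can push the fitted shape toward an unbounded tail, distorting return periods for years like 2003.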
Extreme events in total ozone: Spatio-temporal analysis from local to global scale
NASA Astrophysics Data System (ADS)
Rieder, Harald E.; Staehelin, Johannes; Maeder, Jörg A.; Ribatet, Mathieu; di Rocco, Stefania; Jancso, Leonhardt M.; Peter, Thomas; Davison, Anthony C.
2010-05-01
Recently, tools from extreme value theory (e.g. Coles, 2001; Ribatet, 2007) have been applied for the first time in the field of stratospheric ozone research, as statistical analysis showed that previously used concepts assuming a Gaussian distribution of total ozone data (e.g. fixed deviations from mean values) do not adequately address the internal structure of the data with respect to extremes (Rieder et al., 2010a,b). A case study of the world's longest total ozone record (Arosa, Switzerland - for details see Staehelin et al., 1998a,b) illustrates that tools based on extreme value theory are appropriate to identify ozone extremes and to describe the tails of the total ozone record. Excursions in the frequency of extreme events reveal "fingerprints" of dynamical factors such as ENSO or NAO, of chemical factors such as cold Arctic vortex ozone losses, and of the major volcanic eruptions of the 20th century (e.g. Gunung Agung, El Chichón, Mt. Pinatubo). Furthermore, the atmospheric loading of ozone-depleting substances led to a continuous modification of column ozone in the northern hemisphere also with respect to extreme values (partly again in connection with polar vortex contributions). It is shown that the application of extreme value theory allows the identification of many more such fingerprints than conventional time series analysis of annual and seasonal mean values. In particular, the extremal analysis shows the strong influence of dynamics, revealing that even moderate ENSO and NAO events have a discernible effect on total ozone (Rieder et al., 2010b). Overall, the extremes concept provides new information on time series properties, variability, trends, and the influence of dynamics and chemistry, complementing earlier analyses focusing only on monthly (or annual) mean values.
The findings described above could also be confirmed for the total ozone records of 5 other long-term series (Belsk, Hohenpeissenberg, Hradec Kralove, Potsdam, Uccle), showing that the strong influence of atmospheric dynamics (NAO, ENSO) on total ozone is a common feature across the northern mid-latitudes (Rieder et al., 2010c). In a next step, frequency distributions of extreme events are analyzed on the global scale (northern and southern mid-latitudes). A specific focus here is whether findings gained through the analysis of long-term European ground-based stations can be clearly identified as a global phenomenon. By showing results from these three types of studies, an overview of extreme events in total ozone (and the dynamical and chemical features leading to them) will be presented from local to global scales. References: Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.C. (2010): Extreme events in total ozone over Arosa - Part I: Application of extreme value theory, to be submitted to ACPD. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.C. (2010): Extreme events in total ozone over Arosa - Part II: Fingerprints of atmospheric dynamics and chemistry and effects on mean values and long-term changes, to be submitted to ACPD. Rieder, H.E., Jancso, L., Staehelin, J., Maeder, J.A., Ribatet, M., Peter, T., and Davison, A.C. (2010): Extreme events in total ozone over the northern mid-latitudes: A case study based on long-term data sets from 5 ground-based stations, in preparation.
Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998a. Staehelin, J., Kegel, R., and Harris, N. R.: Trend analysis of the homogenized total ozone series of Arosa (Switzerland), 1929-1996, J. Geophys. Res., 103(D7), 8389-8400, doi:10.1029/97JD03650, 1998b.
Oltean, Gabriel; Ivanciu, Laura-Nicoleta
2016-01-01
The design and verification of complex electronic systems, especially analog and mixed-signal ones, prove to be extremely time-consuming tasks if only circuit-level simulations are involved. A significant amount of time can be saved if a cost-effective solution is used for the extensive analysis of the system under all conceivable conditions. This paper proposes a data-driven method to build fast-to-evaluate, yet accurate, metamodels capable of generating not-yet-simulated waveforms as a function of different combinations of the parameters of the system. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, a genetic algorithm optimization to detect the optimal wavelet transform and to identify the most relevant decomposition coefficients, and an artificial neural network to derive the relevant coefficients of the wavelet transform for any new parameter combination. The resulting metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1×10^-5 for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: at most 18 minutes on a general-purpose computer), and simplicity (less than 1 second for running the metamodel, with the user only providing the parameter combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space.
A wide range of possibilities becomes achievable for the user, such as: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, sensitivity analyses can be performed (the influence of each parameter on the output waveform).
A Model of the Pulsating Extremely Low-mass White Dwarf Precursor WASP 0247-25B
NASA Astrophysics Data System (ADS)
Istrate, A. G.; Fontaine, G.; Heuser, C.
2017-10-01
We present an analysis of the evolutionary and pulsation properties of the extremely low-mass white dwarf precursor (B) component of the double-lined eclipsing system WASP 0247-25. Given that the fundamental parameters of that star have been obtained previously at a unique level of precision, WASP 0247-25B represents the ideal case for testing evolutionary models of this newly found category of pulsators. Taking into account the known constraints on the mass, orbital period, effective temperature, surface gravity, and atmospheric composition, we present a model that is compatible with these constraints and shows pulsation modes with periods very close to the observed values. Importantly, these modes are predicted to be excited. Although the overall consistency leaves room for improvement, the observable properties of WASP 0247-25B are closely reproduced. A key ingredient of our binary evolutionary models is rotational mixing as the main competitor against gravitational settling. Depending on the assumptions made about the values of the degree index ℓ for the observed pulsation modes, we found three possible seismic solutions. We discuss two tests, rotational splitting and multicolor photometry, that should readily identify the modes and discriminate between these solutions. However, this will require improved temporal resolution and higher S/N observations, which are currently unavailable.
Mathematical aspects of assessing extreme events for the safety of nuclear plants
NASA Astrophysics Data System (ADS)
Potempski, Slawomir; Borysiewicz, Mieczyslaw
2015-04-01
This paper reviews the mathematical methodologies applied to assessing the low frequencies of rare natural events such as earthquakes, tsunamis, hurricanes or tornadoes, floods (in particular flash floods and storm surges), lightning, and solar flares, from the perspective of the safety assessment of nuclear plants. The statistical methods are usually based on extreme value theory, which deals with the analysis of extreme deviations from the median (or the mean). Various mathematical tools can be useful in this respect: the Fisher-Tippett-Gnedenko extreme value theorem, leading to possible choices of generalized extreme value distributions; the Pickands-Balkema-de Haan theorem for tail fitting; and methods related to large deviation theory. The paper presents the most important stochastic distributions relevant to the statistical analysis of rare events. This concerns, for example, the analysis of data on annual extreme values (maxima, "Annual Maxima Series", or minima), of peak values exceeding given thresholds over some period of interest ("Peak Over Threshold"), and the estimation of the size of exceedance. Despite the lack of sufficient statistical data directly containing rare events, in some cases it is still possible to extract useful information from existing larger data sets. As an example, one can consider the data sets available from websites for floods, earthquakes, or natural hazards in general. Some aspects of such data sets are also presented, taking into account their usefulness for the practical assessment of the risk to nuclear power plants from extreme weather conditions.
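The two routes named above, block maxima via the Fisher-Tippett-Gnedenko limit (GEV) and threshold excesses via Pickands-Balkema-de Haan (Generalized Pareto), can be sketched with standard SciPy tools. The data, threshold choice, and parameter values below are hypothetical illustrations, not results from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic "annual maxima" series (hypothetical data, for illustration only).
annual_maxima = stats.genextreme.rvs(c=-0.1, loc=50.0, scale=10.0,
                                     size=60, random_state=rng)

# Block-maxima approach: fit a GEV (Fisher-Tippett-Gnedenko limit law).
c, loc, scale = stats.genextreme.fit(annual_maxima)

# 100-year return level: the value exceeded with probability 1/100 per year.
rl_100 = stats.genextreme.isf(1.0 / 100.0, c, loc, scale)

# Peak-over-threshold approach: exceedances over a high threshold are
# approximately Generalized Pareto (Pickands-Balkema-de Haan theorem).
daily = rng.gumbel(loc=10.0, scale=5.0, size=20000)   # hypothetical daily series
u = np.quantile(daily, 0.99)                          # threshold choice is critical
excesses = daily[daily > u] - u
xi, gp_loc, gp_scale = stats.genpareto.fit(excesses, floc=0.0)

print(rl_100, u, xi)
```

Note that SciPy's `genextreme` shape parameter `c` has the opposite sign of the usual extreme-value shape ξ, a common source of confusion when comparing fits across software.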
Using Extreme Tropical Precipitation Statistics to Constrain Future Climate States
NASA Astrophysics Data System (ADS)
Igel, M.; Biello, J. A.
2017-12-01
Tropical precipitation is characterized by a rapid growth in mean intensity as the column humidity increases. This behavior is examined in both a cloud resolving model and with high-resolution observations of precipitation and column humidity from CloudSat and AIRS, respectively. The model and the observations exhibit remarkable consistency and suggest a new paradigm for extreme precipitation. We show that the total precipitation can be decomposed into a product of contributions from a mean intensity, a probability of precipitation, and a global PDF of column humidity values. We use the modeling and observational results to suggest simple, analytic forms for each of these functions. The analytic representations are then used to construct a simple expression for the global accumulated precipitation as a function of the parameters of each of the component functions. As the climate warms, extreme precipitation intensity and global precipitation are expected to increase, though at different rates. When these predictions are incorporated into the new analytic expression for total precipitation, predictions for changes due to global warming to the probability of precipitation and the PDF of column humidity can be made. We show that strong constraints can be imposed on the future shape of the PDF of column humidity but that only weak constraints can be set on the probability of precipitation. These are largely imposed by the intensification of extreme precipitation. This result suggests that understanding precisely how extreme precipitation responds to climate warming is critical to predicting other impactful properties of global hydrology. The new framework can also be used to confirm and discount existing theories for shifting precipitation.
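The decomposition described above, total precipitation as the product of a mean intensity, a probability of precipitation, and a PDF of column humidity, integrated over humidity, can be sketched numerically. The functional forms and parameter values below are illustrative assumptions, not the analytic fits from the abstract.

```python
import numpy as np

# Column humidity grid (mm). All functional forms and parameter values
# below are illustrative assumptions.
w = np.linspace(0.0, 80.0, 801)
dw = w[1] - w[0]

intensity = 0.5 * np.exp(0.08 * w)                 # mean rain intensity, mm/h
prob = 1.0 / (1.0 + np.exp(-(w - 55.0) / 4.0))     # probability of precipitation
pdf = np.exp(-0.5 * ((w - 45.0) / 10.0) ** 2)      # humidity PDF (unnormalized)
pdf /= pdf.sum() * dw                              # normalize to unit area

# Total (accumulated) precipitation as the integral of the product of the
# three component functions over the humidity distribution.
total = np.sum(intensity * prob * pdf) * dw
print(total)
```

In this framing, a warming-driven change in `total` and in `intensity` jointly constrains how `prob` and `pdf` are allowed to shift, which is the abstract's central argument.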
Regional Frequency and Uncertainty Analysis of Extreme Precipitation in Bangladesh
NASA Astrophysics Data System (ADS)
Mortuza, M. R.; Demissie, Y.; Li, H. Y.
2014-12-01
The increased frequency of extreme precipitation events, especially those with multiday durations, is responsible for recent urban floods and the associated significant losses of lives and infrastructure in Bangladesh. Reliable and routinely updated estimates of the frequency of occurrence of such extreme precipitation events are thus important for designing up-to-date hydraulic structures and stormwater drainage systems that can effectively minimize future risk from similar events. In this study, we have updated the intensity-duration-frequency (IDF) curves for Bangladesh using daily precipitation data from 1961 to 2010 and quantified the associated uncertainties. Regional frequency analysis based on L-moments is applied to the 1-day, 2-day, and 5-day annual maximum precipitation series because of its advantages over at-site estimation. The regional frequency approach pools information from climatologically similar sites to make reliable quantile estimates, provided that the pooling group is homogeneous and of reasonable size. We have used the region-of-influence (ROI) approach, along with a homogeneity measure based on L-moments, to identify homogeneous pooling groups for each site. Five three-parameter distributions (Generalized Logistic, Generalized Extreme Value, Generalized Normal, Pearson Type III, and Generalized Pareto) are used for a thorough selection of appropriate models that fit the sample data. Uncertainties related to the selection of the distributions and to the historical data are quantified using Bayesian Model Averaging and balanced bootstrap approaches, respectively. The results from this study can be used to update the current design and management of hydraulic structures as well as to explore spatio-temporal variations of extreme precipitation and the associated risk.
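The L-moments on which such regional frequency analysis rests are computed from probability-weighted moments of the sorted sample. A minimal sketch of the standard unbiased estimators (Hosking-style), applied to a hypothetical annual-maximum series:

```python
import numpy as np

def sample_l_moments(x):
    """First three sample L-moments and the L-skewness ratio t3.

    Uses the unbiased probability-weighted-moment estimators b_r:
    l1 = b0, l2 = 2*b1 - b0, l3 = 6*b2 - 6*b1 + b0, t3 = l3/l2.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    l1 = b0
    l2 = 2.0 * b1 - b0
    l3 = 6.0 * b2 - 6.0 * b1 + b0
    return l1, l2, l3 / l2

# Hypothetical annual-maximum precipitation series (mm), illustration only.
rng = np.random.default_rng(0)
amax = rng.gumbel(loc=80.0, scale=25.0, size=50)
l1, l2, t3 = sample_l_moments(amax)
print(l1, l2, t3)
```

Homogeneity measures and distribution selection in regional analysis are built on exactly these ratios (L-CV `l2/l1`, L-skewness `t3`) pooled across sites.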
Samiee, Farzaneh; Samiee, Keivandokht
2017-01-01
There is limited research on the effect of electromagnetic fields on aquatic organisms, especially freshwater fish species. This study was conducted to evaluate the effect of extremely low frequency electromagnetic field (ELF-EMF) (50 Hz) exposure on the brain histopathology of Cyprinus carpio, one of the important species of the Caspian Sea with significant economic value. A total of 200 healthy fish were used in this study. They were classified randomly into two groups: a sham-exposed group and an experimental group, which was exposed to five different magnetic field intensities (0.1, 1, 3, 5, and 7 mT) at two different exposure times (0.5 and 1 h). Histologic results indicate that exposure of C. carpio to artificial ELF-EMF caused severe histopathological changes in the brain at field intensities ≥3 mT, leading to brain necrosis. Field intensity and duration of exposure were key parameters in the induction of brain lesions. Further studies are needed to elucidate the exact mechanism of EMF exposure effects on the brain.
Solar Extreme UV radiation and quark nugget dark matter model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhitnitsky, Ariel, E-mail: arz@phas.ubc.ca
2017-10-01
We advocate the idea that the surprising emission of extreme ultraviolet (EUV) radiation and soft x-rays from the Sun is powered externally by incident dark matter (DM) particles. The energy and the spectral shape of this otherwise unexpected solar irradiation are estimated within the quark nugget dark matter model. This model was originally invented as a natural explanation of the observed ratio Ω_dark ∼ Ω_visible, when the DM and visible matter densities assume values of the same order of magnitude. This generic consequence of the model is a result of the common origin of both types of matter, which are formed during the same QCD transition and are both proportional to the same fundamental dimensional parameter Λ_QCD. We also present arguments suggesting that the transient brightening-like 'nanoflares' in the Sun may be related to the annihilation events which inevitably occur in the solar atmosphere within this dark matter scenario.
NASA Astrophysics Data System (ADS)
Casas-Castillo, M. Carmen; Rodríguez-Solà, Raúl; Navarro, Xavier; Russo, Beniamino; Lastra, Antonio; González, Paula; Redaño, Angel
2018-01-01
The fractal behavior of extreme rainfall intensities registered between 1940 and 2012 by the Retiro Observatory of Madrid (Spain) has been examined, and a simple scaling regime ranging from 25 min to 3 days of duration has been identified. Thus, an intensity-duration-frequency (IDF) master equation of the location has been constructed in terms of the simple scaling formulation. The scaling behavior of probable maximum precipitation (PMP) for durations between 5 min and 24 h has also been verified. For the statistical estimation of the PMP, an envelope curve of the frequency factor (k_m) based on a total of 10,194 station-years of annual maximum rainfall from 258 stations in Spain has been developed. This curve could be useful to estimate suitable values of PMP at any point of the Iberian Peninsula from basic statistical parameters (mean and standard deviation) of its rainfall series.
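Under statistical simple scaling, quantiles at any duration follow from the unit-duration quantile through a single scaling exponent, H(D, T) = Dⁿ · H1(T). A minimal sketch of such an IDF master equation, with the unit-duration quantile modeled by a Gumbel (EV1) frequency factor; all parameter values (mean, coefficient of variation, exponent) are hypothetical, not the Retiro fits:

```python
import math

def idf_simple_scaling(duration_h, return_period_y,
                       mu1=20.0, cv=0.35, n=0.45):
    """Rainfall depth H(D, T) under statistical simple scaling.

    H(D, T) = D**n * H1(T): the quantile at any duration D is the
    unit-duration (1 h) quantile H1(T) scaled by one exponent n.
    H1(T) uses the Gumbel method-of-moments frequency factor; mu1 is
    the mean unit-duration depth (mm), cv its coefficient of variation.
    All parameter values here are hypothetical.
    """
    F = 1.0 - 1.0 / return_period_y                     # non-exceedance prob.
    k_t = -(math.sqrt(6.0) / math.pi) * (0.5772 + math.log(-math.log(F)))
    h1 = mu1 * (1.0 + cv * k_t)                         # unit-duration quantile
    return duration_h ** n * h1                         # scale to duration D

# Depths grow with both duration and return period:
print(idf_simple_scaling(1, 10), idf_simple_scaling(24, 10))
```

The attraction of this form is parsimony: one exponent `n` plus the unit-duration frequency curve reproduce the whole IDF family.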
NASA Astrophysics Data System (ADS)
Nickles, Cassandra; Goodman, Matthew; Saez, Jose; Issakhanian, Emin
2016-11-01
California's current drought has renewed public interest in recycled water from Water Reclamation Plants (WRPs). It is critical that the recycled water meets public health standards. This project consists of simulating the transport of an instantaneous conservative tracer through the WRP chlorine contact tanks. Local recycled water regulations stipulate a minimum 90-minute modal contact time during disinfection at peak dry weather design flow. In-situ testing is extremely difficult given flowrate dependence on real world sewage line supply and recycled water demand. Given as-built drawings and operation parameters, the chlorine contact tanks are modeled to simulate extreme situations, which may not meet regulatory standards. The turbulent flow solutions are used as the basis to model the transport of a turbulently diffusing conservative tracer added instantaneously to the inlet of the reactors. This tracer simulates the transport through advection and dispersion of chlorine in the WRPs. Previous work validated the models against experimental data. The current work shows the predictive value of the simulations.
Ely, D. Matthew
2006-01-01
Recharge is a vital component of the ground-water budget, and methods for estimating it range from extremely complex to relatively simple. The most commonly used techniques, however, are limited by the scale of application. One method that can be used to estimate ground-water recharge relies on process-based models that compute distributed water budgets on a watershed scale. These models should be evaluated to determine which model parameters are the dominant controls on simulated ground-water recharge. Seven existing watershed models from different humid regions of the United States were chosen to analyze the sensitivity of simulated recharge to model parameters. Parameter sensitivities were determined using a nonlinear regression computer program to generate a suite of diagnostic statistics. The statistics identify the model parameters that have the greatest effect on simulated ground-water recharge and compare and contrast the hydrologic system responses to those parameters. Simulated recharge in the Lost River and Big Creek watersheds in Washington State was sensitive to small changes in air temperature. The Hamden watershed model in west-central Minnesota was developed to investigate the relations of wetlands and other landscape features to runoff processes. Excess soil moisture in the Hamden watershed simulation was preferentially routed to wetlands instead of to the ground-water system, resulting in little sensitivity of simulated recharge to any parameter. Simulated recharge in the North Fork Pheasant Branch watershed, Wisconsin, demonstrated the greatest sensitivity to parameters related to evapotranspiration. Three watersheds were simulated as part of the Model Parameter Estimation Experiment (MOPEX). 
Parameter sensitivities for the MOPEX watersheds (Amite River, Louisiana and Mississippi; English River, Iowa; and South Branch Potomac River, West Virginia) were similar, with recharge most sensitive to small changes in air temperature and a user-defined flow-routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of parameter values to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. A rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) explain simulated results.
A Test-Length Correction to the Estimation of Extreme Proficiency Levels
ERIC Educational Resources Information Center
Magis, David; Beland, Sebastien; Raiche, Gilles
2011-01-01
In this study, the estimation of extremely large or extremely small proficiency levels, given the item parameters of a logistic item response model, is investigated. On one hand, the estimation of proficiency levels by maximum likelihood (ML), despite being asymptotically unbiased, may yield infinite estimates. On the other hand, with an…
A comparative assessment of statistical methods for extreme weather analysis
NASA Astrophysics Data System (ADS)
Schlögl, Matthias; Laaha, Gregor
2017-04-01
Extreme weather exposure assessment is of major importance for scientists and practitioners alike. We compare different extreme value approaches and fitting methods with respect to their value for assessing extreme precipitation and temperature impacts. Based on an Austrian data set from 25 meteorological stations representing diverse meteorological conditions, we assess the added value of partial duration series over the standardly used annual maxima series in order to give recommendations for performing extreme value statistics of meteorological hazards. Results show the merits of the robust L-moment estimation, which yielded better results than maximum likelihood estimation in 62% of all cases. At the same time, the results question the general assumption that the threshold excess approach (employing partial duration series, PDS) is superior to the block maxima approach (employing annual maxima series, AMS) due to its information gain. For low return periods (non-extreme events) the PDS approach tends to overestimate return levels as compared to the AMS approach, whereas the opposite behavior was found for high return levels (extreme events). In extreme cases, an inappropriate threshold was shown to lead to considerable biases that may far outweigh the possible gain of information from including additional extreme events. This effect was visible neither from the square-root criterion nor from the standardly used graphical diagnosis (mean residual life plot), but only from a direct comparison of AMS and PDS in synoptic quantile plots. We therefore recommend performing the AMS and PDS approaches simultaneously in order to select the best-suited approach. This will make the analyses more robust, both in cases where threshold selection and dependency introduce biases into the PDS approach and in cases where the AMS contains non-extreme events that may introduce similar biases. 
For assessing the performance of extreme events, we recommend conditional performance measures that focus on rare events only, in addition to the standardly used unconditional indicators. The findings of this study are relevant for a broad range of environmental variables, including meteorological and hydrological quantities.
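The L-moment vs maximum-likelihood comparison central to the study above can be illustrated by fitting the same short annual-maxima sample both ways: MLE via SciPy, and L-moments via Hosking's standard approximation for the GEV. Everything below (data, sample size, parameter values) is a hypothetical sketch, not the Austrian analysis.

```python
import math
import numpy as np
from scipy import stats

def gev_lmom_fit(x):
    """GEV parameters from sample L-moments (Hosking's approximation).

    Returns (kappa, loc, scale) in Hosking's convention, where kappa > 0
    means an upper-bounded distribution (SciPy's shape c equals kappa).
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2 = b0, 2.0 * b1 - b0
    t3 = (6.0 * b2 - 6.0 * b1 + b0) / l2
    z = 2.0 / (3.0 + t3) - math.log(2.0) / math.log(3.0)
    kappa = 7.8590 * z + 2.9554 * z ** 2
    scale = l2 * kappa / ((1.0 - 2.0 ** -kappa) * math.gamma(1.0 + kappa))
    loc = l1 - scale * (1.0 - math.gamma(1.0 + kappa)) / kappa
    return kappa, loc, scale

rng = np.random.default_rng(1)
ams = stats.genextreme.rvs(c=0.1, loc=40.0, scale=12.0, size=30,
                           random_state=rng)      # short, hypothetical AMS

k_lm, loc_lm, sc_lm = gev_lmom_fit(ams)           # robust L-moment fit
k_ml, loc_ml, sc_ml = stats.genextreme.fit(ams)   # maximum likelihood fit
print(k_lm, k_ml)
```

On short series like this, the two shape estimates can differ noticeably, which is why the study's recommendation to run both and compare return levels is practical advice rather than formality.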
NASA Astrophysics Data System (ADS)
Leclerc, D. F.
2016-12-01
Northern-hemisphere (NH) heatwaves, during which temperatures rise 5 standard deviations (SD, sigma) above the historical mean temperature, mu, are becoming frequent; these events skew temperature-anomaly (delta T) profiles towards extreme values. Although general extreme value (GEV) distributions have modeled precipitation data, their application to temperatures has met with limited success. This work presents a modified three-parameter (mu, sigma, and tau (skew)) Exponential-Gaussian (eGd) model that hindcasts decadal NH land winter (DJF) and summer (JJA) delta Ts from 1951 to 2011, and forecasts profiles for a business-as-usual (BAU) scenario for 2061-2071. We accessed 12 numerical binned (0.05 °C/bin) z-scored NH decadal datasets (posted online until August 2015) from the publicly available website http://www.columbia.edu/~mhs119/PerceptionsAndDice/ mentioned in Hansen et al., PNAS 109 E2415-E2423 (2012) and stated to be in the public domain. No pre-processing was done. Parameters were calculated for the 12 NH datasets, pasted into Microsoft Excel™, through the method of moments for 1-tail distributions and through the BEST deconvolution program described by Pommé and Marroyo, Applied Radiation and Isotopes 96 148-153 (2015) for 2-tail distributions. We used maximum likelihood estimation (MLE), the residual sum of squares (RSS), and the F-test to find optimal parameter values. Calculated 1st (= sigma + tau) and 2nd (= sigma² + tau²) moments were found to be within 0.5% of observed values. Land delta Ts were recovered from the z-score values by multiplying the winter data by its SD (1.2 °C) and likewise the summer data by 0.6 °C. Results were all within 0.05 °C of 10-year averages from the GHCNv3 NH land dataset. 
Assuming BAU (increases from 2.1 to 2.6 ppm/y CO2) and using temperature rises of 0.27 °C and 0.35 °C per decade for summer and winter, respectively, and forecasting to 2071, we obtain for the transient climate response at doubled CO2 (560 ppm) mean delta Ts of 2.39 °C for summer and 2.97 °C for NH winter, thereby widely missing the agreed 2 °C international target, which will be reached around 2040 at 465 ppm CO2. In summary, barring volcanic eruptions and/or El Niño events, winter delta Ts will exceed 6 °C over 5% of land area, whereas summer delta Ts will surpass 3.6 °C over 23% of land area, both at the 5 sigma level.
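A method-of-moments fit of an exponentially modified Gaussian, the family behind the eGd model above, can be sketched from the closed-form moment relations mean = mu + tau, var = sigma² + tau², skew = 2·tau³/var^(3/2). This is an illustrative sketch only; it is not the BEST deconvolution program cited in the abstract, and the sample below is synthetic.

```python
import math
import numpy as np

def exgaussian_moments_fit(x):
    """Exponentially modified Gaussian parameters by the method of moments.

    Solves mean = mu + tau, var = sigma^2 + tau^2,
    skew = 2*tau^3 / var^1.5 for (mu, sigma, tau).
    """
    x = np.asarray(x, dtype=float)
    m, v = x.mean(), x.var()
    g1 = np.mean((x - m) ** 3) / v ** 1.5           # sample skewness
    tau = v ** 0.5 * (g1 / 2.0) ** (1.0 / 3.0)      # from the skewness equation
    sigma = math.sqrt(max(v - tau ** 2, 1e-12))
    mu = m - tau
    return mu, sigma, tau

# Hypothetical right-skewed temperature-anomaly sample (z-scores).
rng = np.random.default_rng(3)
z = rng.normal(0.0, 1.0, 5000) + rng.exponential(0.8, 5000)
mu, sigma, tau = exgaussian_moments_fit(z)
print(mu, sigma, tau)
```

For the synthetic sample (true mu=0, sigma=1, tau=0.8) the recovered parameters land close to their generating values, showing why three moments suffice for this three-parameter family.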
NASA Astrophysics Data System (ADS)
Coronel-Brizio, H. F.; Hernández-Montoya, A. R.
2005-08-01
The so-called Pareto-Levy or power-law distribution has been successfully used as a model to describe probabilities associated with extreme variations of stock market indexes worldwide. The selection of the threshold parameter from empirical data, and consequently the determination of the exponent of the distribution, is often done using a simple graphical method based on a log-log scale, where a power-law probability plot shows a straight line with slope equal to the exponent of the power-law distribution. This procedure can be considered subjective, particularly with regard to the choice of the threshold or cutoff parameter. In this work, a more objective procedure based on a statistical measure of the discrepancy between the empirical and the Pareto-Levy distribution is presented. The technique is illustrated for data sets from the New York Stock Exchange (DJIA) and the Mexican Stock Market (IPC).
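One common discrepancy-based scheme of this kind picks the threshold that minimizes the Kolmogorov-Smirnov distance between the empirical tail and the fitted Pareto (the Clauset-Shalizi-Newman recipe). The sketch below illustrates that general idea on synthetic data; it is not necessarily the exact statistic used in the paper.

```python
import numpy as np

def pareto_threshold_ks(x, n_candidates=50):
    """Select a power-law threshold by minimizing the KS discrepancy.

    For each candidate xmin, the tail exponent alpha (density ~ x**-alpha)
    is fit by maximum likelihood (Hill estimator), and the KS distance
    between the empirical tail CDF and the fitted Pareto CDF is computed;
    the xmin with the smallest distance wins.
    """
    x = np.sort(np.asarray(x, dtype=float))
    candidates = np.unique(np.quantile(x, np.linspace(0.5, 0.99, n_candidates)))
    best = (np.inf, None, None)
    for xmin in candidates:
        tail = x[x >= xmin]
        if tail.size < 10:
            continue
        alpha = 1.0 + tail.size / np.sum(np.log(tail / xmin))  # Hill MLE
        emp = np.arange(1, tail.size + 1) / tail.size          # empirical CDF
        fit = 1.0 - (xmin / tail) ** (alpha - 1.0)             # Pareto CDF
        d = np.max(np.abs(emp - fit))
        if d < best[0]:
            best = (d, xmin, alpha)
    return best  # (ks_distance, xmin, alpha)

rng = np.random.default_rng(7)
returns = rng.pareto(2.5, 5000) + 1.0   # synthetic heavy-tailed magnitudes
d, xmin, alpha = pareto_threshold_ks(returns)
print(d, xmin, alpha)
```

For the synthetic Pareto sample the recovered density exponent sits near the true value of 3.5, and the chosen `xmin` is the point past which the power law actually holds.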
EMITTING ELECTRONS AND SOURCE ACTIVITY IN MARKARIAN 501
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mankuzhiyil, Nijil; Ansoldi, Stefano; Persic, Massimo
2012-07-10
We study the variation of the broadband spectral energy distribution (SED) of the BL Lac object Mrk 501 as a function of source activity, from quiescent to flaring. Through χ²-minimization we model eight simultaneous SED data sets with a one-zone synchrotron self-Compton (SSC) model, and examine how the model parameters vary with source activity. The emerging variability pattern of Mrk 501 is complex, with the Compton component arising from γ-e scatterings that sometimes are (mostly) Thomson and sometimes (mostly) extreme Klein-Nishina. This can be seen from the variation of the Compton-to-synchrotron peak distance according to source state. The underlying electron spectra are faint/soft in quiescent states and bright/hard in flaring states. A comparison with Mrk 421 suggests that the typical values of the SSC parameters are different in the two sources; however, in both jets the energy density is particle-dominated in all states.
Broadband polygonal invisibility cloak for visible light
Chen, Hongsheng; Zheng, Bin
2012-01-01
Invisibility cloaks have recently become a topic of considerable interest thanks to theoretical work on transformation optics and conformal mapping. The design of a cloak involves extreme values of material properties and spatially dependent parameter tensors, which are very difficult to implement. The realization of an isolated invisibility cloak for visible light, an important step towards achieving a fully movable invisibility cloak, has remained elusive. Here, we report the design and experimental demonstration of an isolated polygonal cloak for visible light. The cloak is made of several elements whose electromagnetic parameters are designed by a linear homogeneous transformation method. Theoretical analysis shows the proposed cloak can be rendered invisible to rays incident from all directions. Using natural anisotropic materials, a simplified hexagonal cloak which works for six incident directions is fabricated for experimental demonstration. The performance is validated over a broadband visible spectrum. PMID:22355767
Novoselova, Elena G; Glushkova, Olga V; Khrenov, Maxim O; Novoselova, Tatyana V; Lunin, Sergey M; Fesenko, Eugeny E
2017-05-01
To clarify whether extremely low-level microwaves (MW), alone or in combination with a p38 inhibitor, affect immune cell responses to inhalation exposure of mice to low-level toluene, the cytokine profile, heat shock protein expression, and the activity of several signal cascades, namely NF-κB, SAPK/JNK, IRF-3, p38 MAPK, and TLR4, were measured in spleen lymphocytes of mice exposed to air-delivered toluene (0.6 mg/m³), extremely low-level microwaves (8.15-18 GHz, 1 μW/cm², 1 Hz swinging frequency), or the combined action of these two factors. A single exposure to air-delivered low-level toluene induced activation of the NF-κB, SAPK/JNK, IRF-3, p38 MAPK, and TLR4 pathways. Furthermore, air toluene induced the expression of Hsp72 and enhanced IL-1, IL-6, and TNF-α in blood plasma, which is indicative of a pro-inflammatory response. Exposure to MW alone also resulted in the enhancement of plasma cytokine values (e.g. IL-6, TNF-α, and IFN-γ) and activation of the NF-κB, p38 MAPK, and especially the TLR4 pathways in splenic lymphocytes. Paradoxically, pre-exposure to MW partially recovered or normalized the lymphocyte parameters in the toluene-exposed mice, while the p38 inhibitor XI additionally increased the protective activity of microwaves by down-regulating MAPKs (JNK and p38) and IKK, as well as the expression of TLR4 and Hsp90-α. The results suggest that exposure to low-intensity MW under specific conditions may recover immune parameters in mice undergoing inhalation exposure to low-level toluene via mechanisms involving cellular signaling.
Evaluation of recent extreme drought events in the Amazon basin using remote sensing data
NASA Astrophysics Data System (ADS)
Panisset, Jéssica S.; Gouveia, Célia M.; Libonati, Renata; Peres, Leonardo; Machado-Silva, Fausto; França, Daniela A.; França, José R. A.
2017-04-01
The Amazon basin has experienced several intense droughts, among which the recent ones in 2005 and 2010 stand out. Climate models suggest these events will become even more frequent due to higher concentrations of greenhouse gases, and they are also driven forward by alterations in forest dynamics. Environmental and social impacts demand the identification of these intense droughts and of the behavior of the climate parameters that affect vegetation. The present study also identifies a recent intense drought in the Amazon basin during 2015. Meteorological parameters and vegetation indices suggest this event was the most severe yet registered in the region. We have used land surface temperature (LST), vegetation indices, rainfall, and shortwave radiation from 2000 to 2015 to analyze and compare the droughts of 2005, 2010, and 2015. Our results show singularities among the three extreme climate events. The austral winter was the most affected season in 2005 and 2010, but not in 2015, when the austral summer presented extreme conditions. Precipitation places the epicenter of the 2005 drought in the western Amazon, corroborating previous studies. In 2010, the western region was strongly affected again, together with the northwestern and southeastern areas. However, the 2015 epicenters were concentrated in the east of the basin. In 2015, shortwave radiation exceeded the maximum values of 2005, and temperature the maximum value of 2010. Vegetation indices showed both positive and negative anomalies. Despite the heterogeneous response of the Amazon forest to drought, hybrid vegetation indices using NDVI (Normalized Difference Vegetation Index) and LST highlight the exceptionality of the 2015 drought episode, which exhibits higher vegetation water stress than the 2010 and 2005 cases. Finally, this work has shown how meteorological parameters influence droughts and their effects on vegetation in the Amazon basin. The complexity of the climate, ecosystem heterogeneity, and the high diversity of the Amazon forest are responsible for the idiosyncrasies of each drought. 
All this information improves the predictability of future climate scenarios and of their effects on the environment. The research performed was supported by the FAPESP/FCT Project Brazilian Fire-Land-Atmosphere System (BrFLAS) (1389/2014 and 2015/01389-4), by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) through a Master grant from PPGM/IGEO/UFRJ (first author), and by the Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ) through grants E-26/201.521/2014; E-26/101.423/2014; E-26/201.221/2015; and E-26/203.174/2016.
The Microphysical Structure of Extreme Precipitation as Inferred from Ground-Based Raindrop Spectra.
NASA Astrophysics Data System (ADS)
Uijlenhoet, Remko; Smith, James A.; Steiner, Matthias
2003-05-01
The controls on the variability of raindrop size distributions in extreme rainfall, and the associated radar reflectivity-rain rate relationships, are studied using a scaling-law formalism for the description of raindrop size distributions and their properties. This scaling-law formalism enables a separation of the effects of changes in the scale of the raindrop size distribution from those in its shape. Parameters controlling the scale and shape of the scaled raindrop size distribution may be related to the microphysical processes generating extreme rainfall. A global scaling analysis of raindrop size distributions corresponding to rain rates exceeding 100 mm h⁻¹, collected during the 1950s with the Illinois State Water Survey raindrop camera in Miami, Florida, reveals that extreme rain rates tend to be associated with conditions in which the variability of the raindrop size distribution is strongly number-controlled (i.e., characteristic drop sizes are roughly constant). This means that changes in the properties of raindrop size distributions in extreme rainfall are largely produced by varying raindrop concentrations. As a result, rainfall integral variables (such as radar reflectivity and rain rate) are roughly proportional to each other, which is consistent with the concept of the so-called equilibrium raindrop size distribution and has profound implications for radar measurement of extreme rainfall. A time series analysis of two contrasting extreme rainfall events supports the hypothesis that the variability of raindrop size distributions for extreme rain rates is strongly number-controlled. However, this analysis also reveals that the actual shapes of the (measured and scaled) spectra may differ significantly from storm to storm. This implies that the exponents of power-law radar reflectivity-rain rate relationships may be similar, and close to unity, for different extreme rainfall events, but their prefactors may differ substantially. 
Consequently, there is no unique radar reflectivity-rain rate relationship for extreme rain rates, but the variability is essentially reduced to one free parameter (i.e., the prefactor). It is suggested that this free parameter may be estimated on the basis of differential reflectivity measurements in extreme rainfall.
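The storm-to-storm behavior described above, a reflectivity-rain rate power law Z = a·R^b with near-unity exponent b but variable prefactor a, can be mimicked with a toy log-log least-squares fit. The (R, Z) pairs below are synthetic and the parameter values hypothetical, purely to illustrate the fitting step.

```python
import numpy as np

# Hypothetical (R, Z) pairs for an extreme-rain episode in which Z and R
# are nearly proportional (number-controlled spectra): Z = a * R**b, b ~ 1.
rng = np.random.default_rng(11)
rain = rng.uniform(100.0, 250.0, 200)              # rain rate R, mm/h
a_true, b_true = 250.0, 1.05
refl = a_true * rain ** b_true * np.exp(rng.normal(0.0, 0.05, 200))

# Least-squares fit of the power law in log-log space:
# log Z = b * log R + log a.
b_fit, log_a_fit = np.polyfit(np.log(rain), np.log(refl), 1)
a_fit = np.exp(log_a_fit)
print(a_fit, b_fit)
```

In the paper's framing, `b_fit` would come out near unity for any extreme-rain episode, so calibrating the single free prefactor `a_fit` (e.g. from differential reflectivity) is what remains storm-specific.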
Zanchi, Davide; Viallon, Magalie; Le Goff, Caroline; Millet, Grégoire P.; Giardini, Guido; Croisille, Pierre; Haller, Sven
2017-01-01
Background: Pioneering studies demonstrate the impact of extreme sport load on the human brain, leading to conditions threatening athletes' health such as cerebral edema. The investigation of brain water diffusivity, allowing the measurement of intercellular water and the assessment of cerebral edema, can contribute greatly to the investigation of the effects of extreme sports on the brain. We therefore assessed the effect of supra-physiological effort (extreme distance and elevation changes) in mountain ultra-marathon (MUM) athletes, combining for the first time brain magnetic resonance imaging (MRI) and blood parameters. Methods: This longitudinal study included 19 volunteers (44.2 ± 9.5 years) finishing a MUM (330 km, elevation gain +24,000 m). Quantitative measurements of brain diffusion-weighted images (DWI) were performed at three time points: before the race, upon arrival, and after 48 h. Multiple blood biomarkers were simultaneously investigated. Data analyses included comparisons of brain apparent diffusion coefficient (ADC) and physiological data between the three time points. Results: The whole-brain ADC significantly increased from baseline to arrival (p = 0.005) and then significantly decreased at recovery (p = 0.005) to lower values than at baseline (p = 0.005). While sodium, potassium, calcium, and chloride as well as hematocrit (HCT) changed over time, the serum osmolality remained constant. Significant correlations were found between whole-brain ADC changes and variations in osmolality (p = 0.01), cholesterol (p = 0.009), c-reactive protein (p = 0.04), sodium (p = 0.01), and chloride (p = 0.002) plasma levels. Conclusions: These results suggest a relative increase of the intercellular volume upon arrival, and subsequently its reduction to lower values than at baseline, indicating that even after 48 h the brain has not fully recovered to its equilibrium state. 
Even though serum electrolytes may only indirectly indicate modifications at the brain level because of the blood-brain barrier, the results concerning osmolality suggest that body water might directly influence the change in cerebral ADC. These findings therefore establish a direct link between general brain intercellular water content and the physiological biomarker modifications produced by extreme sport. PMID:28105018
Barth, Gilbert R.; Hill, M.C.
2005-01-01
This paper evaluates the importance of seven types of parameters to virus transport: hydraulic conductivity, porosity, dispersivity, sorption rate and distribution coefficient (representing physical-chemical filtration), and in-solution and adsorbed inactivation (representing virus inactivation). The first three parameters relate to subsurface transport in general, while the last four, the sorption rate, distribution coefficient, and in-solution and adsorbed inactivation rates, represent the interaction of viruses with the porous medium and their ability to persist. The importance of four types of observations for estimating the virus-transport parameters is evaluated: hydraulic heads, flow, temporal moments of conservative-transport concentrations, and virus concentrations. The evaluations are conducted using one- and two-dimensional homogeneous simulations, designed from published field experiments, and recently developed sensitivity-analysis methods. Sensitivity to the transport-simulation time-step size is used to evaluate the importance of numerical solution difficulties. Results suggest that hydraulic conductivity, porosity, and sorption are most important to virus-transport predictions. Most observation types provide substantial information about hydraulic conductivity and porosity; only virus-concentration observations provide information about sorption and inactivation. The observations are not sufficient to estimate these important parameters uniquely. Even with all observation types, there is extreme parameter correlation between porosity and hydraulic conductivity and between the sorption rate and in-solution inactivation. Parameter estimation was accomplished by fixing the values of porosity and in-solution inactivation.
Detection and identification of concealed weapons using matrix pencil
NASA Astrophysics Data System (ADS)
Adve, Raviraj S.; Thayaparan, Thayananthan
2011-06-01
The detection and identification of concealed weapons is an extremely hard problem because the weak signature of the target is buried within the much stronger signal from the human body. This paper furthers the automatic detection and identification of concealed weapons by proposing an effective approach to obtaining the resonant frequencies in a measurement. The technique, based on the Matrix Pencil, a scheme for model-based parameter estimation, also provides amplitude information, hence giving a level of confidence in the results. Of specific interest is the fact that the Matrix Pencil is based on a singular value decomposition, making the scheme robust against noise.
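As an illustrative sketch of the idea (not the authors' implementation), the Matrix Pencil method builds a Hankel matrix from the sampled signal, filters noise with an SVD, and recovers the signal poles — and hence the resonant frequencies and damping factors — from the shift-invariance of the signal subspace:

```python
import numpy as np

def matrix_pencil(x, M, dt):
    """Estimate poles of a sum of damped complex exponentials via the
    Matrix Pencil method (model order M, sampling interval dt).
    Illustrative sketch; the paper's exact formulation may differ."""
    N = len(x)
    L = N // 3                      # pencil parameter, typically N/3..N/2
    # Hankel data matrix built from the samples
    Y = np.array([x[i:i + L + 1] for i in range(N - L)])
    # SVD-based noise filtering: keep the M dominant right singular vectors
    _, _, Vh = np.linalg.svd(Y, full_matrices=False)
    S = Vh[:M]
    # Shift-invariance of the signal subspace gives the poles as the
    # eigenvalues of the pencil formed by the two shifted sub-matrices
    z = np.linalg.eigvals(S[:, 1:] @ np.linalg.pinv(S[:, :-1]))
    order = np.argsort(np.angle(z))
    freqs = np.angle(z)[order] / (2 * np.pi * dt)   # resonant frequencies
    damping = -np.log(np.abs(z))[order] / dt        # damping factors
    return freqs, damping

# Synthetic check: two damped resonances at 1.0 Hz and 2.5 Hz
dt = 0.01
n = np.arange(200)
x = (np.exp((-0.2 + 2j * np.pi * 1.0) * n * dt)
     + 0.5 * np.exp((-0.5 + 2j * np.pi * 2.5) * n * dt))
freqs, damping = matrix_pencil(x, M=2, dt=dt)
print(freqs)   # ≈ [1.0, 2.5]
```

The pole magnitudes carry the damping and the residues (not computed here) carry the amplitude information mentioned in the abstract.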
Strong cosmic censorship in de Sitter space
NASA Astrophysics Data System (ADS)
Dias, Oscar J. C.; Eperon, Felicity C.; Reall, Harvey S.; Santos, Jorge E.
2018-05-01
Recent work indicates that the strong cosmic censorship hypothesis is violated by nearly extremal Reissner-Nordström-de Sitter black holes. It was argued that perturbations of such a black hole decay sufficiently rapidly that the perturbed spacetime can be extended across the Cauchy horizon as a weak solution of the equations of motion. In this paper we consider the case of Kerr-de Sitter black holes. We find that, for any nonextremal value of the black hole parameters, there are quasinormal modes which decay sufficiently slowly to ensure that strong cosmic censorship is respected. Our analysis covers both scalar field and linearized gravitational perturbations.
Two-flavor hybrid stars with the Dyson-Schwinger quark model
NASA Astrophysics Data System (ADS)
Wei, J. B.; Chen, H.; Schulze, H.-J.
2017-11-01
We study the properties of two-flavor quark matter in the Dyson-Schwinger model and investigate the possible consequences for hybrid neutron stars, with particular regard to the two-solar-mass limit. We find that with some extreme values of the model parameters, the mass fraction of two-flavor quark matter in heavy neutron stars can be as high as 30 percent and the possible energy release during the conversion from nucleonic neutron stars to hybrid stars can reach 10^52 erg. Supported by NSFC (11305144, 11475149, 11303023), Central Universities (CUGL 140609) in China, "NewCompStar," COST Action MP1304
Receive Mode Analysis and Design of Microstrip Reflectarrays
NASA Technical Reports Server (NTRS)
Rengarajan, Sembiam
2011-01-01
Traditionally microstrip or printed reflectarrays are designed using the transmit mode technique. In this method, the size of each printed element is chosen so as to provide the required value of the reflection phase such that a collimated beam results along a given direction. The reflection phase of each printed element is approximated using an infinite array model. The infinite array model is an excellent engineering approximation for a large microstrip array since the size or orientation of elements exhibits a slow spatial variation. In this model, the reflection phase from a given printed element is approximated by that of an infinite array of elements of the same size and orientation when illuminated by a local plane wave. Thus the reflection phase is a function of the size (or orientation) of the element, the elevation and azimuth angles of incidence of a local plane wave, and polarization. Typically, one computes the reflection phase of the infinite array as a function of several parameters such as size/orientation, elevation and azimuth angles of incidence, and in some cases for vertical and horizontal polarization. The design requires the selection of the size/orientation of the printed element to realize the required phase by interpolating or curve fitting all the computed data. This is a substantially complicated problem, especially in applications requiring a computationally intensive commercial code to determine the reflection phase. In dual polarization applications requiring rectangular patches, one needs to determine the reflection phase as a function of five parameters (dimensions of the rectangular patch, elevation and azimuth angles of incidence, and polarization). This is an extremely complex problem. The new method employs the reciprocity principle and reaction concept, two well-known concepts in electromagnetics to derive the receive mode analysis and design techniques. 
In the "receive mode design" technique, the reflection phase is computed for a plane wave incident on the reflectarray from the direction of the beam peak. In antenna applications with a single collimated beam, this method is extremely simple since all printed elements see the same angles of incidence. Thus the number of parameters is reduced by two when compared to the transmit mode design. The reflection phase computation as a function of five parameters in the rectangular patch array discussed previously is reduced to a computational problem with three parameters in the receive mode. Furthermore, if the beam peak is in the broadside direction, the receive mode design is polarization independent and the reflection phase computation is a function of two parameters only. For a square patch array, it is a function of the size, one parameter only, thus making it extremely simple.
Low-frequency fluctuations in vertical cavity lasers: Experiments versus Lang-Kobayashi dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torcini, Alessandro; Istituto Nazionale di Fisica Nucleare, Sezione di Firenze, via Sansone 1, 50019 Sesto Fiorentino; Barland, Stephane
2006-12-15
The limits of applicability of the Lang-Kobayashi (LK) model for a semiconductor laser with optical feedback are analyzed. The model equations, equipped with realistic values of the parameters, are investigated below the solitary laser threshold, where low-frequency fluctuations (LFFs) are usually observed. The numerical findings are compared with experimental data obtained for the selected polarization mode from a vertical cavity surface emitting laser (VCSEL) subject to polarization-selective external feedback. The comparison reveals the bounds within which the dynamics of the LK model can be considered realistic. In particular, it clearly demonstrates that the deterministic LK model, for realistic values of the linewidth enhancement factor α, reproduces the LFFs only as a transient dynamics towards one of the stationary modes with maximal gain. A reasonable reproduction of real data from VCSELs can be obtained only by considering the noisy LK model or, alternatively, the deterministic LK model for extremely high α values.
An analysis of annual maximum streamflows in Terengganu, Malaysia using TL-moments approach
NASA Astrophysics Data System (ADS)
Ahmad, Ummi Nadiah; Shabri, Ani; Zakaria, Zahrahtul Amani
2013-02-01
The TL-moments approach has been used to determine the best-fitting distributions to represent the annual series of maximum streamflow data over 12 stations in Terengganu, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the generalized Pareto (GPA), generalized logistic, and generalized extreme value distributions. The influence of TL-moments on the estimated probability distribution functions is examined by evaluating the relative root mean square error and relative bias of quantile estimates through Monte Carlo simulations. Boxplots are used to show the location of the median and the dispersion of the data, which helps in reaching decisive conclusions. For most of the cases, the results show that the TL-moments variant in which the single smallest value is trimmed from the conceptual sample (TL-moments (1,0)), combined with the GPA distribution, was the most appropriate at the majority of the stations for describing the annual maximum streamflow series in Terengganu, Malaysia.
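For orientation, the untrimmed special case TL(0,0) reduces to ordinary L-moments, which can be computed from probability-weighted moments; the sketch below estimates GEV parameters from them using Hosking's classical approximation (illustrative only — the study itself uses trimmed TL-moments and compares three distributions):

```python
import numpy as np
from math import gamma, log

def sample_lmoments(x):
    """First three sample L-moments via probability-weighted moments
    (the untrimmed TL(0,0) case; TL-moments generalize this by
    trimming extreme order statistics before forming the moments)."""
    x = np.sort(x)
    n = len(x)
    j = np.arange(n)
    b0 = x.mean()
    b1 = np.sum(j * x) / (n * (n - 1))
    b2 = np.sum(j * (j - 1) * x) / (n * (n - 1) * (n - 2))
    return b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0

def gev_lmom_fit(x):
    """GEV parameters (location xi, scale alpha, shape k, Hosking's
    convention) from L-moments via Hosking's rational approximation."""
    l1, l2, l3 = sample_lmoments(x)
    t3 = l3 / l2                                  # L-skewness
    c = 2.0 / (3.0 + t3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c ** 2
    alpha = l2 * k / ((1 - 2.0 ** (-k)) * gamma(1 + k))
    xi = l1 - alpha * (1 - gamma(1 + k)) / k
    return xi, alpha, k

# Check against a synthetic GEV sample (Hosking's parameterization)
rng = np.random.default_rng(0)
u = rng.uniform(size=50000)
x = 10.0 + 2.0 * (1 - (-np.log(u)) ** 0.1) / 0.1   # GEV inverse CDF
print(gev_lmom_fit(x))   # ≈ (10.0, 2.0, 0.1)
```

The same PWM machinery, applied to a trimmed sample, yields the TL-moment estimators compared in the paper.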
Extremal Optimization: Methods Derived from Co-Evolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boettcher, S.; Percus, A.G.
1999-07-13
We describe a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions, rather than "breeding" better components. In contrast to Genetic Algorithms, which operate on an entire "gene pool" of possible solutions, Extremal Optimization improves on a single candidate solution by treating each of its components as species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach yields an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.
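A minimal sketch of the idea on graph bipartitioning (the fitness definition and update rule here are illustrative choices, not necessarily the paper's): each node gets a local fitness, and the single adjustable parameter τ controls how sharply the worst-ranked components are targeted for mutation.

```python
import random

def tau_eo_partition(edges, n, steps=2000, tau=1.4, seed=1):
    """tau-EO sketch for balanced graph bipartitioning: rank nodes by
    local fitness (fraction of a node's edges kept inside its own part)
    and mutate a poorly-fit node chosen with probability ~ rank^(-tau)."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    side = [i % 2 for i in range(n)]                    # balanced start
    cut = lambda: sum(side[a] != side[b] for a, b in edges)
    best = cut()
    for _ in range(steps):
        fit = [sum(side[u] == side[v] for u in adj[v]) / max(len(adj[v]), 1)
               for v in range(n)]
        order = sorted(range(n), key=lambda v: fit[v])  # worst first
        u01 = 1.0 - rng.random()                        # u in (0, 1]
        r = min(int(u01 ** (-1.0 / (tau - 1.0))), n) - 1  # P(rank) ~ rank^-tau
        v = order[r]
        # swap v with a random node on the other side to keep balance
        w = rng.choice([u for u in range(n) if side[u] != side[v]])
        side[v], side[w] = side[w], side[v]
        best = min(best, cut())       # EO tracks the best solution seen
    return best

rng = random.Random(0)
edges = [(a, b) for a in range(24) for b in range(a + 1, 24)
         if rng.random() < 0.2]
print(tau_eo_partition(edges, 24))   # best cut found, <= initial cut
```

Note that, unlike Simulated Annealing, no move is ever accepted or rejected: the worst component is always forced to change, and the best configuration seen is simply recorded.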
NASA Astrophysics Data System (ADS)
Ostrenga, D.; Shen, S.; Vollmer, B.; Meyer, D. L.
2017-12-01
The NASA MERRA-2 climate reanalysis contains numerous data for the atmosphere, land, and ocean, grouped into 95 products with an archived volume of over 300 TB. The data files are saved as hourly files, day files (at hourly time intervals), and month files containing up to 125 parameters. Due to the large number of data files and the sheer data volumes, it is challenging for users, especially those in the application research community, to work with the original data files. Most of these researchers prefer to focus on a small region or a single location using the hourly data over long time periods, for example to analyze extreme weather events or winds for renewable energy applications. At the GES DISC, we have been working closely with the science teams and the application user community to create several new value-added data products and high-quality services to facilitate the use of the model data for various types of research. We have tested converting hourly data from one day per file into different data cubes, such as one month, one year, or the whole mission, and then analyzed the efficiency of access to this newly structured data through various services. Initial results show that, compared to the original file structure, the new data cubes significantly improve the performance of accessing long time series. The performance is associated with the cube size and structure, the compression method, and how the data are accessed. The optimized data cube structure will not only improve data access, but also enable better online analytic services for statistical analysis and extreme-event mining. Two case studies will be presented using the newly structured data and value-added services: the California drought and the extreme drought of the northeastern states of Brazil. Furthermore, data access and analysis through cloud storage capabilities will be investigated.
Power-law modeling based on least-squares minimization criteria.
Hernández-Bermejo, B; Fairén, V; Sorribas, A
1999-10-01
The power-law formalism has been successfully used as a modeling tool in many applications. The resulting models, either as Generalized Mass Action or as S-system models, allow one to characterize the target system and to simulate its dynamical behavior in response to external perturbations and parameter changes. The power-law formalism was first derived as a Taylor series approximation in logarithmic space for kinetic rate laws. The special characteristics of this approximation produce an extremely useful systemic representation that allows a complete system characterization. Furthermore, its parameters have a precise interpretation as local sensitivities of each of the individual processes and as rate constants. This facilitates a qualitative discussion and a quantitative estimation of their possible values in relation to the kinetic properties. Following this interpretation, parameter estimation is also possible by relating the systemic behavior to the underlying processes. Without leaving the general formalism, in this paper we suggest deriving the power-law representation in an alternative way that uses least-squares minimization. The resulting power law mimics the target rate law over a wider range of concentration values than the classical power law. Although the implications of this alternative approach remain to be established, our results show that the steady state predicted using the least-squares power law is closer to the actual steady state of the target system.
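The contrast between the two derivations can be seen on a toy target rate law (a Michaelis-Menten kinetic with parameters chosen here for illustration): the classical power law is the Taylor expansion in log space at one operating point, while the alternative fits the exponent and rate constant by least squares over the whole concentration range.

```python
import numpy as np

# Illustrative target rate law: Michaelis-Menten v = Vmax*X/(K+X)
Vmax, K = 1.0, 0.5
v = lambda x: Vmax * x / (K + x)

x = np.linspace(0.1, 5.0, 200)           # operating range of X
logx, logv = np.log(x), np.log(v(x))

# Classical power law: Taylor expansion in log space at x0
x0 = 1.0
g_taylor = K / (K + x0)                  # local kinetic order d(ln v)/d(ln x)
a_taylor = v(x0) / x0 ** g_taylor        # rate constant matching v at x0

# Alternative: least-squares fit of ln v = ln a + g ln x over the range
g_ls, loga_ls = np.polyfit(logx, logv, 1)

# The LS power law cannot do worse than the Taylor one in log-space SSE
sse = lambda a, g: np.sum((logv - np.log(a * x ** g)) ** 2)
print(sse(np.exp(loga_ls), g_ls) <= sse(a_taylor, g_taylor))   # True
```

By construction the least-squares fit minimizes the log-space error over the range, at the cost of no longer matching the local sensitivities exactly at the operating point — which is the trade-off the abstract describes.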
XENON100 exclusion limit without considering Leff as a nuisance parameter
NASA Astrophysics Data System (ADS)
Davis, Jonathan H.; Bœhm, Céline; Oppermann, Niels; Ensslin, Torsten; Lacroix, Thomas
2012-07-01
In 2011, the XENON100 experiment set unprecedented constraints on dark matter-nucleon interactions, excluding dark matter candidates with masses down to 6 GeV if the corresponding cross section is larger than 10^-39 cm^2. The dependence of the exclusion limit on the scintillation efficiency (Leff) has been debated at length. To overcome possible criticisms, XENON100 performed an analysis in which Leff was considered as a nuisance parameter and its uncertainties were profiled out by using a Gaussian likelihood in which the mean value corresponds to the best-fit Leff value (smoothly extrapolated to 0 below 3 keVnr). Although such a method seems fairly robust, it does not account for more extreme types of extrapolation, nor does it enable us to anticipate how much the exclusion limit would vary if new data were to support a flat behavior for Leff below 3 keVnr, for example. Yet such a question is crucial for light dark matter models which are close to the published XENON100 limit. To address this issue, we use a maximum likelihood ratio analysis, as done by the XENON100 Collaboration, but do not consider Leff as a nuisance parameter. Instead, Leff is obtained directly from the fits to the data. This enables us to define frequentist confidence intervals by marginalizing over Leff.
A dynamical model on deposit and loan of banking: A bifurcation analysis
NASA Astrophysics Data System (ADS)
Sumarti, Novriana; Hasmi, Abrari Noor
2015-09-01
A dynamical model, one of several sophisticated techniques based on mathematical equations, can determine the observed state, for example bank profits, for all future times based on the current state. It will also show whether small changes in the state of the system create small or big changes in the future, depending on the model. In this research we develop a dynamical system of the form dD/dt = f(D, L, r_D, r_L, r), dL/dt = g(D, L, r_D, r_L, r). Here D and r_D are the volume of deposits and the deposit rate, L and r_L are the volume of loans and the loan rate, and r is the interbank market rate. The model requires parameters that give connections between two variables or between two derivative functions. In this paper we simulate the model for several parameter values. We perform a bifurcation analysis on the dynamics of the system in order to identify the parameters that control its stability behaviour. The result shows that the system has a limit cycle for small values of the loan interest rate, so the deposit and loan volumes fluctuate and oscillate extremely. If the loan interest rate is too high, the loan volume decreases and vanishes, and the system converges to its carrying capacity.
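The abstract does not give the explicit forms of f and g, so as a purely illustrative sketch of how such a deposit-loan system can be probed numerically, the snippet below integrates a placeholder pair of logistic-type equations (my own choices, not the authors' model) with a forward-Euler scheme:

```python
# Illustrative only: the abstract does not give f and g, so the terms
# below are placeholder logistic-type forms, not the authors' model.
def simulate(r_D=0.03, r_L=0.08, r=0.05, D0=1.0, L0=0.5,
             dt=0.01, steps=20000):
    """Forward-Euler integration of dD/dt = f(D, L), dL/dt = g(D, L)."""
    D, L = D0, L0
    traj = []
    for _ in range(steps):
        f = r_D * D * (1 - D) - r * (D - L)    # deposit growth + interbank flow
        g = r_L * L * (1 - L / max(D, 1e-9))   # loans limited by deposits
        D += dt * f
        L += dt * g
        traj.append((D, L))
    return traj

traj = simulate()
print(traj[-1])   # with these placeholder forms the system settles down
```

Sweeping r_L over a grid and recording whether the long-run trajectory settles or keeps oscillating is the numerical analogue of the bifurcation analysis described in the abstract.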
Quantitative assessment of upper extremities motor function in multiple sclerosis.
Daunoraviciene, Kristina; Ziziene, Jurgita; Griskevicius, Julius; Pauk, Jolanta; Ovcinikova, Agne; Kizlaitiene, Rasa; Kaubrys, Gintaras
2018-05-18
Upper extremity (UE) motor function deficits are commonly noted in multiple sclerosis (MS) patients, and assessing them is challenging because of the lack of consensus regarding their definition. Instrumented biomechanical analysis of upper extremity movements can quantify coordination with different spatiotemporal measures and facilitate disability rating in MS patients. The aim was to identify objective quantitative parameters for more accurate evaluation of UE disability and relate them to existing clinical scores. Thirty-four MS patients and 24 healthy controls (CG) performed a finger-to-nose test as fast as possible and, in addition to clinical evaluation, kinematic parameters of the UE were measured using inertial sensors. Generally, a higher disability score was associated with an increase of several temporal parameters, such as slower task performance. The time taken to touch the nose was longer when the task was performed with eyes closed. Time to peak angular velocity changed significantly in MS patients (EDSS > 5.0). Inter-joint coordination decreases significantly in MS patients (EDSS 3.0-5.5). Spatial parameters indicated that the maximal ROM changes were in elbow flexion. Our findings reveal that spatiotemporal parameters are related to UE motor function and MS disability level. Moreover, they facilitate clinical rating by supporting clinical decisions with quantitative data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klymenko, M. V.; Remacle, F., E-mail: fremacle@ulg.ac.be
2014-10-28
A methodology is proposed for designing a low-energy-consuming ternary-valued full adder based on a quantum dot (QD) electrostatically coupled with a single electron transistor operating as a charge sensor. The methodology is based on design optimization: the values of the physical parameters of the system required for implementing the logic operations are optimized using a multiobjective genetic algorithm. The search space is determined by elements of the capacitance matrix describing the electrostatic couplings in the entire device. The objective functions are defined as the maximal absolute error over actual device logic outputs relative to the ideal truth tables for the sum and the carry-out in base 3. The logic units are implemented on the same device: a single dual-gate quantum dot and a charge sensor. Their physical parameters are optimized to compute either the sum or the carry-out outputs and are compatible with current experimental capabilities. The outputs are encoded in the value of the electric current passing through the charge sensor, while the logic inputs are supplied by the voltage levels on the two gate electrodes attached to the QD. The complex ternary logic operations are directly implemented on an extremely simple device, characterized by small size and low energy consumption compared to devices based on switching single-electron transistors. The design methodology is general and provides a rational approach for realizing non-switching logic operations on QD devices.
Analysis of the dependence of extreme rainfalls
NASA Astrophysics Data System (ADS)
Padoan, Simone; Ancey, Christophe; Parlange, Marc
2010-05-01
The aim of spatial analysis is to quantitatively describe the behavior of environmental phenomena such as precipitation levels, wind speed, or daily temperatures. A number of generic approaches to spatial modeling have been developed [1], but these are not necessarily ideal for handling extremal aspects, given their focus on mean process levels. The areal modelling of the extremes of a natural process observed at points in space is important in environmental statistics; for example, understanding extremal spatial rainfall is crucial in flood protection. In light of recent concerns over climate change, the use of robust mathematical and statistical methods for such analyses has grown in importance. Multivariate extreme value models and the class of max-stable processes [2] have a similar asymptotic motivation to the univariate Generalized Extreme Value (GEV) distribution, but provide a general approach to modeling extreme processes that incorporates temporal or spatial dependence. Statistical methods for max-stable processes and data analyses of practical problems are discussed by [3] and [4]. This work illustrates methods for the statistical modelling of spatial extremes and gives examples of their use by means of a real extremal data analysis of Switzerland precipitation levels. [1] Cressie, N. A. C. (1993). Statistics for Spatial Data. Wiley, New York. [2] de Haan, L. and Ferreira, A. (2006). Extreme Value Theory: An Introduction. Springer, USA. [3] Padoan, S. A., Ribatet, M. and Sisson, S. A. (2009). Likelihood-Based Inference for Max-Stable Processes. Journal of the American Statistical Association, Theory & Methods. In press. [4] Davison, A. C. and Gholamrezaee, M. (2009). Geostatistics of extremes. Journal of the Royal Statistical Society, Series B. To appear.
Examining global extreme sea level variations on the coast from in-situ and remote observations
NASA Astrophysics Data System (ADS)
Menendez, Melisa; Benkler, Anna S.
2017-04-01
The estimation of extreme water level values on the coast is a requirement for a wide range of engineering and coastal management applications. In addition, climate variations of extreme sea levels in the coastal area result from a complex interaction of oceanic, atmospheric, and terrestrial processes across a wide range of spatial and temporal scales. In this study, variations of extreme sea level return values are investigated from two available sources of information: in-situ tide-gauge records and satellite altimetry data. Long time series of sea level from tide-gauge records are the most valuable observations, since they directly measure water level at a specific coastal location. However, they have a number of sources of inhomogeneity that may affect the climate description of extremes. Among others, the presence of gaps, historical time inhomogeneities, and jumps in the mean sea level signal are factors that can introduce uncertainty into the characterization of extreme sea level behaviour. Moreover, long records from tide-gauges are sparse, and there are many coastal areas worldwide without available in-situ information. On the other hand, with the accumulating altimeter records of several satellite missions from the 1990s, approaching 25 recorded years at the time of writing, the analysis of extreme sea level events from this data source is becoming possible. Aside from the well-known issue of altimeter measurements very close to the coast (mainly due to corruption by land, wet troposphere path delay errors, and local tide effects in the coastal area), other aspects have to be considered when sea surface height values estimated from satellite are to be used in a statistical extreme model, such as the use of a multi-mission product to obtain long observed periods and the selection of the maxima sample, since altimeter observations do not provide values uniform in time and space. 
Here, we have compared the extreme values of 'still water level' and 'non-tidal-residual' of in-situ records from the GESLA2 dataset (Woodworth et al. 2016) against the novel coastal altimetry datasets (Cipollini et al. 2016). Seasonal patterns, inter-annual variability and long-term trends are analyzed. Then, a time-dependent extreme model (Menendez et al. 2009) is applied to characterize extreme sea level return values and their variability on the coastal area around the world.
This is a presentation titled "Estimating the Effect of Climate Change on Crop Yields and Farmland Values: The Importance of Extreme Temperatures" that was given for the National Center for Environmental Economics.
The effect of local parameters on gas turbine emissions
NASA Technical Reports Server (NTRS)
Kauffman, C. W.; Correa, S. M.; Orozco, N. J.
1980-01-01
Gas turbine engine inlet parameters reflect changes in local atmospheric conditions, and the pollutant emissions of the engine reflect these changes. In attempting to model the effect of changing ambient conditions on the emissions, it was found that these emissions exhibit an extreme sensitivity to some details of the combustion process, such as the local fuel-air ratio and the size of the drops in the fuel spray. Fuel-air ratios have been mapped under nonburning conditions using a single JT8D-17 combustion can at simulated idle conditions, and significant variations in the local values have been found. Modelling of the combustor employs a combination of perfectly stirred and plug flow reactors, including a finite-rate vaporization treatment of the fuel spray. Results show that a small increase in the mean drop size can lead to a large increase in hydrocarbon emissions, and that decreasing the value of the CO-OH rate constant can lead to large increases in carbon monoxide emissions. These emissions may also be affected by the spray characteristics, with larger drops retarding the combustion process. Hydrocarbon, carbon monoxide, and oxides of nitrogen emissions calculated using the model accurately reflect measured emission variations caused by changing engine inlet conditions.
Some numerical methods for the Hele-Shaw equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitaker, N.
1994-03-01
Tryggvason and Aref used a boundary integral method and the vortex-in-cell method to evolve the interface between two fluids in a Hele-Shaw cell. The method gives excellent results for intermediate values of the nondimensional surface tension parameter. The results differ from the predictions of McLean and Saffman for small surface tension. For large surface tension, there are some numerical problems. In this paper, we implement the method of Tryggvason and Aref but use the point vortex method instead of the vortex-in-cell method. A parametric spline is used to represent the interface. The finger widths obtained agree well with those predicted by McLean and Saffman. We conclude that the method of Tryggvason and Aref can provide excellent results but that the vortex-in-cell method may not be the method of choice for extreme values of the surface tension parameter. In a second method, we represent the interface with a Fourier representation. In addition, an alternative way of discretizing the boundary integral is used. Our results are compared to the linearized theory and the results of McLean and Saffman and are shown to be highly accurate. 21 refs., 4 figs., 2 tabs.
Eccentricity growth and orbit flip in near-coplanar hierarchical three-body systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Gongjie; Naoz, Smadar; Kocsis, Bence
2014-04-20
The secular dynamical evolution of a hierarchical three-body system, in which a distant third object orbits around a binary, has been studied extensively, demonstrating that the inner orbit can undergo large eccentricity and inclination oscillations. It was shown before that, starting with a circular inner orbit, large mutual inclination (40°-140°) can produce long-timescale modulations that drive the eccentricity to extremely large values and can flip the orbit. Here, we demonstrate that, starting with an almost coplanar configuration, for eccentric inner and outer orbits, the eccentricity of the inner orbit can still be excited to high values, and the orbit can flip by ∼180°, rolling over its major axis. The ∼180° flip criterion and the flip timescale are described by simple analytic expressions that depend on the initial orbital parameters. With tidal dissipation, this mechanism can produce counter-orbiting exoplanetary systems. In addition, we also show that this mechanism has the potential to enhance the tidal disruption or collision rates for different systems. Furthermore, we explore the entire e_1 and i_0 parameter space that can produce flips.
Genetic parameters for test day somatic cell score in Brazilian Holstein cattle.
Costa, C N; Santos, G G; Cobuci, J A; Thompson, G; Carvalheira, J G V
2015-12-29
Selection for lower somatic cell count has been included in the breeding objectives of several countries in order to increase resistance to mastitis. Genetic parameters of somatic cell scores (SCS) were estimated from first-lactation test-day (TD) records of Brazilian Holstein cows using random-regression models with Legendre polynomials (LP) of order 3-5. Data consisted of 87,711 TD records produced by 10,084 cows that calved from 1993 to 2007, sired by 619 bulls. Heritability estimates varied from 0.06 to 0.14; they decreased from the beginning of lactation up to 60 days in milk (DIM) and increased thereafter to the end of lactation. Genetic correlations between adjacent DIM were very high (>0.83) but decreased, reaching negative values with the LP of order four, between DIM at the extremes of lactation. Despite the favorable trend, genetic changes in SCS were not significant and did not differ among LP. There was little benefit in fitting an LP of order >3 to model animal genetic and permanent environmental effects for SCS. Estimates of variance components found in this study may be used for breeding value estimation for SCS and selection for mastitis resistance in Holstein cattle in Brazil.
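The random-regression ingredient above can be made concrete by evaluating the Legendre polynomial basis on standardized days in milk (DIM), which serves as the covariate basis for the lactation-curve effects. A minimal sketch, assuming an order-4 basis and the common mapping of DIM onto [-1, 1] (normalization conventions for LPs vary between animal-breeding software packages):

```python
# Evaluate Legendre polynomials P_0..P_3 on standardized days in milk.
# The test-day range 5..305 and order 4 are illustrative choices.
import numpy as np
from numpy.polynomial import legendre

dim = np.arange(5, 306)                                        # test days 5..305
x = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0    # map DIM to [-1, 1]

order = 4                                                      # LP order (3-5 in the study)
# column k is P_k(x), selected via a unit coefficient vector
basis = np.column_stack([legendre.legval(x, np.eye(order)[k]) for k in range(order)])
print(basis.shape)                                             # prints: (301, 4)
```

Each animal's genetic and permanent-environmental lactation curves are then linear combinations of these columns with animal-specific random coefficients.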
Significant influences of global mean temperature and ENSO on extreme rainfall over Southeast Asia
NASA Astrophysics Data System (ADS)
Villafuerte, Marcelino, II; Matsumoto, Jun
2014-05-01
Along with the increasing concerns about the consequences of global warming, and the accumulating records of disasters related to heavy rainfall events in Southeast Asia, this study investigates whether a direct link can be detected between the rising global mean temperature, as well as the El Niño-Southern Oscillation (ENSO), and extreme rainfall over the region. Maximum likelihood modeling that allows incorporating covariates into the location parameter of the generalized extreme value (GEV) distribution is employed. The GEV model is fitted to annual and seasonal rainfall extremes, which were taken from a high-resolution gauge-based gridded daily precipitation data set covering a span of 57 years (1951-2007). Nonstationarities in extreme rainfall are detected over the central parts of the Indochina Peninsula, the eastern coasts of central Vietnam, the northwest of Sumatra Island, inland portions of Borneo Island, and the northeastern and southwestern coasts of the Philippines. These nonstationarities in extreme rainfall are directly linked to near-surface global mean temperature and ENSO. In particular, the study reveals that a one-kelvin increase in the global mean temperature anomaly can lead to an increase of 30% to more than 45% in annual maximum 1-day rainfall, observed most pronouncedly over central Vietnam, the southern coast of Myanmar, northwestern sections of Thailand, the northwestern tip of Sumatra, central portions of Malaysia, and the Visayas islands in the central Philippines. Furthermore, a pronounced ENSO influence is manifested in the seasonal maximum 1-day rainfall; a northward progression of 10%-15% drier conditions over Southeast Asia as El Niño develops from summer to winter is revealed. It is therefore important to consider the results obtained here for water resources management as well as for adaptation planning to minimize the potential adverse impacts of global warming, particularly on extreme rainfall and its associated flood risk over the region.
Acknowledgment: This study is supported by the Tokyo Metropolitan Government through its AHRF program.
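The nonstationary GEV fit described above can be sketched by writing the location parameter as a linear function of the covariate and maximizing the likelihood numerically. This is a minimal illustration on synthetic data, not the authors' code: the covariate values, sample size, and parameter values are invented, and scipy's shape convention (its c equals minus the usual GEV shape ξ) is used throughout.

```python
# Nonstationary GEV: location mu(t) = mu0 + mu1 * T(t), fitted by maximum
# likelihood. T(t) plays the role of the global mean temperature anomaly.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(0)
temp_anom = np.linspace(-0.3, 0.7, 57)                 # covariate, 57 "years"
annual_max = genextreme.rvs(c=-0.1, loc=80 + 25 * temp_anom, scale=15,
                            random_state=rng)          # synthetic annual maxima (mm)

def neg_log_lik(theta):
    mu0, mu1, log_sigma, c = theta
    loc = mu0 + mu1 * temp_anom                        # location depends on covariate
    return -genextreme.logpdf(annual_max, c, loc=loc,
                              scale=np.exp(log_sigma)).sum()

res = minimize(neg_log_lik, x0=[80.0, 0.0, np.log(15.0), -0.1],
               method="Nelder-Mead", options={"maxiter": 5000})
mu0, mu1, log_sigma, c = res.x
print(f"trend in GEV location: {mu1:.1f} mm per kelvin")
```

A likelihood-ratio test against the stationary fit (mu1 fixed at 0) is the usual way to decide whether the detected nonstationarity is significant.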
NASA Astrophysics Data System (ADS)
Leta, O. T.; El-Kadi, A. I.; Dulaiova, H.
2016-12-01
Extreme events, such as flooding and drought, are expected to occur at increased frequencies worldwide due to climate change influencing the water cycle. This is particularly critical for tropical islands, where the local freshwater resources are very sensitive to climate. This study examined the impact of climate change on extreme streamflow, reservoir water volume, and outflow for the Nuuanu watershed, using the Soil and Water Assessment Tool (SWAT) model. Based on the sensitive parameters screened by the Latin Hypercube-One-factor-At-a-Time (LH-OAT) method, SWAT was calibrated and validated to daily streamflow using the SWAT Calibration and Uncertainty Program (SWAT-CUP) at three streamflow gauging stations. Results showed that SWAT adequately reproduced the observed daily streamflow hydrographs at all stations. This was verified with the Nash-Sutcliffe Efficiency, which gave acceptable values of 0.58 to 0.88, whereby more than 90% of observations were bracketed within the 95% model prediction uncertainty interval for both calibration and validation periods, signifying the potential applicability of SWAT for future prediction. The climate change impact on extreme flows, reservoir water volume, and outflow was assessed under the Representative Concentration Pathway 4.5 and 8.5 scenarios. We found wide changes in extreme peak and low flows, ranging from -44% to 20% and -50% to -2%, respectively, compared to the baseline. Consequently, the amount of water stored in Nuuanu reservoir is projected to decrease by up to 27%, while the corresponding outflow rates are expected to decrease by up to 37% relative to the baseline. In addition, the stored water and extreme flows are highly sensitive to rainfall change when compared to temperature and solar radiation changes. It is concluded that the changes in extreme low and peak flows can have serious consequences, such as flooding and drought, with detrimental effects on riparian ecological functioning. 
This study's results are expected to aid in reservoir operation as well as in identifying appropriate climate change adaptation strategies.
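The Nash-Sutcliffe Efficiency used above to judge the daily-streamflow fit (where 0.58-0.88 counted as acceptable) is a one-line statistic; a minimal sketch on invented toy flows:

```python
# Nash-Sutcliffe Efficiency: 1 minus the ratio of model error variance to
# the variance of the observations. NSE = 1 is a perfect fit; NSE <= 0 means
# the model is no better than the observed mean.
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - np.mean(observed)) ** 2)

obs = np.array([2.0, 3.5, 10.0, 6.0, 4.0, 3.0])   # toy daily flows, m^3/s
sim = np.array([2.4, 3.0, 8.5, 6.5, 4.2, 2.7])
print(round(nash_sutcliffe(obs, sim), 3))
```

Because the denominator is the variance of the observations, NSE rewards capturing high-flow peaks and is a natural headline metric for flood-oriented calibration.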
Finite element analysis of history-dependent damage in time-dependent fracture mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnaswamy, P.; Brust, F.W.; Ghadiali, N.D.
1993-11-01
The demands for structural systems to perform reliably under both severe and changing operating conditions continue to increase. Under these conditions, time-dependent straining and history-dependent damage become extremely important. This work focuses on studying creep crack growth using finite element (FE) analysis. Two important issues, namely (1) the use of history-dependent constitutive laws and (2) the use of various fracture parameters in predicting creep crack growth, are both addressed in this work. The constitutive model used here is the one developed by Murakami and Ohno, based on the concept of a creep hardening surface. An implicit FE algorithm for this model was first developed and verified for simple geometries and loading configurations. The numerical methodology developed here has been used to model stationary and growing cracks in CT specimens. Various fracture parameters, such as C_1, C*, T*, and J, were used to compare the numerical predictions with experimental results available in the literature. A comparison of the values of these parameters as a function of time has been made for both stationary and growing cracks. The merit of using each of these parameters is also discussed.
Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete.
Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun
2015-03-13
In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and also to investigate the extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of -1 to +1, eight axial mixtures were prepared at extreme values of -2 and +2, with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A), on compressive strength, modulus of elasticity, and autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and to carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model.
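The design layout described above (a fractional factorial core in coded units, axial points at ±2, and replicate center points) can be generated mechanically. A sketch under stated assumptions: the half-fraction generator I = ABCDE is my illustrative choice, and a complete axial set for five factors gives 10 points, whereas the study prepared eight, so the construction below is generic rather than the authors' exact run list.

```python
# Central-composite-style design in coded units for five mixture factors.
from itertools import product

factors = ["binder type", "binder content", "VMA dosage", "w/cm", "S/A"]

# 2^(5-1) half-fraction: keep the 16 sign combinations whose product is +1
factorial = [pt for pt in product([-1, 1], repeat=5)
             if pt[0] * pt[1] * pt[2] * pt[3] * pt[4] == 1]

# axial points: one factor at coded level -2 or +2, the rest at the center
axial = [tuple(level if i == j else 0 for i in range(5))
         for j in range(5) for level in (-2, 2)]

center = [(0, 0, 0, 0, 0)] * 4          # 4 replicate central mixtures

design = factorial + axial + center
print(len(factorial), len(axial), len(center))      # prints: 16 10 4
```

The factorial points estimate main effects and two-factor interactions, the axial points add curvature (pure quadratic terms), and the replicated center points estimate pure error for lack-of-fit testing.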
Extreme Statistics of Storm Surges in the Baltic Sea
NASA Astrophysics Data System (ADS)
Kulikov, E. A.; Medvedev, I. P.
2017-11-01
Statistical analysis of the extreme values of the Baltic Sea level has been performed for series of observations of 15-125 years at 13 tide gauge stations. It is shown that the empirical relation between the value of extreme sea level rises or ebbs (caused by storm events) and their return period in the Baltic Sea can be well approximated by the Gumbel probability distribution. The maximum values of extreme floods/ebbs of 100-year recurrence were observed in the Gulf of Finland and the Gulf of Riga. The two longest data series, observed in Stockholm and Vyborg over 125 years, show a significant deviation from the Gumbel distribution for the rarest events. Statistical analysis of the hourly sea level data series reveals some asymmetry in the variability of the Baltic Sea level: the probability of rises proved higher than that of ebbs, and the magnitude of the 100-year recurrence surge considerably exceeded the magnitude of ebbs almost everywhere. This asymmetry can be attributed to the influence of low atmospheric pressure during storms. A statistical study of extreme values has also been applied to sea level series for Narva over the period 1994-2000, simulated by the ROMS numerical model. Comparison of the "simulated" and "observed" extreme sea level distributions shows that the model reproduces extreme floods of "moderate" magnitude quite satisfactorily; however, it underestimates sea level changes for the most powerful storm surges.
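The Gumbel return-level calculation at the center of the analysis above is short to sketch. This is a toy illustration on synthetic annual maxima (the location/scale values and units are invented, not Baltic station records):

```python
# Fit a Gumbel distribution to annual sea-level maxima and read off the
# 100-year return level, i.e. the quantile with exceedance probability 1/100.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(1)
annual_max = gumbel_r.rvs(loc=60, scale=20, size=120,
                          random_state=rng)         # synthetic maxima, cm

loc, scale = gumbel_r.fit(annual_max)
return_period = 100.0
level_100yr = gumbel_r.ppf(1.0 - 1.0 / return_period, loc=loc, scale=scale)
print(f"100-year surge level: {level_100yr:.0f} cm")
```

The reported deviation of the longest records from the Gumbel line for the rarest events is exactly what a plot of empirical return periods against this fitted quantile function would reveal.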
NASA Astrophysics Data System (ADS)
Rieder, Harald E.; Jancso, Leonhardt M.; Staehelin, Johannes; Maeder, Jörg A.; Ribatet, Mathieu; Peter, Thomas; Davison, Anthony C.
2010-05-01
In this study we analyze the frequency distribution of extreme events in low and high total ozone (termed ELOs and EHOs, respectively) for five long-term stations in the northern mid-latitudes of Europe (Belsk, Poland; Hradec Kralove, Czech Republic; Hohenpeissenberg and Potsdam, Germany; and Uccle, Belgium). Further, the influence of these extreme events on annual and seasonal mean values and trends is analysed. The applied method follows the new "ozone extremes concept", based on tools from extreme value theory [Coles, 2001; Ribatet, 2007] and recently developed by Rieder et al. [2010a, b]. Mathematically, the decisive feature of the extremes concept is the Generalized Pareto Distribution (GPD). In this analysis the long-term trends had to be removed first, unlike the treatment of Rieder et al. [2010a, b], in which the Arosa time series was analysed, covering many decades of measurements in the anthropogenically undisturbed stratosphere. In contrast to previous studies focusing only on so-called ozone mini-holes and mini-highs, the "ozone extremes concept" provides a statistical description of the tails of the total ozone distribution (i.e. extreme low and high values). It is shown that this concept is not only an appropriate method to describe the frequency and distribution of extreme events but also provides new information on time series properties and internal variability. Furthermore, it allows detection of fingerprints of physical (e.g. El Niño, NAO) and chemical (e.g. polar vortex ozone loss) features in the Earth's atmosphere, as well as major volcanic eruptions (e.g. El Chichón, Mt. Pinatubo). It is shown that mean values and trends in total ozone are strongly influenced by extreme events. Trend calculations (for the period 1970-1990) are performed for the entire as well as the extremes-removed time series. 
The results after excluding extremes show that annual trends are most reduced at Hradec Kralove (by about a factor of 3), followed by Potsdam (a factor of 2.5), and Hohenpeissenberg and Belsk (both about a factor of 2). In general the reduction in trend is strongest during winter and spring. At all stations the influence of ELOs on observed trends is larger than that of EHOs. Especially from the 1990s on, ELOs dominate the picture, as only a relatively small fraction of EHOs can be observed in the records (due to the strong influence of the Mt. Pinatubo eruption and polar vortex ozone loss contributions). Additionally, it is shown that the number of observed mini-holes can be estimated with high accuracy by the GPD model. Overall the results of this work show that extreme events play a major role in total ozone, and the "ozone extremes concept" provides deeper insight into the influence of chemical and physical features on column ozone. References: Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.D. (2010): Extreme events in total ozone over Arosa - Part I: Application of extreme value theory, to be submitted to ACPD. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.D. (2010): Extreme events in total ozone over Arosa - Part II: Fingerprints of atmospheric dynamics and chemistry and effects on mean values and long-term changes, to be submitted to ACPD.
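The GPD step at the heart of the "ozone extremes concept" is a peaks-over-threshold fit to the tail of the (detrended) column-ozone series. A minimal sketch on synthetic Dobson-unit values, not station records; the threshold choice (97th percentile) and the normal toy distribution are my illustrative assumptions:

```python
# Peaks-over-threshold: fit a Generalized Pareto Distribution to exceedances
# over a high threshold and compute an approximate return level for
# extreme-high ozone (EHO-like) events.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
ozone = rng.normal(330, 25, size=15000)           # detrended daily ozone, DU

threshold = np.quantile(ozone, 0.97)              # high threshold
excess = ozone[ozone > threshold] - threshold

shape, _, scale = genpareto.fit(excess, floc=0)   # GPD fit to exceedances
rate = excess.size / ozone.size                   # exceedance probability/day

# m-observation return level: threshold plus the GPD quantile at 1 - 1/(m*rate)
m = 365 * 10                                      # ~10-year return level
ret_level = threshold + genpareto.ppf(1.0 - 1.0 / (m * rate), shape, scale=scale)
print(f"~10-yr extreme-high ozone: {ret_level:.0f} DU")
```

The mirror-image fit on negative exceedances below a low threshold gives the ELO tail; counting days beyond a fixed threshold under the fitted model is how the mini-hole frequency estimate mentioned above is obtained.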
The impact of lake and reservoir parameterization on global streamflow simulation.
Zajac, Zuzanna; Revilla-Romero, Beatriz; Salamon, Peter; Burek, Peter; Hirpa, Feyera A; Beck, Hylke
2017-05-01
Lakes and reservoirs affect the timing and magnitude of streamflow, and are therefore essential hydrological model components, especially in the context of global flood forecasting. However, the parameterization of lake and reservoir routines on a global scale is subject to considerable uncertainty due to the lack of information on lake hydrographic characteristics and reservoir operating rules. In this study we estimated the effect of lakes and reservoirs on global daily streamflow simulations of the spatially distributed LISFLOOD hydrological model. We applied state-of-the-art global sensitivity and uncertainty analyses for selected catchments to examine the effect of uncertain lake and reservoir parameterization on model performance. Streamflow observations from 390 catchments around the globe and multiple performance measures were used to assess model performance. Results indicate considerable geographical variability in the lake and reservoir effects on the streamflow simulation. Nash-Sutcliffe Efficiency (NSE) and Kling-Gupta Efficiency (KGE) metrics improved for 65% and 38% of catchments, respectively, with median skill score values of 0.16 and 0.20, while scores deteriorated for 28% and 52% of the catchments, with median values of -0.09 and -0.16, respectively. The effect of reservoirs on extreme high flows was substantial and widespread in the global domain, while the effect of lakes was spatially limited to a few catchments. As indicated by global sensitivity analysis, parameter uncertainty substantially affected uncertainty of model performance. Reservoir parameters often contributed to this uncertainty, although the effect varied widely among catchments. The effect of reservoir parameters on model performance diminished with distance downstream of reservoirs in favor of other parameters, notably groundwater-related parameters and the channel Manning's roughness coefficient. 
This study underscores the importance of accounting for lakes and, especially, reservoirs and using appropriate parameterization in large-scale hydrological simulations.
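The KGE metric reported above decomposes model skill into correlation, variability, and bias components; a minimal sketch on invented toy flows:

```python
# Kling-Gupta Efficiency: 1 minus the Euclidean distance of (r, alpha, beta)
# from the ideal point (1, 1, 1). KGE = 1 is a perfect fit.
import numpy as np

def kling_gupta(observed, simulated):
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    r = np.corrcoef(observed, simulated)[0, 1]        # linear correlation
    alpha = np.std(simulated) / np.std(observed)      # variability ratio
    beta = np.mean(simulated) / np.mean(observed)     # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([5.0, 7.0, 20.0, 12.0, 8.0, 6.0])      # toy flows, m^3/s
sim = np.array([5.5, 6.5, 17.0, 13.0, 8.5, 5.5])
print(round(kling_gupta(obs, sim), 3))
```

Because KGE penalizes variance damping through the alpha term, reservoir routines that smooth the hydrograph can move NSE and KGE in different directions, consistent with the diverging improvement percentages reported above.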
NASA Astrophysics Data System (ADS)
Ghil, M.; Zaliapin, I.; Thompson, S.
2008-05-01
We consider a delay differential equation (DDE) model for El Niño-Southern Oscillation (ENSO) variability. The model combines two key mechanisms that participate in ENSO dynamics: delayed negative feedback and seasonal forcing. We perform stability analyses of the model in the three-dimensional space of its physically relevant parameters. Our results illustrate the roles of these three parameters: the strength of seasonal forcing b, the atmosphere-ocean coupling κ, and the propagation period τ of oceanic waves across the Tropical Pacific. Two regimes of variability, stable and unstable, are separated by a sharp neutral curve in the (b, τ) plane at constant κ. The detailed structure of the neutral curve becomes very irregular and possibly fractal, while individual trajectories within the unstable region become highly complex and possibly chaotic, as the atmosphere-ocean coupling κ increases. In the unstable regime, spontaneous transitions occur in the mean "temperature" (i.e., thermocline depth), period, and extreme annual values, for purely periodic, seasonal forcing. The model reproduces the Devil's bleachers characterizing other ENSO models, such as nonlinear, coupled systems of partial differential equations; some of the features of this behavior have been documented in general circulation models, as well as in observations. We expect, therefore, similar behavior in much more detailed and realistic models, where it is harder to describe its causes as completely.
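A DDE of this class combines delayed negative feedback with periodic forcing and can be integrated with a simple history buffer. The sketch below uses the schematic form dh/dt = -tanh(κ h(t-τ)) + b cos(2πt) as a stand-in for the model above; the parameter values are illustrative, not the authors' regime boundaries:

```python
# Forward-Euler integration of a delayed-feedback, seasonally forced DDE.
# h(t - tau) is read from a history ring laid out at the front of the array.
import numpy as np

kappa, tau, b = 11.0, 0.5, 1.0       # coupling, wave delay (yr), forcing strength
dt, t_end = 0.001, 50.0
n_delay = int(round(tau / dt))       # delay measured in time steps

steps = int(t_end / dt)
h = np.zeros(steps + n_delay + 1)
h[: n_delay + 1] = 0.1               # constant history on [-tau, 0]

for i in range(n_delay, n_delay + steps):
    t = (i - n_delay) * dt
    h[i + 1] = h[i] + dt * (-np.tanh(kappa * h[i - n_delay])
                            + b * np.cos(2.0 * np.pi * t))

print(f"max |h| over the run: {np.abs(h).max():.2f}")
```

Sweeping (b, τ) at fixed κ and recording period and extreme annual values of h is the numerical counterpart of the stability maps described above.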
NASA Astrophysics Data System (ADS)
Ritschel, Christoph; Ulbrich, Uwe; Névir, Peter; Rust, Henning W.
2017-12-01
For several hydrological modelling tasks, precipitation time series with a high (i.e. sub-daily) resolution are indispensable. The data are, however, not always available, and thus model simulations are used to compensate. A canonical class of stochastic models for sub-daily precipitation are Poisson cluster processes, with the original Bartlett-Lewis (OBL) model as a prominent representative. The OBL model has been shown to well reproduce certain characteristics found in observations. Our focus is on intensity-duration-frequency (IDF) relationships, which are of particular interest in risk assessment. Based on a high-resolution precipitation time series (5 min) from Berlin-Dahlem, OBL model parameters are estimated and IDF curves are obtained on the one hand directly from the observations and on the other hand from OBL model simulations. Comparing the resulting IDF curves suggests that the OBL model is able to reproduce the main features of IDF statistics across several durations but cannot capture rare events (here an event with a return period larger than 1000 years on the hourly timescale). In this paper, IDF curves are estimated based on a parametric model for the duration dependence of the scale parameter in the generalized extreme value distribution; this allows us to obtain a consistent set of curves over all durations. We use the OBL model to investigate the validity of this approach based on simulated long time series.
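The IDF estimation above can be illustrated in its simplest baseline form: fit a GEV to annual maximum mean intensities separately at each duration and read off return levels. The paper instead models the GEV scale as a parametric function of duration to obtain one consistent curve family; the per-duration sketch below, on synthetic data with an assumed simple-scaling decay, is the approach that refinement improves on:

```python
# Per-duration GEV fits giving empirical IDF points (10-year return levels).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
durations_h = [1, 2, 6, 24]               # durations in hours
years = 100

levels = []
for d in durations_h:
    # synthetic annual maximum mean intensities (mm/h); the d**-0.6 decay of
    # location and scale with duration is an assumption for illustration
    intens = genextreme.rvs(c=-0.1, loc=12.0 * d ** -0.6,
                            scale=3.0 * d ** -0.6,
                            size=years, random_state=rng)
    c, loc, scale = genextreme.fit(intens)
    levels.append(genextreme.ppf(0.9, c, loc=loc, scale=scale))  # 10-yr level
    print(f"{d:>3} h: 10-yr intensity ~ {levels[-1]:.1f} mm/h")
```

Tying the scale (and location) across durations through one parametric function, as in the paper, removes the quantile crossings that independent per-duration fits can produce.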
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barack, Leor; Cutler, Curt; Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California 91109
Inspirals of stellar-mass compact objects (COs) into ~10^6 M_⊙ black holes are especially interesting sources of gravitational waves for the planned Laser Interferometer Space Antenna (LISA). The orbits of these extreme-mass-ratio inspirals (EMRIs) are highly relativistic, displaying extreme versions of both perihelion precession and Lense-Thirring precession of the orbital plane. We investigate the question of whether the emitted waveforms can be used to strongly constrain the geometry of the central massive object, and in essence check that it corresponds to a Kerr black hole (BH). For a Kerr BH, all multipole moments of the spacetime have a simple, unique relation to M and S, the BH mass and spin; in particular, the spacetime's mass quadrupole moment Q is given by Q = -S^2/M. Here we treat Q as an additional parameter, independent of S and M, and ask how well observation can constrain its difference from the Kerr value. This was already estimated by Ryan, but for the simplified case of circular, equatorial orbits; Ryan also neglected the signal modulations arising from the motion of the LISA satellites. We consider generic orbits and include the modulations due to the satellite motions. For this analysis, we use a family of approximate (basically post-Newtonian) waveforms, which represent the full parameter space of EMRI sources, and which exhibit the main qualitative features of true, general relativistic waveforms. We extend this parameter space to include (in an approximate manner) an arbitrary value of Q, and then construct the Fisher information matrix for the extended parameter space. By inverting the Fisher matrix, we estimate how accurately Q could be extracted from LISA observations of EMRIs. 
For 1 yr of coherent data from the inspiral of a 10 M_⊙ black hole into rotating black holes of masses 10^5.5 M_⊙, 10^6 M_⊙, or 10^6.5 M_⊙, we find Δ(Q/M^3) ≈ 10^-4, 10^-3, or 10^-2, respectively (assuming a total signal-to-noise ratio of 100, typical of the brightest detectable EMRIs). These results depend only weakly on the eccentricity of the inspiral orbit or the spin of the central object.
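The Fisher-matrix step described above has a simple numerical skeleton: for a signal model h(t; θ) in white noise, F_ij is the noise-weighted inner product of parameter derivatives, and the inverse Fisher matrix bounds the parameter covariance. The toy "waveform" below is a sinusoidal stand-in, not an EMRI waveform, and the noise model is a single white-noise level rather than the LISA spectral density:

```python
# Fisher-matrix parameter-error estimate for a toy signal in white noise.
import numpy as np

t = np.linspace(0.0, 1.0, 4000)
sigma_n = 0.5                                  # white-noise std per sample

def waveform(amp, freq, phase):
    return amp * np.sin(2.0 * np.pi * freq * t + phase)

theta0 = np.array([1.0, 30.0, 0.3])            # true (amp, freq, phase)
eps = 1e-6

# central-difference derivatives of the waveform w.r.t. each parameter
derivs = []
for k in range(3):
    dtheta = np.zeros(3)
    dtheta[k] = eps * max(1.0, abs(theta0[k]))
    derivs.append((waveform(*(theta0 + dtheta)) - waveform(*(theta0 - dtheta)))
                  / (2.0 * dtheta[k]))

# F_ij = sum_t dh_i(t) dh_j(t) / sigma_n^2 ; cov >= F^-1 (Cramer-Rao)
fisher = np.array([[np.dot(di, dj) for dj in derivs]
                   for di in derivs]) / sigma_n**2
cov = np.linalg.inv(fisher)
errors = np.sqrt(np.diag(cov))                 # 1-sigma parameter errors
print("1-sigma errors (amp, freq, phase):", np.round(errors, 4))
```

In the study, the extra column and row added for Q play exactly this role, and the Δ(Q/M^3) figures quoted above are the square roots of the corresponding diagonal entries of the inverted matrix.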
Rajapakse, C. S.; Phillips, E. A.; Sun, W.; Wald, M. J.; Magland, J. F.; Snyder, P. J.; Wehrli, F. W.
2016-01-01
Summary We investigated the association of postmenopausal vertebral deformities and fractures with bone parameters derived from the distal extremities using MRI and pQCT. Distal extremity measures showed variable degrees of association with vertebral deformities and fractures, highlighting the systemic nature of postmenopausal bone loss. Introduction Prevalent vertebral deformities and fractures are known to predict incident further fractures. However, the association of distal extremity measures and vertebral deformities in postmenopausal women has not been fully established. Methods This study involved 98 postmenopausal women (age range 60–88 years, mean 70 years) with DXA BMD T-scores at either the hip or spine in the range of −1.5 to −3.5. Wedge, biconcavity, and crush deformities were computed on the basis of spine MRI. Vertebral fractures were assessed using Eastell's criterion. Distal tibia and radius stiffness was computed using MRI-based finite element analysis. BMD at the distal extremities was obtained using pQCT. Results Several distal extremity MRI and pQCT measures showed negative association with vertebral deformity on the basis of single-parameter correlation (r up to 0.67) and two-parameter regression (r up to 0.76) models involving MRI stiffness and pQCT BMD. Subjects who had at least one prevalent vertebral fracture showed decreased MRI stiffness (up to 17.9 %) and pQCT density (up to 34.2 %) at the distal extremities compared to the non-fracture group. DXA lumbar spine BMD T-score was not associated with vertebral deformities. Conclusions The association between vertebral deformities and distal extremity measures supports the notion of postmenopausal osteoporosis as a systemic phenomenon. PMID:24221453
Parameter extraction with neural networks
NASA Astrophysics Data System (ADS)
Cazzanti, Luca; Khan, Mumit; Cerrina, Franco
1998-06-01
In semiconductor processing, the modeling of the process is becoming more and more important. While the ultimate goal is that of developing a set of tools for designing a complete process (Technology CAD), it is also necessary to have modules to simulate the various technologies and, in particular, to optimize specific steps. This need is particularly acute in lithography, where the continuous decrease in CD forces the technologies to operate near their limits. In the development of a 'model' for a physical process, we face several levels of challenges. First, it is necessary to develop a 'physical model,' i.e. a rational description of the process itself on the basis of known physical laws. Second, we need an 'algorithmic model' to represent in a virtual environment the behavior of the 'physical model.' After a 'complete' model has been developed and verified, it becomes possible to do performance analysis. In many cases the input parameters are poorly known or not accessible directly to experiment. It would be extremely useful to obtain the values of these 'hidden' parameters from experimental results by comparing the model to data. This problem is particularly severe because the complexity and costs associated with semiconductor processing make a simple 'trial-and-error' approach infeasible and cost-inefficient. Even when computer models of the process already exist, obtaining data through simulations may be time consuming. Neural networks (NN) are powerful computational tools for predicting the behavior of a system from an existing data set. They are able to adaptively 'learn' input/output mappings and to act as universal function approximators. In this paper we use artificial neural networks to build a mapping from the input parameters of the process to output parameters which are indicative of the performance of the process. 
Once the NN has been 'trained,' it is also possible to observe the process 'in reverse,' and to extract the values of the inputs which yield outputs with desired characteristics. Using this method, we can extract optimum values for the parameters and determine the process latitude very quickly.
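The two uses described above (train a network as a forward surrogate, then run it "in reverse" to find inputs yielding a desired output) can be sketched with a tiny pure-numpy network. The forward "process" below is a hypothetical stand-in function, not a lithography simulator, and the single-hidden-layer architecture and gradient-descent settings are illustrative assumptions:

```python
# Forward surrogate: a one-hidden-layer tanh network trained by full-batch
# gradient descent on samples of a stand-in process function.
import numpy as np

rng = np.random.default_rng(4)

def process(x):                       # hypothetical process: input -> metric
    return np.sin(3.0 * x) + 0.5 * x

X = rng.uniform(-1, 1, size=(200, 1))
Y = process(X)

W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    pred = H @ W2 + b2
    err = pred - Y                    # MSE gradient pieces
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H**2)    # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

def nn(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

# "reverse" use: scan candidate inputs for the one whose predicted output
# is closest to a desired target value
grid = np.linspace(-1, 1, 401).reshape(-1, 1)
target = 0.8
best = grid[np.argmin(np.abs(nn(grid) - target))]
print(f"input giving output closest to {target}: {best[0]:.2f}")
```

Once trained, each inversion query costs only a forward pass over the grid, which is what makes the surrogate attractive when each real simulation or experiment is expensive.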
NASA Astrophysics Data System (ADS)
Li, L.; Xu, C.-Y.; Engeland, K.
2012-04-01
With respect to model calibration, parameter estimation, and analysis of uncertainty sources, different approaches have been used in hydrological models. The Bayesian method is one of the most widely used for uncertainty assessment of hydrological models, as it incorporates different sources of information into a single analysis through Bayes' theorem. However, none of these applications treats the uncertainty in the extreme flows of hydrological model simulations well. This study proposes a Bayesian modularization approach to uncertainty assessment of conceptual hydrological models that takes the extreme flows into account. It includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization approach and traditional Bayesian models, using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions are used with the traditional Bayesian approach: the AR(1) plus Normal, time-period-independent model (Model 1); the AR(1) plus Normal, time-period-dependent model (Model 2); and the AR(1) plus multi-normal model (Model 3). The results reveal that (1) the simulations derived from the Bayesian modularization method are more accurate, with the highest Nash-Sutcliffe efficiency value, and (2) the Bayesian modularization method performs best in uncertainty estimates of the entire flow range and in terms of application and computational efficiency. The study thus introduces a new approach for reducing the effect of extreme flows on the discharge uncertainty assessment of hydrological models via Bayesian inference. Keywords: extreme flow, uncertainty assessment, Bayesian modularization, hydrological model, WASMOD
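The Metropolis-Hastings sampling mentioned above can be sketched in a few lines. The "model" here is a toy linear rainfall-runoff response with Gaussian errors and a known error scale, not WASMOD; the prior, proposal width, and chain length are illustrative choices:

```python
# Random-walk Metropolis-Hastings over a single runoff-coefficient parameter.
import numpy as np

rng = np.random.default_rng(5)
rain = rng.gamma(2.0, 3.0, size=200)               # synthetic daily rainfall
true_coef, sigma = 0.4, 0.8
flow = true_coef * rain + rng.normal(0, sigma, size=200)

def log_post(coef):
    if coef < 0:                                   # flat prior on coef >= 0
        return -np.inf
    resid = flow - coef * rain
    return -0.5 * np.sum(resid**2) / sigma**2      # Gaussian log-likelihood

samples, coef = [], 1.0
lp = log_post(coef)
for _ in range(20000):
    prop = coef + rng.normal(0, 0.02)              # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # accept/reject step
        coef, lp = prop, lp_prop
    samples.append(coef)

post = np.array(samples[5000:])                    # drop burn-in
print(f"posterior mean coef: {post.mean():.3f}")
```

The modularization idea in the study amounts to estimating pieces of the error model separately rather than jointly inside one such chain, which is what improves its treatment of the extreme flows.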
Application of Statistically Derived CPAS Parachute Parameters
NASA Technical Reports Server (NTRS)
Romero, Leah M.; Ray, Eric S.
2013-01-01
The Capsule Parachute Assembly System (CPAS) Analysis Team is responsible for determining parachute inflation parameters and dispersions that are ultimately used in verifying system requirements. A model memo is internally released semi-annually documenting parachute inflation and other key parameters reconstructed from flight test data. Dispersion probability distributions published in previous versions of the model memo were uniform because insufficient data were available for determination of statistically based distributions. Uniform distributions do not accurately represent the expected distributions, since extreme parameter values are just as likely to occur as the nominal value. CPAS has taken incremental steps to move away from uniform distributions. Model Memo version 9 (MMv9) made the first use of non-uniform dispersions, but only for the reefing cutter timing, for which a large number of samples was available. In order to maximize the utility of the available flight test data, clusters of parachutes were reconstructed individually starting with Model Memo version 10. This allowed statistical assessment of the steady-state drag area (CDS) and parachute inflation parameters such as the canopy fill distance (n), profile shape exponent (expopen), over-inflation factor (C(sub k)), and ramp-down time (t(sub k)) distributions. Built-in MATLAB distributions were applied to the histograms, and parameters such as scale (sigma) and location (mu) were output. Engineering judgment was used to determine the "best fit" distribution based on the test data. Results include normal, log normal, and uniform (where available data remain insufficient) fits of nominal and failure (loss of parachute and skipped stage) cases for all CPAS parachutes. 
This paper discusses the uniform methodology that was previously used, the process and result of the statistical assessment, how the dispersions were incorporated into Monte Carlo analyses, and the application of the distributions in trajectory benchmark testing assessments with parachute inflation parameters, drag area, and reefing cutter timing used by CPAS.
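The model-memo fitting step described above (apply candidate distributions to reconstructed parameter samples and judge which fits best) can be sketched with scipy standing in for MATLAB's built-in distribution fits. The samples below are synthetic stand-ins for a parameter such as canopy fill distance, and the Kolmogorov-Smirnov statistic is used here as one simple fit-quality measure alongside the engineering judgment the memo relies on:

```python
# Fit normal and lognormal candidates to a parameter sample and compare
# goodness of fit via the KS statistic (smaller is better).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
fill_dist = rng.lognormal(mean=2.0, sigma=0.6, size=60)   # synthetic samples

candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
}
results = {}
for name, dist in candidates.items():
    params = dist.fit(fill_dist)                  # MLE fit, like MATLAB's fits
    ks = stats.kstest(fill_dist, dist.cdf, args=params)
    results[name] = ks.statistic
    print(f"{name:>10}: KS statistic = {ks.statistic:.3f}")

best = min(results, key=results.get)
print("better fit:", best)
```

The fitted scale/location parameters (sigma, mu) are what feed the Monte Carlo dispersions, replacing the earlier uniform bounds.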
Youth Baseball Pitching Stride Length: Normal Values and Correlation With Field Testing
Fry, Karl E.; Pipkin, Andrew; Wittman, Kelcie; Hetzel, Scott; Sherry, Marc
2016-01-01
Background: Pitching biomechanical analysis has been recommended as an important component of performance, injury prevention, and rehabilitation. Normal values for youth pitching stride length have not been established, leading to application of normative values found among professional pitchers to youth pitchers. Hypotheses: The average youth pitching stride length will be significantly less than that of college and professional pitchers. There will be a positive correlation between stride length, lower extremity power, balance, and pitching experience. Study Design: Prospective cohort study. Level of Evidence: Level 3. Methods: Ninety-two youth baseball pitchers (aged 9-14 years) met the inclusion/exclusion criteria and completed the study. Stride length was recorded using a Dartfish video system over 3 maximal effort pitches. Both intra- and interrater reliability was calculated for the assessment of stride length. Double-leg vertical jump, single-leg stance time, leg length, weight, age, and pitching experience were also recorded. Results: Mean (SD) stride length was 66.0% (7.1%) of height. Stride length was correlated (P < 0.01) with vertical jump (0.38), pitching experience (0.36), and single-leg balance (0.28), with excellent intra- and interrater reliability (0.985 or higher). No significant correlations between stride length and body weight, leg length, or age existed. Conclusions: There was a significant difference between youth pitching stride length and the current published norms for older and more elite throwers. There was a positive correlation between stride length and lower extremity power, pitching experience, and single-leg balance. Clinical Relevance: Two-dimensional analysis of stride length allows for the assessment of pitching biomechanics in a practical manner. These values can be used for return to pitching parameters after an injury and designing injury prevention and performance programs. PMID:27864504
Youth Baseball Pitching Stride Length: Normal Values and Correlation With Field Testing.
Sueur, Jérôme; Mackie, David; Windmill, James F. C.
2011-01-01
To communicate at long range, animals have to produce intense but intelligible signals. This task might be difficult to achieve due to mechanical constraints, in particular relating to body size. Whilst the acoustic behaviour of large marine and terrestrial animals has been thoroughly studied, very little is known about the sound produced by small arthropods living in freshwater habitats. Here we analyse for the first time the calling song produced by the male of a small insect, the water boatman Micronecta scholtzi. The song is made of three distinct parts differing in their temporal and amplitude parameters, but not in their frequency content. Sound is produced at 78.9 (63.6–82.2) dB SPL rms re 2×10⁻⁵ Pa with a peak at 99.2 (85.7–104.6) dB SPL re 2×10⁻⁵ Pa estimated at a distance of one metre. This energy output is significant considering the small size of the insect. When scaled to body length and compared to 227 other acoustic species, the acoustic energy produced by M. scholtzi appears as an extreme value, outperforming marine and terrestrial mammal vocalisations. Such an extreme display may be interpreted as an exaggerated secondary sexual trait resulting from a runaway sexual selection without predation pressure. PMID:21698252
Teh, V; Sim, K S; Wong, E K
2016-11-01
According to statistics from the World Health Organization (WHO), stroke is one of the major causes of death globally. Computed tomography (CT) is one of the main medical imaging modalities used for the diagnosis of ischemic stroke. A CT scan provides brain images in Digital Imaging and Communications in Medicine (DICOM) format. The presentation of CT brain images relies mainly on the window setting (window center and window width), which converts an image from DICOM format into an ordinary grayscale format. Nevertheless, the ordinary window parameters do not deliver proper contrast on CT brain images for ischemic stroke detection. In this paper, a new method, namely gamma correction extreme-level eliminating with weighting distribution (GCELEWD), is proposed to improve the contrast of CT brain images. GCELEWD is capable of highlighting the hypodense region for the diagnosis of ischemic stroke. The performance of this new technique is compared with four existing contrast enhancement techniques: brightness preserving bi-histogram equalization (BBHE), dualistic sub-image histogram equalization (DSIHE), extreme-level eliminating histogram equalization (ELEHE), and adaptive gamma correction with weighting distribution (AGCWD). GCELEWD shows better visualization for ischemic stroke detection and higher scores on the image quality assessment (IQA) module. SCANNING 38:842-856, 2016. © 2016 Wiley Periodicals, Inc.
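The adaptive-gamma family of enhancers the abstract compares against can be sketched in a few lines. The following is a minimal illustration in the spirit of AGCWD, not the authors' GCELEWD; the function name and the `alpha` smoothing parameter are assumptions:

```python
import numpy as np

def agc_weighted(img, alpha=0.5):
    """Adaptive gamma correction with a weighting distribution (sketch).

    The gray-level PDF is smoothed by a power-law weighting, turned
    into a CDF, and each gray level is mapped through an adaptive
    gamma derived from that CDF, so dark regions are boosted more.
    """
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    pdf = hist / hist.sum()
    # Weighting distribution: compress the raw PDF toward its peak
    pdf_w = pdf.max() * (pdf / pdf.max()) ** alpha
    cdf_w = np.cumsum(pdf_w) / pdf_w.sum()
    levels = np.arange(256) / 255.0
    # Adaptive gamma = 1 - CDF: low (dark) levels get smaller exponents
    lut = 255.0 * levels ** (1.0 - cdf_w)
    return lut[img].astype(np.uint8)
```

GCELEWD's extreme-level eliminating step would, in addition, discard the darkest and brightest histogram bins before building the weighted CDF.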
The Extreme Spin of the Black Hole in Cygnus X-1
NASA Technical Reports Server (NTRS)
Gou, Lijun; McClintock, Jeffrey E.; Reid, Mark J.; Orosz, Jerome A.; Steiner, James F.; Narayan, Ramesh; Xiang, Jingen; Remillard, Ronald A.; Arnaud, Keith A.; Davis, Shane W.
2011-01-01
The compact primary in the X-ray binary Cygnus X-1 was the first black hole to be established via dynamical observations. We have recently determined accurate values for its mass and distance, and for the orbital inclination angle of the binary. Building on these results, which are based on our favored (asynchronous) dynamical model, we have measured the radius of the inner edge of the black hole's accretion disk by fitting its thermal continuum spectrum to a fully relativistic model of a thin accretion disk. Assuming that the spin axis of the black hole is aligned with the orbital angular momentum vector, we have determined that Cygnus X-1 contains a near-extreme Kerr black hole with a spin parameter a* > 0.95 (3σ). For a less probable (synchronous) dynamical model, we find a* > 0.92 (3σ). In our analysis, we include the uncertainties in black hole mass, orbital inclination angle, and distance, and we also include the uncertainty in the calibration of the absolute flux via the Crab. These four sources of uncertainty totally dominate the error budget. The uncertainties introduced by the thin-disk model we employ are particularly small in this case given the extreme spin of the black hole and the disk's low luminosity.
Extreme hyperglycemia with ketoacidosis and hyperkalemia in a patient on chronic hemodialysis.
Gupta, Arvin; Rohrscheib, Mark; Tzamaloukas, Antonios H
2008-10-01
A patient on hemodialysis for end-stage renal disease secondary to diabetic nephropathy was admitted in a coma with Kussmaul breathing and hypertension (232/124 mmHg). She had extreme hyperglycemia (1884 mg/dL), acidosis (total CO(2) 4 mmol/L), hyperkalemia (7.2 mmol/L) with electrocardiographic abnormalities, and hypertonicity (330.7 mOsm/kg). Initial treatment with insulin drip resulted in a decrease in serum potassium to 5.3 mmol/L, but no significant change in mental status or other laboratory parameters. Hemodialysis of 1.75 hours resulted in rapid decline in serum glucose and tonicity and rapid improvement of the acidosis, but no change in mental status, which began to improve slowly after the hemodialysis was stopped, but with ongoing treatment with continuous insulin infusion. The rate of decline in tonicity during hemodialysis (14.5 mOsm/kg/h) was high, raising concerns about neurological complications. In this case, extreme hyperglycemia with ketoacidosis, hyperkalemia, and coma developing in a hemodialysis patient responded to insulin infusion. Monitoring of the clinical status and the pertinent laboratory values is required to assess the need for other therapeutic measures including volume and potassium replacement and emergency dialysis. The indications for and risks of emergency dialysis in this setting are not clearly defined.
NASA Astrophysics Data System (ADS)
Bonanos, A. Z.; Stanek, K. Z.; Udalski, A.; Wyrzykowski, L.; Żebruń, K.; Kubiak, M.; Szymański, M. K.; Szewczyk, O.; Pietrzyński, G.; Soszyński, I.
2004-08-01
We present a high-precision I-band light curve for the Wolf-Rayet binary WR 20a, obtained as a subproject of the Optical Gravitational Lensing Experiment. Rauw et al. have recently presented spectroscopy for this system, strongly suggesting extremely large minimum masses of 70.7 ± 4.0 and 68.8 ± 3.8 Msolar for the component stars of the system, with the exact values depending strongly on the period of the system. We detect deep eclipses of about 0.4 mag in the light curve of WR 20a, confirming and refining the suspected period of P = 3.686 days and deriving an inclination angle of i = 74.5° ± 2.0°. Using these photometric data and the radial velocity data of Rauw et al., we derive the masses for the two components of WR 20a to be 83.0 ± 5.0 and 82.0 ± 5.0 Msolar. Therefore, WR 20a is confirmed to consist of two extremely massive stars and to be the most massive binary known with an accurate mass determination. Based on observations obtained with the 1.3 m Warsaw telescope at Las Campanas Observatory, which is operated by the Carnegie Institute of Washington.
Revisiting crash spatial heterogeneity: A Bayesian spatially varying coefficients approach.
Xu, Pengpeng; Huang, Helai; Dong, Ni; Wong, S C
2017-01-01
This study was performed to investigate the spatially varying relationships between crash frequency and related risk factors. A Bayesian spatially varying coefficients model was introduced as a methodological alternative to simultaneously account for the unstructured and spatially structured heterogeneity of the regression coefficients in predicting crash frequencies. The proposed method was appealing in that the parameters were modeled via a conditional autoregressive prior distribution, which involved a single set of random effects and a spatial correlation parameter with extreme values corresponding to pure unstructured or pure spatially correlated random effects. A case study using a three-year crash dataset from Hillsborough County, Florida, was conducted to illustrate the proposed model. Empirical analysis confirmed the presence of both unstructured and spatially correlated variations in the effects of contributory factors on severe crash occurrences. The findings also suggested that ignoring spatially structured heterogeneity may result in biased parameter estimates and incorrect inferences, while assuming the regression coefficients to be spatially clustered only is probably subject to the issue of over-smoothness. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Regionalization Approach to select the final watershed parameter set among the Pareto solutions
NASA Astrophysics Data System (ADS)
Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.
2017-12-01
The calibration of hydrological models often results in model parameters that are inconsistent with those from neighboring basins. Considering that physical similarity exists within neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Non-dominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity of a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter set that minimizes the closeness measure of the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.
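The closeness-based selection step can be sketched as follows. The distance form and weighting scheme are assumptions, since the abstract only states that parameters believed to be similar across basins receive higher weights and that the chosen Pareto member minimizes a closeness measure to nearby basins:

```python
import numpy as np

def closeness(candidate, neighbor_sets, weights):
    """Weighted closeness of one candidate parameter set to neighbors.

    For each neighboring basin's Pareto front, take the weighted
    Euclidean distance to the nearest member, then sum over basins.
    """
    candidate = np.asarray(candidate, float)
    w = np.asarray(weights, float)
    total = 0.0
    for pareto in neighbor_sets:            # one Pareto front per basin
        pareto = np.atleast_2d(pareto)
        d = np.sqrt(((pareto - candidate) ** 2 * w).sum(axis=1))
        total += d.min()                    # nearest member of that front
    return total

def select_regionalized(pareto_front, neighbor_sets, weights):
    """Pick the index of the Pareto member minimizing the closeness."""
    scores = [closeness(p, neighbor_sets, weights) for p in pareto_front]
    return int(np.argmin(scores))
```

With this shape, raising the weight of a physically meaningful parameter (e.g. one tied to soil properties) pulls the selection toward sets that agree with neighboring basins on that parameter.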
Regional estimation of extreme suspended sediment concentrations using watershed characteristics
NASA Astrophysics Data System (ADS)
Tramblay, Yves; Ouarda, Taha B. M. J.; St-Hilaire, André; Poulin, Jimmy
2010-01-01
The number of stations monitoring daily suspended sediment concentration (SSC) has been decreasing since the 1980s in North America, while suspended sediment is considered a key variable for water quality. The objective of this study is to test the feasibility of regionalising extreme SSC, i.e. estimating extreme SSC values for ungauged basins. Annual maximum SSC for 72 rivers in Canada and the USA were modelled with probability distributions in order to estimate quantiles corresponding to different return periods. Regionalisation techniques, originally developed for flood prediction in ungauged basins, were tested using the climatic, topographic, land cover and soils attributes of the watersheds. Two approaches were compared, using either physiographic characteristics or seasonality of extreme SSC to delineate the regions. Multiple regression models to estimate SSC quantiles as a function of watershed characteristics were built in each region, and compared to a global model including all sites. Regional estimates of SSC quantiles were compared with the local values. Results show that regional estimation of extreme SSC is more efficient than a global regression model including all sites. Groups/regions of stations have been identified, using either the watershed characteristics or the seasonality of occurrence of extreme SSC values, providing a method to better describe the extreme events of SSC. The most important variables for predicting extreme SSC are the percentage of clay in the soils, precipitation intensity and forest cover.
Enceladus: three-act play and current state
NASA Astrophysics Data System (ADS)
Luan, J.; Goldreich, P.
2017-12-01
Eccentricity (e) growth as Enceladus migrates deeper into mean motion resonance with Dione results in increased tidal heating. As the bottom of the ice shell melts, the rate of tidal heating jumps and runaway melting ensues. At the end of runaway melting, the shell's thickness has fallen below the value at which the frequency of free libration equals the orbital mean motion, and e has damped to well below its current value. Subsequently, both the shell thickness and e partake in a limit cycle. As e damps toward its minimum value, the shell's thickness asymptotically approaches its resonant value from below. After minimum e, the shell thickens quickly and e grows even faster. This cycle is likely to have been repeated multiple times in the past. Currently, e is much smaller than its equilibrium value corresponding to the shell thickness. Physical libration resonance resolves this mystery: it ensures that the low-e and medium-thickness state is present for most of the time between consecutive limit cycles. It is a robust scenario that avoids fine tuning or extreme parameter choices, and naturally produces episodic stages of high heating, consistent with the softening of topographical features on Enceladus.
Online estimation of the wavefront outer scale profile from adaptive optics telemetry
NASA Astrophysics Data System (ADS)
Guesalaga, A.; Neichel, B.; Correia, C. M.; Butterley, T.; Osborn, J.; Masciadri, E.; Fusco, T.; Sauvage, J.-F.
2017-02-01
We describe an online method to estimate the wavefront outer scale profile, L0(h), for very large and future extremely large telescopes. The stratified information on this parameter impacts the estimation of the main turbulence parameters (turbulence strength, Cn2(h); Fried's parameter, r0; isoplanatic angle, θ0; and coherence time, τ0) and determines the performance of wide-field adaptive optics (AO) systems. This technique estimates L0(h) using data from the AO loop available at the facility instruments by constructing the cross-correlation functions of the slopes between two or more wavefront sensors, which are later fitted to a linear combination of the simulated theoretical layers having different altitudes and outer scale values. We analyse some limitations found in the estimation process: (i) its insensitivity to large values of L0(h), as the telescope becomes blind to outer scales larger than its diameter; (ii) the maximum number of observable layers, given the limited number of independent inputs that the cross-correlation functions provide; and (iii) the minimum length of data required for a satisfactory convergence of the turbulence parameters without breaking the assumption of statistical stationarity of the turbulence. The method is applied to the Gemini South multiconjugate AO system that comprises five wavefront sensors and two deformable mirrors. Statistics of L0(h) at Cerro Pachón from data acquired during 3 yr of campaigns show interesting resemblance to other independent results in the literature. A final analysis suggests that the impact of error sources will be substantially reduced in instruments of the next generation of giant telescopes.
NASA Astrophysics Data System (ADS)
Lane, John; Kasparis, Takis; Michaelides, Silas
2016-04-01
The well-known Z-R power law Z = AR^b uses two parameters, A and b, to relate rainfall rate R to measured weather radar reflectivity Z. A common method used by researchers is to compute Z and R from disdrometer data and then extract the A-b parameter pair from a log-linear line fit to a scatter plot of Z-R pairs. Even though it may seem far more truthful to extract the parameter pair from a fit of radar Z versus gauge rainfall rate R_G, the extreme difference in spatial and temporal sampling volumes between radar and rain gauge creates a slew of problems that can generally only be solved by using rain gauge arrays and long sampling averages. Disdrometer-derived A-b parameters are easily obtained and can provide information for the study of stratiform versus convective rainfall. However, an inconsistency appears when comparing averaged A-b pairs from various researchers. Values of b range from 1.26 to 1.51 for both stratiform and convective events. Paradoxically, the values of A fall into three groups: 150 to 200 for convective; 200 to 400 for stratiform; and 400 to 500 again for convective. This apparent inconsistency can be explained by computing the A-b pair using the gamma DSD coupled with a modified drop terminal velocity model, v(D) = αD^β - w, where w is a somewhat artificial constant vertical velocity of the air above the disdrometer. This model predicts three regions of A, corresponding to w < 0, w = 0, and w > 0, which approximately matches observed data.
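The scatter-plot fitting procedure described above is a plain log-linear regression; a minimal sketch of that standard step (not the authors' gamma-DSD calculation):

```python
import numpy as np

def fit_zr(Z, R):
    """Fit the Z = A * R**b power law by log-linear least squares.

    Z in mm^6 m^-3 and R in mm h^-1, as is conventional for
    disdrometer-derived Z-R pairs.
    """
    logZ = np.log10(np.asarray(Z, float))
    logR = np.log10(np.asarray(R, float))
    # log10 Z = b * log10 R + log10 A  ->  slope b, intercept log10 A
    b, logA = np.polyfit(logR, logZ, 1)
    return 10.0 ** logA, b
```

On synthetic pairs generated from a known law (e.g. Z = 300 R^1.4, the classic Marshall-Palmer-type values), the fit recovers A and b exactly; with real disdrometer scatter, the recovered pair is what gets averaged and compared across studies.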
NASA Astrophysics Data System (ADS)
Yang, P.; Ng, T. L.; Yang, W.
2015-12-01
Effective water resources management depends on the reliable estimation of the uncertainty of drought events. Confidence intervals (CIs) are commonly applied to quantify this uncertainty. A CI seeks to be of the minimal length necessary to cover the true value of the estimated variable with the desired probability. In drought analysis, where two or more variables (e.g., duration and severity) are often used to describe a drought, copulas have been found suitable for representing the joint probability behavior of these variables. However, the comprehensive assessment of the parameter uncertainties of copulas of droughts has been largely ignored, and the few studies that have recognized this issue have not explicitly compared the various methods to produce the best CIs. Thus, the objective of this study is to compare the CIs generated using two widely applied uncertainty estimation methods, bootstrapping and Markov Chain Monte Carlo (MCMC). To achieve this objective, (1) the marginal distributions lognormal, Gamma, and Generalized Extreme Value, and the copula functions Clayton, Frank, and Plackett are selected to construct joint probability functions of two drought-related variables; (2) the resulting joint functions are then fitted to 200 sets of simulated realizations of drought events with known distribution and extreme parameters; and (3) from there, using bootstrapping and MCMC, CIs of the parameters are generated and compared. The effect of an informative prior on the CIs generated by MCMC is also evaluated. CIs are produced for different sample sizes (50, 100, and 200) of the simulated drought events for fitting the joint probability functions. Preliminary results assuming lognormal marginal distributions and the Clayton copula function suggest that for cases with small or medium sample sizes (~50-100), MCMC is the superior method if an informative prior exists.
Where an informative prior is unavailable, for small sample sizes (~50), both bootstrapping and MCMC yield the same level of performance, and for medium sample sizes (~100), bootstrapping is better. For cases with a large sample size (~200), there is little difference between the CIs generated using bootstrapping and MCMC regardless of whether or not an informative prior exists.
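The bootstrapping branch of the comparison amounts to resampling the drought sample with replacement and re-estimating the parameter each time. A minimal percentile-bootstrap sketch (the estimator and interval form here are generic illustrations, not the study's exact copula configuration):

```python
import numpy as np

def bootstrap_ci(data, estimator, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for any sample estimator.

    `estimator` is a function of a 1-D sample, e.g. a marginal or
    copula parameter fit; the CI is read off the empirical quantiles
    of the bootstrap replicates.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data, float)
    stats = np.empty(n_boot)
    for i in range(n_boot):
        resample = rng.choice(data, size=data.size, replace=True)
        stats[i] = estimator(resample)
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

The interval narrows roughly as 1/sqrt(n), which is why the study's small-sample cases (~50) are where an informative MCMC prior can beat plain resampling.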
NASA Astrophysics Data System (ADS)
Caporali, E.; Chiarello, V.; Galeati, G.
2014-12-01
Peak discharge estimates for a given return period are of primary importance in engineering practice for risk assessment and hydraulic structure design. Different statistical methods are chosen here for the assessment of the flood frequency curve: one indirect technique based on extreme rainfall event analysis, and the Peak Over Threshold (POT) model and the Annual Maxima approach as direct techniques using river discharge data. In the framework of the indirect method, a Monte Carlo simulation approach is adopted to determine a derived frequency distribution of peak runoff using a probabilistic formulation of the SCS-CN method as the stochastic rainfall-runoff model. A Monte Carlo simulation is used to generate a sample of different runoff events from different stochastic combinations of rainfall depth, storm duration, and initial loss inputs. The distribution of the rainfall storm events is assumed to follow the GP law, whose parameters are estimated through the GEV parameters of annual maximum data. The evaluation of the initial abstraction ratio is investigated, since it is one of the most questionable assumptions in the SCS-CN model and plays a key role in river basins characterized by high-permeability soils, mainly governed by the infiltration excess mechanism. In order to take into account the uncertainty of the model parameters, this modified approach, which is able to revise and re-evaluate the original value of the initial abstraction ratio, is implemented. In the POT model the choice of the threshold is an essential issue, mainly based on a compromise between bias and variance. The Generalized Extreme Value (GEV) distribution fitted to the annual maximum discharges is therefore compared with the Pareto-distributed peaks to check the suitability of the frequency-of-occurrence representation. The methodology is applied to a large dam in the Serchio river basin, located in the Tuscany Region.
The application has shown that the Monte Carlo simulation technique can be a useful tool to provide a more robust estimation than the results obtained by the direct statistical methods.
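The direct annual-maxima branch can be sketched with a method-of-moments Gumbel fit (the GEV shape-zero limit; the study fits a full GEV, so this is an illustrative simplification):

```python
import numpy as np

def gumbel_quantile(annual_maxima, T):
    """T-year return level from annual maxima, Gumbel fit by moments.

    Location mu and scale beta come from the sample mean and standard
    deviation via the Gumbel moment relations; the return level is the
    quantile at non-exceedance probability F = 1 - 1/T.
    """
    x = np.asarray(annual_maxima, float)
    beta = np.sqrt(6.0) * x.std(ddof=1) / np.pi     # scale
    mu = x.mean() - 0.5772 * beta                   # location (Euler gamma)
    F = 1.0 - 1.0 / T
    return mu - beta * np.log(-np.log(F))
```

A POT analysis of the same record would instead fit a Generalized Pareto distribution to peaks over a chosen threshold, which is where the bias-variance compromise mentioned in the abstract enters.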
NASA Astrophysics Data System (ADS)
Hayat, Tanzila; Nadeem, S.
2018-03-01
This paper examines the three-dimensional Eyring-Powell fluid flow over an exponentially stretching surface with heterogeneous-homogeneous chemical reactions. A new model of heat flux suggested by Cattaneo and Christov is employed to study the properties of relaxation time. From the present analysis we observe that there is an inverse relationship between temperature and thermal relaxation time. The temperature in the Cattaneo-Christov heat flux model is lower than in the classical Fourier model. In this paper the three-dimensional Cattaneo-Christov heat flux model over an exponentially stretching surface is calculated for the first time in the literature. For negative values of the temperature exponent, the temperature profile first rises to its maximum value and then gradually declines to zero, which indicates the occurrence of the "Sparrow-Gregg hill" (SGH) phenomenon. Also, for higher values of the strength of the reaction parameters, the concentration profile decreases.
An Improved Image Ringing Evaluation Method with Weighted Sum of Gray Extreme Value
NASA Astrophysics Data System (ADS)
Yang, Ling; Meng, Yanhua; Wang, Bo; Bai, Xu
2018-03-01
Blind image restoration algorithms usually produce ringing that is most obvious at edges. The ringing phenomenon is mainly affected by noise, the type of restoration algorithm, and the accuracy of the blur kernel estimate obtained during restoration. Based on the physical mechanism of ringing, a method for evaluating the ringing in blind restoration images is proposed. The method extracts the overshooting and ripple regions of the ringing image and computes weighted statistics of the regional gradient values. With weights set by multiple experiments, the edge information is used to characterize the details of the edge, determine the weights, and quantify the severity of the ringing effect, yielding an evaluation method for the ringing caused by blind restoration. The experimental results show that the method can effectively evaluate the ringing effect in restored images under different restoration algorithms and different restoration parameters. The evaluation results are consistent with visual evaluation results.
Specification of ISS Plasma Environment Variability
NASA Technical Reports Server (NTRS)
Minow, Joseph I.; Neergaard, Linda F.; Bui, Them H.; Mikatarian, Ronald R.; Barsamian, H.; Koontz, Steven L.
2004-01-01
Quantifying spacecraft charging risks and associated hazards for the International Space Station (ISS) requires a plasma environment specification for the natural variability of ionospheric temperature (Te) and density (Ne). Empirical ionospheric specification and forecast models such as the International Reference Ionosphere (IRI) model typically provide only long-term (seasonal) mean Te and Ne values for the low Earth orbit environment. This paper describes a statistical analysis of historical low Earth orbit ionospheric plasma measurements from the AE-C, AE-D, and DE-2 satellites, used to derive a model of the deviations of observed Ne and Te values from IRI-2001 estimates at each data point, providing a statistical basis for modeling departures of the plasma environment from the IRI model output. Applying the deviation model to the IRI-2001 output yields a method for estimating extreme environments for ISS spacecraft charging analysis.
Beam echoes in the presence of coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gross, Axel
2017-10-03
Transverse beam echoes could provide a new technique for measuring diffusion characteristics orders of magnitude faster than current methods; however, their interaction with many accelerator parameters is poorly understood. Using a program written in C, we explored the relationship between coupling and echo strength. We found that echoes could be generated in both dimensions, even with a dipole kick in only one dimension. We found that the echo effects are not destroyed even when there is strong coupling, falling off only at extremely high coupling values. We found that at intermediate values of skew quadrupole strength, the decoherence time of the beam is greatly increased, causing a destruction of the echo effects. We found that this is caused by a narrowing of the tune width of the particles. Results from this study will help to provide recommendations to IOTA (Integrable Optics Test Accelerator) for their upcoming echo experiment.
The extrudate swell of HDPE: Rheological effects
NASA Astrophysics Data System (ADS)
Konaganti, Vinod Kumar; Ansari, Mahmoud; Mitsoulis, Evan; Hatzikiriakos, Savvas G.
2017-05-01
The extrudate swell of an industrial grade high molecular weight high-density polyethylene (HDPE) in capillary dies is studied experimentally and numerically using the integral K-BKZ constitutive model. The non-linear viscoelastic flow properties of the polymer resin are studied for a broad range of large step shear strains and high shear rates using the cone partitioned plate (CPP) geometry of the stress/strain controlled rotational rheometer. This allowed the determination of the rheological parameters accurately, in particular the damping function, which is proven to be the most important in simulating transient flows such as extrudate swell. A series of simulations performed using the integral K-BKZ Wagner model with different values of the Wagner exponent n, ranging from n=0.15 to 0.5, demonstrates that the extrudate swell predictions are extremely sensitive to the Wagner damping function exponent. Using the correct n-value resulted in extrudate swell predictions that are in excellent agreement with experimental measurements.
NASA Astrophysics Data System (ADS)
Baddari, Kamel; Bellalem, Fouzi; Baddari, Ibtihel; Makdeche, Said
2016-10-01
Statistical tests have been used to adjust the Zemmouri seismic data using a distribution function. The Pareto law has been used, and the probabilities of various expected earthquakes were computed. A mathematical expression giving the quantiles was established. The limiting law of extreme values confirmed the accuracy of the adjustment method. Using the moment magnitude scale, a probabilistic model was built to predict the occurrence of strong earthquakes. The seismic structure has been characterized by the slope of the recurrence plot γ, the fractal dimension D, the concentration parameter K_sr, and the Hurst exponents H_r and H_t. The values of D, γ, K_sr, H_r, and H_t diminished many months before the principal seismic shock (M = 6.9) of the studied seismoactive zone occurred. Three stages of deformation of the geophysical medium are manifested in the variation of the coefficient G% of the clustering of minor seismic events.
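The Pareto-law quantile step described above can be sketched generically. This is an illustration of the standard formulas only, not the authors' code; the Hill estimator and the parameter names (x_min, alpha) are assumptions for the sketch:

```python
import numpy as np

def pareto_quantile(f, x_min, alpha):
    """Quantile of a Pareto law with tail P(X > x) = (x_min / x)**alpha:
    invert F(x) = 1 - (x_min / x)**alpha at non-exceedance probability f."""
    return x_min * (1.0 - f) ** (-1.0 / alpha)

def hill_alpha(sample, x_min):
    """Maximum-likelihood (Hill) estimate of the Pareto exponent from the
    sample values at or above x_min."""
    tail = np.asarray([x for x in sample if x >= x_min])
    return len(tail) / np.sum(np.log(tail / x_min))
```

Given the fitted exponent, `pareto_quantile` returns the magnitude level expected to be exceeded with probability 1 - f, which is the kind of expression for quantiles the abstract refers to.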
On the identification of Dragon Kings among extreme-valued outliers
NASA Astrophysics Data System (ADS)
Riva, M.; Neuman, S. P.; Guadagnini, A.
2013-07-01
Extreme values of earth, environmental, ecological, physical, biological, financial and other variables often form outliers to heavy tails of empirical frequency distributions. Quite commonly such tails are approximated by stretched exponential, log-normal or power functions. Recently there has been an interest in distinguishing between extreme-valued outliers that belong to the parent population of most data in a sample and those that do not. The first type, called Gray Swans by Nassim Nicholas Taleb (often confused in the literature with Taleb's totally unknowable Black Swans), is drawn from a known distribution of the tails which can thus be extrapolated beyond the range of sampled values. However, the magnitudes and/or space-time locations of unsampled Gray Swans cannot be foretold. The second type of extreme-valued outliers, termed Dragon Kings by Didier Sornette, may in his view be sometimes predicted based on how other data in the sample behave. This intriguing prospect has recently motivated some authors to propose statistical tests capable of identifying Dragon Kings in a given random sample. Here we apply three such tests to log air permeability data measured on the faces of a Berea sandstone block and to synthetic data generated in a manner statistically consistent with these measurements. We interpret the measurements to be, and generate synthetic data that are, samples from α-stable sub-Gaussian random fields subordinated to truncated fractional Gaussian noise (tfGn). All these data have frequency distributions characterized by power-law tails with extreme-valued outliers about the tail edges.
Extreme between-study homogeneity in meta-analyses could offer useful insights.
Ioannidis, John P A; Trikalinos, Thomas A; Zintzaras, Elias
2006-10-01
Meta-analyses are routinely evaluated for the presence of large between-study heterogeneity. We examined whether it is also important to probe whether there is extreme between-study homogeneity. We used heterogeneity tests with left-sided statistical significance for inference and developed a Monte Carlo simulation test for testing extreme homogeneity in risk ratios across studies, using the empirical distribution of the summary risk ratio and heterogeneity statistic. A left-sided P=0.01 threshold was set for claiming extreme homogeneity to minimize type I error. Among 11,803 meta-analyses with binary contrasts from the Cochrane Library, 143 (1.21%) had a left-sided P-value <0.01 for the asymptotic Q statistic and 1,004 (8.50%) had a left-sided P-value <0.10. The frequency of extreme between-study homogeneity did not depend on the number of studies in the meta-analyses. We identified examples where extreme between-study homogeneity (left-sided P-value <0.01) could result from various possibilities beyond chance. These included inappropriate statistical inference (asymptotic vs. Monte Carlo), use of a specific effect metric, correlated data or stratification using strong predictors of outcome, and biases and potential fraud. Extreme between-study homogeneity may provide useful insights about a meta-analysis and its constituent studies.
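The asymptotic version of such a homogeneity probe can be sketched with Cochran's Q and a left-sided chi-squared p-value. This is a generic illustration, not the authors' Monte Carlo test, and the function names are assumptions:

```python
import numpy as np
from scipy import stats

def cochran_q(effects, variances):
    """Cochran's Q for study effects (e.g. log risk ratios) with known
    within-study variances: weighted squared deviations from the
    fixed-effect summary."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    theta_fixed = np.sum(w * effects) / np.sum(w)   # fixed-effect summary
    return float(np.sum(w * (effects - theta_fixed) ** 2))

def left_sided_p(q, k):
    """Asymptotic left-sided p-value P(chi2_{k-1} <= Q) for k studies;
    very small values flag less spread than chance predicts, i.e.
    extreme between-study homogeneity."""
    return float(stats.chi2.cdf(q, df=k - 1))
```

As the abstract notes, the asymptotic chi-squared reference can itself mislead for sparse data, which is why the authors resort to a Monte Carlo null distribution.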
NASA Astrophysics Data System (ADS)
Rieder, Harald E.; Jancso, Leonhardt M.; Rocco, Stefania Di; Staehelin, Johannes; Maeder, Joerg A.; Peter, Thomas; Ribatet, Mathieu; Davison, Anthony C.; de Backer, Hugo; Koehler, Ulf; Krzyścin, Janusz; Vaníček, Karel
2011-11-01
We apply methods from extreme value theory to identify extreme events in high (termed EHOs) and low (termed ELOs) total ozone and to describe the distribution tails (i.e. very high and very low values) of five long-term European ground-based total ozone time series. The influence of these extreme events on observed mean values, long-term trends and changes is analysed. The results show a decrease in EHOs and an increase in ELOs during the last decades, and establish that the observed downward trend in column ozone during the 1970-1990s is strongly dominated by changes in the frequency of extreme events. Furthermore, it is shown that clear ‘fingerprints’ of atmospheric dynamics (NAO, ENSO) and chemistry [ozone depleting substances (ODSs), polar vortex ozone loss] can be found in the frequency distribution of ozone extremes, even if no attribution is possible from standard metrics (e.g. annual mean values). The analysis complements earlier analysis for the world's longest total ozone record at Arosa, Switzerland, confirming and revealing the strong influence of atmospheric dynamics on observed ozone changes. The results provide clear evidence that in addition to ODS, volcanic eruptions and strong/moderate ENSO and NAO events had significant influence on column ozone in the European sector.
NASA Astrophysics Data System (ADS)
Jacobson, Heather R.; Keller, Stefan; Frebel, Anna; Casey, Andrew R.; Asplund, Martin; Bessell, Michael S.; Da Costa, Gary S.; Lind, Karin; Marino, Anna F.; Norris, John E.; Peña, José M.; Schmidt, Brian P.; Tisserand, Patrick; Walsh, Jennifer M.; Yong, David; Yu, Qinsi
2015-07-01
The SkyMapper Southern Sky Survey is carrying out a search for the most metal-poor stars in the Galaxy. It identifies candidates by way of its unique filter set, which allows for estimation of stellar atmospheric parameters. The set includes a narrow filter centered on the Ca II K 3933 Å line, enabling a robust estimate of stellar metallicity. Promising candidates are then confirmed with spectroscopy. We present the analysis of Magellan Inamori Kyocera Echelle high-resolution spectroscopy of 122 metal-poor stars found by SkyMapper in the first two years of commissioning observations. Forty-one stars have [Fe/H] ≤ -3.0. Nine have [Fe/H] ≤ -3.5, with three at [Fe/H] ~ -4. A 1D LTE abundance analysis of the elements Li, C, Na, Mg, Al, Si, Ca, Sc, Ti, Cr, Mn, Co, Ni, Zn, Sr, Ba, and Eu shows these stars have [X/Fe] ratios typical of other halo stars. One star with low [X/Fe] values appears to be "Fe-enhanced," while another star has an extremely large [Sr/Ba] ratio: > 2. Only one other star is known to have a comparable value. Seven stars are "CEMP-no" stars ([C/Fe] > 0.7, [Ba/Fe] < 0). Twenty-one stars exhibit mild r-process element enhancements (0.3 ≤ [Eu/Fe] < 1.0), while four stars have [Eu/Fe] ≥ 1.0. These results demonstrate the ability to identify extremely metal-poor stars from SkyMapper photometry, pointing to increased sample sizes and a better characterization of the metal-poor tail of the halo metallicity distribution function in the future. This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile.
Situ, Jie; Wu, Jian; Wang, Jing-lin; Zhu, De-xiang; Zhang, Jian-jie; Liu, Wei-wei; Qin, Zhuo-hui
2012-05-01
To study the sympathetic skin response (SSR) to the effects of N-hexane on autonomic nerve function in patients with chronic N-hexane poisoning. The subjects in the present study included 30 controls and 37 cases with chronic N-hexane poisoning. The 37 patients were divided into 3 subgroups (mild, moderate and severe poisoning) according to the diagnostic criteria of occupational diseases. All subjects were examined by SSR and nerve conduction velocity (NCV) tests. All patients were re-examined by SSR and NCV every 1-2 months. The differences in SSR parameters (latency, amplitude) among groups were observed. In the severe poisoning subgroup, the changes of SSR and NCV parameters (conduction velocity, amplitude) at different poisoning stages were observed. There were significant differences in SSR latency of the upper extremity among groups, and significant differences in SSR amplitude of the upper and lower extremities among groups (P < 0.05). No significant differences in SSR parameters were found between adjacent groups (P > 0.05). There were significant differences in SSR latency of the upper extremity during different periods, and significant differences in SSR amplitude of the upper and lower extremities during different periods among all groups (P < 0.05). The changes in SSR parameters were consistent with those in NCV. The longest SSR latency of the upper extremity and the smallest SSR amplitudes of the upper and lower extremities appeared 1-2 months earlier than the smallest action potential amplitude. The damage to autonomic nerves induced by N-hexane increased as the poisoning progressed. The damage to autonomic nerves corresponded with the damage to the myelin sheath of large myelinated nerves, but appeared 1-2 months earlier than the damage to the axons of large myelinated nerves. The SSR test may serve as a method to detect damage to autonomic nerve function in patients with chronic N-hexane poisoning.
Pulsed Electromagnetic Acceleration of Plasmas
NASA Technical Reports Server (NTRS)
Thio, Y. C. Francis; Cassibry, Jason T.; Markusic, Tom E.; Rodgers, Stephen L. (Technical Monitor)
2002-01-01
A major shift in paradigm in driving pulsed plasma thrusters is necessary if the original goal of accelerating a plasma sheet efficiently to high velocities as a plasma "slug" is to be realized. Firstly, the plasma interior needs to be highly collisional so that it can be dammed by the plasma edge layer (upstream) adjacent to the driving 'vacuum' magnetic field. Secondly, the plasma edge layer needs to be strongly magnetized, with a Hall parameter of the order of unity in this region, to ensure excellent coupling of the Lorentz force to the plasma. Thirdly, to prevent and/or suppress the occurrence of secondary arcs or restrike behind the plasma, the region behind the plasma needs to be collisionless and strongly magnetized with a sufficiently large Hall parameter. This places a vacuum requirement on the bore conditions prior to the shot. These requirements are quantified in the paper and lead to the introduction of three new design parameters corresponding to these three plasma requirements. The first parameter, labeled in the paper as gamma (sub 1), pertains to the permissible ratio of the diffusive excursion of the plasma during the course of the acceleration to the plasma longitudinal dimension. The second parameter is the required Hall parameter of the edge plasma region, and the third is the required Hall parameter of the region behind the plasma. Experimental research is required to quantify the values of these design parameters. Based upon fundamental theory of transport processes in plasma, some theoretical guidance on the choice of these parameters is provided to help design the necessary experiments to acquire these data.
OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. BOETTCHER; A. PERCUS
2000-08-01
We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by ''self-organized criticality,'' a concept introduced to describe emergent complexity in many physical systems. In contrast to Genetic Algorithms, which operate on an entire ''gene-pool'' of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called ''avalanches,'' ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Those phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We will demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
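The basic move of extremal optimization, replacing the extremely undesirable element of a sub-optimal solution, can be sketched on a toy graph 2-coloring instance. This is an illustrative sketch, not the authors' implementation; the one-parameter τ-EO variant would instead rank elements by fitness and pick among them with power-law probability:

```python
import random

def eo_two_color(edges, n, steps=500, seed=0):
    """Basic extremal optimization for graph 2-coloring: repeatedly flip
    the worst-fit vertex (the one with the most monochromatic incident
    edges), keeping the best coloring seen along the way."""
    rng = random.Random(seed)
    color = [rng.randint(0, 1) for _ in range(n)]      # random start
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def bad_edges(c):
        return sum(c[u] == c[v] for u, v in edges)

    best, best_cost = list(color), bad_edges(color)
    for _ in range(steps):
        # per-vertex fitness: fewer conflicting incident edges = fitter
        conflict = [sum(color[u] == color[v] for v in adj[u]) for u in range(n)]
        worst = max(range(n), key=lambda u: conflict[u])
        color[worst] ^= 1          # replace the extremely undesirable element
        cost = bad_edges(color)
        if cost < best_cost:       # large fluctuations are allowed; only
            best, best_cost = list(color), cost  # the best state is kept
    return best, best_cost
```

Because bad moves are accepted unconditionally, the dynamics produce the avalanche-like fluctuations described above while the running best records the high-quality solutions they uncover.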
The Value of Certainty (Invited)
NASA Astrophysics Data System (ADS)
Barkstrom, B. R.
2009-12-01
It is clear that Earth science data are valued, in part, for their ability to provide some certainty about the past state of the Earth and about its probable future states. We can sharpen this notion by using seven categories of value:
● Warning Service, requiring latency of three hours or less, as well as uninterrupted service
● Information Service, requiring latency less than about two weeks, as well as uninterrupted service
● Process Information, requiring the ability to distinguish between alternative processes
● Short-term Statistics, requiring the ability to construct a reliable record of the statistics of a parameter for an interval of five years or less, e.g. crop insurance
● Mid-term Statistics, requiring the ability to construct a reliable record of the statistics of a parameter for an interval of twenty-five years or less, e.g. power plant siting
● Long-term Statistics, requiring the ability to construct a reliable record of the statistics of a parameter for an interval of a century or less, e.g. one hundred year flood planning
● Doomsday Statistics, requiring the ability to construct a reliable statistical record that is useful for reducing the impact of 'doomsday' scenarios
While the first two of these categories place high value on having an uninterrupted flow of information, and the third places value on contributing to our understanding of physical processes, it is notable that the last four may be placed on a common footing by considering the ability of observations to reduce uncertainty. Quantitatively, we can often identify metrics for parameters of interest that are fairly simple.
For example:
● Detection of a change in the average value of a single parameter, such as global temperature
● Detection of a trend, whether linear or nonlinear, such as the trend in cloud forcing known as cloud feedback
● Detection of a change in extreme value statistics, such as flood frequency or drought severity
For such quantities, we can quantify uncertainty in terms of the entropy, which is calculated by creating a set of discrete bins for the value and then using error estimates to assign a probability p_i to each bin. The entropy H is simply H = Σ_i p_i log2(1/p_i). The value of a new set of observations is the information gain I, which is I = H_prior - H_posterior. The probability distributions that appear in this calculation depend on rigorous evaluation of errors in the observations. While direct estimates of the monetary value of data that could be used in budget prioritizations may not capture the value of data to the scientific community, it appears that the information gain may be a useful start in providing a 'common currency' for evaluating projects that serve very different communities. In addition, from the standpoint of governmental accounting, it appears reasonable to assume that much of the expense for scientific data becomes a sunk cost shortly after operations begin, and that the real, long-term value is created by the effort scientists expend in creating the software that interprets the data and in the effort expended in calibration and validation. These efforts are the ones that directly contribute to the information gain that provides the value of these data.
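The entropy and information-gain calculation above is simple enough to state directly in code; the binning and the error-based assignment of probabilities are assumed to have been done already:

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = sum_i p_i * log2(1/p_i) over discrete bins,
    in bits; zero-probability bins contribute nothing."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0.0)

def information_gain(prior_probs, posterior_probs):
    """I = H_prior - H_posterior: the number of bits of uncertainty
    removed by a new set of observations."""
    return entropy_bits(prior_probs) - entropy_bits(posterior_probs)
```

For instance, observations that narrow a parameter from four equally likely bins to two yield one bit of information gain.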
NASA Astrophysics Data System (ADS)
Schlögl, Matthias; Laaha, Gregor
2017-04-01
The assessment of road infrastructure exposure to extreme weather events is of major importance for scientists and practitioners alike. In this study, we compare the different extreme value approaches and fitting methods with respect to their value for assessing the exposure of transport networks to extreme precipitation and temperature impacts. Based on an Austrian data set from 25 meteorological stations representing diverse meteorological conditions, we assess the added value of partial duration series (PDS) over the standardly used annual maxima series (AMS) in order to give recommendations for performing extreme value statistics of meteorological hazards. Results show the merits of the robust L-moment estimation, which yielded better results than maximum likelihood estimation in 62 % of all cases. At the same time, results question the general assumption of the threshold excess approach (employing PDS) being superior to the block maxima approach (employing AMS) due to information gain. For low return periods (non-extreme events) the PDS approach tends to overestimate return levels as compared to the AMS approach, whereas an opposite behavior was found for high return levels (extreme events). In extreme cases, an inappropriate threshold was shown to lead to considerable biases that may outperform the possible gain of information from including additional extreme events by far. This effect was visible from neither the square-root criterion nor standardly used graphical diagnosis (mean residual life plot) but rather from a direct comparison of AMS and PDS in combined quantile plots. We therefore recommend performing AMS and PDS approaches simultaneously in order to select the best-suited approach. This will make the analyses more robust, not only in cases where threshold selection and dependency introduces biases to the PDS approach but also in cases where the AMS contains non-extreme events that may introduce similar biases. 
For assessing the performance of extreme events we recommend the use of conditional performance measures that focus on rare events only in addition to standardly used unconditional indicators. The findings of the study directly address road and traffic management but can be transferred to a range of other environmental variables including meteorological and hydrological quantities.
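The two approaches being compared can be sketched as follows, using maximum likelihood fits for brevity (the study's preferred L-moment estimation and its threshold diagnostics are not reproduced; numpy/scipy and a simple year-blocking of the daily series are assumptions of the sketch):

```python
import numpy as np
from scipy import stats

def ams_return_level(daily, years, T):
    """Block-maxima (AMS) approach: fit a GEV to the annual maxima by
    maximum likelihood and return the T-year return level
    (the 1 - 1/T quantile)."""
    ams = np.asarray(daily).reshape(years, -1).max(axis=1)
    c, loc, scale = stats.genextreme.fit(ams)
    return float(stats.genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale))

def pds_return_level(daily, years, threshold, T):
    """Threshold-excess (PDS/POT) approach: fit a generalized Pareto
    distribution to the excesses over the threshold and combine it with
    the mean exceedance rate to obtain the T-year return level."""
    daily = np.asarray(daily)
    exc = daily[daily > threshold] - threshold
    xi, _, sigma = stats.genpareto.fit(exc, floc=0.0)
    lam = len(exc) / years                    # mean exceedances per year
    # x_T = u + (sigma/xi) * ((lam*T)**xi - 1); -> u + sigma*ln(lam*T) as xi -> 0
    return float(threshold + sigma / xi * ((lam * T) ** xi - 1.0))
```

Running both functions on the same series and comparing their quantiles in a combined plot is exactly the kind of cross-check the study recommends, since an ill-chosen threshold biases the PDS estimate.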
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hosking, Jonathan R. M.; Natarajan, Ramesh
The computer creates a utility demand forecast model for weather parameters by receiving a plurality of utility parameter values, wherein each received utility parameter value corresponds to a weather parameter value; determining that a range of weather parameter values lacks a sufficient amount of corresponding received utility parameter values; determining one or more utility parameter values that correspond to that range of weather parameter values; and creating a model which correlates the received and determined utility parameter values with the corresponding weather parameter values.
Kara, Fatih; Yucel, Ismail
2015-09-01
This study investigates the climate change impact on the changes of mean and extreme flows under current and future climate conditions in the Omerli Basin of Istanbul, Turkey. Outputs from 15 regional climate models from the EU-ENSEMBLES project and a downscaling method based on local implications from geophysical variables were used for the comparative analyses. An automated calibration algorithm was used to optimize the parameters of the Hydrologiska Byråns Vattenbalansavdelning (HBV) model for the study catchment using observed daily temperature and precipitation. The calibrated HBV model was implemented to simulate daily flows using precipitation and temperature data from the climate models, with and without the downscaling method, for reference (1960-1990) and scenario (2071-2100) periods. Flood indices were derived from daily flows, and their changes throughout the four seasons and the year were evaluated by comparing their values derived from simulations corresponding to the current and future climate. All climate models strongly underestimate precipitation, while downscaling improves this underestimation, particularly for extreme events. Driven by precipitation input from the climate models with and without downscaling, the HBV model also significantly underestimates daily mean and extreme flows in all seasons. However, this underestimation is markedly improved for all seasons, especially for spring and winter, through the use of downscaled inputs. Changes in extreme flows from the reference to the future period increased for the winter and spring and decreased for the fall and summer seasons. These changes were more significant with downscaled inputs. With respect to the current period, higher flow magnitudes for given return periods will be experienced in the future; hence, in the planning of the Omerli reservoir, the effective storage and water use should be sustained.
Predicting Flood Hazards in Systems with Multiple Flooding Mechanisms
NASA Astrophysics Data System (ADS)
Luke, A.; Schubert, J.; Cheng, L.; AghaKouchak, A.; Sanders, B. F.
2014-12-01
Delineating flood zones in systems that are susceptible to flooding from a single mechanism (riverine flooding) is a relatively well defined procedure, with specific guidance from agencies such as FEMA and USACE. However, there is little guidance on delineating flood zones in systems that are susceptible to flooding from multiple mechanisms such as storm surge, waves, tidal influence, and riverine flooding. In this study, a new flood mapping method which accounts for multiple extremes occurring simultaneously is developed and exemplified. The study site in which the method is employed is the Tijuana River Estuary (TRE), located in Southern California adjacent to the U.S./Mexico border. TRE is an intertidal coastal estuary that receives freshwater flows from the Tijuana River. Extreme discharge from the Tijuana River is the primary driver of flooding within TRE; however, tide level and storm surge also play a significant role in flooding extent and depth. A comparison between measured flows at the Tijuana River and ocean levels revealed a correlation between extreme discharge and ocean height. Using a novel statistical method based upon extreme value theory, ocean heights were predicted conditioned upon extreme discharge occurring within the Tijuana River. This statistical technique could also be applied to other systems in which different factors are identified as the primary drivers of flooding, such as significant wave height conditioned upon tide level, for example. Using the predicted ocean levels conditioned upon varying return levels of discharge as forcing parameters for the 2D hydraulic model BreZo, the 100-, 50-, 20-, and 10-year floodplains were delineated. The results will then be compared to floodplains delineated using the standard methods recommended by FEMA for riverine zones with a downstream ocean boundary.
NASA Astrophysics Data System (ADS)
Wang, Cailin; Ren, Xuehui; Li, Ying
2017-04-01
We defined the threshold of extreme precipitation using detrended fluctuation analysis based on daily precipitation during 1955-2013 in Kuandian County, Liaoning Province. Three-dimensional copulas were introduced to analyze the characteristics of four extreme precipitation factors: the annual extreme precipitation days, the extreme precipitation amount, the annual average extreme precipitation intensity, and the extreme precipitation rate of contribution. The results show that (1) the threshold is 95.0 mm; extreme precipitation events generally occur 1-2 times a year, the average extreme precipitation intensity is 100-150 mm, and the extreme precipitation amount is 100-270 mm, accounting for 10 to 37 % of annual precipitation. (2) The generalized extreme value distribution, the extreme value distribution, and the generalized Pareto distribution are suitable for fitting the distribution function for each element of extreme precipitation. The Ali-Mikhail-Haq (AMH) copula function reflects the joint characteristics of the extreme precipitation factors. (3) The return periods of the three types have significant synchronicity, and the joint return period and co-occurrence return period show a long delay when the return period of the single factor is long. This reflects the inseparability of the extreme precipitation factors. The co-occurrence return period is longer than both the single-factor and the joint return periods. (4) Single-factor fitting only reflects single-factor information on extreme precipitation and is unrelated to the relationship between factors. Three-dimensional copulas represent the internal information of the extreme precipitation factors and are closer to the actual situation. The copula function is potentially widely applicable for the multiple factors of extreme precipitation.
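The AMH copula and the two kinds of return period mentioned above can be written down directly. This is a generic sketch of the standard bivariate formulas (mu, the mean interarrival time, and all parameter values are illustrative, not fitted values from the study):

```python
def amh_copula(u, v, theta):
    """Ali-Mikhail-Haq copula C(u, v) = u*v / (1 - theta*(1-u)*(1-v)),
    with dependence parameter theta in [-1, 1)."""
    return u * v / (1.0 - theta * (1.0 - u) * (1.0 - v))

def return_periods(u, v, theta, mu=1.0):
    """'Joint' (either variable exceeds its level) and 'co-occurrence'
    (both exceed) return periods for marginal non-exceedance
    probabilities u, v; mu is the mean interarrival time in years."""
    c = amh_copula(u, v, theta)
    t_joint = mu / (1.0 - c)             # OR case: at least one exceedance
    t_cooccur = mu / (1.0 - u - v + c)   # AND case: simultaneous exceedance
    return t_joint, t_cooccur
```

For independent factors (theta = 0) with u = v = 0.9, the joint return period is about 5 years while the co-occurrence return period is 100 years, illustrating the abstract's point that the co-occurrence period is the longest of the three.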
Extremely High Resolution Spectroscopy of Oxide Electronic Systems
2013-01-29
about 0.3-0.4 Bohr magnetons per unit cell - extremely strong, and it may be indicative of an unusual order parameter in the superconductor (Publication [1]). The enhancement exists even in a highly disordered sample, while the origin of the effect may lie in the same exchange ... Each of these results has led to interesting questions (detailed below) that we would like to ...
Nonparametric Regression Subject to a Given Number of Local Extreme Value
2001-07-01
Nonparametric regression subject to a given number of local extreme values, Ali Majidi and Laurie Davies. ... locations of the local extremes for the smoothing algorithm. ... We make the smoothing problem precise ... is the solution of QP3 as k → ∞. [Fig. 2: the top-left, top-right, bottom-left, and bottom-right panels show the result of the ...]
Drought Dynamics and Food Security in Ukraine
NASA Astrophysics Data System (ADS)
Kussul, N. M.; Kogan, F.; Adamenko, T. I.; Skakun, S. V.; Kravchenko, O. M.; Kryvobok, O. A.; Shelestov, A. Y.; Kolotii, A. V.; Kussul, O. M.; Lavrenyuk, A. M.
2012-12-01
In recent years food security has become a problem of great importance at the global, national and regional scale. Ukraine is one of the most developed agricultural countries and one of the biggest crop producers in the world. According to the 2011 statistics provided by the USDA FAS, Ukraine was the 8th largest exporter and 10th largest producer of wheat in the world. Therefore, identifying current and projecting future trends in climate and agriculture parameters is a key element in providing support to policy makers in food security. This paper combines remote sensing, meteorological, and modeling data to investigate the dynamics of extreme events, such as droughts, and their impact on agriculture production in Ukraine. Two main problems have been considered in the study: investigation of drought dynamics in Ukraine and its impact on crop production; and investigation of crop growth models for yield and production forecasting, compared with empirical models that use satellite-derived parameters and meteorological observations as predictors. Large-scale weather disasters in Ukraine such as drought were assessed using the vegetation health index (VHI) derived from satellite data. The method is based on estimation of green canopy stress/no stress from indices characterizing the moisture and thermal conditions of the vegetation canopy. These conditions are derived from the reflectance/emission in the red, near-infrared and infrared parts of the solar spectrum measured by the AVHRR flown on the NOAA afternoon polar-orbiting satellites since 1981. Droughts were categorized into exceptional, extreme, severe and moderate. Drought area (DA, in % of total Ukrainian area) was calculated for each category. It was found that the maximum DA over the past 20 years was 10% for exceptional droughts, 20% for extreme droughts, 50% for severe droughts, and 80% for moderate droughts. Also, it was shown that in general the drought intensity and area did not increase considerably over the past 10 years.
Analysis of the interrelation between the DA of different categories at oblast level and agriculture production will be discussed as well. A comparative study was carried out to assess three approaches to forecasting winter wheat yield in Ukraine at oblast level: (i) an empirical regression-based model that uses as a predictor 16-day NDVI composites derived from MODIS at 250 m resolution, (ii) an empirical regression-based model that uses meteorological parameters as predictors, and (iii) the Crop Growth Monitoring System (CGMS) adapted for Ukraine, which is based on the WOFOST crop growth simulation model and meteorological parameters. These three approaches were calibrated on 2000-2009 and 2000-2010 data, and compared by performing forecasts on independent data for 2010 and 2011. For 2010, the best results in terms of root mean square error (RMSE, by oblast, deviation of predicted values from official statistics) were achieved using the CGMS models: 0.3 t/ha. For the NDVI and meteorological models the RMSE values were 0.79 and 0.77 t/ha, respectively. When forecasting winter wheat yield for 2011, the following RMSE values were obtained: 0.58 t/ha for CGMS, 0.56 t/ha for the meteorological model, and 0.62 t/ha for NDVI. In this case the performance of all three approaches was comparable. Acknowledgements. This work was supported by the U.S. CRDF Grant "Analysis of climate change & food security based on remote sensing & in situ data sets" (UKB2-2972-KV-09).
Persistence Mapping Using EUV Solar Imager Data
NASA Technical Reports Server (NTRS)
Thompson, B. J.; Young, C. A.
2016-01-01
We describe a simple image processing technique that is useful for the visualization and depiction of gradually evolving or intermittent structures in solar physics extreme-ultraviolet imagery. The technique is an application of image segmentation, which we call "Persistence Mapping," to isolate extreme values in a data set, and is particularly useful for the problem of capturing phenomena that are evolving in both space and time. While integration or "time-lapse" imaging uses the full sample (of size N), Persistence Mapping rejects (N - 1)/N of the data set and identifies the most relevant 1/N values using the following rule: if a pixel reaches an extreme value, it retains that value until that value is exceeded. The simplest examples isolate minima and maxima, but any quantile or statistic can be used. This paper demonstrates how the technique has been used to extract the dynamics in the long-term evolution of comet tails, erupting material, and EUV dimming regions.
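The retention rule above, in which a pixel keeps its extreme value until that value is exceeded, amounts to a running per-pixel maximum (or minimum) over the image stack. A minimal sketch, assuming the data arrive as a numpy array of frames:

```python
import numpy as np

def persistence_map(frames, extreme="max"):
    """Persistence Mapping for the simplest (min/max) cases: each pixel
    retains its most extreme value seen so far, producing a running
    per-pixel maximum (or minimum) through the stack."""
    op = np.maximum if extreme == "max" else np.minimum
    out = np.empty_like(frames)
    out[0] = frames[0]
    for t in range(1, len(frames)):
        out[t] = op(out[t - 1], frames[t])   # retain extreme until exceeded
    return out
```

The final frame of the output is the persistence map itself; the intermediate frames can be animated to show when and where each extreme was first reached.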
NASA Astrophysics Data System (ADS)
Llasat, Maria Carmen; Marcos, Raul; Turco, Marco; Gilabert, Joan; Llasat-Botija, Montserrat
2016-10-01
The aim of this paper is to analyse the potential relationship between flash flood events and convective precipitation in Catalonia, as well as any related trends. The paper starts with an overview of flash floods and their trends in the Mediterranean region, along with their associated factors, followed by the definition of, identification of, and trends in convective precipitation. After this introduction the paper focuses on the north-eastern Iberian Peninsula, for which there is a long-term precipitation series (since 1928) of 1-min precipitation from the Fabra Observatory, as well as a shorter (1996-2011) but more extensive precipitation series (43 rain gauges) of 5-min precipitation. Both series have been used to characterise the degree of convective contribution to rainfall, introducing the β parameter as the ratio of convective precipitation to total precipitation in any period. Information about flood events was obtained from the INUNGAMA database (a flood database created by the GAMA team), with the aim of finding any potential links to convective precipitation. These flood data were gathered using information on damage, with flood treated as a multifactorial risk in which any trend or anomaly might have been caused by one or more factors affecting hazard, vulnerability or exposure. Trend analysis has shown an increase in flash flood events. The fact that no trends were detected in the extreme values of precipitation on a daily scale, nor in the associated ETCCDI (Expert Team on Climate Change Detection and Indices) extreme index, could point to an increase in vulnerability, an increase in exposure, or changes in land use. However, the summer increase in convective precipitation was concentrated in less torrential events, which could partially explain this positive trend in flash flood events. The β parameter has also been used to characterise the type of flood event according to the features of the precipitation.
The highest values correspond to short and local events, usually with daily β values above 0.5, while the minimum threshold of daily β for catastrophic flash floods is 0.31.
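The β computation described above can be sketched as follows; this is a minimal illustration assuming convective rainfall is identified by a fixed intensity threshold on the 5-min series (the threshold and rainfall values are hypothetical, not the paper's criteria):

```python
def beta_parameter(rain_5min_mm, convective_threshold_mm=1.0):
    """Ratio of convective to total precipitation over a period.

    A 5-min interval is counted as convective when its depth reaches the
    (illustrative) threshold; the paper's actual convection criterion may differ.
    """
    total = sum(rain_5min_mm)
    if total == 0:
        return 0.0
    convective = sum(r for r in rain_5min_mm if r >= convective_threshold_mm)
    return convective / total

# Hypothetical 5-min depths (mm) for one event.
series = [0.2, 0.1, 1.5, 2.0, 0.3, 0.0, 1.2]
print(round(beta_parameter(series), 3))  # -> 0.887
```

Daily β values above 0.5 would then flag short, local convective events, in the spirit of the classification above.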
Research in Stochastic Processes.
1982-12-01
constant high level boundary. References: 1. Jurg Husler, "Extreme values of non-stationary sequences and the extremal index," Center for Stochastic... A. Weron, Oct. 82. 20. "Extreme values of non-stationary sequences and the extremal index," Jurg Husler, Oct. 82. 21. "A finitely additive white noise... string model," Y. Miyahara, Carleton University and Nagoya University. Sept. 22. "On extreme values of non-stationary sequences," J. Husler, University of
A Model of the Pulsating Extremely Low-mass White Dwarf Precursor WASP 0247–25B
DOE Office of Scientific and Technical Information (OSTI.GOV)
Istrate, A. G.; Fontaine, G.; Heuser, C., E-mail: istrate@uwm.edu
We present an analysis of the evolutionary and pulsation properties of the extremely low-mass white dwarf precursor (B) component of the double-lined eclipsing system WASP 0247−25. Given that the fundamental parameters of that star have been obtained previously at a unique level of precision, WASP 0247−25B represents the ideal case for testing evolutionary models of this newly found category of pulsators. Taking into account the known constraints on the mass, orbital period, effective temperature, surface gravity, and atmospheric composition, we present a model that is compatible with these constraints and show pulsation modes that have periods very close to the observed values. Importantly, these modes are predicted to be excited. Although the overall consistency remains perfectible, the observable properties of WASP 0247−25B are closely reproduced. A key ingredient of our binary evolutionary models is represented by rotational mixing as the main competitor against gravitational settling. Depending on assumptions made about the values of the degree index ℓ for the observed pulsation modes, we found three possible seismic solutions. We discuss two tests, rotational splitting and multicolor photometry, that should readily identify the modes and discriminate between these solutions. However, this will require improved temporal resolution and higher S/N observations, which are currently unavailable.
Using dry and wet year hydroclimatic extremes to guide future hydrologic projections
NASA Astrophysics Data System (ADS)
Oni, Stephen; Futter, Martyn; Ledesma, Jose; Teutschbein, Claudia; Buttle, Jim; Laudon, Hjalmar
2016-07-01
A growing number of studies address climate change impacts on forest hydrology, but limited attempts have been made to use current hydroclimatic variability to constrain projections of future climatic conditions. Here we used historical wet and dry years as a proxy for expected future extreme conditions in a boreal catchment. We showed that runoff could be underestimated by at least 35% when dry year parameterizations were used for wet year conditions. Uncertainty analysis showed that behavioural parameter sets from wet and dry years separated mainly on precipitation-related parameters and to a lesser extent on parameters related to landscape processes, while uncertainties inherent in climate models (as opposed to differences in calibration or performance metrics) appeared to drive the overall uncertainty in runoff projections under dry and wet hydroclimatic conditions. Hydrologic model calibration for climate impact studies could be based on years that closely approximate anticipated conditions to better constrain uncertainty in projecting extreme conditions in boreal and temperate regions.
Geographic Information System and Geoportal «River basins of the European Russia»
NASA Astrophysics Data System (ADS)
Yermolaev, O. P.; Mukharamova, S. S.; Maltsev, K. A.; Ivanov, M. A.; Ermolaeva, P. O.; Gayazov, A. I.; Mozzherin, V. V.; Kharchenko, S. V.; Marinina, O. A.; Lisetskii, F. N.
2018-01-01
Geographic Information System (GIS) and Geoportal with open access «River basins of the European Russia» were implemented. GIS and Geoportal are based on the map of basins of small rivers of the European Russia with information about natural and anthropogenic characteristics, namely geomorphometry of basins relief; climatic parameters, representing averages, variation, seasonal variation, extreme values of temperature and precipitation; land cover types; soil characteristics; type and subtype of landscape; population density. The GIS includes results of spatial analysis and modelling, in particular, assessment of anthropogenic impact on river basins; evaluation of water runoff and sediment runoff; climatic, geomorphological and landscape zoning for the European part of Russia.
Review of literature surface tension data for molten silicon
NASA Technical Reports Server (NTRS)
Hardy, S.
1981-01-01
Measurements of the surface tension of molten silicon are reported. For Marangoni flow, the important parameter is the variation of surface tension with temperature, not the absolute value of the surface tension. It is not possible to calculate temperature coefficients using surface tension measurements from different experiments, because the systematic errors are usually larger than the changes in surface tension caused by temperature variations. The lack of good surface tension data for liquid silicon is probably due to its extreme chemical reactivity: no material has been found that resists attack by molten silicon. It is suggested that all of the sessile drop surface tension measurements are probably for silicon contaminated by the substrate materials.
Mutually unbiased bases in six dimensions: The four most distant bases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raynal, Philippe; Lue Xin; Englert, Berthold-Georg
2011-06-15
We consider the average distance between four bases in six dimensions. The distance between two orthonormal bases vanishes when the bases are the same, and the distance reaches its maximal value of unity when the bases are unbiased. We perform a numerical search for the maximum average distance and find it to be strictly smaller than unity. This is strong evidence that no four mutually unbiased bases exist in six dimensions. We also provide a two-parameter family of three bases which, together with the canonical basis, reach the numerically found maximum of the average distance, and we conduct a detailed study of the structure of the extremal set of bases.
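The distance described above can be sketched with a normalization chosen so that identical bases give 0 and unbiased bases give 1, matching the stated properties (the exact definition used in the paper may differ):

```python
import math

def basis_distance_sq(A, B):
    """Squared distance between two orthonormal bases of C^d.

    Bases are lists of vectors (lists of complex amplitudes). The chosen
    normalization gives 0 for identical bases and 1 for unbiased ones.
    """
    d = len(A)
    s = 0.0
    for a in A:
        for b in B:
            overlap = abs(sum(x.conjugate() * y for x, y in zip(a, b))) ** 2
            s += (overlap - 1.0 / d) ** 2
    return 1.0 - s / (d - 1)

# d = 2 check: the computational basis vs. itself and vs. the Hadamard basis.
identity = [[1, 0], [0, 1]]
hadamard = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
            [1 / math.sqrt(2), -1 / math.sqrt(2)]]
print(round(basis_distance_sq(identity, identity), 6))  # -> 0.0
print(round(basis_distance_sq(identity, hadamard), 6))  # -> 1.0
```

Averaging this quantity over all pairs among four bases gives the figure of merit maximized in the numerical search.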
Sun, Xu; Dai, Daoxin; Thylén, Lars; Wosinski, Lech
2015-10-05
A Mach-Zehnder interferometer (MZI) liquid sensor employing an ultra-compact double-slot hybrid plasmonic (DSHP) waveguide as the active sensing arm is developed. Numerical results show that an extremely large optical confinement factor in the tested analytes (as high as 88%) can be obtained by the DSHP waveguide with optimized geometrical parameters, larger than in both conventional SOI waveguides and plasmonic slot waveguides of the same widths. For an MZI sensor with a 40 μm long DSHP active sensing area, the sensitivity can reach values as high as 1061 nm/RIU (refractive index unit). The total loss, excluding the coupling loss of the grating coupler, is around 4.5 dB.
Platform of integrated tools to support environmental studies and management of dredging activities.
Feola, Alessandra; Lisi, Iolanda; Salmeri, Andrea; Venti, Francesco; Pedroncini, Andrea; Gabellini, Massimo; Romano, Elena
2016-01-15
Dredging activities can cause environmental impacts due to, among other things, the increase of the Suspended Solid Concentration (SSC) and its subsequent dispersion and deposition (DEP) far from the dredging point. The dynamics of the resulting dredging plume can strongly differ in spatial and temporal evolution. This evolution, for both conventional mechanical and hydraulic dredges, depends on the different mechanisms of sediment release into the water column and the site-specific environmental conditions. Several numerical models are currently in use to simulate dredging plume dynamics. Model results can be analysed to study dispersion and advection processes at different depths and distances from the dredging source. Usually, scenarios with frequent and extreme meteomarine conditions are chosen, and extreme values of parameters (i.e. maximum intensity or total duration) are evaluated for environmental assessment. This paper presents a flexible, consistent and integrated methodological approach. Statistical parameters and indexes are derived from the analysis of SSC and DEP simulated time-series to numerically estimate their spatial (vertical and horizontal) and seasonal variability, thereby allowing a comparison of the effects of hydraulic and mechanical dredges. Events that exceed defined thresholds are described in terms of magnitude, duration and frequency. A new integrated index combining these parameters, SSCnum, is proposed for environmental assessment. Maps representing the proposed parameters allow direct comparison of effects due to different (mechanical and hydraulic) dredges at progressive distances from the dredging zone. Results can contribute towards identification and assessment of the potential environmental effects of a proposed dredging project. A suitable evaluation of alternative technical choices and appropriate mitigation, management and monitoring measures is allowed in this framework.
Environmental Risk Assessment and Decision Support Systems (DSS) may take advantage of the proposed tool. The approach is applied to a hypothetical dredging project in the Augusta Harbour (Eastern coast of Sicily Island-Italy). Copyright © 2015 Elsevier Ltd. All rights reserved.
Fast Prediction and Evaluation of Gravitational Waveforms Using Surrogate Models
NASA Astrophysics Data System (ADS)
Field, Scott E.; Galley, Chad R.; Hesthaven, Jan S.; Kaye, Jason; Tiglio, Manuel
2014-07-01
We propose a solution to the problem of quickly and accurately predicting gravitational waveforms within any given physical model. The method is relevant for both real-time applications and more traditional scenarios where the generation of waveforms using standard methods can be prohibitively expensive. Our approach is based on three offline steps resulting in an accurate reduced order model in both parameter and physical dimensions that can be used as a surrogate for the true or fiducial waveform family. First, a set of m parameter values is determined using a greedy algorithm from which a reduced basis representation is constructed. Second, these m parameters induce the selection of m time values for interpolating a waveform time series using an empirical interpolant that is built for the fiducial waveform family. Third, a fit in the parameter dimension is performed for the waveform's value at each of these m times. The cost of predicting L waveform time samples for a generic parameter choice is of order O(mL + m c_fit) online operations, where c_fit denotes the fitting function operation count and, typically, m ≪ L. The result is a compact, computationally efficient, and accurate surrogate model that retains the original physics of the fiducial waveform family while also being fast to evaluate. We generate accurate surrogate models for effective-one-body waveforms of nonspinning binary black hole coalescences with durations as long as 10^5 M, mass ratios from 1 to 10, and for multiple spherical harmonic modes. We find that these surrogates are more than 3 orders of magnitude faster to evaluate as compared to the cost of generating effective-one-body waveforms in standard ways. Surrogate model building for other waveform families and models follows the same steps and has the same low computational online scaling cost.
For expensive numerical simulations of binary black hole coalescences, we thus anticipate extremely large speedups in generating new waveforms with a surrogate. As waveform generation is one of the dominant costs in parameter estimation algorithms and parameter space exploration, surrogate models offer a new and practical way to dramatically accelerate such studies without impacting accuracy. Surrogates built in this paper, as well as others, are available from GWSurrogate, a publicly available Python package.
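The offline greedy step (step one above) can be illustrated on a toy waveform family; the family, parameter grid, and basis size here are hypothetical stand-ins, not effective-one-body waveforms:

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def waveform(q, times):
    # Toy stand-in for a waveform family parameterized by q.
    return [math.sin(q * t) * math.exp(-0.1 * t) for t in times]

def greedy_basis(params, times, m):
    """Greedily pick m parameters whose waveforms best span the training
    set, returning an orthonormal reduced basis (Gram-Schmidt)."""
    train = {q: waveform(q, times) for q in params}
    basis = []
    for _ in range(m):
        # Select the training waveform with the largest projection error.
        worst_q, worst_err, worst_r = None, -1.0, None
        for q, h in train.items():
            r = list(h)
            for e in basis:
                c = dot(e, h)
                r = [ri - c * ei for ri, ei in zip(r, e)]
            err = math.sqrt(dot(r, r))
            if err > worst_err:
                worst_q, worst_err, worst_r = q, err, r
        # Normalize its residual and append it to the basis.
        basis.append([ri / worst_err for ri in worst_r])
    return basis

times = [0.1 * i for i in range(100)]
basis = greedy_basis([0.5, 1.0, 1.5, 2.0, 2.5, 3.0], times, 3)
# Orthonormality check: unit norm and (near-)zero cross products.
print(len(basis), round(dot(basis[0], basis[0]), 6), round(abs(dot(basis[0], basis[1])), 6))
```

The empirical interpolation and parameter-fit stages would then be layered on top of this basis to obtain the O(mL + m c_fit) online cost quoted above.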
Attempting to physically explain space-time correlation of extremes
NASA Astrophysics Data System (ADS)
Bernardara, Pietro; Gailhard, Joel
2010-05-01
Spatial and temporal clustering of hydro-meteorological extreme events is well documented. Moreover, the statistical parameters characterizing their local frequencies of occurrence show clear spatial patterns. Thus, in order to robustly assess the hydro-meteorological hazard, statistical models need to be able to take into account spatial and temporal dependencies. Statistical models considering long-term correlation for quantifying and qualifying temporal and spatial dependencies are available, such as the multifractal approach. Furthermore, the development of regional frequency analysis techniques allows estimating the frequency of occurrence of extreme events taking into account spatial patterns in the behaviour of extreme quantiles. However, in order to understand the origin of spatio-temporal clustering, an attempt should be made to find a physical explanation. Here, some statistical evidence of spatio-temporal correlation and spatial patterns of extreme behaviour is given on a large database of more than 400 rainfall and discharge series in France. In particular, the spatial distribution of multifractal and Generalized Pareto distribution parameters shows evident correlation patterns in the behaviour of the frequency of occurrence of extremes. It is then shown that the identification of atmospheric circulation patterns (weather types) can physically explain the temporal clustering of extreme rainfall events (seasonality) and the spatial pattern of their frequency of occurrence. Moreover, coupling this information with the hydrological modelling of a watershed (as in the Schadex approach), an explanation of the spatio-temporal distribution of extreme discharge can also be provided. We finally show that a hydro-meteorological approach (such as the Schadex approach) can explain and take into account space and time dependencies of hydro-meteorological extreme events.
Soil transport parameters of potassium under a tropical saline soil condition using STANMOD
NASA Astrophysics Data System (ADS)
Suzanye da Silva Santos, Rafaelly; Honorio de Miranda, Jarbas; Previatello da Silva, Livia
2015-04-01
Environmental responsibility and concern about the final destination of solutes in soil motivate studies that provide a better understanding of solute behaviour in soil. Potassium is a macronutrient required in high concentrations, making it an extremely important nutrient for all agricultural crops. It plays essential roles in physiological processes vital for plant growth, from protein synthesis to maintenance of plant water balance, and is available to plants dissolved in soil water, while exchangeable K is loosely held on the exchange sites on the surface of clay particles. K will tend to be adsorbed onto the surface of negatively charged soil particles. Potassium uptake is vital for plant growth, but in saline soils sodium competes with potassium for uptake across the plasma membrane of plant cells. This can result in high Na+:K+ ratios that reduce plant growth and eventually become toxic. This study aimed to obtain soil transport parameters of potassium in saline soil, namely: pore water velocity (v), retardation factor (R), dispersivity (λ) and dispersion coefficient (D), in a disturbed sandy soil with different concentrations of potassium chloride solution (KCl), which is one of the most common forms of potassium fertilizer. The experiment was carried out using soil samples collected at a depth of 0 to 20 cm, applying potassium chloride solutions containing 28.6, 100, 200 and 500 mg L-1 of K. To obtain the transport parameters, the data were adjusted with the software STANMOD. At low concentrations, the interaction between potassium and soil occurs more efficiently. It was observed that only the breakthrough curve prepared with the solution of 500 mg L-1 reached the applied concentration, and the solution of 28.6 mg L-1 overestimated the parameter values.
STANMOD proved to be efficient in obtaining potassium transport parameters; the concentration of the KCl solution to be applied should be greater than 500 mg L-1; solutions with low concentrations tend to overestimate the parameter values.
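The shape of the breakthrough curves fitted here can be sketched with the equilibrium convection-dispersion model (the leading term of the Ogata-Banks solution; the v, D, R values below are illustrative, not the fitted parameters):

```python
import math

def breakthrough(t, x, v, D, R):
    """Relative concentration C/C0 at depth x and time t for the
    equilibrium convection-dispersion equation with retardation factor R
    (leading term of the Ogata-Banks solution)."""
    if t <= 0:
        return 0.0
    arg = (R * x - v * t) / (2.0 * math.sqrt(D * R * t))
    return 0.5 * math.erfc(arg)

# Early time: the front has not yet reached depth x, so C/C0 is ~0;
# at late time the outlet concentration approaches the applied one (C/C0 -> 1).
print(round(breakthrough(1.0, 20.0, 0.5, 1.0, 2.0), 3))  # -> 0.0
print(round(breakthrough(1e6, 20.0, 0.5, 1.0, 2.0), 3))  # -> 1.0
```

A larger R (stronger adsorption, as observed for K at low applied concentrations) shifts the breakthrough to later times, which is the behaviour the fitted parameters quantify.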
NASA Astrophysics Data System (ADS)
Woo, Hye-Jin; Park, Kyung-Ae
2017-09-01
Significant wave height (SWH) data from nine satellite altimeters were validated with in-situ SWH measurements from buoy stations in the East/Japan Sea (EJS) and the Northwest Pacific Ocean. The spatial and temporal variability of extreme SWHs was investigated by defining the 90th, 95th, and 99th percentiles based on percentile analysis. The annual mean of extreme SWHs reached 3.45 m in the EJS, significantly higher than the overall mean of about 1.44 m. The spatial distributions of SWHs showed significantly higher values in the eastern region of the EJS than in the western part. Characteristic seasonality was found in the time-series SWHs, with high SWHs (>2.5 m) in winter but low values (<1 m) in summer. The trends of the normal and extreme (99th percentile) SWHs in the EJS had positive values of 0.0056 m year-1 and 0.0125 m year-1, respectively. The long-term trend demonstrated that extreme SWH values have intensified over the past decades. Pronounced spatial distinctions between the coastal regions in the marginal seas of the Northwest Pacific Ocean and open ocean regions were presented. In spring, both normal and extreme SWHs showed substantially increasing trends in the EJS. Finally, we present, for the first time, the impact of the long-term trend of extreme SWHs on the marine ecosystem through vertical mixing enhancement in the upper ocean of the EJS.
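The percentile-based definition of extreme SWH used above can be sketched as follows (the SWH sample values are hypothetical):

```python
def percentile(values, p):
    """Empirical p-th percentile with linear interpolation between
    order statistics."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

# Hypothetical altimeter SWH sample (m) for one grid cell.
swh = [0.5, 0.8, 1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.1, 4.0, 5.2]
print(round(percentile(swh, 99), 2))  # -> 5.08
```

Computing the 90th, 95th, and 99th percentiles per year and per grid cell, then fitting a linear trend to each series, reproduces the kind of extreme-SWH trend maps described above.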
Vargas, Hebert Alberto; Lakhman, Yulia; Sudre, Romain; Do, Richard K. G.; Bibeau, Frederic; Azria, David; Assenat, Eric; Molinari, Nicolas; Pierredon, Marie-Ange; Rouanet, Philippe; Guiu, Boris
2016-01-01
Purpose To determine the diagnostic performance of intravoxel incoherent motion (IVIM) parameters and apparent diffusion coefficient (ADC) to assess response to combined chemotherapy and radiation therapy (CRT) in patients with rectal cancer by using histogram analysis derived from whole-tumor volumes and single-section regions of interest (ROIs). Materials and Methods The institutional review board approved this retrospective study of 31 patients with rectal cancer who underwent magnetic resonance (MR) imaging before and after CRT, including diffusion-weighted imaging with 34 b values prior to surgery. Patient consent was not required. ADC, perfusion-related diffusion fraction (f), slow diffusion coefficient (D), and fast diffusion coefficient (D*) were calculated on MR images acquired before and after CRT by using biexponential fitting. ADC and IVIM histogram metrics and median values were obtained by using whole-tumor volume and single-section ROI analyses. All ADC and IVIM parameters obtained before and after CRT were compared with histopathologic findings by using t tests with Holm-Sidak correction. Receiver operating characteristic curves were generated to evaluate the diagnostic performance of IVIM parameters derived from whole-tumor volume and single-section ROIs for prediction of histopathologic response. Results Extreme values aside, results of histogram analysis of ADC and IVIM were equivalent to median values for tumor response assessment (P > .06). Prior to CRT, none of the median ADC and IVIM diffusion metrics correlated with subsequent tumor response (P > .36). Median D and ADC values derived from either whole-volume or single-section analysis increased significantly after CRT (P ≤ .01) and were significantly higher in good versus poor responders (P ≤ .02). Median IVIM f and D* values did not significantly change after CRT and were not associated with tumor response to CRT (P > .36). 
Interobserver agreement was excellent for whole-tumor volume analysis (range, 0.91–0.95) but was only moderate for single-section ROI analysis (range, 0.50–0.63). Conclusion Median D and ADC values obtained after CRT were useful for discrimination between good and poor responders. Histogram metrics did not add to the median values for assessment of tumor response. Volumetric analysis demonstrated better interobserver reproducibility when compared with single-section ROI analysis. © RSNA, 2016 Online supplemental material is available for this article. PMID:26919562
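The biexponential IVIM signal model behind these metrics, together with a segmented high-b estimate of the slow coefficient D, can be sketched as follows (the f, D, D* values are illustrative, not patient data):

```python
import math

def ivim_signal(b, f, D, Dstar, S0=1.0):
    """Biexponential IVIM model: a perfusion fraction f decays with the fast
    pseudo-diffusion coefficient D*, the remainder with the slow coefficient D."""
    return S0 * (f * math.exp(-b * Dstar) + (1 - f) * math.exp(-b * D))

# At high b-values (s/mm^2) the perfusion term has fully decayed, so a
# log-linear fit over two high-b points recovers D (segmented fitting).
f, D, Dstar = 0.10, 0.0010, 0.050
b1, b2 = 400.0, 800.0
s1, s2 = ivim_signal(b1, f, D, Dstar), ivim_signal(b2, f, D, Dstar)
D_est = (math.log(s1) - math.log(s2)) / (b2 - b1)
print(round(D_est, 4))  # -> 0.001
```

A full 34-b-value acquisition as in the study would instead be fitted with nonlinear biexponential regression; this sketch only shows why D is well constrained by the high-b portion of the decay.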
Nie, Bingbing; Zhou, Qing
2016-10-02
Pedestrian lower extremity represents the most frequently injured body region in car-to-pedestrian accidents. The European Directive concerning pedestrian safety was established in 2003 for evaluating pedestrian protection performance of car models. However, design changes have not been quantified since then. The goal of this study was to investigate front-end profiles of representative passenger car models and the potential influence on pedestrian lower extremity injury risk. The front-end styling of sedans and sport utility vehicles (SUV) released from 2008 to 2011 was characterized by the geometrical parameters related to pedestrian safety and compared to representative car models before 2003. The influence of geometrical design change on the resultant risk of injury to pedestrian lower extremity-that is, knee ligament rupture and long bone fracture-was estimated by a previously developed assessment tool assuming identical structural stiffness. Based on response surface generated from simulation results of a human body model (HBM), the tool provided kinematic and kinetic responses of pedestrian lower extremity resulted from a given car's front-end design. Newer passenger cars exhibited a "flatter" front-end design. The median value of the sedan models provided 87.5 mm less bottom depth, and the SUV models exhibited 94.7 mm less bottom depth. In the lateral impact configuration similar to that in the regulatory test methods, these geometrical changes tend to reduce the injury risk of human knee ligament rupture by 36.6 and 39.6% based on computational approximation. The geometrical changes did not significantly influence the long bone fracture risk. The present study reviewed the geometrical changes in car front-ends along with regulatory concerns regarding pedestrian safety. A preliminary quantitative benefit of the lower extremity injury reduction was estimated based on these geometrical features. 
Further investigation is recommended on the structural changes and inclusion of more accident scenarios.
NASA Astrophysics Data System (ADS)
Marani, M.; Zorzetto, E.; Hosseini, S. R.; Miniussi, A.; Scaioni, M.
2017-12-01
The Generalized Extreme Value (GEV) distribution is widely adopted irrespective of the properties of the stochastic process generating the extreme events. However, GEV presents several limitations, both theoretical (asymptotic validity for a large number of events/year, or the hypothesis of Poisson occurrences of Generalized Pareto events) and practical (fitting uses just yearly maxima or a few values above a high threshold). Here we describe the Metastatistical Extreme Value Distribution (MEVD, Marani & Ignaccolo, 2015), which relaxes asymptotic or Poisson/GPD assumptions and makes use of all available observations. We then illustrate the flexibility of the MEVD by applying it to daily precipitation, hurricane intensity, and storm surge magnitude. Application to daily rainfall from a global raingauge network shows that MEVD estimates are 50% more accurate than those from GEV when the recurrence interval of interest is much greater than the observational period. This makes MEVD suited for application to satellite rainfall observations (~20 years in length). Use of MEVD on TRMM data yields extreme event patterns that are in better agreement with surface observations than corresponding GEV estimates. Applied to the HURDAT2 Atlantic hurricane intensity dataset, MEVD significantly outperforms GEV estimates of extreme hurricanes. Interestingly, the Generalized Pareto distribution used for "ordinary" hurricane intensity points to the existence of a maximum limit wind speed that is significantly smaller than corresponding physically-based estimates. Finally, we applied the MEVD approach to water levels generated by tidal fluctuations and storm surges at a set of coastal sites spanning different storm-surge regimes. MEVD yields accurate estimates of large quantiles and inferences on tail thickness (fat vs. thin) of the underlying distribution of "ordinary" surges.
In summary, the MEVD approach presents a number of theoretical and practical advantages, and outperforms traditional approaches in several applications. We conclude that the MEVD is a significant contribution to further generalize extreme value theory, with implications for a broad range of Earth Sciences.
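A minimal MEVD sketch, under the common assumption that each year's "ordinary" daily events follow a Weibull distribution (the per-year event counts and Weibull parameters below are hypothetical):

```python
import math

def mevd_cdf(x, years):
    """Metastatistical extreme value CDF: the average over observed years of
    the yearly Weibull CDF of ordinary events raised to that year's event count.

    years: list of (n_j, C_j, w_j) = (events/year, Weibull scale, Weibull shape).
    """
    total = 0.0
    for n, C, w in years:
        total += (1.0 - math.exp(-((x / C) ** w))) ** n
    return total / len(years)

# Two hypothetical years of wet-day statistics (count, scale in mm, shape).
years = [(100, 10.0, 0.8), (120, 12.0, 0.7)]
print(round(mevd_cdf(200.0, years), 3))  # -> 0.955
```

Unlike a GEV fit to yearly maxima alone, every ordinary event contributes to the estimate, which is what makes the approach usable on short (e.g. satellite-era) records.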
Somatotype Variables Related to Muscle Torque and Power in Judoists
Lewandowska, Joanna; Buśko, Krzysztof; Pastuszak, Anna; Boguszewska, Katarzyna
2011-01-01
The purpose of this study was to examine the relationship between somatotype, muscle torque and power output in judoists. Thirteen judoists (age 18.4±3.1 years, body height 178.6±8.2 cm, body mass 82.3±15.9 kg) volunteered to participate in this study. Somatotype was determined using the Heath-Carter method. Maximal muscle torques of elbow, shoulder, knee, hip and trunk flexors as well as extensors were measured under static conditions. Power outputs were measured in 5 maximal cycle ergometer exercise bouts, 10 s each, at increasing external loads equal to 2.5, 5.0, 7.5, 10.0 and 12.5% of body weight. Pearson's correlation coefficients were calculated between all parameters. The mean somatotype of judoists was 3.5-5.9-1.8 (values for endomorphy, mesomorphy and ectomorphy, respectively). The sum of muscle torques of ten muscle groups (TOTAL; mean±SD) was 3702.2±862.9 N·m. The power output ranged from 393.2±79.4 to 1077.2±275.4 W. The values of the sum of muscle torques of the right and left upper extremities (SUE), the sum of muscle torques of the right and left lower extremities (SLE), the sum of muscle torques of the trunk (ST) and TOTAL were significantly correlated with the mesomorphic component (0.68, 0.80, 0.71 and 0.78, respectively). The ectomorphic component correlated significantly with the values of SUE, SLE, ST and TOTAL (−0.69, −0.81, −0.71 and −0.79, respectively). Power output was also strongly correlated with both mesomorphy (positively) and ectomorphy (negatively). The results indicated that the values of the mesomorphic and ectomorphic somatotype components influence muscle torque and power output, thus body build could be an important factor affecting results in judo. PMID:23487284
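The correlation analysis above can be sketched with a plain Pearson coefficient (the mesomorphy and torque values below are hypothetical, not the study's measurements):

```python
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    dx = [xi - mx for xi in x]
    dy = [yi - my for yi in y]
    num = sum(a * b for a, b in zip(dx, dy))
    den = (sum(a * a for a in dx) * sum(b * b for b in dy)) ** 0.5
    return num / den

# Hypothetical mesomorphy ratings and TOTAL torques (N·m) for four athletes.
mesomorphy = [4.5, 5.0, 5.5, 6.0]
total_torque = [3000.0, 3600.0, 3400.0, 4200.0]
print(round(pearson_r(mesomorphy, total_torque), 2))  # -> 0.88
```

Coefficients of this kind, computed between each somatotype component and each torque/power variable, are what the reported 0.68 to 0.80 (and −0.69 to −0.81) values summarize.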
NASA Astrophysics Data System (ADS)
Rupa, Chandra; Mujumdar, Pradeep
2016-04-01
In urban areas, quantification of extreme precipitation is important in the design of storm water drains and other infrastructure. Intensity Duration Frequency (IDF) relationships are generally used to obtain design return level for a given duration and return period. Due to lack of availability of extreme precipitation data for sufficiently large number of years, estimating the probability of extreme events is difficult. Typically, a single station data is used to obtain the design return levels for various durations and return periods, which are used in the design of urban infrastructure for the entire city. In an urban setting, the spatial variation of precipitation can be high; the precipitation amounts and patterns often vary within short distances of less than 5 km. Therefore it is crucial to study the uncertainties in the spatial variation of return levels for various durations. In this work, the extreme precipitation is modeled spatially using the Bayesian hierarchical analysis and the spatial variation of return levels is studied. The analysis is carried out with Block Maxima approach for defining the extreme precipitation, using Generalized Extreme Value (GEV) distribution for Bangalore city, Karnataka state, India. Daily data for nineteen stations in and around Bangalore city is considered in the study. The analysis is carried out for summer maxima (March - May), monsoon maxima (June - September) and the annual maxima rainfall. In the hierarchical analysis, the statistical model is specified in three layers. The data layer models the block maxima, pooling the extreme precipitation from all the stations. In the process layer, the latent spatial process characterized by geographical and climatological covariates (lat-lon, elevation, mean temperature etc.) which drives the extreme precipitation is modeled and in the prior level, the prior distributions that govern the latent process are modeled. 
A Markov Chain Monte Carlo (MCMC) algorithm (Metropolis-Hastings within a Gibbs sampler) is used to obtain samples of the parameters from their posterior distribution. The spatial maps of return levels for specified return periods, along with the associated uncertainties, are obtained for the summer, monsoon and annual maxima rainfall. Considering various covariates, the best-fit model is selected using the Deviance Information Criterion. It is observed that the geographical covariates (latitude and longitude) outweigh the climatological covariates for the monsoon maxima rainfall. The best covariates for summer maxima and annual maxima rainfall are mean summer precipitation and mean monsoon precipitation respectively, including elevation in both cases. The scale invariance theory, which states that the statistical properties of a process observed at various scales are governed by the same relationship, is used to disaggregate the daily rainfall to hourly scales. The spatial maps of the scale are obtained for the study area. The spatial maps of IDF relationships thus generated are useful in storm water design, adequacy analysis and identifying areas vulnerable to flooding.
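The scale-invariance disaggregation step can be sketched as follows (the daily depth and scaling exponent are illustrative values, not the fitted maps):

```python
def disaggregate(depth_daily_mm, duration_hr, scaling_exponent):
    """Simple-scaling relation: depth at duration D equals the daily depth
    times (D / 24 h) raised to the scaling exponent."""
    return depth_daily_mm * (duration_hr / 24.0) ** scaling_exponent

# A 100 mm daily return level mapped to a 1-hour design depth.
print(round(disaggregate(100.0, 1.0, 0.4), 1))  # -> 28.0
```

Applying this relation to each grid cell's daily return-level map, with the locally estimated exponent, yields the sub-daily IDF maps described above.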
ERIC Educational Resources Information Center
Kinnier, Richard T.
1984-01-01
Examined the resolution of value conflicts in 60 adults who wrote a solution to their conflicts. Compared extreme resolutions with those representing compromise. Compromisers and extremists did not differ in how rationally resolved they were about their solutions but compromisers felt better about their solutions. (JAC)
Modeling extreme PM10 concentration in Malaysia using generalized extreme value distribution
NASA Astrophysics Data System (ADS)
Hasan, Husna; Mansor, Nadiah; Salleh, Nur Hanim Mohd
2015-05-01
Extreme PM10 concentrations from the Air Pollutant Index (API) at thirteen monitoring stations in Malaysia are modeled using the Generalized Extreme Value (GEV) distribution. The data are blocked into monthly periods. The Mann-Kendall (MK) test suggests a non-stationary model, so two models are considered for the stations with a trend. The likelihood ratio test is used to determine the best-fitting model, and the results show that only two stations favor the non-stationary model (Model 2) while the other eleven stations favor the stationary model (Model 1). The return level of PM10 concentration expected to be exceeded once within a selected period is obtained.
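The return-level computation from a fitted GEV can be sketched as follows (the location, scale, and shape values are illustrative, not the fitted station parameters):

```python
import math

def gev_return_level(T, mu, sigma, xi):
    """Level exceeded on average once every T blocks for a GEV with
    location mu, scale sigma, shape xi (Gumbel limit as xi -> 0)."""
    y = -math.log(1.0 - 1.0 / T)  # reduced variate
    if abs(xi) < 1e-9:
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# With monthly blocks, T = 12 corresponds to a one-year return period.
print(round(gev_return_level(12, 100.0, 20.0, 0.1), 1))  # -> 155.3
```

For the non-stationary model, mu (and possibly sigma) would be a function of time, so the return level is evaluated per block rather than once.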
Uncertainty and the Social Cost of Methane Using Bayesian Constrained Climate Models
NASA Astrophysics Data System (ADS)
Errickson, F. C.; Anthoff, D.; Keller, K.
2016-12-01
Social cost estimates of greenhouse gases are important for the design of sound climate policies and are also plagued by uncertainty. One major source of uncertainty stems from the simplified representation of the climate system used in the integrated assessment models that provide these social cost estimates. We explore how uncertainty over the social cost of methane varies with the way physical processes and feedbacks in the methane cycle are modeled by (i) coupling three different methane models to a simple climate model, (ii) using MCMC to perform a Bayesian calibration of the three coupled climate models that simulates direct sampling from the joint posterior probability density function (pdf) of model parameters, and (iii) producing probabilistic climate projections that are then used to calculate the Social Cost of Methane (SCM) with the DICE and FUND integrated assessment models. We find that including a temperature feedback in the methane cycle acts as an additional constraint during the calibration process and results in a correlation between the tropospheric lifetime of methane and several climate model parameters. This correlation is not seen in the models lacking this feedback. Several of the estimated marginal pdfs of the model parameters also exhibit different distributional shapes and expected values depending on the methane model used. As a result, probabilistic projections of the climate system out to the year 2300 exhibit different levels of uncertainty and magnitudes of warming for each of the three models under an RCP8.5 scenario. We find these differences in climate projections result in differences in the distributions and expected values for our estimates of the SCM. We also examine uncertainty about the SCM by performing a Monte Carlo analysis using a distribution for the climate sensitivity while holding all other climate model parameters constant. 
Our SCM estimates using the Bayesian calibration are lower and exhibit less uncertainty about extremely high values in the right tail of the distribution compared to the Monte Carlo approach. This finding has important climate policy implications and suggests previous work that accounts for climate model uncertainty by only varying the climate sensitivity parameter may overestimate the SCM.
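The Bayesian calibration described above samples the joint posterior of the model parameters with MCMC. A minimal sketch of that idea, using a hypothetical two-parameter toy model (not the paper's coupled climate model) and a random-walk Metropolis sampler:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-parameter forward model (a stand-in for the coupled
# climate model): a decaying response y(t) = S * exp(-t / tau)
def forward(S, tau, t):
    return S * np.exp(-t / tau)

# Synthetic "observations" with known noise level
t_obs = np.linspace(0.0, 10.0, 20)
true_S, true_tau, sigma = 3.0, 8.0, 0.1
y_obs = forward(true_S, true_tau, t_obs) + rng.normal(0.0, sigma, t_obs.size)

def log_post(theta):
    S, tau = theta
    if not (0.0 < S < 10.0 and 0.0 < tau < 50.0):   # bounded flat priors
        return -np.inf
    resid = y_obs - forward(S, tau, t_obs)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis sampling of the joint posterior
n_iter = 20000
chain = np.empty((n_iter, 2))
theta = np.array([2.0, 5.0])
lp = log_post(theta)
for i in range(n_iter):
    prop = theta + rng.normal(0.0, [0.1, 0.5])       # proposal step sizes
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain[i] = theta

posterior = chain[n_iter // 4:]                      # discard burn-in
print(posterior.mean(axis=0))                        # near the true (3.0, 8.0)
print(np.corrcoef(posterior.T)[0, 1])                # joint correlation of S and tau
```

Sampling the full joint posterior, rather than varying a single parameter with the others fixed, is what lets parameter correlations (like the methane-lifetime/climate-parameter correlation found in the study) propagate into the derived quantity.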
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marous, L; Muryn, J; Liptak, C
2016-06-15
Purpose: Monte Carlo simulation is a frequently used technique for assessing patient dose in CT. The accuracy of a Monte Carlo program is often validated using the standard CT dose index (CTDI) phantoms by comparing simulated and measured CTDI100. To achieve good agreement, many input parameters in the simulation (e.g., energy spectrum and effective beam width) need to be determined. However, not all parameters are equally important. Our aim was to assess the relative importance of the various factors that influence the accuracy of simulated CTDI100. Methods: A Monte Carlo program previously validated for a clinical CT system was used to simulate CTDI100. For the standard CTDI phantoms (32 and 16 cm in diameter), CTDI100 values from the central and four peripheral locations at 70 and 120 kVp were first simulated using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which intentional errors were introduced into the input parameters, and their effects on simulated CTDI100 were analyzed. Results: At 38.4-mm collimation, errors in effective beam width up to 5.0 mm had negligible effects on simulated CTDI100 (<1.0%). Likewise, errors in acrylic density of up to 0.01 g/cm³ resulted in small CTDI100 errors (<2.5%). In contrast, errors in spectral HVL produced more significant effects: slight deviations (±0.2 mm Al) produced errors up to 4.4%, whereas more extreme deviations (±1.4 mm Al) produced errors as high as 25.9%. Lastly, ignoring the CT table introduced errors up to 13.9%. Conclusion: Monte Carlo simulated CTDI100 is insensitive to errors in effective beam width and acrylic density, but sensitive to errors in spectral HVL. To obtain accurate results, the CT table should not be ignored.
This work was supported by a Faculty Research and Development Award from Cleveland State University.
Scaling of flow and transport behavior in heterogeneous groundwater systems
NASA Astrophysics Data System (ADS)
Scheibe, Timothy; Yabusaki, Steven
1998-11-01
Three-dimensional numerical simulations using a detailed synthetic hydraulic conductivity field developed from geological considerations provide insight into the scaling of subsurface flow and transport processes. Flow and advective transport in the highly resolved heterogeneous field were modeled using massively parallel computers, providing a realistic baseline for evaluation of the impacts of parameter scaling. Upscaling of hydraulic conductivity was performed at a variety of scales using a flexible power law averaging technique. A series of tests were performed to determine the effects of varying the scaling exponent on a number of metrics of flow and transport behavior. Flow and transport simulation on high-performance computers and three-dimensional scientific visualization combine to form a powerful tool for gaining insight into the behavior of complex heterogeneous systems. Many quantitative groundwater models utilize upscaled hydraulic conductivity parameters, either implicitly or explicitly. These parameters are designed to reproduce the bulk flow characteristics at the grid or field scale while not requiring detailed quantification of local-scale conductivity variations. An example from applied groundwater modeling is the common practice of calibrating grid-scale model hydraulic conductivity or transmissivity parameters so as to approximate observed hydraulic head and boundary flux values. Such parameterizations, perhaps with a bulk dispersivity imposed, are then sometimes used to predict transport of reactive or non-reactive solutes. However, this work demonstrates that those parameters that lead to the best upscaling for hydraulic conductivity and head do not necessarily correspond to the best upscaling for prediction of a variety of transport behaviors. 
This result reflects the fact that transport is strongly impacted by the existence and connectedness of extreme-valued hydraulic conductivities, in contrast to bulk flow, which depends more strongly on mean values. It provides motivation for continued research into upscaling methods for transport that directly address advection in heterogeneous porous media. An electronic version of this article is available online at the journal's homepage at http://www.elsevier.nl/locate/advwatres or http://www.elsevier.com/locate/advwatres (see "Special section on visualization"). The online version contains additional supporting information, graphics, and a 3D animation of simulated particle movement.
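The flexible power-law averaging used for the upscaling above can be sketched in a few lines: the upscaled conductivity is K_up = (mean(K^p))^(1/p), where the exponent p interpolates between the harmonic (p = -1), geometric (p → 0) and arithmetic (p = +1) means (the lognormal field below is illustrative, not the study's synthetic field):

```python
import numpy as np

def power_law_average(K, p):
    """Power-law (generalized) mean of local conductivities K with exponent p."""
    K = np.asarray(K, dtype=float)
    if abs(p) < 1e-12:                       # p -> 0 limit: geometric mean
        return np.exp(np.mean(np.log(K)))
    return np.mean(K ** p) ** (1.0 / p)

# Hypothetical lognormal conductivity values within one coarse grid block
rng = np.random.default_rng(1)
K_fine = rng.lognormal(mean=-2.0, sigma=1.5, size=1000)

for p in (-1.0, 0.0, 1.0):
    print(p, power_law_average(K_fine, p))
# Harmonic <= geometric <= arithmetic: varying p spans the admissible
# range of upscaled conductivities, which is what the sensitivity tests
# on the scaling exponent exploit.
```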
Numerical Analysis of Flood modeling of upper Citarum River under Extreme Flood Condition
NASA Astrophysics Data System (ADS)
Siregar, R. I.
2018-02-01
This paper focuses on numerical methods and computation for analysing flood parameters, namely water level and flood discharge. The numerical methods applied here to unsteady flow have strengths and weaknesses; among their strengths, they are easily applied to cases with irregular flow boundaries. The study area is the upper Citarum watershed, Bandung, West Java. We use a computational approach with Force2 programming and HEC-RAS to solve the flow problem in the upper Citarum River and to investigate and forecast extreme flood conditions. The numerical analysis is based on extreme flood events that have occurred in the upper Citarum watershed, and the modelled water levels and extreme flood discharges are compared with measurement data for validation. The area inundated by the flood that occurred in 2010 is about 75.26 square kilometres. Comparison of the two methods shows that the FEM analysis with Force2 gives the better fit to the validation data for water level, with a Nash index of 0.84 versus 0.76 for HEC-RAS; for discharge, the Nash index is 0.80 with Force2 and 0.79 with HEC-RAS.
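The Nash index used above to score the two models is the Nash-Sutcliffe efficiency, computed from observed and simulated series (the sample values below are illustrative, not the study's data):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the
    observed mean, negative values are worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative observed vs simulated water levels (m)
obs = np.array([1.0, 1.4, 2.1, 3.0, 2.2, 1.5])
sim = np.array([1.1, 1.3, 2.0, 2.8, 2.4, 1.6])
print(round(nash_sutcliffe(obs, sim), 3))
```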
NASA Astrophysics Data System (ADS)
Toda, M.; Knohl, A.; Herbst, M.; Keenan, T. F.; Yokozawa, M.
2016-12-01
The increase in extreme climate events associated with ongoing global warming may severely damage terrestrial ecosystems, changing plant structure and the eco-physiological functions that regulate ecosystem carbon exchange. However, most damage is usually due to moderate, rather than catastrophic, disturbances, and the nature of plant functional responses to such disturbances, and the resulting effects on the terrestrial carbon cycle, remain poorly understood. To address this question, tower-based eddy covariance data from cool-temperate forests were used, via a statistical Bayesian inversion approach, to constrain the plant eco-physiological parameters of a parsimonious ecosystem model that may have affected carbon dynamics following extreme climate events. We considered two types of extreme events relevant for cool-temperate regions: a typhoon with mechanical foliage destruction, and a heat wave with severe drought. With appropriate evaluation of parameter and predictive uncertainties, the inversion analysis shows an annual trajectory of enhanced photosynthetic responses following climate extremes compared with the pre-disturbance state in each forest. We find that forests subject to moderate disturbance show substantial and rapid photosynthetic recovery, enhanced productivity and, thus, enhanced ecosystem carbon exchange, although the effect of extreme climatic events varies with the stand successional phase and with the type, intensity, timing and legacy of the disturbance.
Shen, Shaoshuai; Abe, Takumi; Tsuji, Taishi; Fujii, Keisuke; Ma, Jingyu; Okura, Tomohiro
2017-01-01
[Purpose] The purpose of this study was to investigate which of four chair-rising methods has low load and the highest success rate, and whether the GRF parameters in that method are useful for measuring lower extremity function among physically frail Japanese older adults. [Subjects and Methods] Fifty-two individuals participated in this study. The participants voluntarily attempted four types of Sit-to-stand test (one variation without and three variations with the use of their arms). The following parameters were measured: peak reaction force (F/w), two force development rate parameters (RFD1.25/w, RFD8.75/w) and two time-related parameters (T1, T2). Three commonly employed clinical tests (One-leg balance with eyes open, Timed Up and Go, and 5-meter walk test) were also conducted. [Results] The "hands on a chair" chair-rising method produced the highest success rate among the four methods. All parameters were highly reliable between testing occasions. T2 showed strong, significant associations with the Timed Up and Go and 5-meter walk tests in males; RFD8.75/w showed significant associations with the same tests in females. [Conclusion] Ground reaction force parameters in the Sit-to-stand test are a reliable and useful method for assessing lower extremity function in physically frail Japanese older adults. PMID:28931988
NASA Astrophysics Data System (ADS)
Kawamura, Taichi; Lognonné, Philippe; Nishikawa, Yasuhiro; Tanaka, Satoshi
2017-07-01
While deep moonquakes are seismic events commonly observed on the Moon, their source mechanism is still unexplained. The two main issues are poorly constrained source parameters and incompatibilities between the thermal profiles suggested by many studies and the apparent need for brittle properties at these depths. In this study, we reinvestigated the deep moonquake data to re-estimate source parameters and uncover the characteristics of deep moonquake faults that differ from those of faults on Earth. We first improve the estimation of source parameters through spectral analysis using "new" broadband seismic records made by combining those of the Apollo long- and short-period seismometers. We use the broader frequency band of the combined spectra to estimate corner frequencies and DC spectral levels, which are important parameters for constraining the source parameters. We further use these spectral features to estimate seismic moments and stress drops for more than 100 deep moonquake events from three different source regions. This analysis revealed that deep moonquake faults are extremely smooth compared with terrestrial faults. Second, we reevaluate the brittle-ductile transition temperature that is consistent with the obtained source parameters. We show that the source parameters imply that tidal stress is the main source of the stress glut causing deep moonquakes, and that the large tidal strain rate raises the brittle-ductile transition temperature. Higher transition temperatures open a new possibility of constructing a thermal model that is consistent with deep moonquake occurrence and pressure conditions, thereby improving our understanding of the deep moonquake source mechanism.
Theoretical study of mixing in liquid clouds – Part 1: Classical concepts
Korolev, Alexei; Khain, Alex; Pinsky, Mark; ...
2016-07-28
The present study considers the final stages of in-cloud mixing in the framework of the classical concepts of homogeneous and extreme inhomogeneous mixing. Simple analytical relationships between basic microphysical parameters were obtained for homogeneous and extreme inhomogeneous mixing based on adiabatic considerations. It was demonstrated that during homogeneous mixing the functional relationships between the moments of the droplet size distribution hold only during the primary stage of mixing; subsequent random mixing between already-mixed parcels and undiluted cloud parcels breaks these relationships. During extreme inhomogeneous mixing, however, the functional relationships between the microphysical parameters hold for both primary and subsequent mixing. The obtained relationships can be used to identify the type of mixing from in situ observations. The effectiveness of the developed method was demonstrated using in situ data collected in convective clouds. It was found that for the specific set of in situ measurements, the interaction between cloudy and entrained environments was dominated by extreme inhomogeneous mixing.
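The contrast between the two limiting mixing types can be sketched with the lowest moments of the droplet size distribution. This is a simplified illustration that ignores dilution of number concentration by the entrained volume, not the paper's full relationships: in the homogeneous limit every droplet partially evaporates, while in the extreme inhomogeneous limit some droplets evaporate completely.

```python
import numpy as np

N0, r0 = 100.0, 10.0              # droplet number (cm^-3) and mean radius (um)
L0 = N0 * r0**3                   # proportional to liquid water content (LWC)

q = np.linspace(0.2, 1.0, 5)      # fraction of liquid water remaining

# Homogeneous limit: every droplet partially evaporates
r_hom = r0 * q ** (1.0 / 3.0)     # radius shrinks
N_hom = np.full_like(q, N0)       # number conserved

# Extreme inhomogeneous limit: some droplets evaporate completely
r_inh = np.full_like(q, r0)       # radius conserved
N_inh = N0 * q                    # number drops in proportion to LWC

# Both limits reproduce the same LWC dilution, N * r^3 = q * L0, but leave
# opposite signatures in (N, r) -- the kind of signature that allows the
# mixing type to be identified from in situ droplet spectra.
assert np.allclose(N_hom * r_hom**3, q * L0)
assert np.allclose(N_inh * r_inh**3, q * L0)
print("N and r trajectories distinguish the two mixing limits")
```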
Climate Change Impact on Variability of Rainfall Intensity in Upper Blue Nile Basin, Ethiopia
NASA Astrophysics Data System (ADS)
Worku, L. Y.
2015-12-01
Extreme rainfall events are a major problem in Ethiopia: the resulting floods can cause significant damage to agriculture, ecology and infrastructure, disruption of human activities, loss of property and lives, and disease outbreaks. The aim of this study was to explore the likely changes in precipitation extremes under future climate change, focusing specifically on the impact of climate change on rainfall intensity-duration-frequency (IDF) relationships in the Upper Blue Nile basin. Precipitation data from two Global Climate Models (GCMs), HadCM3 and CGCM3, were used. Rainfall frequency analysis was carried out to estimate quantiles for different return periods. The Probability Weighted Moments (PWM) method was selected for parameter estimation, and L-Moment Ratio Diagrams (LMRDs) were used to find the best parent distribution for each station; the parent distributions identified by the frequency analysis are the Generalized Logistic (GLOG), Generalized Extreme Value (GEV), and Gamma/Pearson III (P3) distributions. After the quantiles were estimated, a simple disaggregation model was applied to obtain sub-daily rainfall data. Finally, IDF curves were fitted to the disaggregated rainfall; the results show that in most parts of the basin rainfall intensity is expected to increase in the future. Based on the two GCM outputs, the study indicates a likely increase of precipitation extremes over the Blue Nile basin under the changing climate. These results should be interpreted with caution, as GCM outputs for this part of the world carry large uncertainty.
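The core of the frequency analysis above — fit a GEV to annual maxima, then read off quantiles for chosen return periods — can be sketched as follows. The data are synthetic, and scipy's `fit` uses maximum likelihood rather than the PWM/L-moment estimation used in the study:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
# Synthetic annual-maximum daily rainfall (mm) for one station, 60 years
annual_max = genextreme.rvs(c=-0.1, loc=40.0, scale=12.0, size=60,
                            random_state=rng)

# Fit a GEV (scipy's shape convention: negative c = heavy upper tail)
shape, loc, scale = genextreme.fit(annual_max)

# Quantile for return period T is the (1 - 1/T) percentile
quantiles = {T: genextreme.ppf(1.0 - 1.0 / T, shape, loc, scale)
             for T in (10, 50, 100)}
for T, q in quantiles.items():
    print(T, round(q, 1))    # T-year return-period rainfall depth (mm)
```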
Factorial Design Approach in Proportioning Prestressed Self-Compacting Concrete
Long, Wu-Jian; Khayat, Kamal Henri; Lemieux, Guillaume; Xing, Feng; Wang, Wei-Lun
2015-01-01
In order to model the effect of mixture parameters and material properties on the hardened properties of prestressed self-compacting concrete (SCC), and also to investigate extensions of the statistical models, a factorial design was employed to identify the relative significance of these primary parameters and their interactions in terms of the mechanical and visco-elastic properties of SCC. In addition to the 16 fractional factorial mixtures evaluated in the modeled region of −1 to +1, eight axial mixtures were prepared at extreme values of −2 and +2 with the other variables maintained at the central points. Four replicate central mixtures were also evaluated. The effects of five mixture parameters, including binder type, binder content, dosage of viscosity-modifying admixture (VMA), water-cementitious material ratio (w/cm), and sand-to-total aggregate ratio (S/A), on compressive strength, modulus of elasticity, and autogenous and drying shrinkage are discussed. The applications of the models to better understand trade-offs between mixture parameters and to carry out comparisons among various responses are also highlighted. A logical design approach would be to use the existing model to predict the optimal design, and then run selected tests to quantify the influence of the new binder on the model. PMID:28787990
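The run layout described above (16 half-fraction factorial points, 8 axial points, 4 center replicates) is a central composite design, and its coded points can be generated directly. This sketch assumes the defining relation E = ABCD for the half-fraction and assumes the categorical binder type is excluded from the axial runs (which yields the eight axial mixtures); neither assumption is stated in the abstract:

```python
from itertools import product

# Five mixture factors in coded units, names as in the abstract
factors = ["binder_type", "binder_content", "VMA_dosage", "w_cm", "S_A"]

# Half-fraction 2^(5-1) factorial at +/-1 (assumed defining relation E = ABCD)
factorial = [(a, b, c, d, a * b * c * d)
             for a, b, c, d in product((-1, 1), repeat=4)]

# Eight axial points at +/-2 on the four continuous factors,
# other variables held at the center
axial = []
for i in range(1, 5):
    for v in (-2, 2):
        pt = [0] * 5
        pt[i] = v
        axial.append(tuple(pt))

center = [(0, 0, 0, 0, 0)] * 4       # four replicated center mixtures

design = factorial + axial + center
print(len(factorial), len(axial), len(center), len(design))   # 16 8 4 28
```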
NASA Astrophysics Data System (ADS)
Maltz, Jonathan S.
2000-11-01
We present an algorithm of reduced computational cost which is able to estimate kinetic model parameters directly from dynamic ECT sinograms made up of temporally inconsistent projections. The algorithm exploits the extreme degree of parameter redundancy inherent in linear combinations of the exponential functions which represent the modes of first-order compartmental systems. The singular value decomposition is employed to find a small set of orthogonal functions, linear combinations of which are able to accurately represent all modes within the physiologically anticipated range in a given study. The reduced-dimension basis is formed as the convolution of this orthogonal set with a measured input function. The Moore-Penrose pseudoinverse is used to find the coefficients of this basis. Algorithm performance is evaluated at realistic count rates using MCAT phantom and clinical 99mTc-teboroxime myocardial study data. Phantom data are modelled as originating from a Poisson process. For estimates recovered from a single-slice projection set containing 2.5×10⁵ total counts, recovered tissue responses compare favourably with those obtained using more computationally intensive methods. The corresponding kinetic parameter estimates (coefficients of the new basis) exhibit negligible bias, while parameter variances are low, falling within 30% of the Cramér-Rao lower bound.
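The dimension-reduction step above — a dictionary of exponential modes compressed to a few orthogonal functions via the SVD — can be sketched as follows. The time grid, rate range, and tolerance are illustrative assumptions, not the paper's values:

```python
import numpy as np

t = np.linspace(0.0, 30.0, 256)      # acquisition times (min), illustrative
rates = np.logspace(-3, 0, 200)      # anticipated washout rates (1/min)
A = np.exp(-np.outer(t, rates))      # dictionary: one exponential per column

# SVD yields an orthogonal basis ordered by how much of the dictionary's
# energy each vector captures
U, s, Vt = np.linalg.svd(A, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 1.0 - 1e-10)) + 1
print(k)                             # a small k represents the whole family

# Any exponential in the anticipated range is well represented in the
# k-dimensional orthogonal basis
mode = np.exp(-0.05 * t)
coef = U[:, :k].T @ mode             # orthogonal projection coefficients
rel_err = np.linalg.norm(mode - U[:, :k] @ coef) / np.linalg.norm(mode)
print(rel_err)
```

In the paper's setting this basis is additionally convolved with the measured input function before coefficients are estimated via the pseudoinverse.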
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mao, Zhu; Wang, Fan; Lin, Jung-Fu
In this study, we performed synchrotron X-ray diffraction (XRD) and Mössbauer spectroscopy (SMS) measurements on two single-crystal bridgmanite samples (Bm6 and Al-Bm11) to investigate the combined effect of Fe and Al on the hyperfine parameters, lattice parameters, and equation of state (EoS) of bridgmanite up to 130 GPa. Our SMS results show that Fe2+ and Fe3+ in Bm6 and Al-Bm11 are predominantly located in the large pseudo-dodecahedral sites (A-site) at lower-mantle pressures. The observed drastic increase in the hyperfine quadrupole splitting (QS) between 13 and 32 GPa can be associated with an enhanced local distortion of the A-site Fe2+ in Bm6. In contrast to Bm6, the enhanced lattice distortion and the presence of extremely high QS values of Fe2+ are not observed in Al-Bm11 at high pressures. Our results support the notion that the occurrence of the extremely high QS component of approximately 4 mm/s in bridgmanite is due to lattice distortion of the high-spin (HS) A-site Fe2+, rather than to the occurrence of an intermediate-spin state. Both A-site Fe2+ and Fe3+ in Bm6 and Al-Bm11 remain in the HS state at lower-mantle pressures. Together with the XRD results, we present the first experimental evidence that the enhanced lattice distortion of A-site Fe2+ does not cause any detectable variation in the EoS parameters, but is associated with anomalous variations in the bond length, tilting angle, and shear strain of the octahedra in Bm6. Analysis of the obtained EoS parameters of bridgmanite at lower-mantle pressures indicates that the substitution of Fe in bridgmanite will cause an enhanced density and a reduced bulk sound velocity (VΦ), whereas combined Al and Fe substitution has a reduced effect on density and a negligible effect on VΦ. These experimental results provide new insight into the correlation between the lattice, hyperfine, and EoS parameters of bridgmanite in the Earth's lower mantle.
Predicting the cosmological constant with the scale-factor cutoff measure
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Simone, Andrea; Guth, Alan H.; Salem, Michael P.
2008-09-15
It is well known that anthropic selection from a landscape with a flat prior distribution of the cosmological constant Λ gives a reasonable fit to observation. However, a realistic model of the multiverse has a physical volume that diverges with time, and the predicted distribution of Λ depends on how the spacetime volume is regulated. A very promising method of regulation uses a scale-factor cutoff, which avoids a number of serious problems that arise in other approaches. In particular, the scale-factor cutoff avoids the 'youngness problem' (high probability of living in a much younger universe) and the 'Q and G catastrophes' (high probability for the primordial density contrast Q and gravitational constant G to have extremely large or small values). We apply the scale-factor cutoff measure to the probability distribution of Λ, considering both positive and negative values. The results are in good agreement with observation. In particular, the scale-factor cutoff strongly suppresses the probability for values of Λ that are more than about 10 times the observed value. We also discuss qualitatively the prediction for the density parameter Ω, indicating that with this measure there is a possibility of detectable negative curvature.
Biological Oxygen Demand in Soils and Litters
NASA Astrophysics Data System (ADS)
Smagin, A. V.; Smagina, M. V.; Sadovnikova, N. B.
2018-03-01
Biological oxygen demand (BOD) in the mineral and organic horizons of soddy-podzolic soils in the forest-park belt of Moscow has been studied as an indicator of their microbial respiration and potential biodestruction function. The BOD of soil samples was estimated with a portable electrochemical analyzer after incubation in closed flasks under optimal hydrothermal conditions. A universal gradation scale for this parameter, from very low (<2 g O2/(m3 h)) to extremely high (>140 g O2/(m3 h)), has been proposed for mineral and organic soil horizons. A physically substantiated model has been developed for the vertical distribution of BOD in the soil, combining diffusive transport of oxygen from the atmosphere with its biogenic uptake in the soil by a first-order reaction. An analytical solution of the model in the stationary state has been obtained; from it, the soil oxygen diffusivity and the kinetic constants of O2 uptake have been estimated, and the profile-integrated total BOD has been calculated (0.4-1.8 g O2/(m2 h)), which is theoretically identical to the potential oxygen flux from the soil surface due to soil respiration. All model parameters reflect the recreational load on the soil cover through a decrease in their values relative to the control.
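A stationary diffusion-plus-first-order-uptake balance of the kind described above has a closed-form solution: D·C″ = k·C with C(0) = C0 gives C(z) = C0·exp(−z·√(k/D)) and a surface flux F = C0·√(k·D). The parameter values in this sketch are hypothetical (the paper estimates its constants from the fitted profiles):

```python
import math

# Hypothetical parameter values, for illustration only
C0 = 285.0    # g O2 / m^3, oxygen concentration in near-surface soil air
D  = 3.0e-6   # m^2 / s, effective oxygen diffusivity in the soil
k  = 5.0e-7   # 1 / s, first-order O2 uptake constant

def concentration(z):
    """Stationary O2 profile solving D * C'' = k * C with C(0) = C0."""
    return C0 * math.exp(-z * math.sqrt(k / D))

# Surface flux F = C0 * sqrt(k * D) equals the depth-integrated O2 uptake
flux_per_hour = C0 * math.sqrt(k * D) * 3600.0
print(round(flux_per_hour, 2))   # g O2 / (m^2 h)
```

With these illustrative constants the flux lands within the 0.4-1.8 g O2/(m2 h) range the abstract reports for the profile-integrated BOD.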
Universal inverse power-law distribution for temperature and rainfall in the UK region
NASA Astrophysics Data System (ADS)
Selvam, A. M.
2014-06-01
Meteorological parameters, such as temperature, rainfall, pressure, etc., exhibit self-similar space-time fractal fluctuations generic to dynamical systems in nature such as fluid flows, the spread of forest fires, earthquakes, etc. The power spectra of fractal fluctuations display an inverse power-law form, signifying long-range correlations. A general systems theory model predicts a universal inverse power-law form incorporating the golden mean for the fractal fluctuations. The model-predicted distribution was compared with the observed distribution of fractal fluctuations of all size scales (small, large and extreme values) in the historic month-wise temperature (maximum and minimum) and total rainfall for the four stations Oxford, Armagh, Durham and Stornoway in the UK region, for data periods ranging from 92 to 160 years. For each parameter, two cumulative probability distributions were used, namely cmax and cmin, starting from the maximum and minimum data value, respectively. The results of the study show that (i) the temperature distributions (maximum and minimum) follow the model-predicted distribution, except for the Stornoway minimum-temperature cmin; (ii) the rainfall distribution for cmin follows the model-predicted distribution for all four stations; (iii) the rainfall distribution for cmax follows the model-predicted distribution for the two stations Armagh and Stornoway. The present study suggests that fractal fluctuations result from the superimposition of eddy continuum fluctuations.
Guillén, J; Beresford, N A; Baeza, A; Izquierdo, M; Wood, M D; Salas, A; Muñoz-Serrano, A; Corrales-Vázquez, J M; Muñoz-Muñoz, J G
2018-06-01
A system for the radiological protection of the environment (or wildlife) based on Reference Animals and Plants (RAPs) has been suggested by the International Commission on Radiological Protection (ICRP). To assess whole-body activity concentrations for RAPs and the resultant internal dose rates, transfer parameters are required. However, transfer values specifically for the taxonomic families defined for the RAPs are often sparse and can, furthermore, be extremely site dependent. There is also a considerable geographical bias within available transfer data, with few data for Mediterranean ecosystems. In the present work, stable element concentrations (I, Li, Be, B, Na, Mg, Al, P, S, K, Ca, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, As, Se, Rb, Sr, Mo, Ag, Cd, Cs, Ba, Tl, Pb and U) in terrestrial RAPs, and the corresponding whole-body concentration ratios, CRwo, were determined in two different Mediterranean ecosystems: a Pinewood and a Dehesa (grassland with dispersed tree cover). The RAPs considered in the Pinewood ecosystem were Pine Tree and Wild Grass, whereas those considered in the Dehesa ecosystem were Deer, Rat, Earthworm, Bee, Frog, Duck and Wild Grass. The CRwo values estimated from these data are compared with those reported in international compilations and databases. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Extreme Value Analysis of hydro meteorological extremes in the ClimEx Large-Ensemble
NASA Astrophysics Data System (ADS)
Wood, R. R.; Martel, J. L.; Willkofer, F.; von Trentini, F.; Schmid, F. J.; Leduc, M.; Frigon, A.; Ludwig, R.
2017-12-01
Many studies show an increase in the magnitude and frequency of hydrological extreme events in the course of climate change, but the contribution of natural variability to the magnitude and frequency of these events is not yet settled. A reliable estimate of extreme events is of great interest for water management and public safety. In the ClimEx project (www.climex-project.org), a new single-model large ensemble was created by dynamically downscaling the CanESM2 large ensemble with the Canadian Regional Climate Model version 5 (CRCM5) over a European domain and a northeastern North American domain. The ClimEx 50-member large ensemble (CRCM5 driven by the CanESM2 large ensemble) makes a thorough analysis of natural variability in extreme events possible. Are current extreme value statistical methods able to account for natural variability? How large is the natural variability of, e.g., a 1/100-year return period derived from a 50-member large ensemble for Europe and northeastern North America? These questions are addressed by applying various generalized extreme value (GEV) distributions to the ClimEx large ensemble. Various return levels (5-, 10-, 20-, 30-, 60- and 100-year), based on time series of various lengths (20, 30, 50, 100 and 1500 years), are analyzed for the maximum one-day precipitation (RX1d), the maximum three-hourly precipitation (RX3h) and the streamflow of selected catchments in Europe. The long time series of the ClimEx ensemble (7500 years) allows a first reliable estimate of the magnitude and frequency of such extreme events.
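The sampling question posed above — how much does an estimated 100-year return level vary with record length? — can be illustrated with a toy experiment: draw annual maxima from a known GEV, refit on many short records, and compare the spread of the estimates (synthetic data, not ClimEx output):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
c_true, loc, scale = -0.1, 50.0, 10.0     # assumed "true" GEV of annual maxima

spread = {}
for n in (30, 500):                       # record length in years
    rl100 = []
    for _ in range(50):                   # 50 independent synthetic records
        sample = genextreme.rvs(c_true, loc=loc, scale=scale,
                                size=n, random_state=rng)
        # 100-year return level = 99th percentile of the refitted GEV
        rl100.append(genextreme.ppf(0.99, *genextreme.fit(sample)))
    spread[n] = np.percentile(rl100, 97.5) - np.percentile(rl100, 2.5)
    print(n, round(spread[n], 1))         # spread shrinks with record length
```

This is the kind of estimation uncertainty the 50-member, 7500-year ensemble is designed to quantify directly rather than by resampling.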
Pre-equilibrium dynamics and heavy-ion observables
NASA Astrophysics Data System (ADS)
Heinz, Ulrich; Liu, Jia
2016-12-01
To bracket the importance of the pre-equilibrium stage for relativistic heavy-ion collision observables, we compare simulations where it is modeled by either free-streaming partons or fluid dynamics. These cases implement the assumptions of extremely weak vs. extremely strong coupling in the initial collision stage. Accounting for flow generated in the pre-equilibrium stage, we study the sensitivity of radial, elliptic and triangular flow to the switching time at which the hydrodynamic description becomes valid. Using the hybrid code iEBE-VISHNU [C. Shen, Z. Qiu, H. Song, J. Bernhard, S. Bass and U. Heinz, Comput. Phys. Commun. 199 (2016) 61] we perform a multi-parameter search, constrained by particle ratios, integrated elliptic and triangular charged hadron flow, the mean transverse momenta of pions, kaons and protons, and the second moment ⟨pT²⟩ of the proton transverse momentum spectrum, to identify optimized values for the switching time τs from pre-equilibrium to hydrodynamics, the specific shear viscosity η/s, the normalization factor of the temperature-dependent specific bulk viscosity (ζ/s)(T), and the switching temperature Tsw from viscous hydrodynamics to the hadron cascade UrQMD. With the optimized parameters, we predict and compare with experiment the pT-distributions of π, K, p, Λ, Ξ and Ω yields and their elliptic flow coefficients, focusing specifically on the mass-ordering of the elliptic flow for protons and Lambda hyperons, which is incorrectly described by VISHNU without pre-equilibrium flow.
Gao, Xun; Li, Qingde; Cheng, Wanli; Han, Guangping; Xuan, Lihui
2016-10-18
The orthogonal design method was used to determine the optimum conditions for modifying poplar fibers through a high-temperature, pressurized-steam treatment for the subsequent preparation of wood fiber/high-density polyethylene (HDPE) composites. Extreme difference (range), variance, and significance analyses were performed to reveal the effect of the modification parameters on the mechanical properties of the prepared composites, and they yielded consistent results. The main findings indicated that the modification temperature most strongly affected the mechanical properties of the prepared composites, followed by the steam pressure. A temperature of 170 °C, a steam pressure of 0.8 MPa, and a processing time of 20 min were determined as the optimum parameters for fiber modification. Compared to the composites prepared from untreated fibers, the tensile, flexural, and impact strengths of the composites prepared from modified fibers increased by 20.17%, 18.5%, and 19.3%, respectively. The effect of the modification on the properties of the composites was also investigated by scanning electron microscopy and dynamic mechanical analysis. When the temperature, steam pressure, and processing time reached their highest values, the composites exhibited the best mechanical properties, which agreed well with the results of the extreme difference, variance, and significance analyses. Moreover, the crystallinity and thermal stability of the fibers and the storage modulus of the prepared composites improved; however, the holocellulose content and the pH of the wood fibers decreased.
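The "extreme difference" (range) analysis used above ranks factors by the spread of the mean response across each factor's levels. A minimal sketch on a standard L9 orthogonal array — the design table and response values here are made up for illustration, not the study's data:

```python
import numpy as np

# L9 orthogonal array: three factors (temperature, pressure, time) at three
# coded levels 0/1/2, with a made-up response (e.g., tensile strength, MPa)
design = np.array([
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
response = np.array([30.1, 32.4, 33.0, 34.2, 36.8, 35.1, 37.0, 38.9, 39.5])

ranges = {}
for j, name in enumerate(["temperature", "pressure", "time"]):
    # Mean response at each level of factor j, then the range of those means
    level_means = [response[design[:, j] == lev].mean() for lev in (0, 1, 2)]
    ranges[name] = max(level_means) - min(level_means)
    print(name, round(ranges[name], 2))
# The largest range marks the dominant factor; in this toy table it is
# temperature, mirroring the abstract's ranking.
```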